---
license: apache-2.0
language:
- ar
- he
- vi
- id
- jv
- ms
- tl
- lv
- lt
- eu
- ml
- ta
- te
- hy
- bn
- mr
- hi
- ur
- af
- da
- en
- de
- sv
- fr
- it
- pt
- ro
- es
- el
- os
- tg
- fa
- ja
- ka
- ko
- th
- bxr
- xal
- mn
- sw
- yo
- be
- bg
- ru
- uk
- pl
- my
- uz
- ba
- kk
- ky
- tt
- az
- cv
- tr
- tk
- tyv
- sah
- et
- fi
- hu
pipeline_tag: text-generation
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- Deepspeed
- Megatron
datasets:
- mc4
- wikipedia
thumbnail: "https://github.com/sberbank-ai/mgpt"
---

# mGPT 1.3B

We introduce a family of autoregressive GPT-like models with 1.3 billion parameters trained on 61 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus (C4). We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.
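The checkpoint can be loaded with the Hugging Face Transformers library. Below is a minimal usage sketch; the Hub repository ID `ai-forever/mGPT` is an assumption (this card does not state the ID), so substitute the actual location of the checkpoint:

```python
# Minimal usage sketch. The Hub ID "ai-forever/mGPT" is an assumption;
# replace it with the actual repository ID of this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT")
model = AutoModelForCausalLM.from_pretrained("ai-forever/mGPT")

prompt = "Il était une fois"  # any of the 61 supported languages should work
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```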
## Code

The source code for the mGPT XL model is available on [GitHub](https://github.com/sberbank-ai/mgpt).

## Paper

**mGPT: Few-Shot Learners Go Multilingual**

Published in TACL 2024 (MIT Press). Presented at EMNLP 2023.

[Abstract](https://arxiv.org/abs/2204.07580) [PDF](https://arxiv.org/pdf/2204.07580.pdf)

```
@article{shliazhko-etal-2024-mgpt,
    title = "m{GPT}: Few-Shot Learners Go Multilingual",
    author = "Shliazhko, Oleh and Fenogenova, Alena and Tikhonova, Maria and Kozlova, Anastasia and Mikhailov, Vladislav and Shavrina, Tatiana",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "12",
    year = "2024",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/2024.tacl-1.4",
    doi = "10.1162/tacl_a_00633",
    pages = "58--79",
    abstract = "This paper introduces mGPT, a multilingual variant of GPT-3, pretrained on 61 languages from 25 linguistically diverse language families using Wikipedia and the C4 Corpus. We detail the design and pretraining procedure. The models undergo an intrinsic and extrinsic evaluation: language modeling in all languages, downstream evaluation on cross-lingual NLU datasets and benchmarks in 33 languages, and world knowledge probing in 23 languages. The in-context learning abilities are on par with the contemporaneous language models while covering a larger number of languages, including underrepresented and low-resource languages of the Commonwealth of Independent States and the indigenous peoples in Russia. The source code and the language models are publicly available under the MIT license.",
}
```

## Languages

The model supports 61 languages.

ISO codes:
```ar he vi id jv ms tl lv lt eu ml ta te hy bn mr hi ur af da en de sv fr it pt ro es el os tg fa ja ka ko th bxr xal mn sw yo be bg ru uk pl my uz ba kk ky tt az cv tr tk tyv sah et fi hu```

Languages:
```Arabic, Hebrew, Vietnamese, Indonesian, Javanese, Malay, Tagalog, Latvian, Lithuanian, Basque, Malayalam, Tamil, Telugu, Armenian, Bengali, Marathi, Hindi, Urdu, Afrikaans, Danish, English, German, Swedish, French, Italian, Portuguese, Romanian, Spanish, Greek, Ossetian, Tajik, Persian, Japanese, Georgian, Korean, Thai, Buryat, Kalmyk, Mongolian, Swahili, Yoruba, Belarusian, Bulgarian, Russian, Ukrainian, Polish, Burmese, Uzbek, Bashkir, Kazakh, Kyrgyz, Tatar, Azerbaijani, Chuvash, Turkish, Turkmen, Tuvan, Yakut, Estonian, Finnish, Hungarian```

## Training Data Statistics

- Size: 488 billion UTF characters

*(Figure: general training corpus statistics)*

## Details

The model was trained with a sequence length of 512 using the Megatron and Deepspeed libraries by the [SberDevices](https://sberdevices.ru/) team on a dataset of 600 GB of texts in 61 languages. The model has seen 440 billion BPE tokens in total. Total training time was around 14 days on 256 Nvidia V100 GPUs.
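Because training used a 512-token context, prompts should be kept within that budget. Below is a small sketch of counting BPE tokens and truncating to the training length, again assuming the hypothetical Hub ID `ai-forever/mGPT`:

```python
# Sketch of keeping prompts within the 512-token training context.
# The Hub ID "ai-forever/mGPT" is an assumption; replace it with the actual repository ID.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT")

text = "Tere tulemast! Добро пожаловать! Witamy!"  # mixed-language input
encoded = tokenizer(text, truncation=True, max_length=512)

# The corpus statistics above are reported in the same BPE token units.
print(len(encoded["input_ids"]), "BPE tokens")
```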