---
license: apache-2.0
language:
- ar
- he
- vi
- id
- jv
- ms
- tl
- lv
- lt
- eu
- ml
- ta
- te
- hy
- bn
- mr
- hi
- ur
- af
- da
- en
- de
- sv
- fr
- it
- pt
- ro
- es
- el
- os
- tg
- fa
- ja
- ka
- ko
- th
- bxr
- xal
- mn
- sw
- yo
- be
- bg
- ru
- uk
- pl
- my
- uz
- ba
- kk
- ky
- tt
- az
- cv
- tr
- tk
- tyv
- sah
- et
- fi
- hu

pipeline_tag: text-generation
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- Deepspeed
- Megatron
datasets:
- mc4
- wikipedia
thumbnail: "https://github.com/sberbank-ai/mgpt"
---

# Multilingual GPT model

We introduce a family of autoregressive GPT-like models with 1.3 billion parameters, trained on 61 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus.

We reproduce the GPT-3 architecture using GPT-2 sources and a sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to parallelize the training and inference steps effectively. The resulting models perform on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models, while covering more languages and enhancing NLP possibilities for low-resource languages.
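
As a quick orientation, the sketch below shows how a checkpoint like this can be loaded and queried with the Transformers library. It assumes the model is published on the Hugging Face Hub under the id `sberbank-ai/mGPT` (the repository this card's assets point to) and that the standard causal-LM generation API applies; the prompt and sampling settings are purely illustrative, not an official example.

```python
# Minimal usage sketch, not an official example. It assumes the checkpoint is
# hosted on the Hugging Face Hub as "sberbank-ai/mGPT" and exposes the
# standard causal-LM interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sberbank-ai/mGPT"  # assumed Hub id, matching this card's asset URLs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Alexander Pushkin was born in"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=32,   # keep the continuation short
    do_sample=True,      # sampling settings are illustrative only
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```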

## Code
The source code for the mGPT XL model is available on [GitHub](https://github.com/sberbank-ai/mgpt).

## Paper
mGPT: Few-Shot Learners Go Multilingual

[Abstract](https://arxiv.org/abs/2204.07580) [PDF](https://arxiv.org/pdf/2204.07580.pdf)

![](https://habrastorage.org/webt/1q/ru/yt/1qruytul6m2m-upyk9frq3pgrds.png)

```
@misc{https://doi.org/10.48550/arxiv.2204.07580,
  doi = {10.48550/ARXIV.2204.07580},
  url = {https://arxiv.org/abs/2204.07580},
  author = {Shliazhko, Oleh and Fenogenova, Alena and Tikhonova, Maria and Mikhailov, Vladislav and Kozlova, Anastasia and Shavrina, Tatiana},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, I.2; I.2.7, 68-06, 68-04, 68T50, 68T01},
  title = {mGPT: Few-Shot Learners Go Multilingual},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```


## Languages

The model supports 61 languages:

ISO codes:
```ar he vi id jv ms tl lv lt eu ml ta te hy bn mr hi ur af da en de sv fr it pt ro es el os tg fa ja ka ko th bxr xal mn sw yo be bg ru uk pl my uz ba kk ky tt az cv tr tk tyv sah et fi hu```


Languages:

```Arabic, Hebrew, Vietnamese, Indonesian, Javanese, Malay, Tagalog, Latvian, Lithuanian, Basque, Malayalam, Tamil, Telugu, Armenian, Bengali, Marathi, Hindi, Urdu, Afrikaans, Danish, English, German, Swedish, French, Italian, Portuguese, Romanian, Spanish, Greek, Ossetian, Tajik, Persian, Japanese, Georgian, Korean, Thai, Buryat, Kalmyk, Mongolian, Swahili, Yoruba, Belarusian, Bulgarian, Russian, Ukrainian, Polish, Burmese, Uzbek, Bashkir, Kazakh, Kyrgyz, Tatar, Azerbaijani, Chuvash, Turkish, Turkmen, Tuvan, Yakut, Estonian, Finnish, Hungarian```
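
All of these languages are handled by a single shared BPE vocabulary, so a quick way to get a feel for coverage is to tokenize short sentences in a few different scripts. The snippet below is a small illustration under the same assumed Hub id as above; the sample sentences are arbitrary.

```python
# Illustrative check of multilingual tokenization; "sberbank-ai/mGPT" is the
# assumed Hub id, and the sample sentences are arbitrary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/mGPT")

samples = {
    "en": "The weather is nice today.",
    "ru": "Сегодня хорошая погода.",
    "ar": "الطقس جميل اليوم.",
    "hi": "आज मौसम अच्छा है।",
}

for lang, text in samples.items():
    tokens = tokenizer.tokenize(text)
    print(f"{lang}: {len(tokens)} tokens -> {tokens[:8]}")
```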

## Training Data Statistics

- Size: 488 billion UTF characters


<img style="text-align:center; display:block;" src="https://huggingface.co/sberbank-ai/mGPT/resolve/main/stats.png" alt="General training corpus statistics">

*General training corpus statistics*


## Details
The model was trained with a sequence length of 512 using the Megatron and Deepspeed libraries by the [SberDevices](https://sberdevices.ru/) team on a dataset of 600 GB of texts in 61 languages. The model has seen 440 billion BPE tokens in total.

Total training time was around 14 days on 256 NVIDIA V100 GPUs.
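
For a rough sense of what these figures imply about throughput, the back-of-the-envelope calculation below divides the 440 billion seen tokens by the total GPU time (14 days on 256 GPUs). It is derived only from the numbers quoted above and is not a reported measurement.

```python
# Back-of-the-envelope throughput estimate from the figures quoted above;
# this is not an official measurement.
total_tokens = 440e9   # BPE tokens seen during training
num_gpus = 256         # NVIDIA V100 GPUs
days = 14              # approximate wall-clock training time

gpu_seconds = num_gpus * days * 24 * 3600
tokens_per_gpu_per_second = total_tokens / gpu_seconds
print(f"~{tokens_per_gpu_per_second:,.0f} tokens per GPU per second")
# ~1,421 tokens per GPU per second
```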