ai-forever committed
Commit cb99dd4 • Parent(s): 9f49a85
Update README.md
README.md CHANGED
@@ -1,68 +1,68 @@
---
license: apache-2.0
language:
-- en
-- az
-- sw
-- af
- ar
- hy
- da
- de
-- es
-- eu
-- fa
-- fi
- fr
-- he
-- hi
-- hu
-- kk
-- id
- it
- ja
- ka
-- ky
- ko
- mn
-- my
-- nl
-- ro
-- pl
-- pt
-- sah
- ru
-- tg
-- sv
-- ta
-- te
-- tk
-- th
-- tr
-- tl
-- tt
-- tyv
- uk
-- vi
- uz
pipeline_tag: text-generation
tags:
- multilingual

@@ -80,7 +80,7 @@ thumbnail: "https://github.com/sberbank-ai/mgpt"

# Multilingual GPT model

-We introduce a family of autoregressive GPT-like models with 1.3 billion parameters trained on

We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.

@@ -118,15 +118,15 @@ The source code for the mGPT XL model is available on [Github](https://github.co

## Languages

-Model supports

ISO codes:
-```

Languages:

-```

## Training Data Statistics

@@ -138,6 +138,6 @@ Languages:

## Details
-The model was trained with sequence length 512 using Megatron and Deepspeed libs by [SberDevices](https://sberdevices.ru/) team on a dataset of 600 GB of texts in

-Total training time was around

---
license: apache-2.0
language:
- ar
+- he
+- vi
+- id
+- jv
+- ms
+- tl
+- lv
+- lt
+- eu
+- ml
+- ta
+- te
- hy
+- bn
+- mr
+- hi
+- ur
+- af
- da
+- en
- de
+- sv
- fr
- it
+- pt
+- ro
+- es
+- el
+- os
+- tg
+- fa
- ja
- ka
- ko
+- th
+- bxr
+- xal
- mn
+- sw
+- yo
+- be
+- bg
- ru
- uk
+- pl
+- my
- uz
+- ba
+- kk
+- ky
+- tt
+- az
+- cv
+- tr
+- tk
+- tyv
+- sax
+- et
+- fi
+- hu
pipeline_tag: text-generation
tags:
- multilingual


# Multilingual GPT model

+We introduce a family of autoregressive GPT-like models with 1.3 billion parameters trained on 61 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus.

We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.

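As a quick illustration of how a text-generation checkpoint like this is typically used, here is a minimal sketch with the Hugging Face Transformers API. The repository id `ai-forever/mGPT` and the generation settings are assumptions for the example, not something stated in this commit.

```python
# Minimal usage sketch (assumes the checkpoint is published as "ai-forever/mGPT";
# the generation parameters are illustrative, not prescribed by the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-forever/mGPT"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The mGPT model can continue text in many languages, for example:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,   # length of the generated continuation
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
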

## Languages

+The model supports 61 languages:

ISO codes:
+```ar he vi id jv ms tl lv lt eu ml ta te hy bn mr hi ur af da en de sv fr it pt ro es el os tg fa ja ka ko th bxr xal mn sw yo be bg ru uk pl my uz ba kk ky tt az cv tr tk tyv sax et fi hu```

Languages:

+```Arabic, Hebrew, Vietnamese, Indonesian, Javanese, Malay, Tagalog, Latvian, Lithuanian, Basque, Malayalam, Tamil, Telugu, Armenian, Bengali, Marathi, Hindi, Urdu, Afrikaans, Danish, English, German, Swedish, French, Italian, Portuguese, Romanian, Spanish, Greek, Ossetian, Tajik, Persian, Japanese, Georgian, Korean, Thai, Buryat, Kalmyk, Mongolian, Swahili, Yoruba, Belarusian, Bulgarian, Russian, Ukrainian, Polish, Burmese, Uzbek, Bashkir, Kazakh, Kyrgyz, Tatar, Azerbaijani, Chuvash, Turkish, Turkmen, Tuvan, Yakut, Estonian, Finnish, Hungarian```

## Training Data Statistics


## Details
+The model was trained with a sequence length of 512 using the Megatron and Deepspeed libraries by the [SberDevices](https://sberdevices.ru/) team on a dataset of 600 GB of texts in 61 languages. The model has seen 440 billion BPE tokens in total.

+Total training time was around 14 days on 256 Nvidia V100 GPUs.
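For a rough sense of scale, the figures above imply the following overall token throughput; this is back-of-envelope arithmetic on the numbers stated in the card, not a measured benchmark.

```python
# Back-of-envelope throughput implied by the figures above
# (440B BPE tokens, ~14 days, 256 V100 GPUs); illustrative arithmetic only.
total_tokens = 440e9
training_seconds = 14 * 24 * 3600
num_gpus = 256

cluster_tps = total_tokens / training_seconds  # ~3.6e5 tokens/s across the cluster
per_gpu_tps = cluster_tps / num_gpus           # ~1.4e3 tokens/s per GPU

print(f"~{cluster_tps:,.0f} tokens/s overall, ~{per_gpu_tps:,.0f} tokens/s per GPU")
```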