🇦🇿 Azerbaijani mGPT 1.3B
Language model for the Azerbaijani language. The model has 1.3B parameters, as you can guess from its name.
Azerbaijani belongs to the Turkic language family. It's a very melodious language with approximately 23 million speakers. Here are some facts about it:
- It is closely related to Turkish.
- Historically, it has been written in both Latin and Arabic scripts.
- It is the official language of the Republic of Azerbaijan.
Technical details
It's one of the models derived from the base mGPT-XL (1.3B) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.
We found additional data for 23 languages, most of which are considered low-resource, and decided to fine-tune the base model further. Azerbaijani mGPT 1.3B was trained for another 70,000 steps with batch_size=4 and a context window of 2048 tokens on a single A100.
The final perplexity of this model on the validation set is 5.37.
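Perplexity here is the exponential of the mean next-token cross-entropy on held-out text. Below is a minimal sketch of how such a number can be reproduced with the transformers library, assuming the checkpoint is published as `ai-forever/mGPT-1.3B-azerbaijan` (check the model page for the exact repo id):

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ai-forever/mGPT-1.3B-azerbaijan"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Any held-out Azerbaijani text; truncated to the 2048-token context window.
text = "Azərbaycan dili türk dilləri ailəsinə aiddir."
enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)

with torch.no_grad():
    # With labels=input_ids the model returns the mean cross-entropy
    # of its next-token predictions over this text.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.2f}")
```

Note that the reported 5.37 is measured over the full validation set, so a single short sample will give a noisier number.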
(Chart of the training loss and perplexity)
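To generate text with the model, you can use the standard causal-LM workflow from transformers. A minimal sketch, again assuming the `ai-forever/mGPT-1.3B-azerbaijan` repo id; the sampling parameters are illustrative, not tuned:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ai-forever/mGPT-1.3B-azerbaijan"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Azerbaijani prompt: "Baku is the capital of Azerbaijan and"
prompt = "Bakı Azərbaycanın paytaxtıdır və"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```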
Other mGPT-1.3B models
- 🇦🇲 mGPT-1.3B Armenian
- mGPT-1.3B Bashkir
- 🇧🇾 mGPT-1.3B Belarusian
- 🇧🇬 mGPT-1.3B Bulgarian
- mGPT-1.3B Buryat
- mGPT-1.3B Chuvash
- 🇬🇪 mGPT-1.3B Georgian
- mGPT-1.3B Kalmyk
- 🇰🇿 mGPT-1.3B Kazakh
- 🇰🇬 mGPT-1.3B Kyrgyz
- mGPT-1.3B Mari
- 🇲🇳 mGPT-1.3B Mongolian
- mGPT-1.3B Ossetian
- 🇮🇷 mGPT-1.3B Persian
- 🇷🇴 mGPT-1.3B Romanian
- 🇹🇯 mGPT-1.3B Tajik
- mGPT-1.3B Tatar
- 🇹🇲 mGPT-1.3B Turkmen
- mGPT-1.3B Tuvan
- 🇺🇦 mGPT-1.3B Ukrainian
- 🇺🇿 mGPT-1.3B Uzbek
- mGPT-1.3B Yakut
Feedback
If you find a bug or have additional data to train the model on your language, please give us feedback.
The model will be improved over time. Stay tuned!