# TaMillion
This is a first attempt at a Tamil language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).

Tokenization and pre-training Colab: https://colab.research.google.com/drive/1GngBFn_Ge5Hd2XI2febBhZyU7GDiqw5w

The current version (V1) is trained for 100,000 steps.
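If the checkpoint is published on the Hugging Face Hub, it can be loaded with the `transformers` library. A minimal sketch, assuming a hypothetical repo id `your-username/tamillion` (substitute the actual upload path):

```python
from transformers import ElectraTokenizer, ElectraModel

# Hypothetical repo id -- replace with the actual upload path.
model_name = "your-username/tamillion"

tokenizer = ElectraTokenizer.from_pretrained(model_name)
model = ElectraModel.from_pretrained(model_name)

# Encode a Tamil sentence and extract contextual embeddings.
inputs = tokenizer("தமிழ் ஒரு செம்மொழி", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```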
## Corpus
Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.1 GB) and the 1 July 2020 dump of ta.wikipedia.org (476 MB).
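For reference, the OSCAR Tamil split can be pulled through the `datasets` library and written out as plain text in the one-document-per-line format that ELECTRA's `build_pretraining_dataset.py` consumes. A minimal sketch; the config name `unshuffled_deduplicated_ta` and the output path are assumptions, not the exact pipeline used here:

```python
from datasets import load_dataset

# OSCAR Tamil split (deduplicated); config name is an assumption --
# check the Hub for the exact identifier.
oscar_ta = load_dataset("oscar", "unshuffled_deduplicated_ta", split="train")

# ELECTRA's build_pretraining_dataset.py expects plain text files,
# with documents separated onto their own lines.
with open("oscar_ta.txt", "w", encoding="utf-8") as f:
    for record in oscar_ta:
        text = record["text"].replace("\n", " ").strip()
        if text:
            f.write(text + "\n")
```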
## Vocabulary
The vocabulary is included as vocab.txt in the upload; vocab_size is 40161.
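As a quick sanity check, the shipped WordPiece vocabulary can be loaded directly into an `ElectraTokenizer`. A minimal sketch, assuming vocab.txt has been downloaded to the working directory:

```python
from transformers import ElectraTokenizer

# Load the WordPiece vocabulary shipped with the model;
# the file path is an assumption -- point it at the uploaded vocab.txt.
tokenizer = ElectraTokenizer(vocab_file="vocab.txt")

print(tokenizer.vocab_size)           # should report 40161
print(tokenizer.tokenize("தமிழ் மொழி"))  # WordPiece tokens for Tamil text
```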