---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- tg
license:
- cc-by-4.0
multilinguality:
- monolingual
---
After I realised there were problems with automatic language identification (LangID) and with the poor quality of web-crawled text corpora for my language, I curated my own dataset. Essentially, I downloaded multiple versions of the Tajik subset of the Leipzig Corpora Collection, which comprises texts from diverse sources such as news, literature, and Wikipedia.
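The Leipzig corpora are distributed as per-corpus archives of sentence files. A minimal download sketch is below; the URL pattern and corpus identifiers are assumptions, so check the Leipzig download page for the exact names before using it:

```python
import tarfile
import urllib.request

# Assumed URL pattern and corpus identifiers -- verify against
# https://wortschatz.uni-leipzig.de/en/download before use.
BASE = "https://downloads.wortschatz-leipzig.de/corpora/{}.tar.gz"
CORPORA = ["tgk_news_2020_30K", "tgk_wikipedia_2021_30K"]  # hypothetical names

for name in CORPORA:
    # Download the archive and unpack it into a directory per corpus;
    # each archive contains a *-sentences.txt file with one sentence per line.
    path, _ = urllib.request.urlretrieve(BASE.format(name), f"{name}.tar.gz")
    with tarfile.open(path) as tar:
        tar.extractall(name)
```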
I had to do rigorous preprocessing with hard-coded heuristics and regexes, applying the steps below iteratively (a minimal sketch follows the list):
- deduplicating sentences
- removing curse words
- removing politically biased content
- removing sentences containing any English (Latin) characters
- removing words that do not exist in Tajik
- removing several hundred non-Tajik sentences
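The sketch below shows how filters like these can be composed; the file-based blocklist, the Tajik lexicon, and the helper names are illustrative assumptions, not the exact heuristics I used:

```python
import re

# Cyrillic range plus the six extra Tajik letters (Ғ Ӣ Қ Ӯ Ҳ Ҷ)
# that distinguish Tajik script from Russian.
TAJIK_CHARS = "а-яА-ЯёЁғҒӣӢқҚӯӮҳҲҷҶ"
LATIN_RE = re.compile(r"[A-Za-z]")
WORD_RE = re.compile(rf"[{TAJIK_CHARS}]+")


def load_wordlist(path):
    """Load one lowercase word per line (a blocklist or a Tajik lexicon)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}


def clean(sentences, blocklist, lexicon):
    """Yield sentences that survive all filters, deduplicated."""
    seen = set()
    for s in sentences:
        s = s.strip()
        if not s or s in seen:                  # deduplicate exact repeats
            continue
        seen.add(s)
        if LATIN_RE.search(s):                  # any English character present
            continue
        words = WORD_RE.findall(s.lower())
        if not words:                           # no Tajik script at all
            continue
        if any(w in blocklist for w in words):  # curse words / flagged terms
            continue
        if not all(w in lexicon for w in words):
            continue                            # word not attested in Tajik
        yield s
```

The lexicon check is deliberately the most aggressive filter; running the passes iteratively and inspecting what each one removes is what makes hard-coded heuristics like these workable.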