
Hindi Language Pre-Training LLM Datasets Overview

Welcome to the Hindi Language Pre-Training Datasets repository! This README provides a comprehensive overview of various pre-training datasets available for Hindi, including essential details such as licenses, sources, and statistical information. These datasets are invaluable resources for training and fine-tuning large language models (LLMs) for a wide range of natural language processing (NLP) tasks.

Data Overview and Statistics

The sections below summarize each pre-training dataset, including its license, source, and statistical breakdown (token and sentence counts).

1. Wikipedia Dataset

License: cc-by-sa-3.0

Source: https://huggingface.co/datasets/wikimedia/wikipedia/viewer/20231101.hi

Total Token Count: 43.67 million

Total Sentence Count: 1.85 million

The Wikipedia Dataset, with its extensive coverage of general knowledge topics, provides a diverse range of textual data suitable for pre-training large-scale language models in Hindi. Its wide-ranging subject matter offers valuable context for understanding various real-world scenarios, making it an essential resource for building robust and comprehensive pre-trained LLMs.
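
The dump can be pulled straight from the Hugging Face Hub with the `datasets` library. A minimal loading sketch, assuming the `20231101.hi` configuration named in the source link above and a recent `datasets` release:

```python
from datasets import load_dataset

# Load the Hindi Wikipedia dump (20231101.hi configuration from the source link above).
wiki_hi = load_dataset("wikimedia/wikipedia", "20231101.hi", split="train")

print(wiki_hi)                   # number of articles and column names
print(wiki_hi[0]["title"])       # title of the first article
print(wiki_hi[0]["text"][:200])  # first 200 characters of its body
```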

2. Dialecthindi Dataset

License: Not Applicable

Source: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-4839

Total Token Count: 0.46 million

Total Sentence Count: 0.06 million

The Dialecthindi dataset, consisting of book excerpts and articles, reflects the diversity of language use across contexts. Incorporating it into pre-training can improve a model's ability to understand and produce text in different Hindi dialects and registers, increasing its linguistic flexibility and adaptability to different communication styles.

3. ai4bharat IndicParaphrase

License: cc-by-nc-4.0

Source: https://huggingface.co/datasets/ai4bharat/IndicParaphrase

Total Token Count: 55.67 million

Total Sentence Count: 5.86 million

The ai4bharat IndicParaphrase dataset provides a valuable resource for training language models to understand and generate Hindi paraphrases. Using this dataset during pre-training helps LLMs develop a deeper understanding of semantic equivalence and paraphrase transformation, improving performance on tasks such as text generation, summarization, and question answering.
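
A minimal loading sketch; the `hi` configuration name is an assumption based on the per-language configurations listed on the dataset page, so it is safest to inspect the returned splits and columns first:

```python
from datasets import load_dataset

# Load the Hindi configuration of IndicParaphrase ("hi" config name assumed from the dataset page).
paraphrase_hi = load_dataset("ai4bharat/IndicParaphrase", "hi")

print(paraphrase_hi)  # shows the available splits, row counts, and column names
```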

4. MIRACL Corpus

License: apache-2.0

Source: https://huggingface.co/datasets/miracl/miracl-corpus/viewer/hi

Total Token Count: 33.66 million

Total Sentence Count: 2.04 million

The MIRACL Corpus consists of Wikipedia-derived passages prepared for multilingual information retrieval. Incorporating it during pre-training exposes the model to well-formed encyclopedic Hindi text and can improve performance on retrieval-oriented tasks such as passage ranking and question answering.
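
A minimal loading sketch, assuming the `hi` configuration from the source link, a single `train` split, and the `title`/`text` columns shown on the dataset card:

```python
from datasets import load_dataset

# Load the Hindi passages of the MIRACL corpus ("hi" configuration from the source link above).
miracl_hi = load_dataset("miracl/miracl-corpus", "hi", split="train")

# Combine the passage title and body into a single pre-training document.
# ("title" and "text" column names are assumptions based on the dataset card.)
doc = miracl_hi[0]["title"] + "\n" + miracl_hi[0]["text"]
print(doc[:200])
```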

5. OSCAR

License: cc0-1.0

Source: https://huggingface.co/datasets/oscar/viewer/unshuffled_original_hi

Total Token Count: 745.99 million

Total Sentence Count: 27.12 million

OSCAR is a large web-crawled corpus derived from Common Crawl, and its Hindi portion is by far the largest resource in this collection. Using it during pre-training exposes the model to a broad variety of naturally occurring web text, improving general language modeling ability and downstream performance on tasks such as text generation, translation, and summarization.
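
Because this is the largest entry in the collection (~746 million tokens), streaming avoids downloading the whole corpus before training. A minimal sketch, assuming the `unshuffled_original_hi` configuration from the source link; recent `datasets` releases may also require `trust_remote_code=True` for script-based datasets such as `oscar`:

```python
from datasets import load_dataset
from itertools import islice

# Stream the Hindi portion of OSCAR instead of downloading ~746M tokens up front.
oscar_hi = load_dataset(
    "oscar",
    "unshuffled_original_hi",
    split="train",
    streaming=True,
    trust_remote_code=True,  # may be required for script-based Hub datasets
)

# Peek at the first three documents without materializing the corpus.
for example in islice(oscar_hi, 3):
    print(example["text"][:120])
```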

6. bigscience xP3all

License: apache-2.0

Source: https://huggingface.co/datasets/bigscience/xP3all/viewer/hi

Total Token Count: 395.32 million

Total Sentence Count: 21.86 million

The bigscience xP3all dataset is a multilingual collection of prompts paired with target completions across a wide range of NLP tasks. Incorporating its Hindi portion during pre-training exposes the model to instruction-style data and can improve its performance on understanding, summarization, and question-answering tasks.
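
Because each record pairs a prompt with a completion, the subset can be flattened into plain pre-training text. A minimal sketch, assuming the `hi` configuration and the `inputs`/`targets` column names used across the xP3 family:

```python
from datasets import load_dataset

# Load the Hindi subset of xP3all ("hi" config and "inputs"/"targets" columns assumed).
xp3_hi = load_dataset("bigscience/xP3all", "hi", split="train")

def to_pretraining_text(example):
    # Concatenate the prompt and its completion into a single training document.
    return {"text": example["inputs"].strip() + "\n" + example["targets"].strip()}

pretrain_docs = xp3_hi.map(to_pretraining_text, remove_columns=xp3_hi.column_names)
print(pretrain_docs[0]["text"][:200])
```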

Each dataset provides unique linguistic information suitable for training large-scale language models, contributing to the advancement of Indic language processing. With a combined total of 1.27 billion tokens and 58.79 million sentences, this collection provides an extensive resource for researchers and developers building robust, context-aware LLMs for specific language domains and applications.
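
The reported statistics can be approximated with simple counting rules. The sketch below uses whitespace tokens and splits sentences on the Devanagari danda (।) and ASCII terminators, so it will not match the reported figures exactly if a different tokenizer was used; it also reproduces the combined totals quoted above:

```python
import re

def corpus_stats(texts):
    """Return (token_count, sentence_count) for an iterable of documents.

    Tokens are whitespace-separated chunks; sentences are split on the
    Devanagari danda (।) and on ASCII sentence terminators. Both are rough
    approximations of however the figures above were actually computed.
    """
    tokens, sentences = 0, 0
    for text in texts:
        tokens += len(text.split())
        sentences += len([s for s in re.split(r"[।.!?]+", text) if s.strip()])
    return tokens, sentences

# Aggregate the per-dataset totals reported in this README (values in millions).
token_counts = [43.67, 0.46, 55.67, 33.66, 745.99, 395.32]
sentence_counts = [1.85, 0.06, 5.86, 2.04, 27.12, 21.86]
print(f"{sum(token_counts) / 1000:.2f} billion tokens")   # ≈ 1.27 billion
print(f"{sum(sentence_counts):.2f} million sentences")    # ≈ 58.79 million
```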

Contributors

  • Shantipriya Parida
  • Shakshi Panwar
  • Kusum Lata
  • Sanskruti Mishra
  • Sambit Sekhar

Citation

If you find this repository useful, please consider giving 👏 and citing:

@misc{Hindi_LLM_Corpus,
  author = {Shantipriya Parida and Shakshi Panwar and Kusum Lata and Sanskruti Mishra and Sambit Sekhar},
  title = {BUILDING PRE-TRAIN LLM DATASET FOR THE INDIC LANGUAGES: A CASE STUDY ON HINDI},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

CC BY-NC-SA 4.0
