Hugging Face TB Research

AI & ML interests

Exploring synthetic datasets generated by Large Language Models (TB stands for Textbook, inspired by the "Textbooks Are All You Need" paper)

HuggingFaceTB

This is the home of small LLMs (SmolLM) and high-quality pre-training datasets, such as Cosmopedia and SmolLM-Corpus.

We released:

  • Cosmopedia: the largest open synthetic dataset, with 25B tokens and more than 30M samples of synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
  • Cosmo-1B: a 1B-parameter model trained on Cosmopedia.
  • FineWeb-Edu: a version of the FineWeb dataset filtered for educational content.
  • SmolLM-Corpus: the pre-training corpus of the SmolLM models, including Cosmopedia v0.2, FineWeb-Edu, and Python-Edu.
  • SmolLM and SmolLM2: series of strong small models in three sizes: 135M, 360M, and 1.7B parameters (see the loading sketch below).

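These datasets and models are hosted on the Hugging Face Hub and can be loaded with the standard datasets and transformers libraries. The snippet below is a minimal sketch: the repo IDs, the "stories" subset name, and the prompt are assumptions based on the public dataset and model cards, so check the individual pages for the exact identifiers.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stream a Cosmopedia subset rather than downloading the full 25B-token dataset.
# The repo ID and "stories" config name are assumptions; see the dataset card.
cosmopedia = load_dataset(
    "HuggingFaceTB/cosmopedia", "stories", split="train", streaming=True
)
print(next(iter(cosmopedia))["text"][:200])

# Load a SmolLM2 checkpoint (the repo ID is an assumption; see the model card).
model_id = "HuggingFaceTB/SmolLM2-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Gravity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
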
For more details, check our blog posts: https://huggingface.co/blog/cosmopedia and https://huggingface.co/blog/smollm