---
license: odc-by
---

# Zyda2-5T
Zyda2 is a 5 trillion token language modeling dataset. It was built by collecting high quality open datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
To construct Zyda2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Thanks to our post-processing pipeline of deduplication, filtering, and weighting, Zyda2 outperforms all of its constituent datasets in resulting model quality.
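To give a concrete flavor of the cross-deduplication step, below is a minimal MinHash + LSH sketch over toy in-memory documents. This is not the actual Zyda2 pipeline (see the technical blog for that); it assumes the `datasketch` library and illustrative document keys.

```python
# Toy sketch of cross-dataset fuzzy deduplication with MinHash + LSH.
# Not the actual Zyda2 pipeline; assumes `datasketch` (pip install datasketch).
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    """Hash a document's 3-word shingles into a MinHash signature."""
    m = MinHash(num_perm=num_perm)
    tokens = text.lower().split()
    for shingle in zip(tokens, tokens[1:], tokens[2:]):
        m.update(" ".join(shingle).encode("utf-8"))
    return m

# Index documents from one component dataset...
lsh = MinHashLSH(threshold=0.8, num_perm=128)
corpus_a = {"a0": "the quick brown fox jumps over the lazy dog today"}
for key, text in corpus_a.items():
    lsh.insert(key, minhash(text))

# ...then drop near-duplicates found in another component dataset.
corpus_b = {
    "b0": "the quick brown fox jumps over the lazy dog today",
    "b1": "an entirely different document about open language modeling data",
}
deduped_b = {k: t for k, t in corpus_b.items() if not lsh.query(minhash(t))}
print(list(deduped_b))  # only "b1" survives
```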
An early version of Zyda2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series of models, which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda2 as a pretraining dataset.
According to our evaluations, Zyda2 is the most performant per-token open dataset available. Zyda2 excels at educational and natural-language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as StarCoder, as sketched below.
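One way to do such mixing is with `interleave_datasets` from the Hugging Face `datasets` library. In this hedged sketch, the repo ids (`Zyphra/Zyda2`, `bigcode/starcoderdata`), the column rename, and the 80/20 sampling ratio are illustrative assumptions, not tuned recommendations.

```python
# Sketch: mix Zyda2 with a pure code dataset by sampling 80/20 at the
# document level. Repo ids, column names, and the ratio are assumptions.
from datasets import load_dataset, interleave_datasets

zyda2 = load_dataset("Zyphra/Zyda2", split="train",
                     streaming=True).select_columns(["text"])
code = (load_dataset("bigcode/starcoderdata", split="train", streaming=True)
        .rename_column("content", "text")   # assumed column name
        .select_columns(["text"]))

mixed = interleave_datasets([zyda2, code], probabilities=[0.8, 0.2], seed=42)
for sample in mixed.take(3):
    print(sample["text"][:80])
```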
// TODO Ablation scores key plots
For more information, please see our technical blog (TODO: link).
## How to download
// TODO YURY
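Until the official instructions land here, a minimal sketch with the Hugging Face `datasets` library would look like the following; the repo id `Zyphra/Zyda2` is an assumption (check this card's URL), and streaming avoids materializing all ~5T tokens on disk.

```python
# Minimal download sketch. The repo id "Zyphra/Zyda2" is an assumption.
from datasets import load_dataset

ds = load_dataset("Zyphra/Zyda2", split="train", streaming=True)
print(next(iter(ds))["text"][:200])  # peek at the first document
```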
## Breakdown by component
// TODO YURY
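Until the official breakdown lands here, an approximate breakdown can be computed by tallying the `source` field (documented under Dataset Structure below) over a streamed sample; a hypothetical sketch:

```python
# Approximate the per-component breakdown by tallying `source` over a sample.
# The repo id and the 100k-document sample size are illustrative assumptions.
from collections import Counter
from itertools import islice
from datasets import load_dataset

ds = load_dataset("Zyphra/Zyda2", split="train", streaming=True)
counts = Counter(sample["source"] for sample in islice(ds, 100_000))
total = sum(counts.values())
for component, n in counts.most_common():
    print(f"{component}: {n / total:.1%}")
```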
## Dataset Description
- Curated by: Zyphra
- Language(s) (NLP): Primarily English
- License: Open Data Commons License
## Dataset Structure
// TODO IS THIS CORRECT YURY?
Dataset fields:

- `text`: the actual text used for training
- `source`: the component dataset the document comes from
- `filtering_features`: precomputed values of the different features that were used for filtering (converted to a JSON string)
- `source_other`: metadata from the source dataset (converted to a JSON string)
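A short sketch of reading these fields from a single sample; the repo id is an assumption, and `filtering_features` / `source_other` are decoded from JSON strings as described above.

```python
# Read the documented fields from one streamed sample.
import json
from datasets import load_dataset

ds = load_dataset("Zyphra/Zyda2", split="train", streaming=True)
sample = next(iter(ds))

print(sample["source"])                              # originating component
features = json.loads(sample["filtering_features"])  # JSON string -> dict
metadata = json.loads(sample["source_other"])        # JSON string -> dict
print(sorted(features), sorted(metadata))
```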
## Source Data

Zyda2 is comprised of four high-quality open-source datasets:

- Zyda1: https://huggingface.co/datasets/Zyphra/Zyda
- Dolma-1.7-cc: https://huggingface.co/datasets/allenai/dolma
- DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- FineWeb-Edu-2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
// Pie chart of composition -- YURY!
## Personal and Sensitive Information
As a language modeling dataset, Zyda2 likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.
## Bias, Risks, and Limitations
Since Zyda2 is comprised of open web scrapes, it likely contains biased and toxic content.
## Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Citation
If you use our dataset to train a model, please cite us:
@misc{tokpanov2024zyda,
title={Zyda: A 1.3T Dataset for Open Language Modeling},
author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
year={2024},
eprint={2406.01981},
archivePrefix={arXiv},
primaryClass={cs.CL}
}