arXiv:2406.01981

Zyda: A 1.3T Dataset for Open Language Modeling

Published on Jun 4, 2024

Abstract

The size of large language models (LLMs) has scaled dramatically in recent years and their computational and data requirements have surged correspondingly. State-of-the-art language models, even at relatively smaller sizes, typically require training on at least a trillion tokens. This rapid advancement has eclipsed the growth of open-source datasets available for large-scale LLM pretraining. In this paper, we introduce Zyda (Zyphra Dataset), a dataset under a permissive license comprising 1.3 trillion tokens, assembled by integrating several major respected open-source datasets into a single, high-quality corpus. We apply rigorous filtering and deduplication processes, both within and across datasets, to maintain and enhance the quality derived from the original datasets. Our evaluations show that Zyda not only competes favorably with other open datasets like Dolma, FineWeb, and RefinedWeb, but also substantially improves the performance of comparable models from the Pythia suite. Our rigorous data processing methods significantly enhance Zyda's effectiveness, outperforming even the best of its constituent datasets when used independently.
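The intra- and inter-dataset deduplication mentioned above can be illustrated with a minimal MinHash + LSH sketch using the `datasketch` library. This is purely illustrative: the shingle size, permutation count, and similarity threshold below are placeholder assumptions, not the pipeline actually used to build Zyda.

```python
# Illustrative only: cross-dataset near-duplicate filtering with MinHash + LSH.
# Not the Zyda pipeline; num_perm, shingle size, and threshold are arbitrary choices.
from datasketch import MinHash, MinHashLSH

def signature(text: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    """MinHash signature over character shingles of a document."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(text) - shingle + 1, 1)):
        m.update(text[i:i + shingle].encode("utf-8"))
    return m

def cross_dedup(corpora: dict, threshold: float = 0.8) -> list:
    """Keep the first occurrence of each near-duplicate document across corpora."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for name, docs in corpora.items():
        for i, doc in enumerate(docs):
            sig = signature(doc)
            if not lsh.query(sig):              # no near-duplicate seen so far
                lsh.insert(f"{name}:{i}", sig)
                kept.append((name, doc))
    return kept

corpora = {
    "corpus_a": ["the quick brown fox jumps over the lazy dog",
                 "a completely different document"],
    "corpus_b": ["the quick brown fox jumps over the lazy dog!"],  # near-duplicate of corpus_a[0]
}
print(len(cross_dedup(corpora)))  # typically 2: the near-duplicate is dropped
```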

Community

I don't understand the "trend" in highlighting the number of tokens:

  • It is never clear what a "token" is: a real "tokenized" word, a subword token, or a whitespace-separated word
  • The count depends heavily on the specific tokenizer (why would you want that?); a quick sketch after this comment illustrates the point
  • Does anyone have a good intuition for what 1T is? How many billions or trillions of tokens is the current English Wikipedia dump?

Why not also report the actual dataset size (e.g. in GB), as was nicely done in the BERT and RoBERTa papers (in the good old times)?
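To make the tokenizer-dependence point concrete, here is a quick sketch (assuming the Hugging Face `transformers` library; the GPT-2 and BERT tokenizers are just examples, not necessarily the tokenizer used to count Zyda's 1.3T tokens):

```python
# Quick illustration: the "token" count for the same text depends on the tokenizer.
# Assumes the `transformers` library and access to the model hub.
from transformers import AutoTokenizer

text = "Zyda is a 1.3 trillion token dataset for open language modeling."

for name in ["gpt2", "bert-base-uncased"]:
    tok = AutoTokenizer.from_pretrained(name)
    print(f"{name}: {len(tok(text)['input_ids'])} tokens")

print(f"whitespace split: {len(text.split())} words")
# The three counts differ, so "1.3T tokens" is only meaningful relative to
# the tokenizer used to measure it.
```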

Models citing this paper: 2

Datasets citing this paper: 1

Spaces citing this paper: 0

Collections including this paper: 0
