---
license: cc
pretty_name: DCLM-Deduped
task_categories:
  - text-generation
language:
  - en
size_categories:
  - n>1T
---

# DCLM-Deduped

DCLM is a recently released, high-quality dataset that uses model-based quality filtering to select a large subset of Common Crawl for similarity to OpenHermes and other instruction-tuning datasets. For reference, see the DCLM paper.
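For readers unfamiliar with model-based quality filtering, the general pattern is to score every document with a lightweight text classifier and keep only documents above a threshold. The sketch below is purely illustrative and is not the DCLM pipeline; the classifier file, label names, and threshold are all assumptions.

```python
# Illustrative sketch of classifier-based quality filtering (not the actual DCLM pipeline).
# The model path, label names, and threshold below are hypothetical.
import fasttext

QUALITY_THRESHOLD = 0.5  # hypothetical cutoff on the classifier score

model = fasttext.load_model("quality_classifier.bin")  # hypothetical classifier file

def keep_document(text: str) -> bool:
    """Return True if the classifier scores this document as high quality."""
    # fastText's predict() rejects newlines, so collapse the document to one line.
    labels, probs = model.predict(text.replace("\n", " "))
    score = probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]
    return score >= QUALITY_THRESHOLD
```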

The original authors of DCLM did not deduplicate their dataset and claimed that deduplication did not improve performance. Nevertheless, when performing our own deduplication of DCLM for Zyda-2, we noticed that DCLM contained a large fraction of duplicates. Specifically, the dataset appears to consist of approximately 80% duplicates.
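For context, near-duplicate detection at this scale is commonly done with MinHash signatures and locality-sensitive hashing. The sketch below is a minimal single-machine illustration of that idea using the `datasketch` library; it is not the deduplication pipeline actually used for Zyda-2, and the signature size and similarity threshold are assumptions.

```python
# Minimal sketch of MinHash/LSH near-duplicate removal (illustrative only;
# not the pipeline used to deduplicate DCLM for Zyda-2).
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128           # hashes per signature (assumed)
JACCARD_THRESHOLD = 0.8  # similarity above which documents count as duplicates (assumed)

def minhash_of(text: str) -> MinHash:
    sig = MinHash(num_perm=NUM_PERM)
    for token in text.lower().split():
        sig.update(token.encode("utf-8"))
    return sig

def deduplicate(docs):
    """Keep the first document of each near-duplicate cluster."""
    lsh = MinHashLSH(threshold=JACCARD_THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for i, doc in enumerate(docs):
        sig = minhash_of(doc)
        if lsh.query(sig):       # a similar document was already kept
            continue
        lsh.insert(str(i), sig)
        kept.append(doc)
    return kept
```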

This lack of impact on downstream performance despite such a large proportion of duplicates is perplexing, but we replicated it in our own ablations. Training on the full dataset amounts to roughly 5 epochs, on average, over the deduplicated 'core' of DCLM (about 3,855B baseline tokens over about 750B unique tokens), and this does not appear to harm language modelling as measured by evaluations. Nevertheless, the full impact of this level of duplication on language models beyond evaluation scores remains unclear.

As such, we release a fully deduplicated version of DCLM in case it is of interest to the community. DCLM-Deduped consists of approximately 750B tokens. If you plan to pretrain on fewer DCLM tokens than this, it is perhaps safer to use this version than the original DCLM.

## How to download

-/ TODO YURY
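Until the official instructions land here, a typical way to access the data is the Hugging Face `datasets` library. The sketch below assumes the dataset is hosted at `Zyphra/dclm-dedup` with a `train` split, and uses streaming so the full ~750B-token corpus is not downloaded up front.

```python
# Sketch of loading the dataset with the Hugging Face `datasets` library.
# The repository id "Zyphra/dclm-dedup" and the split name are assumptions based on this card.
from datasets import load_dataset

ds = load_dataset("Zyphra/dclm-dedup", split="train", streaming=True)

# Inspect a few records without materializing the dataset on disk.
for example in ds.take(3):
    print(example)
```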

## Breakdown by component

| Dataset | Documents (millions) | GPT-NeoX tokens (billions) |
| --- | --- | --- |
| DCLM baseline | 2949.3 | 3854.9 |
| DCLM full-deduped | 615.2 | 750.3 |

## Dataset Description

- Curated by: Zyphra (deduplicated from DCLM)
- Language(s) (NLP): Primarily English
- License: CC-BY-4.0

## Licensing Information

We are releasing this dataset under the terms of CC-BY-4.0, the same license as the original DCLM dataset.