Deduped dataset across all CC dumps or within each dump?

#26 opened by riturajj

I want to know whether the deduplication and filtering pipeline was run on all dumps combined, or separately on each dump.
Say I want to train a model on all 15T tokens: should I use the dataset as is, or should I first run the dedup pipeline across all dumps combined and then train on the result? A sketch of what I mean by the latter is below.
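
For concreteness, "running the dedup pipeline combining all dumps" would look something like the following. This is only an illustrative sketch using the datasketch library as a stand-in for fuzzy MinHash deduplication, not the actual FineWeb pipeline; the dump layout, shingle size, and threshold are assumptions.

```python
# Sketch: cross-dump fuzzy deduplication with MinHash LSH.
# NOT the FineWeb pipeline; only illustrates deduplicating documents
# across several dumps at once instead of within each dump.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128   # number of MinHash permutations (assumption)
THRESHOLD = 0.8  # Jaccard similarity above which docs count as duplicates

def minhash(text: str) -> MinHash:
    """Hash a document's 5-gram character shingles into a MinHash."""
    m = MinHash(num_perm=NUM_PERM)
    for shingle in {text[i:i + 5] for i in range(max(len(text) - 4, 1))}:
        m.update(shingle.encode("utf-8"))
    return m

def dedupe_across_dumps(dumps):
    """dumps: iterable of (dump_id, iterable_of_document_texts)."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for dump_id, docs in dumps:
        for i, doc in enumerate(docs):
            m = minhash(doc)
            if lsh.query(m):  # a near-duplicate was already kept
                continue
            lsh.insert(f"{dump_id}/{i}", m)
            kept.append((dump_id, i))
    return kept
```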


Following the thread. I'm very interested in this statement:

> While we originally intended to deduplicate the dataset as a whole, our ablations showed that training on a sampling of individually deduplicated dumps/crawls outperformed training on a sampling of all the dumps/crawls deduplicated together. We will discuss this further in our technical report.

I was not able to find the mentioned technical report attached to the dataset card.
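
In the meantime, here is a minimal sketch of the "use the dataset as is" option: streaming a few of the individually deduplicated per-dump configs and interleaving them with the datasets library. The specific config names and the "text" field below are assumptions; check the dataset card for the actual list of available configs.

```python
# Sketch: sampling from individually deduplicated dumps by streaming
# per-dump configs and interleaving them. Config names are assumed
# examples of CC dump naming, not a verified list.
from datasets import interleave_datasets, load_dataset

dump_names = ["CC-MAIN-2024-10", "CC-MAIN-2023-50"]  # hypothetical subset
dumps = [
    load_dataset("HuggingFaceFW/fineweb", name=n, split="train", streaming=True)
    for n in dump_names
]
mixed = interleave_datasets(dumps)

# Peek at a few interleaved examples.
for example in mixed.take(3):
    print(example["text"][:80])
```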

guipenedo (HuggingFaceFW org) changed discussion status to closed
