yury-zyphra committed
Commit 9ab45af (1 parent: d18591b)

Update README.md

Files changed (1): README.md (+27 −8)
README.md CHANGED
 
language:
- en
size_categories:
- 100B<n<1T
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*/*
---

# DCLM-Deduped

[DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) is a recently released high-quality dataset that uses model-based quality filtering to select a large subset of Common Crawl for similarity to OpenHermes and other instruction-tuning datasets. For reference, see the [DCLM paper](https://arxiv.org/pdf/2406.11794).

The original authors of DCLM did not release a fully deduplicated version of their dataset, claiming that full deduplication did not improve performance. The released version was only partially deduplicated within shards.

Nevertheless, when performing our own deduplication of DCLM for [Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2), we noticed that DCLM contained a large fraction of duplicates. Specifically, the dataset appears to consist of approximately 80% duplicates.

We also analyzed the clusters of duplicates and found a sharp drop-off in the number of clusters larger than 100 documents, although some clusters still contain an extreme number of duplicates (up to a million); see the figure below.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65455aca468722e935103b17/0SCG4UnFE2ADQXKl9HCx9.png)

This lack of impact on downstream performance, given such a large duplication proportion, is perplexing. However, we replicated it in our own ablations: performing, on average, 5 epochs over the DCLM 'core dataset' does not appear to harm language modelling. Nevertheless, the full impact of this level of duplication on language models, beyond evaluation scores, remains unclear.

As such, we release a fully deduplicated version of DCLM in case it is of interest to the community. DCLM-Deduped consists of approximately 750B tokens. If you plan to pretrain on fewer DCLM tokens than this, it is perhaps safer to use this version than the original DCLM.

## Breakdown by component

| Dataset | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- |
| DCLM baseline | 2949.3 | 3854.9 |
| DCLM full-deduped | 615.2 | 750.3 |

The fully downloaded dataset is roughly 2 TB in size in Parquet format.
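For reference, the token counts in the table above are in gpt-neox tokens. Below is a minimal sketch of counting them for a single document, assuming the `EleutherAI/gpt-neox-20b` tokenizer from `transformers`; the exact counting pipeline used for the table is not specified here.

```python
from transformers import AutoTokenizer

# GPT-NeoX tokenizer, assumed to correspond to the "gpt-neox tokens" column above
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

doc = "Example document text from DCLM."
num_tokens = len(tokenizer(doc)["input_ids"])
print(num_tokens)
```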

## How to download

To download, one can use the `datasets` library directly:

```python
import datasets

# "Zyphra/dclm-dedup" is this dataset's repository on the Hugging Face Hub
ds = datasets.load_dataset("Zyphra/dclm-dedup", split="train")
```
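Given that the full dataset is roughly 2 TB, streaming may be preferable when a full local copy is not needed. A minimal sketch using the streaming mode of `datasets`, assuming the same repository ID as above and a `text` column as in DCLM:

```python
import datasets

# Stream records instead of downloading all of the Parquet shards up front
ds = datasets.load_dataset("Zyphra/dclm-dedup", split="train", streaming=True)

for i, doc in enumerate(ds):
    print(doc["text"][:200])  # assumes a "text" column, as in DCLM
    if i >= 2:
        break
```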
 
## Deduplication Details

We deduplicated DCLM using the approximate MinHash LSH method implemented in NeMo Curator, with the following parameters: MinHash signatures of size 128 computed on character-based 25-grams and split into 8 bands, giving a Jaccard similarity threshold of roughly 85%. We then constructed an undirected graph with documents as nodes and duplicate pairs as edges, and found its connected components, which gave us clusters of duplicates. From each cluster, we selected a random document to keep and removed the rest.
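For illustration only, here is a minimal sketch of this style of MinHash LSH deduplication using the `datasketch` and `networkx` libraries rather than NeMo Curator. The parameters mirror those above (128 permutations, character-based 25-grams, a roughly 0.85 Jaccard threshold), while the toy documents and the rule for picking which document to keep are purely hypothetical.

```python
import networkx as nx
from datasketch import MinHash, MinHashLSH

# Hypothetical toy corpus: {doc_id: text}
docs = {
    "a": "the quick brown fox jumps over the lazy dog " * 5,
    "b": "the quick brown fox jumps over the lazy dog " * 5,
    "c": "an entirely different document about deduplication",
}

def signature(text, num_perm=128, ngram=25):
    """MinHash signature over character-based 25-grams."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(text) - ngram + 1, 1)):
        m.update(text[i:i + ngram].encode("utf-8"))
    return m

# LSH index targeting a ~0.85 Jaccard threshold (banding is chosen internally here)
lsh = MinHashLSH(threshold=0.85, num_perm=128)
sigs = {doc_id: signature(text) for doc_id, text in docs.items()}
for doc_id, sig in sigs.items():
    lsh.insert(doc_id, sig)

# Build a graph whose edges connect near-duplicate documents
graph = nx.Graph()
graph.add_nodes_from(docs)
for doc_id, sig in sigs.items():
    for match in lsh.query(sig):
        if match != doc_id:
            graph.add_edge(doc_id, match)

# Each connected component is a cluster of duplicates; keep one document per cluster
# (the released dataset kept a random one).
keep = {sorted(component)[0] for component in nx.connected_components(graph)}
print(keep)  # e.g. {'a', 'c'} -- 'b' is removed as a duplicate of 'a'
```

The actual pipeline was run with NeMo Curator at much larger scale; the sketch above only illustrates the signature, graph-construction, and connected-components steps.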
 
The deduplication process is closely related to how we created our [Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2) dataset, for which we released a full reproduction [tutorial](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/zyda2-tutorial). Instead of doing careful cross-deduplication between the components of Zyda-2, here we focused only on DCLM itself, aggressively removing duplicated documents.

### Dataset Description