# DCLM-Deduped
[DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) is a recently released high-quality dataset that uses model-based quality filtering to filter a large subset of Common Crawl for similarity to OpenHermes and other instruction-tuning datasets. For details, see the [DCLM paper](https://arxiv.org/pdf/2406.11794).
The original authors of DCLM did not deduplicate their dataset and claimed that deduplication did not improve performance. Nevertheless, when performing our own deduplication of DCLM for [Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2), we noticed that DCLM contained a large fraction of duplicates. Specifically, the dataset appears to consist of approximately 80% duplicates.
This lack of impact on downstream performance, given such a large duplication proportion, is surprising.
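This README does not specify the deduplication method used for Zyda-2. Purely as an illustration of how near-duplicates are commonly detected in web-scale corpora, below is a minimal MinHash-LSH sketch using the `datasketch` library; the threshold, permutation count, and toy documents are assumptions for the example, not Zyphra's actual pipeline.

```python
# Illustrative only: a common MinHash-LSH approach to fuzzy deduplication.
# Not Zyphra's actual pipeline; threshold and tokenization are assumptions.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # hash permutations; more permutations = better Jaccard estimates

def signature(text: str) -> MinHash:
    """Build a MinHash signature over the document's lowercased token set."""
    m = MinHash(num_perm=NUM_PERM)
    for token in set(text.lower().split()):
        m.update(token.encode("utf8"))
    return m

docs = {
    "a": "the quick brown fox jumps over the lazy dog",
    "b": "the quick brown fox jumps over the lazy dog again",  # near-dup of "a"
    "c": "an entirely different document about language datasets",
}

# Flag any document whose estimated Jaccard similarity to an already-seen
# document exceeds the threshold; keep only the first occurrence.
lsh = MinHashLSH(threshold=0.7, num_perm=NUM_PERM)
duplicates = []
for key, text in docs.items():
    sig = signature(text)
    if lsh.query(sig):  # matches an already-indexed document
        duplicates.append(key)
    else:
        lsh.insert(key, sig)

print(duplicates)  # expected: ['b']
```

At DCLM scale this flagging logic would run distributed across document shards, but the core idea is the same.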
As such, we release a fully deduplicated version of DCLM in case it is of interest to the community. DCLM-Deduped consists of approximately 750B tokens. If you are planning to pretrain on fewer than this number of DCLM tokens, it is perhaps safer to use this version than the original DCLM.
## How to download
_TODO (Yury): add official download instructions._
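Until official instructions land, a minimal sketch using the Hugging Face `datasets` library may work, assuming the dataset is hosted under the repo id `Zyphra/DCLM-Deduped` (the repo id is an assumption; check the dataset page for the actual id and configs):

```python
# Minimal sketch, not official instructions. The repo id "Zyphra/DCLM-Deduped"
# is an assumption; verify it on the dataset page before use.
from datasets import load_dataset

# Streaming avoids downloading the full ~750B-token corpus up front.
ds = load_dataset("Zyphra/DCLM-Deduped", split="train", streaming=True)

for example in ds.take(3):  # peek at a few records
    print(example)
```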