Modalities: Tabular, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
BerenMillidge committed · Commit 3cee00c · Parent: 91f8bf9

Update README.md

Files changed (1): README.md (+5 −4)
README.md CHANGED
@@ -19,10 +19,6 @@ This lack of impact on downstream performance given this large duplication propo
 
 As such, we release a fully deduplicated version of DCLM in case it is of interest to the community. DCLM-deduped consists of approximately 750B tokens. If you are planning to pretrain on less than this amount of DCLM tokens it is perhaps safer to use this version than the original DCLM.
 
-## How to download
-
--/ TODO YURY
-
 ## Breakdown by component
 
 | Dataset | Documents (millions) | gpt-neox tokens (billions) |
@@ -30,6 +26,11 @@ As such, we release a fully deduplicated version of DCLM in case it is of intere
 | DCLM baseline | 2949.3 | 3854.9 |
 | DCLM full-deduped | 615.2 | 750.3 |
 
+## How to download
+
+-/ TODO YURY
+
+
 ### Dataset Description
 
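The moved "Breakdown by component" table implies how aggressive the deduplication was. As a quick sanity check (a minimal sketch; the only inputs are the four figures from the table above), the relative reduction can be computed directly:

```python
# Figures taken from the "Breakdown by component" table in the README diff.
baseline_docs_m, baseline_tokens_b = 2949.3, 3854.9  # DCLM baseline
deduped_docs_m, deduped_tokens_b = 615.2, 750.3      # DCLM full-deduped

# Fraction of documents / tokens removed by full deduplication.
doc_reduction = 1 - deduped_docs_m / baseline_docs_m
token_reduction = 1 - deduped_tokens_b / baseline_tokens_b

print(f"documents removed: {doc_reduction:.1%}")  # → documents removed: 79.1%
print(f"tokens removed:    {token_reduction:.1%}")  # → tokens removed:    80.5%
```

That tokens and documents shrink by nearly the same fraction (~80%) suggests duplicated documents have roughly average length, which is consistent with the deduplicated set still spanning ~750B tokens.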