---
license: odc-by
---

# Zyda2-5T

<!-- Provide a quick summary of the dataset. -->

Zyda2 is a 5-trillion-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda2 performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think the best use of Zyda2 is either as a standalone dataset for language model training up to the 1T-token scale, or in combination with FineWeb or Dolma for multi-trillion-token training.

An early version of Zyda2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series of models ([Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B), [Zamba2-1.2B](https://huggingface.co/Zyphra/Zamba2-1.2B)), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda2 as a pretraining dataset.

Models trained on Zyda2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM.

According to our evaluations, Zyda2 is the most performant per-token open dataset available. Zyda2 excels at educational and natural-language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
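
Purely as an illustration of such a mixture with the Hugging Face `datasets` library, a minimal sketch might look like the following. The repository IDs, the `content` to `text` column rename, and the 80/20 mixing ratio are assumptions made for this example, not an official recipe.

```python
from datasets import load_dataset, interleave_datasets

# Hypothetical repository IDs: check the Hub for the actual dataset names.
zyda2 = load_dataset("Zyphra/Zyda2-5T", split="train", streaming=True)
code = load_dataset("bigcode/starcoderdata", split="train", streaming=True)

# The code dataset stores its text in a "content" column; align the schemas.
code = code.rename_column("content", "text")
zyda2 = zyda2.select_columns(["text"])
code = code.select_columns(["text"])

# Illustrative 80/20 natural-language / code mixture.
mixed = interleave_datasets([zyda2, code], probabilities=[0.8, 0.2], seed=42)
```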
## How to download

// TODO
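
While the download instructions above are still to be added, a dataset of this kind can typically be loaded with the Hugging Face `datasets` library as sketched below. The repository ID `Zyphra/Zyda2-5T` is assumed from this card's title, and the split or config names may differ.

```python
from datasets import load_dataset

# Streaming avoids materializing the full ~5T-token dataset on disk.
# NOTE: the repository ID below is an assumption based on this card's title;
# check the Hub page for the actual name and any per-component configs.
ds = load_dataset("Zyphra/Zyda2-5T", split="train", streaming=True)

for example in ds.take(3):
    print(example["source"], example["text"][:200])
```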
## Breakdown by component

// TODO
### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons Attribution License (ODC-BY)
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

// TODO IS THIS CORRECT YURY?

Dataset fields:
- `text`: the actual document text used for training
- `source`: the component dataset the document comes from
- `filtering_features`: precomputed values of the features used for filtering (serialized as a JSON string)
- `source_other`: metadata from the source dataset (serialized as a JSON string)
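
Since the last two fields are stored as JSON strings, they need to be parsed before use. A minimal sketch, assuming `example` is a single record obtained from the dataset (e.g. via the loading sketch above):

```python
import json

def decode_example(example: dict) -> dict:
    """Return a copy of a Zyda2 record with its JSON-string fields parsed."""
    return {
        "text": example["text"],
        "source": example["source"],
        # Both of these fields are serialized as JSON strings in the dataset.
        "filtering_features": json.loads(example["filtering_features"]),
        "source_other": json.loads(example["source_other"]),
    }
```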
### Source Data

Zyda2 is composed of four high-quality open-source datasets:

- Zyda1: https://huggingface.co/datasets/Zyphra/Zyda
- Dolma-1.7-cc: https://huggingface.co/datasets/allenai/dolma
- DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- FineWeb-Edu-2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
// Pie chart of composition -- YURY!

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Zyda2 was created using a two-stage post-processing pipeline consisting of *filtering* and *deduplication*.

For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources, such as C4, RedPajama, and Gopher, in addition to our own filters.
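
The exact filter set is documented in the processing code linked below; purely as an illustration of the kind of hand-crafted, document-level rules involved, a sketch with hypothetical thresholds (not the ones used for Zyda2) might look like:

```python
def passes_basic_quality_filters(text: str) -> bool:
    """A few Gopher/C4-style heuristics; thresholds are illustrative only."""
    words = text.split()
    n_words = len(words)

    # Gopher-style bounds on document length
    if n_words < 50 or n_words > 100_000:
        return False

    # Gopher-style bounds on mean word length
    mean_word_len = sum(len(w) for w in words) / n_words
    if mean_word_len < 3 or mean_word_len > 10:
        return False

    # C4-style rule: drop documents containing placeholder boilerplate
    if "lorem ipsum" in text.lower():
        return False

    # Require a reasonable fraction of alphabetic characters
    alpha_frac = sum(c.isalpha() for c in text) / max(len(text), 1)
    return alpha_frac > 0.6
```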
For the deduplication stage, we used MinHash approximate deduplication: we computed MinHash signatures of size 128 over 13-grams and filtered out documents whose Jaccard similarity to another document exceeded 0.4.
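
Our actual deduplication pipeline is in the processing code linked below; purely as an illustration of the scheme just described (13-gram shingles, 128-permutation MinHash signatures, 0.4 Jaccard threshold), a minimal sketch using the `datasketch` library might look like:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128   # MinHash signature size
THRESHOLD = 0.4  # approximate Jaccard similarity cutoff
NGRAM = 13       # shingle size in words

def minhash_of(text: str) -> MinHash:
    """Build a MinHash signature over the document's word 13-grams."""
    words = text.split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - NGRAM + 1, 1)):
        m.update(" ".join(words[i:i + NGRAM]).encode("utf-8"))
    return m

def deduplicate(docs: dict[str, str]) -> list[str]:
    """Return the ids of documents kept after approximate deduplication."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for doc_id, text in docs.items():
        sig = minhash_of(text)
        if lsh.query(sig):  # a near-duplicate is already indexed
            continue
        lsh.insert(doc_id, sig)
        kept.append(doc_id)
    return kept
```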
For full details on our data processing, see the [Zyda technical report](https://arxiv.org/abs/2406.01981) and our [dataset processing code](https://github.com/Zyphra/Zyda_processing).
#### Personal and Sensitive Information

As with any large-scale language modeling dataset, Zyda2 likely contains PII that was not filtered out of the component datasets and may have been missed by our own filters.

## Bias, Risks, and Limitations

Since Zyda2 is composed of open web scrapes, it likely contains biased and toxic content.

## Licensing Information

We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Citation

If you use our dataset to train a model, please cite us as:

```
@misc{tokpanov2024zyda,
      title={Zyda: A 1.3T Dataset for Open Language Modeling},
      author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
      year={2024},
      eprint={2406.01981},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```