Commit 84a48e2 by Maurice Weber (parent 949be9c): update README.md
### Getting Started

The full RedPajama-V2 dataset is a data foundation that includes over 100B text documents coming from 84 CommonCrawl snapshots, processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, 30B documents in the corpus additionally come with quality signals.

Check out our [blog post](XXXXX) for more details on the build process, dataset structure, and schema.

To familiarize yourself with the dataset, you can load the sample dataset with the following command:

```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
```

Alternatively, you can download the files directly; the instructions below use English data from the `2023-06` snapshot and the `head_middle` partition as an example. The full set of CC snapshots included in the dataset is given in `_CC_SNAPSHOT_IDS`, and the available partitions are `tail` and `head_middle`. The available language tags are `en`, `de`, `fr`, `es`, and `it`.

```bash
CC_SNAPSHOT="2023-06"
LANG="en"
PARTITION="head_middle"
BASE_URL="https://data.together.xyz/redpajama-data-v2/v1.0.0"

listings_file="${LANG}-${CC_SNAPSHOT}-${PARTITION}.txt"
wget "${BASE_URL}/listings/${listings_file}"

# download documents
while read line; do
    url="${BASE_URL}/documents/${line}.json.gz"
    dest="documents/${line}.json.gz"
    mkdir -p "$(dirname "$dest")"
    wget "$url" -O "$dest"
done <"$listings_file"

# download the other components
COMPS=("quality_signals" "minhash" "duplicates")
for comp in "${COMPS[@]}"; do
    while read line; do
        url="${BASE_URL}/${comp}/${line}.${comp}.json.gz"
        dest="${comp}/${line}.${comp}.json.gz"
        mkdir -p "$(dirname "$dest")"
        wget "$url" -O "$dest"
    done <"$listings_file"
done
```
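The per-shard URL scheme used by the loop above can also be sketched in Python. The `component_url` helper and the example listing entry are illustrative, not part of the official tooling; the URL patterns mirror the shell script:

```python
# Build download URLs for one listings entry, mirroring the shell loop above.
BASE_URL = "https://data.together.xyz/redpajama-data-v2/v1.0.0"


def component_url(listing: str, component: str) -> str:
    """Return the download URL for one shard of a given component.

    `listing` is one line of the listings file, e.g. "2023-06/0000/en_head"
    (format assumed from the paths used in the script above).
    """
    if component == "documents":
        return f"{BASE_URL}/documents/{listing}.json.gz"
    return f"{BASE_URL}/{component}/{listing}.{component}.json.gz"


# hypothetical listing entry: snapshot/shard/lang_bucket
line = "2023-06/0000/en_head"
print(component_url(line, "documents"))
print(component_url(line, "quality_signals"))
```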

A full set of scripts to recreate the dataset, including the quality signals, can be found [here](https://github.com/togethercomputer/RedPajama-Data).

### Dataset Summary

RedPajama-V2 is a data foundation which includes over 100B text documents, out of which 30B documents come with quality annotations.

### Languages

English, German, French, Italian, Spanish

## Dataset Structure

The dataset is structured into four components, each following the same key structure:

```
├── documents
    ├── 2018-43
        ├── 0000
            ├── en_head.json.gz
            ├── ...
            ├── it_middle.json.gz
├── quality_signals
    ├── 2018-43
        ├── 0000
            ├── en_head.signals.json.gz
            ├── ...
            ├── it_middle.signals.json.gz
├── duplicates
    ├── 2018-43
        ├── 0000
            ├── en_head.duplicates.parquet
            ├── ...
            ├── it_middle.duplicates.parquet
├── minhash
    ├── 2018-43
        ├── 0000
            ├── en_head.minhash.parquet
            ├── ...
            ├── it_middle.minhash.parquet
```
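As a sketch, the key structure above can be expressed as a small path helper. The function and its signature are illustrative; the per-component file extensions are taken from the listing above:

```python
# File extension per component, as shown in the tree above.
EXTENSIONS = {
    "documents": "json.gz",
    "quality_signals": "signals.json.gz",
    "duplicates": "duplicates.parquet",
    "minhash": "minhash.parquet",
}


def shard_path(component: str, snapshot: str, shard: str, lang: str, bucket: str) -> str:
    """Relative path of one shard file within a component."""
    return f"{component}/{snapshot}/{shard}/{lang}_{bucket}.{EXTENSIONS[component]}"


print(shard_path("documents", "2018-43", "0000", "en", "head"))
# documents/2018-43/0000/en_head.json.gz
```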

Document files, which contain the text, follow the schema defined by CCNet, and the quality signals follow the schema:

```json
{
  "id": "2018-43/0000/en_head.json.gz/0",
  "id_int": 7972430436813205988,
  "metadata": {
    "cc_segment": "crawl-data/...",
    "cc_net_source": "2018-43/0000/en_head.json.gz",
    "url": "...",
    "source_domain": "...",
    "language": "en",
    "snapshot_id": "2018-43"
  },
  "quality_signals": {
    "ccnet_original_length": [
      [0, 7033, 8711.0]
    ],
    ...,
    "rps_doc_stop_word_fraction": [
      [0, 7033, 0.45121107]
    ],
    "rps_lines_num_words": [
      [0, 25, 2],
      ...,
      [6980, 7033, 10]
    ]
  }
}
```

where signal scores are encoded as a list of tuples `(start, end, score)`, in which `start` and `end` are the locations in the `raw_content` string where the `score` applies.
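For illustration, here is a minimal sketch of how such `(start, end, score)` tuples map back onto `raw_content`. The record below is made up and far shorter than a real document:

```python
# A toy record following the schema above: rps_lines_num_words stores one
# (start, end, score) tuple per line, where the score is the word count.
record = {
    "raw_content": "Hello world\nthe quick brown fox jumps over the lazy dog",
    "quality_signals": {
        "rps_lines_num_words": [[0, 11, 2], [12, 55, 9]],
    },
}

# Slice each annotated span back out of the raw text.
for start, end, score in record["quality_signals"]["rps_lines_num_words"]:
    line = record["raw_content"][start:end]
    print(f"{score} words: {line!r}")
```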

## Dataset Creation

The dataset is based on 84 snapshots provided by CommonCrawl.

To cite RedPajama-V2, please use:

```
@software{together2023redpajama-v2,
  author = {Together Computer},
  title = {RedPajama-Data-v2: a living data foundation for training open LLM models},
  month = October,
  ...
}
```

### License

Please refer to the licenses of the data subsets you use.

* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use)

<!--
### Annotations