---
task_categories:
- text-generation
language:
- en
- de
- fr
- es
- it
pretty_name: Red Pajama V2 Data Foundation
---
### Getting Started
The full RedPajama-V2 dataset is a data foundation that includes over 100B text documents coming from 84 CommonCrawl
snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, 30B
documents in the corpus additionally come with quality signals.
Check out our [blog post](XXXXX) for more details on the build process, dataset structure and schema.
To familiarize yourself with the dataset, you can load the sample dataset with the following command:
```python
from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
```
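Once loaded, you can inspect a record to get a feel for the schema. This is a minimal sketch; it assumes the loader
exposes a `train` split and that records carry the CCNet fields (such as `raw_content`) described below:
```python
# Peek at the first record of the sample (the split name "train" is an
# assumption; adjust if the loader exposes a different split).
record = next(iter(ds["train"]))
print(record.keys())
print(record["raw_content"][:200])  # first 200 characters of the document text
```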
Alternatively, you can directly download the files using the following instructions, using English data from the
`2023-06` snapshot and the `head_middle` partition as an example. The full set of CC snapshots included in the dataset
is given in `_CC_SNAPSHOT_IDS`, the available partitions are `tail` and `head_middle`, and the available language tags
are `en`, `de`, `fr`, `es`, `it`.
```bash
CC_SNAPSHOT="2023-06"
LANG="en"
PARTITION="head_middle"
BASE_URL="https://data.together.xyz/redpajama-data-v2/v1.0.0"

# fetch the listings file that enumerates all shards for this
# language / snapshot / partition combination
listings_file="${LANG}-${CC_SNAPSHOT}-${PARTITION}.txt"
wget "${BASE_URL}/listings/${listings_file}"

# download documents
while read line; do
  url="${BASE_URL}/documents/${line}.json.gz"
  dest="documents/${line}.json.gz"
  mkdir -p "$(dirname "$dest")"
  wget "$url" -O "$dest"
done <"$listings_file"

# download quality signals (stored as .signals.json.gz, matching the
# directory layout shown below)
while read line; do
  url="${BASE_URL}/quality_signals/${line}.signals.json.gz"
  dest="quality_signals/${line}.signals.json.gz"
  mkdir -p "$(dirname "$dest")"
  wget "$url" -O "$dest"
done <"$listings_file"

# download minhash signatures and duplicate ids (stored as parquet)
for comp in "minhash" "duplicates"; do
  while read line; do
    url="${BASE_URL}/${comp}/${line}.${comp}.parquet"
    dest="${comp}/${line}.${comp}.parquet"
    mkdir -p "$(dirname "$dest")"
    wget "$url" -O "$dest"
  done <"$listings_file"
done
```
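If you prefer Python over wget, the same download loop can be sketched with `requests` and a thread pool. This is a
minimal sketch for the `documents` component only, assuming the listings file has already been fetched as above:
```python
import os
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://data.together.xyz/redpajama-data-v2/v1.0.0"
LISTINGS_FILE = "en-2023-06-head_middle.txt"

def fetch(line: str) -> None:
    # mirrors the bash loop above for the documents component
    url = f"{BASE_URL}/documents/{line}.json.gz"
    dest = f"documents/{line}.json.gz"
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        f.write(resp.content)

with open(LISTINGS_FILE) as f:
    lines = [ln.strip() for ln in f if ln.strip()]

# a modest worker count keeps the load on the server reasonable
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(fetch, lines))
```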
A full set of scripts to recreate the dataset including the quality signals can be
found [here](https://github.com/togethercomputer/RedPajama-Data).
### Dataset Summary
RedPajama-V2 is a data foundation which includes over 100B text documents, out of which 30B documents come with
quality annotations.
### Languages
English, German, French, Italian, Spanish
## Dataset Structure
The dataset is structured into four components, each following the same key structure:
```
├── documents
│   ├── 2018-43
│   │   ├── 0000
│   │   │   ├── en_head.json.gz
│   │   │   ├── ...
│   │   │   └── it_middle.json.gz
├── quality_signals
│   ├── 2018-43
│   │   ├── 0000
│   │   │   ├── en_head.signals.json.gz
│   │   │   ├── ...
│   │   │   └── it_middle.signals.json.gz
├── duplicates
│   ├── 2018-43
│   │   ├── 0000
│   │   │   ├── en_head.duplicates.parquet
│   │   │   ├── ...
│   │   │   └── it_middle.duplicates.parquet
└── minhash
    ├── 2018-43
    │   ├── 0000
    │   │   ├── en_head.minhash.parquet
    │   │   ├── ...
    │   │   └── it_middle.minhash.parquet
```
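Each file is therefore addressed by a snapshot id, a four-digit shard number, a language tag, and a bucket (`head`,
`middle`, or `tail`), which is exactly the key the listings files enumerate. A small hypothetical helper (the function
name is illustrative, not part of the dataset) makes the pattern explicit:
```python
def shard_key(snapshot: str, shard: int, lang: str, bucket: str) -> str:
    # builds the "<snapshot>/<shard>/<lang>_<bucket>" key used by the
    # listings files and the directory layout above
    return f"{snapshot}/{shard:04d}/{lang}_{bucket}"

key = shard_key("2018-43", 0, "en", "head")        # "2018-43/0000/en_head"
doc_path = f"documents/{key}.json.gz"
signals_path = f"quality_signals/{key}.signals.json.gz"
```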
Document files, which contain the text, follow the schema defined by CCNet, and the quality signals follow the schema
```json
{
"id": "2018-43/0000/en_head.json.gz/0",
"id_int": 7972430436813205988,
"metadata": {
"cc_segment": "crawl-data/...",
"cc_net_source": "2018-43/0000/en_head.json.gz",
"url": "...",
"source_domain": "...",
"language": "en",
"snapshot_id": "2018-43"
},
"quality_signals": {
"ccnet_original_length": [
[
0,
7033,
8711.0
]
],
...,
"rps_doc_stop_word_fraction": [
[
0,
7033,
0.45121107
]
],
"rps_lines_num_words": [
[
0,
25,
2
],
...,
[
6980,
7033,
10
]
]
}
}
```
where signal scores are encoded as a list of tuples `(start, end, score)`, with `start` and `end` denoting the span in
the `raw_content` string to which the `score` applies.
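As an illustration, here is a minimal sketch of how one might filter documents on a document-level signal from a
downloaded quality-signals shard. It assumes one JSON record per line (as in CCNet output), and the 0.3 threshold is an
arbitrary example rather than a recommendation:
```python
import gzip
import json

kept_ids = []
with gzip.open("quality_signals/2018-43/0000/en_head.signals.json.gz", "rt") as f:
    for row in f:
        record = json.loads(row)
        # document-level signals carry a single (start, end, score) triple
        # spanning the whole raw_content string
        _, _, score = record["quality_signals"]["rps_doc_stop_word_fraction"][0]
        if score is not None and score > 0.3:
            kept_ids.append(record["id"])
```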
## Dataset Creation
The dataset is based on 84 snapshots provided by CommonCrawl.
To cite RedPajama-V2, please use:
```
@software{together2023redpajama-v2,
author = {Together Computer},
title = {RedPajama-Data-v2: a living data foundation for training open LLM models},
month = {October},
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
### License
Please refer to the licenses of the data subsets you use.
* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use)