---
task_categories:
- text-generation
language:
- en
- de
- fr
- es
- it
pretty_name: Red Pajama V2 Data Foundation
---
## Getting Started
The full RedPajama-V2 dataset is a data foundation that includes over 100B text documents coming from 84 CommonCrawl snapshots, processed using the CCNet pipeline. Out of these, 30B documents in the corpus additionally come with quality signals.
Check out our blog post for more details on the build process, dataset structure and schema.
To familiarize yourself with the dataset, you can load the sample dataset with the following command:
```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
```
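For instance, you can inspect a single record of the sample like this (a minimal sketch; the `train` split name is an assumption, while `raw_content` is the text field referenced in the schema notes below):

```python
from datasets import load_dataset

# load the small sample configuration of the dataset
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")

# NOTE: the "train" split name is an assumption; "raw_content" holds the
# document text (see the quality-signal schema notes below)
record = ds["train"][0]
print(record["raw_content"][:200])
```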
Alternatively, you can download the files directly, using English data from the `2023-06` snapshot and the `head_middle` partition as an example. The full set of CC snapshots included in the dataset is given in `_CC_SNAPSHOT_IDS`, and the available partitions are `tail` and `head_middle`. The available language tags are `en`, `de`, `fr`, `es`, and `it`.
```bash
CC_SNAPSHOT="2023-06"
LANG="en"
PARTITION="head_middle"
BASE_URL="https://data.together.xyz/redpajama-data-v2/v1.0.0"

# fetch the listings file that enumerates the shards of this
# snapshot / language / partition combination
listings_file="${LANG}-${CC_SNAPSHOT}-${PARTITION}.txt"
wget "${BASE_URL}/listings/${listings_file}"

# download documents
while read -r line; do
    url="${BASE_URL}/documents/${line}.json.gz"
    dest="documents/${line}.json.gz"
    mkdir -p "$(dirname "$dest")"
    wget "$url" -O "$dest"
done <"$listings_file"

# download quality signals
while read -r line; do
    url="${BASE_URL}/quality_signals/${line}.signals.json.gz"
    dest="quality_signals/${line}.signals.json.gz"
    mkdir -p "$(dirname "$dest")"
    wget "$url" -O "$dest"
done <"$listings_file"

# download minhash signatures and duplicate ids (stored as parquet,
# see the dataset structure below)
COMPS=("minhash" "duplicates")
for comp in "${COMPS[@]}"; do
    while read -r line; do
        url="${BASE_URL}/${comp}/${line}.${comp}.parquet"
        dest="${comp}/${line}.${comp}.parquet"
        mkdir -p "$(dirname "$dest")"
        wget "$url" -O "$dest"
    done <"$listings_file"
done
```
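Each downloaded documents shard is a gzip-compressed file with one JSON record per line. A minimal sketch for reading one shard (the path matches the layout created by the script above; treating `url` and `raw_content` as document fields is based on the CCNet schema description below):

```python
import gzip
import json

# path follows the directory layout created by the download script above
shard = "documents/2023-06/0000/en_head.json.gz"

# one JSON document per line; "raw_content" holds the text and "url"
# the source URL (CCNet schema)
with gzip.open(shard, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        doc = json.loads(line)
        print(doc.get("url"), len(doc.get("raw_content", "")))
        if i >= 4:  # peek at the first five records only
            break
```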
A full set of scripts to recreate the dataset, including the quality signals, can be found in the [RedPajama-Data GitHub repository](https://github.com/togethercomputer/RedPajama-Data).
## Dataset Summary
RedPajama-V2 is a data foundation which includes over 100B text documents, 30B of which come with quality annotations.
### Languages
English, German, French, Italian, Spanish
## Dataset Structure

The dataset is structured into four components, each following the same key structure:
```
├── documents
│   └── 2018-43
│       └── 0000
│           ├── en_head.json.gz
│           ├── ...
│           └── it_middle.json.gz
├── quality_signals
│   └── 2018-43
│       └── 0000
│           ├── en_head.signals.json.gz
│           ├── ...
│           └── it_middle.signals.json.gz
├── duplicates
│   └── 2018-43
│       └── 0000
│           ├── en_head.duplicates.parquet
│           ├── ...
│           └── it_middle.duplicates.parquet
└── minhash
    └── 2018-43
        └── 0000
            ├── en_head.minhash.parquet
            ├── ...
            └── it_middle.minhash.parquet
```
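The `duplicates` and `minhash` components are stored as parquet files. A quick way to peek at one of them (a sketch assuming pandas with a parquet engine such as pyarrow is installed; the column layout is not documented here, so it is printed rather than assumed):

```python
import pandas as pd

# load the duplicate ids for one shard; requires a parquet engine (e.g. pyarrow)
dups = pd.read_parquet("duplicates/2023-06/0000/en_head.duplicates.parquet")

# the exact column layout is not documented here, so inspect it first
print(dups.columns.tolist())
print(len(dups), "rows")
```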
The documents files, which contain the text, follow the schema defined by CCNet, and the quality signals follow this schema:
```json
{
  "id": "2018-43/0000/en_head.json.gz/0",
  "id_int": 7972430436813205988,
  "metadata": {
    "cc_segment": "crawl-data/...",
    "cc_net_source": "2018-43/0000/en_head.json.gz",
    "url": "...",
    "source_domain": "...",
    "language": "en",
    "snapshot_id": "2018-43"
  },
  "quality_signals": {
    "ccnet_original_length": [
      [0, 7033, 8711.0]
    ],
    ...,
    "rps_doc_stop_word_fraction": [
      [0, 7033, 0.45121107]
    ],
    "rps_lines_num_words": [
      [0, 25, 2],
      ...,
      [6980, 7033, 10]
    ]
  }
}
```
where signal scores are encoded as a list of tuples `(start, end, score)`, in which `start` and `end` are the locations in the `raw_content` string where the `score` applies.
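For example, the per-line values of `rps_lines_num_words` can be recovered by slicing `raw_content` with those spans. The sketch below pairs a documents shard with its quality-signals shard by row order, which is an assumption based on the shared `id` layout (`<shard>/<row index>`):

```python
import gzip
import json

docs_path = "documents/2023-06/0000/en_head.json.gz"
sigs_path = "quality_signals/2023-06/0000/en_head.signals.json.gz"

# read the first record from each shard; rows are assumed to be aligned,
# as suggested by the shared "id" field ("<shard>/<row index>")
with gzip.open(docs_path, "rt", encoding="utf-8") as d, \
        gzip.open(sigs_path, "rt", encoding="utf-8") as s:
    doc = json.loads(next(d))
    sig = json.loads(next(s))

# every signal value is a list of (start, end, score) spans over raw_content
for start, end, score in sig["quality_signals"]["rps_lines_num_words"]:
    print(score, repr(doc["raw_content"][start:end][:40]))
```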
## Dataset Creation
The dataset is based on 84 snapshots provided by CommonCrawl.
To cite RedPajama-V2, please use:
```bibtex
@software{together2023redpajama-v2,
  author = {Together Computer},
  title = {RedPajama-Data-v2: a living data foundation for training open LLM models},
  month = {October},
  year = {2023},
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
## License

---- TODO ----

Please refer to the licenses of the data subsets you use.