---
license: odc-by
---

# TxT360: A Globally Deduplicated Dataset for LLM Pretraining

(Figure: K2 evaluation results table)

We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g., FreeLaw, PG-19), providing pretraining teams with a recipe to easily adjust data weighting and train the most performant models.
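
As a rough illustration of how source-level data weighting might be applied, the sketch below streams the dataset with the Hugging Face `datasets` library and mixes two sources with explicit sampling probabilities. The repository path `LLM360/TxT360`, the configuration names, and the `text` field name are assumptions; the actual subsets, weights, and recommended recipe are described on the dataset card and in the blog post.

```python
# Minimal sketch (assumed repo path and config names -- verify on the dataset card).
from datasets import load_dataset, interleave_datasets

# Stream two hypothetical subsets so nothing is downloaded up front.
common_crawl = load_dataset("LLM360/TxT360", "common-crawl", split="train", streaming=True)
stack_exchange = load_dataset("LLM360/TxT360", "stackexchange", split="train", streaming=True)

# Mix the sources with explicit sampling probabilities, i.e., data weights.
mixed = interleave_datasets(
    [common_crawl, stack_exchange],
    probabilities=[0.9, 0.1],
    seed=42,
)

# Peek at a few mixed examples (assumes a "text" field).
for example in mixed.take(3):
    print(example["text"][:200])
```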

## TxT360 Compared to Common Pretraining Datasets

| Data Source | TxT360 | FineWeb | RefinedWeb | RedPajamaV2 | C4 | Dolma | RedPajamaV1 | The Pile |
|---|---|---|---|---|---|---|---|---|
| CommonCrawl Snapshots | 99 | 96 | 90 | 84 | 1 | 24 | 5 | 0.6% of 74 |
| Papers | 5 Sources | - | - | - | - | 1 Source | 1 Source | 4 Sources |
| Wikipedia | 310+ Languages | - | - | - | - | Included | Included | English Only |
| FreeLaw | Included | - | - | - | - | - | - | Included |
| DM Math | Included | - | - | - | - | - | - | Included |
| USPTO | Included | - | - | - | - | - | - | Included |
| PG-19 | Included | - | - | - | - | Included | Included | Included |
| HackerNews | Included | - | - | - | - | - | - | Included |
| Ubuntu IRC | Included | - | - | - | - | - | - | Included |
| EuroParl | Included | - | - | - | - | - | - | Included |
| StackExchange | Included | - | - | - | - | - | - | Included |
| Code | ** | - | - | - | - | Included | Included | Included |

Complete details on the dataset can be found in our blog post here.