  Complete details on the dataset can be found in our blog post [here](https://huggingface.co/spaces/LLM360/TxT360-New).

## Initial Data Representation
To produce TxT360, we designed a comprehensive and transparent data processing pipeline that accounts for the nuances of both web and curated datasets. The pipeline provides a unified framework for processing both data types, making it easy for users to adapt, revise, and fine-tune it for their own use cases.

Web datasets are inherently noisy and varied. The TxT360 pipeline applies sophisticated filtering and deduplication techniques to clean the data and remove redundancies while preserving data integrity.

Curated datasets are typically structured and consistently formatted. TxT360 filters these sources with selective steps to maintain their integrity while providing seamless integration into the larger dataset. Both data source types are globally deduplicated together, resulting in 5.7T tokens of high-quality data. The table below shows the source distribution of TxT360 tokens.

| Data Source   | Raw Data Size | Token Count | Information Cut-Off Date |
|---------------|---------------|-------------|--------------------------|
| CommonCrawl   | 11 TB         | 5.71T       | 2024-30                  |
| Papers        | 712 GB        | 154.96B     | Q4 2023                  |
| Wikipedia     | 210 GB        | 4.75B       | -                        |
| Freelaw       | 23 GB         | 7.34B       | Q1 2024                  |
| DM Math       | 22 GB         | 5.23B       | -                        |
| USPTO         | 45 GB         | 4.95B       | Q4 2023                  |
| PG-19         | 11 GB         | 2.94B       | -                        |
| HackerNews    | 4.1 GB        | 1.08B       | Q4 2023                  |
| Ubuntu IRC    | 4.7 GB        | 1.54B       | Q4 2023                  |
| Europarl      | 6.1 GB        | 1.96B       | -                        |
| StackExchange | 45 GB         | 8.37B       | Q4 2023                  |
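
Given the size of the corpus, streaming access with the Hugging Face `datasets` library is a convenient way to inspect a few records without downloading everything. The snippet below is a minimal sketch, not part of the official documentation: it assumes the dataset is published under the repo id `LLM360/TxT360` with a default `train` split, so adjust the repo id, config name, and split to match the dataset card.

```python
from datasets import load_dataset

# Minimal sketch: stream a handful of records instead of downloading the
# full corpus. The repo id "LLM360/TxT360" and the "train" split are
# assumptions; replace them with the actual dataset id / config if needed.
ds = load_dataset("LLM360/TxT360", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example.keys())  # inspect the available fields
    if i >= 2:
        break
```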

## CommonCrawl Data Filtering
Follow [this link](https://llm360-txt360-new.hf.space/webdata#section1) to view all steps taken to filter the web data.
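
The linked page documents the filtering heuristics actually used for TxT360. Purely as an illustration of what a rule-based web-document filter looks like in general (the rules and thresholds below are hypothetical, not the TxT360 ones), a minimal sketch might be:

```python
import re

def passes_basic_quality_filter(text: str,
                                min_words: int = 50,
                                max_symbol_ratio: float = 0.1) -> bool:
    """Toy quality filter: keep documents that are long enough and not
    dominated by markup-like symbols. Thresholds are illustrative only."""
    words = text.split()
    if len(words) < min_words:
        return False
    symbols = len(re.findall(r"[#{}<>|\\]", text))
    if symbols / max(len(text), 1) > max_symbol_ratio:
        return False
    return True

docs = ["too short", "a longer document " * 50]
kept = [d for d in docs if passes_basic_quality_filter(d)]
```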

## Curated Source Filtering
Each data source was filtered individually with respect to the underlying data. Full details and discussion of how each source is filtered are covered [here](https://llm360-txt360-new.hf.space/curated#section1).

## Global Deduplication
After the web and curated sources were filtered, they were globally deduplicated to create TxT360. The deduplication process is available [here](https://llm360-txt360-new.hf.space/common#section2).
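
The linked write-up describes the deduplication actually performed. As a minimal, hypothetical illustration of the idea behind global deduplication (normalize and hash every document across all sources, keeping only the first occurrence), a sketch could look like this:

```python
import hashlib

def global_exact_dedup(documents):
    """Toy global exact deduplication: keep the first occurrence of each
    normalized document across all sources. Illustrative only; the real
    pipeline is described in the linked documentation."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["A web page.", "a web page.", "A curated article."]
print(global_exact_dedup(corpus))  # ['A web page.', 'A curated article.']
```

Exact hashing like this only removes identical documents; the linked documentation covers the full process applied to TxT360.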

## Citation

**BibTeX:**

```bibtex
@misc{txt360data2024,
  title={TxT360: a globally deduplicated dataset for LLM pretraining},
  author={Liping Tang and Nikhil Ranjan and Omkar Pangarkar and Zhen Wang and An Li and Zhoujun Cheng and Suqi Sun and Cun Mu and Victor Miller and Yue Peng and Eric P. Xing and Zhengzhong Liu},
  year={2024}
}
```