---
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  - name: metadata
    struct:
    - name: pile_set_name
      sequence: string
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 64095383
    num_examples: 40338
  download_size: 39795200
  dataset_size: 64095383
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Description
This dataset is a sampled subset of the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset.
The number of documents drawn from each Pile subset is:
```json
{
  "Pile-CC": 19767,
  "OpenWebText2": 12424,
  "FreeLaw": 3752,
  "USPTO Backgrounds": 1055,
  "Wikipedia (en)": 813,
  "PubMed Central": 576,
  "PubMed Abstracts": 499,
  "BookCorpus2": 285,
  "Books3": 266,
  "Gutenberg (PG-19)": 228,
  "StackExchange": 184,
  "PhilPapers": 112,
  "YoutubeSubtitles": 91,
  "OpenSubtitles": 75,
  "ArXiv": 56,
  "NIH ExPorter": 47,
  "Enron Emails": 39,
  "HackerNews": 29,
  "Github": 28,
  "EuroParl": 12
}
```
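
These counts can be recomputed from each row's `metadata` field. A minimal sketch, assuming `pile_set_name` arrives as a single-element sequence per the schema above:
```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("PatrickHaller/pile-10M-words")

# Tally documents per originating Pile subset.
counts = Counter()
for row in ds["train"]:
    name = row["metadata"]["pile_set_name"]
    # The schema types pile_set_name as a sequence of strings,
    # so unwrap it if it comes back as a (single-element) list.
    if isinstance(name, list):
        name = name[0]
    counts[name] += 1

print(dict(counts.most_common()))
```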
The dataset contains ~10M words of text (counting space-delimited tokens). This can be verified with:
```python
from datasets import load_dataset

ds = load_dataset("PatrickHaller/pile-10M-words")

# Count space-delimited tokens across the train split.
count = 0
for row in ds["train"]:
    count += len(row["text"].split(" "))
print(count)
# Out: 9999894
```
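
Note that `split(" ")` counts space-delimited tokens; a bare `split()` would collapse runs of whitespace and report a slightly different number. The same check also works without downloading the files to the local cache, using the `datasets` streaming mode (a sketch):
```python
from datasets import load_dataset

# Stream the split instead of materializing it on disk.
ds = load_dataset("PatrickHaller/pile-10M-words", split="train", streaming=True)

count = sum(len(row["text"].split(" ")) for row in ds)
print(count)  # expected: 9999894 (same computation as above)
```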