---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: pos
    dtype: float64
  splits:
  - name: train
    num_bytes: 5335090828.0
    num_examples: 1002630
  download_size: 3227201658
  dataset_size: 5335090828.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
1,378,234,368 tokens (Llama tokenizer; roughly 1.18B GPT-4 tokens) from a deduplicated Pile raw shard. A length filter (len < 896) was applied, the remaining documents were scored ask-llm style ([“How to Train Data-Efficient LLMs”](https://arxiv.org/abs/2402.09668)) with [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), and the top quarter by score was kept.
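A minimal sketch of how a document could be scored in this ask-llm style, assuming `pos` is the log-probability the proxy model assigns to a "yes" answer; the exact prompt wording and scoring details below are assumptions, not taken from this card:

```python
# Hedged sketch: ask-llm style quality scoring with Mistral-7B-Instruct-v0.2.
# The prompt text, truncation length, and "log-prob of yes" scoring are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def ask_llm_score(text: str) -> float:
    # Ask the proxy model whether the snippet is useful pretraining data,
    # then read off the log-probability of the "yes" token at the next position.
    prompt = (
        "###\n"
        f"{text[:2000]}\n"
        "###\n"
        "Does the previous text contain informative signal for pre-training "
        "a large language model? Answer yes or no.\nAnswer:"
    )
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    log_probs = torch.log_softmax(logits, dim=-1)
    # Assumes "yes" maps to a single token for this tokenizer.
    yes_id = tokenizer.encode("yes", add_special_tokens=False)[0]
    return log_probs[yes_id].item()

print(ask_llm_score("Once upon a time..."))
```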
Each record has two fields, the document `text` and its ask-llm score `pos`:
```
{
"text": "Once upon a time...",
"pos": -5.654354325
}
```
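To load the split and filter further by score, something like the following should work; the repo id is a placeholder, and loading the full `pos` column into memory is only practical because the score column is small:

```python
# Hedged usage sketch: replace the placeholder repo id with this dataset's id.
import numpy as np
from datasets import load_dataset

ds = load_dataset("crumb/your-dataset-id", split="train")  # placeholder repo id
print(ds[0]["text"][:80], ds[0]["pos"])

# Example: keep only the highest-scoring half of the already-filtered shard.
threshold = float(np.quantile(ds["pos"], 0.5))
top_half = ds.filter(lambda ex: ex["pos"] >= threshold)
```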