# LLaMA-3.1-8B-Infinity3M-Kobo
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B on the https://huggingface.co/datasets/KoboldAI/infinity3m-kobo dataset. With this model we hope to provide a suitable base for further fiction tunes: it uses the highly mergeable Alpaca format, and the dataset was stripped of all writing tasks. Because fiction-related tasks were purposely removed, this model is not usable for the use cases our community usually enjoys, but it avoids introducing undesirable biases into fiction tunes trained on top of this instruct model.
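The card mentions the Alpaca format but does not spell out an exact prompt template. Below is a minimal sketch of loading the model with `transformers` and prompting it with a common Alpaca-style layout; the template, the example instruction, and the generation settings are assumptions for illustration, not part of this card.

```python
# Minimal sketch: load the model and prompt it with an Alpaca-style template.
# The exact template used during tuning is not documented here; adjust it to
# match your downstream setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KoboldAI/LLaMA-3.1-8B-Infinity3M-Kobo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Assumed Alpaca-style layout: instruction header followed by a response header.
prompt = (
    "### Instruction:\n"
    "Summarize the difference between a list and a tuple in Python.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```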
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- num_epochs: 3.0
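For reference, the listed values map roughly onto a `transformers` `TrainingArguments` object as sketched below. This is a hedged reconstruction, not the exact training script: the launcher (the card only states multi-GPU across 8 devices), the precision, and the `output_dir` are assumptions.

```python
# Hedged sketch of an equivalent Trainer configuration for the hyperparameters
# listed above. Launch across 8 GPUs (e.g. with torchrun/accelerate) to match
# the total train batch size of 1 x 8 devices x 4 accumulation steps = 32.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.1-8b-infinity3m-kobo",  # assumed output path
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=25,
    num_train_epochs=3.0,
    bf16=True,  # assumption; the training precision is not stated in the card
)
```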
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|---------------|--------|------|-----------------|-------------------|
| 0.7855        | 0.2797 | 250  | 0.7919          | 262144000         |
| 0.6871        | 0.5594 | 500  | 0.7598          | 524288000         |
| 0.7689        | 0.8392 | 750  | 0.7425          | 786432000         |
| 0.7507        | 1.1189 | 1000 | 0.7350          | 1048576000        |
| 0.7827        | 1.3986 | 1250 | 0.7286          | 1310720000        |
| 0.6795        | 1.6783 | 1500 | 0.7241          | 1572864000        |
| 0.6489        | 1.9580 | 1750 | 0.7199          | 1835008000        |
| 0.6875        | 2.2378 | 2000 | 0.7206          | 2097152000        |
| 0.7462        | 2.5175 | 2250 | 0.7195          | 2359296000        |
| 0.7546        | 2.7972 | 2500 | 0.7188          | 2621440000        |
### Framework versions
- Transformers 4.43.4
- PyTorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1
Special thanks to G4rg for the compute!