# chain-texts-0.1-dolphin-mixtral-8x7b
This model is a fine-tuned version of cognitivecomputations/dolphin-2.2.1-mistral-7b on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 1.6571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of an equivalent `TrainingArguments` configuration follows the list):
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 3
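
As a rough illustration, the sketch below expresses the listed hyperparameters as a Hugging Face `TrainingArguments` object. This is an assumption about how the run might have been configured, not the actual training script: `output_dir` is a placeholder, and the fractional warmup value is mapped to `warmup_ratio` on the guess that `lr_scheduler_warmup_steps: 0.03` actually records a ratio.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above.
training_args = TrainingArguments(
    output_dir="chain-texts-0.1",   # placeholder path, not from the original run
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,              # assumed: the card lists a fractional warmup value
    num_train_epochs=3,
)
```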
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8452        | 0.1887 | 20   | 1.8520          |
| 1.6519        | 0.3774 | 40   | 1.7660          |
| 1.6726        | 0.5660 | 60   | 1.7475          |
| 1.6545        | 0.7547 | 80   | 1.7325          |
| 1.7688        | 0.9434 | 100  | 1.7146          |
| 1.7037        | 1.1321 | 120  | 1.7112          |
| 1.5269        | 1.3208 | 140  | 1.6965          |
| 1.4638        | 1.5094 | 160  | 1.6875          |
| 1.647         | 1.6981 | 180  | 1.6847          |
| 1.5333        | 1.8868 | 200  | 1.6772          |
| 1.5194        | 2.0755 | 220  | 1.6854          |
| 1.5149        | 2.2642 | 240  | 1.6847          |
| 1.3981        | 2.4528 | 260  | 1.6653          |
| 1.4842        | 2.6415 | 280  | 1.6612          |
| 1.4262        | 2.8302 | 300  | 1.6571          |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
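
Since the framework list includes PEFT, the checkpoint is presumably a parameter-efficient adapter rather than full model weights. Below is a minimal, hedged usage sketch that loads it as a PEFT adapter on top of the stated fine-tuning base; the repository id is taken from this card, while the adapter assumption, prompt, and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model this card says the checkpoint was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/dolphin-2.2.1-mistral-7b",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.2.1-mistral-7b")

# Assumption: the repo hosts a PEFT adapter that can be attached to the base model.
model = PeftModel.from_pretrained(base, "WHATEVER420/chain-texts-0.1-dolphin-mixtral-8x7b")

prompt = "Write a short chain of text messages planning a hiking trip."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```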