---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-magpi-low-bleu-3-epochs
  results: []
---

# zephyr-7b-dpo-full-magpi-low-bleu-3-epochs

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Rewards/chosen: -1.8844
- Rewards/rejected: -46.8077
- Rewards/accuracies: 1.0
- Rewards/margins: 44.9232
- Logps/rejected: -5321.5576
- Logps/chosen: -555.4259
- Logits/rejected: 2.7529
- Logits/chosen: -1.2323

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0066        | 0.4739 | 50   | 0.0028          | -1.0908        | -33.9616         | 0.9980             | 32.8709         | -4036.9529     | -476.0595    | -1.2144         | -1.9103       |
| 0.0177        | 0.9479 | 100  | 0.0006          | -1.6117        | -43.9541         | 1.0                | 42.3424         | -5036.1978     | -528.1522    | 1.4562          | -2.1299       |
| 0.0006        | 1.4218 | 150  | 0.0004          | -1.7244        | -46.1666         | 1.0                | 44.4422         | -5257.4517     | -539.4232    | 1.6969          | -1.9837       |
| 0.0002        | 1.8957 | 200  | 0.0005          | -1.7575        | -44.7450         | 1.0                | 42.9875         | -5115.2886     | -542.7341    | 2.1634          | -2.0033       |
| 0.0001        | 2.3697 | 250  | 0.0004          | -1.8985        | -46.5225         | 1.0                | 44.6240         | -5293.0405     | -556.8339    | 2.7114          | -1.2429       |
| 0.0001        | 2.8436 | 300  | 0.0004          | -1.8844        | -46.8077         | 1.0                | 44.9232         | -5321.5576     | -555.4259    | 2.7529          | -1.2323       |

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
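For reference, the hyperparameters listed above map onto `trl`'s `DPOConfig` roughly as sketched below. This is not the exact training script: the DPO `beta`, the precision setting, and the training dataset are not reported on this card, so the values marked as assumptions are illustrative only.

```python
from trl import DPOConfig

# Sketch only: reconstructs the reported hyperparameters as a trl DPOConfig.
# beta and bf16 are NOT reported on this card; they are assumptions.
training_args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-magpi-low-bleu-3-epochs",
    learning_rate=5e-07,
    per_device_train_batch_size=8,   # train_batch_size (per device)
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 8 devices x 8 x 2 = 128 total train batch
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    bf16=True,                       # assumption: typical for this model family
    beta=0.01,                       # assumption: DPO beta is unreported
)
```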
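A minimal inference sketch with `transformers` follows. The repository id is assumed to match this card's name (adjust it to wherever the weights are hosted), and the snippet assumes the tokenizer ships a chat template, as the Zephyr SFT base model does.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zephyr-7b-dpo-full-magpi-low-bleu-3-epochs"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the prompt with the tokenizer's chat template, then generate.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```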