
openhermes-mistral-dpo-gptq

This model is a fine-tuned version of TheBloke/OpenHermes-2-Mistral-7B-GPTQ; the training dataset is not specified. It achieves the following results on the evaluation set (a loading example is sketched after the metrics):

  • Loss: 0.5172
  • Rewards/chosen: 0.2719
  • Rewards/rejected: 0.0027
  • Rewards/accuracies: 0.4375
  • Rewards/margins: 0.2692
  • Logps/rejected: -523.3978
  • Logps/chosen: -559.6996
  • Logits/rejected: -1.8608
  • Logits/chosen: -1.7791
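
Because the base checkpoint is GPTQ-quantized and this repository appears to contain the DPO fine-tuned weights trained on top of it, the snippet below is a minimal loading sketch using `transformers` and `peft`. The adapter layout is an assumption based on the model name; adjust if the repository stores a fully merged model instead of an adapter.

```python
# Minimal sketch, assuming this repo holds a PEFT (LoRA) adapter on top of the
# GPTQ base model. Loading a GPTQ checkpoint through transformers additionally
# requires the optimum and auto-gptq packages.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
adapter_id = "leonvanbokhorst/openhermes-mistral-dpo-gptq"  # assumed adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```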

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • training_steps: 50
  • mixed_precision_training: Native AMP
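
As a rough illustration only, these hyperparameters map onto a TRL `DPOTrainer` setup as sketched below. The use of TRL, the `beta` value, and the model/dataset/tokenizer objects are assumptions not stated in this card.

```python
# Hedged sketch: wiring the listed hyperparameters into TRL's DPOTrainer.
# The placeholders (model, train_dataset, eval_dataset, tokenizer) are assumed
# to be defined elsewhere; only the hyperparameter values come from this card.
from transformers import TrainingArguments
from trl import DPOTrainer

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,                   # training_steps: 50
    fp16=True,                      # Native AMP mixed precision
)

trainer = DPOTrainer(
    model=model,                   # base model with trainable adapter (placeholder)
    args=training_args,
    beta=0.1,                      # DPO temperature; not reported in this card
    train_dataset=train_dataset,   # preference pairs: prompt / chosen / rejected (placeholder)
    eval_dataset=eval_dataset,     # placeholder
    tokenizer=tokenizer,           # placeholder
)
trainer.train()
```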

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.8935        | 0.01  | 10   | 0.6254          | 0.5705         | 0.2915           | 0.9375             | 0.2790          | -520.5098      | -556.7138    | -1.8760         | -1.8462       |
| 2.7353        | 0.01  | 20   | 1.4708          | 0.6063         | 2.5173           | 0.25               | -1.9110         | -498.2514      | -556.3558    | -1.9855         | -2.0419       |
| 3.4601        | 0.01  | 30   | 0.4799          | 5.3762         | 3.1472           | 0.6875             | 2.2289          | -491.9521      | -508.6570    | -1.9445         | -1.9689       |
| 15.4868       | 0.02  | 40   | 0.4790          | 1.4084         | 0.7478           | 0.6875             | 0.6606          | -515.9464      | -548.3342    | -1.8667         | -1.7981       |
| 0.4857        | 0.03  | 50   | 0.5172          | 0.2719         | 0.0027           | 0.4375             | 0.2692          | -523.3978      | -559.6996    | -1.8608         | -1.7791       |
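
For interpreting the reward columns: in DPO the rewards are implicit rewards computed from the policy and reference-model log-probabilities, and Rewards/margins is Rewards/chosen minus Rewards/rejected (final row: 0.2719 − 0.0027 ≈ 0.2692). Assuming the standard DPO formulation with temperature β:

$$
\hat{r}_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\text{Rewards/margins} = \hat{r}_\theta(x, y_{\text{chosen}}) - \hat{r}_\theta(x, y_{\text{rejected}})
$$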

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.0.1+cu117
  • Datasets 2.14.7
  • Tokenizers 0.15.0