---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo-gamma-2
    results: []
---

# zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo-gamma-2

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset. It achieves the following results on the evaluation set (the `Rewards/*` metrics are defined after the list):

- Loss: 0.1135
- Rewards/chosen: -0.4357
- Rewards/rejected: -0.9844
- Rewards/accuracies: 0.7500
- Rewards/margins: 0.5488
- Logps/rejected: -344.9655
- Logps/chosen: -328.6565
- Logits/rejected: -1.1351
- Logits/chosen: -1.6080
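
For context, these metrics follow trl's standard DPO logging, where each completion's implicit reward is measured against the frozen reference policy. A sketch of the definitions, assuming trl defaults (the actual $\beta$ used here is not stated in this card):

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
$$

`Rewards/margins` is the mean of $r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})$ over the evaluation set, and `Rewards/accuracies` is the fraction of pairs whose margin is positive. The numbers above are consistent with this: $-0.4357 - (-0.9844) = 0.5487 \approx 0.5488$.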

## Model description

More information needed

## Intended uses & limitations

More information needed
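
The card does not document usage, but as a Zephyr-style chat model the checkpoint should load with the standard `transformers` text-generation pipeline. The sketch below is hedged: the repository id is inferred from the model name and author, and the chat template is assumed to be inherited from the SFT base model.

```python
# Minimal inference sketch. The repo id is inferred from the model
# name above; it is an assumption, not confirmed by this card.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo-gamma-2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a prompt with the chat template assumed to come from the SFT base.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

out = pipe(prompt, max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"])
```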

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a trl sketch using these values follows the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
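
A minimal sketch of a trl `DPOTrainer` setup matching these values. The dataset is a placeholder (the card does not name the real one), $\beta$ is left at trl's default, and mapping the `rpo-gamma-2` suffix in the model name to trl's `rpo_alpha=2.0` is an assumption.

```python
# Hedged training sketch; the dataset, beta, and the rpo_alpha mapping
# are assumptions, not values confirmed by this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset; the actual training data is not stated.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo-gamma-2",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 8 GPUs x 2 accumulation = 128 total
    per_device_eval_batch_size=8,   # x 8 GPUs = 64 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    rpo_alpha=2.0,  # assumed from the "rpo-gamma-2" model-name suffix
)

trainer = DPOTrainer(
    model=model,          # a reference model is created internally if omitted
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # newer trl releases use processing_class instead
)
trainer.train()
```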

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1732        | 0.1147 | 50   | 0.1640          | 0.0148         | -0.1111          | 0.7069             | 0.1258          | -257.6288      | -283.6147    | -2.4924         | -2.5722       |
| 0.1403        | 0.2294 | 100  | 0.1362          | -0.1370        | -0.4667          | 0.6940             | 0.3297          | -293.1888      | -298.7873    | -1.8344         | -2.0560       |
| 0.1324        | 0.3440 | 150  | 0.1286          | -0.4769        | -0.9509          | 0.7371             | 0.4740          | -341.6123      | -332.7828    | -1.2887         | -1.6554       |
| 0.1249        | 0.4587 | 200  | 0.1217          | -0.2893        | -0.7611          | 0.7241             | 0.4719          | -322.6352      | -314.0176    | -1.4798         | -1.8578       |
| 0.1189        | 0.5734 | 250  | 0.1175          | -0.4263        | -0.9754          | 0.7629             | 0.5491          | -344.0638      | -327.7221    | -1.2227         | -1.6727       |
| 0.1252        | 0.6881 | 300  | 0.1154          | -0.4298        | -0.9852          | 0.7543             | 0.5554          | -345.0454      | -328.0691    | -1.1891         | -1.6634       |
| 0.1226        | 0.8028 | 350  | 0.1137          | -0.4793        | -1.0328          | 0.7543             | 0.5535          | -349.7979      | -333.0171    | -1.0590         | -1.5759       |
| 0.1206        | 0.9174 | 400  | 0.1135          | -0.4357        | -0.9844          | 0.7500             | 0.5488          | -344.9655      | -328.6565    | -1.1351         | -1.6080       |

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1