---
license: gemma
base_model: google/gemma-2-2b-it
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 20240818_gemma-2-2b-it_full-anonymous-617_dpo-sft_BFI
  results: []
---

# 20240818_gemma-2-2b-it_full-anonymous-617_dpo-sft_BFI

This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the BFI-anonymous-617_dpo_no-system_sharegpt_105 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8025
- Rewards/chosen: 8.8006
- Rewards/rejected: 3.5406
- Rewards/accuracies: 0.9247
- Rewards/margins: 5.2600
- Logps/rejected: -78.8782
- Logps/chosen: -118.9999
- Logits/rejected: -4.3666
- Logits/chosen: -4.5476

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 2.1444        | 0.4365 | 500  | 2.1669          | 6.0677         | 3.1466           | 0.9032             | 2.9211          | -82.8180       | -146.3293    | 0.2298          | 0.0904        |
| 1.9865        | 0.8731 | 1000 | 2.0409          | 6.5585         | 2.8550           | 0.8710             | 3.7036          | -85.7341       | -141.4206    | 1.2474          | 1.0619        |
| 1.4582        | 1.3096 | 1500 | 1.8728          | 8.1097         | 3.6131           | 0.9247             | 4.4967          | -78.1532       | -125.9085    | -0.5647         | -0.7531       |
| 1.4326        | 1.7462 | 2000 | 1.7866          | 8.5668         | 3.5837           | 0.9247             | 4.9831          | -78.4469       | -121.3381    | -1.9024         | -2.0860       |
| 1.0857        | 2.1827 | 2500 | 1.8025          | 8.6210         | 3.5507           | 0.9247             | 5.0704          | -78.7771       | -120.7956    | -3.4905         | -3.6840       |
| 1.0376        | 2.6192 | 3000 | 1.8047          | 8.7683         | 3.5103           | 0.9247             | 5.2580          | -79.1805       | -119.3225    | -4.2916         | -4.4732       |

### Framework versions

- Transformers 4.43.1
- Pytorch 2.3.0a0+ebedce2
- Datasets 2.20.0
- Tokenizers 0.19.1
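
## How to use

The template sections above are empty, so the following is only a minimal sketch: since this is a full fine-tune of `google/gemma-2-2b-it`, it should load through the standard `transformers` chat API. The repo id below is assumed from this card's name; point it at wherever the weights actually live. Gemma-2's chat template does not accept a `system` role, which is consistent with the "no-system" in the dataset name above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from this card's name; replace with the actual
# Hub id or local path of the fine-tuned checkpoint.
model_id = "20240818_gemma-2-2b-it_full-anonymous-617_dpo-sft_BFI"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# No system message: the Gemma-2 chat template only takes user/assistant turns.
messages = [{"role": "user", "content": "Hello! Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```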
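
## Hyperparameters in code (illustrative)

Per the tags, training was run with LLaMA-Factory, but the actual config file is not included in this card. Purely to illustrate how the hyperparameters listed above map onto a DPO run, here is a sketch using TRL's `DPOTrainer` instead, which is a different toolkit from the one actually used. The DPO `beta`, the output directory, and the dataset file are assumptions; everything else is transcribed from the list above (Adam betas and epsilon match the `Trainer` defaults, so they need no explicit setting).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hyperparameters transcribed from the card; with 4 GPUs this yields the
# reported total train batch size of 1 * 4 * 8 = 32.
config = DPOConfig(
    output_dir="gemma-2-2b-it-dpo",     # hypothetical output path
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    beta=0.1,                           # NOT reported in this card; TRL default shown
)

# The card's dataset is not public; a placeholder JSON file with TRL's
# prompt/chosen/rejected preference format stands in for it here.
dataset = load_dataset("json", data_files="dpo_pairs.json")["train"]

trainer = DPOTrainer(
    model=model,                        # ref model defaults to a frozen copy
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,                # `processing_class=` in newer TRL versions
)
trainer.train()
```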