---
license: gemma
library_name: peft
tags:
- trl
- dpo
- llama-factory
- generated_from_trainer
base_model: google/gemma-7b-it
model-index:
- name: Gemma-7B-It-ORPO-SALT
results: []
---
# Gemma-7B-It-ORPO-SALT
This model is a fine-tuned [PEFT](https://huggingface.co/docs/peft) adapter for [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it); the training dataset is not specified in this card.
It achieves the following results on the evaluation set:
- Loss: 1.2657
- Rewards/chosen: -0.1198
- Rewards/rejected: -0.1438
- Rewards/accuracies: 0.5700
- Rewards/margins: 0.0239
- Logps/rejected: -1.4377
- Logps/chosen: -1.1983
- Logits/rejected: 253.9599
- Logits/chosen: 253.6037
- Sft Loss: 1.1983
- Odds Ratio Loss: 0.6746
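
For context, the reported validation loss is consistent with the ORPO objective, which adds a scaled odds-ratio term to the plain SFT loss. The following is a reconstruction from the numbers above; the weighting $\lambda \approx 0.1$ is inferred from the reported components, not confirmed by the card:

$$
\mathcal{L} = \mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}} \approx 1.1983 + 0.1 \times 0.6746 \approx 1.2657
$$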
## Model description
Per the card metadata, this is a PEFT adapter (likely LoRA) for `google/gemma-7b-it`, trained with ORPO-style preference optimization using LLaMA-Factory and TRL; the reported metrics include both an SFT loss and an odds-ratio loss, which is consistent with the ORPO objective. No further details have been provided.
## Intended uses & limitations
No specific intended uses or limitations have been documented. Per the `license: gemma` tag, the base model's Gemma terms of use apply. A minimal loading sketch is given below.
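
A minimal usage sketch, assuming the adapter is hosted at `chchen/Gemma-7B-It-ORPO-SALT` (the repo id is inferred from the card and may differ):

```python
# Minimal sketch: load the base model, then attach this PEFT adapter.
# The adapter repo id "chchen/Gemma-7B-It-ORPO-SALT" is an assumption; adjust as needed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "chchen/Gemma-7B-It-ORPO-SALT")

messages = [{"role": "user", "content": "Explain what ORPO training does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If this is a LoRA adapter, `model.merge_and_unload()` can fold the adapter weights into the base model for standalone inference.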
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1 (reported by the trainer as `lr_scheduler_warmup_steps: 0.1`; a fractional step count is presumably a warmup ratio)
- num_epochs: 3.0
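
A minimal sketch of equivalent `transformers.TrainingArguments`; the `output_dir` value and the warmup interpretation are assumptions, everything else mirrors the list above:

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# "outputs/gemma-7b-it-orpo-salt" is a placeholder output_dir, not from the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs/gemma-7b-it-orpo-salt",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size: 2 x 8 = 16
    seed=42,
    optim="adamw_torch",  # Adam with betas=(0.9, 0.999), eps=1e-8 (the defaults)
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,  # assuming the reported 0.1 is a ratio, not a step count
    num_train_epochs=3.0,
)
```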
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 1.374 | 0.8082 | 500 | 1.3436 | -0.1276 | -0.1503 | 0.5673 | 0.0227 | -1.5033 | -1.2762 | 249.9064 | 249.6123 | 1.2762 | 0.6738 |
| 1.1628 | 1.6165 | 1000 | 1.2833 | -0.1215 | -0.1446 | 0.5618 | 0.0231 | -1.4461 | -1.2153 | 253.1810 | 252.8272 | 1.2153 | 0.6796 |
| 1.1874 | 2.4247 | 1500 | 1.2657 | -0.1198 | -0.1438 | 0.5700 | 0.0239 | -1.4377 | -1.1983 | 253.9599 | 253.6037 | 1.1983 | 0.6746 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- PyTorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1