flydust committed
Commit 590b033
Parent: caa244c

Model save

README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ base_model: Magpie-Align/Llama-3.1-8B-Magpie-Mix-300KMT-150KR
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: Llama-3.1-8B-Magpie-Pro-MTR-UltraDPO-1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/uw-nsl/huggingface/runs/ro30b4xx)
+ # Llama-3.1-8B-Magpie-Pro-MTR-UltraDPO-1
+
+ This model is a fine-tuned version of [Magpie-Align/Llama-3.1-8B-Magpie-Mix-300KMT-150KR](https://huggingface.co/Magpie-Align/Llama-3.1-8B-Magpie-Mix-300KMT-150KR) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3298
+ - Rewards/chosen: -4.9310
+ - Rewards/rejected: -6.7966
+ - Rewards/accuracies: 0.8952
+ - Rewards/margins: 1.8655
+ - Logps/rejected: -878.5105
+ - Logps/chosen: -698.1248
+ - Logits/rejected: -0.5776
+ - Logits/chosen: -0.5622
+
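+ (Rewards/margins is the chosen-minus-rejected gap: -4.9310 - (-6.7966) = 1.8656, which matches the reported 1.8655 up to rounding.)
+
+ A minimal inference sketch follows. The repo id is an assumption inferred from the model name on this card, and the sampling settings mirror the `generation_config.json` shipped in this commit; adjust both as needed.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Hypothetical repo id, inferred from the model name on this card.
+ model_id = "Magpie-Align/Llama-3.1-8B-Magpie-Pro-MTR-UltraDPO-1"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ # Sampling defaults match generation_config.json in this commit.
+ outputs = model.generate(inputs, max_new_tokens=256, do_sample=True,
+                          temperature=0.6, top_p=0.9)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+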
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch mapping them onto TRL follows the list):
+ - learning_rate: 1e-06
+ - train_batch_size: 2
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 256
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
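+ The training script itself isn't part of this commit; the following is a hedged sketch of how these hyperparameters map onto TRL's `DPOTrainer`. The dataset placeholder, TRL version, and `beta` (left at TRL's default) are assumptions, since the card doesn't state them.
+
+ ```python
+ # Sketch only, not the authors' script. Assumes a TRL release with DPOConfig
+ # (>= 0.9) and a preference dataset with prompt/chosen/rejected columns.
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import DPOConfig, DPOTrainer
+
+ base = "Magpie-Align/Llama-3.1-8B-Magpie-Mix-300KMT-150KR"
+ tokenizer = AutoTokenizer.from_pretrained(base)
+ model = AutoModelForCausalLM.from_pretrained(base)
+
+ # Placeholder: the card lists the training dataset as unknown.
+ dataset = load_dataset("your/preference-dataset", split="train")
+
+ args = DPOConfig(
+     output_dir="Llama-3.1-8B-Magpie-Pro-MTR-UltraDPO-1",
+     learning_rate=1e-6,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=4,
+     gradient_accumulation_steps=16,  # 2 per device x 8 GPUs x 16 = 256 effective
+     num_train_epochs=1,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     seed=42,
+ )
+
+ trainer = DPOTrainer(
+     model=model,
+     args=args,
+     train_dataset=dataset,
+     tokenizer=tokenizer,  # called `processing_class` in newer TRL releases
+ )
+ trainer.train()
+ ```
+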
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.4439 | 0.4275 | 100 | 0.4168 | -4.9964 | -6.3086 | 0.8145 | 1.3123 | -829.7151 | -704.6570 | -0.5150 | -0.5001 |
+ | 0.343 | 0.8549 | 200 | 0.3298 | -4.9310 | -6.7966 | 0.8952 | 1.8655 | -878.5105 | -698.1248 | -0.5776 | -0.5622 |
+
+
+ ### Framework versions
+
+ - Transformers 4.43.2
+ - PyTorch 2.3.1+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9959925193694897,
+     "total_flos": 0.0,
+     "train_loss": 0.45488236134655996,
+     "train_runtime": 10577.9564,
+     "train_samples": 59875,
+     "train_samples_per_second": 5.66,
+     "train_steps_per_second": 0.022
+ }
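As a cross-check (a sketch, not part of the original file), the reported throughput follows from the other fields: 59875 samples in 10577.96 seconds is about 5.66 samples/s, and dividing by the effective batch size of 256 from the README gives about 0.022 steps/s.

```python
# Sanity-check the reported throughput using only values from this commit.
train_runtime = 10577.9564    # seconds
train_samples = 59875
total_train_batch_size = 256  # 2 per device x 8 GPUs x 16 accumulation steps

print(round(train_samples / train_runtime, 2))                           # 5.66
print(round(train_samples / total_train_batch_size / train_runtime, 3))  # 0.022
```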
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "_from_model_config": true,
+     "bos_token_id": 128000,
+     "do_sample": true,
+     "eos_token_id": 128001,
+     "temperature": 0.6,
+     "top_p": 0.9,
+     "transformers_version": "4.43.2"
+ }
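These are the defaults that `generate()` falls back to when no sampling arguments are passed. A quick way to inspect them from the hub (same hypothetical repo id as in the README sketch above):

```python
from transformers import GenerationConfig

# Assumed repo id; point this at wherever the checkpoint is hosted.
gen_cfg = GenerationConfig.from_pretrained(
    "Magpie-Align/Llama-3.1-8B-Magpie-Pro-MTR-UltraDPO-1"
)
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p)  # True 0.6 0.9
```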
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9959925193694897,
+     "total_flos": 0.0,
+     "train_loss": 0.45488236134655996,
+     "train_runtime": 10577.9564,
+     "train_samples": 59875,
+     "train_samples_per_second": 5.66,
+     "train_steps_per_second": 0.022
+ }
trainer_state.json ADDED
The diff for this file is too large to render; see the raw file.