NicholasCorrado committed on
Commit
adf1bdb
1 Parent(s): 1a63694

Model save

README.md ADDED
@@ -0,0 +1,74 @@
---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-uf-rlced-conifer-dpo-2e
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-uf-rlced-conifer-dpo-2e

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2471
- Rewards/chosen: -3.9472
- Rewards/rejected: -12.0338
- Rewards/accuracies: 0.8910
- Rewards/margins: 8.0866
- Logps/rejected: -1613.5016
- Logps/chosen: -779.3055
- Logits/rejected: 4.5606
- Logits/chosen: 2.4398

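For reference, TRL's DPO trainer logs `Rewards/margins` as the average gap between the chosen and rejected rewards, so the evaluation figures above are internally consistent:

Rewards/margins = Rewards/chosen - Rewards/rejected = -3.9472 - (-12.0338) = 8.0866
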
## Model description

More information needed

## Intended uses & limitations

More information needed

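Since the sections above are still placeholders, here is a minimal inference sketch. It assumes the checkpoint is published as `NicholasCorrado/zephyr-7b-uf-rlced-conifer-dpo-2e` (inferred from the committer and the model name, not stated in this card) and that the tokenizer carries Zephyr's chat template over from the SFT base model.

```python
# Minimal inference sketch (untested); the repo id is an assumption.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="NicholasCorrado/zephyr-7b-uf-rlced-conifer-dpo-2e",  # assumed repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO fine-tuning in one paragraph."},
]
# Build the prompt with the chat template inherited from zephyr-7b-sft-full.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```
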
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a TRL configuration sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2

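The training script itself is not part of this commit. Given the `trl` and `dpo` tags, the hyperparameters above could map onto TRL's `DPOConfig`/`DPOTrainer` roughly as sketched below; the dataset, `beta`, and sequence-length settings are placeholders, not values documented here.

```python
# Hypothetical reconstruction of the DPO setup implied by the hyperparameter
# list above; the real script, dataset mixture, and beta are not in this repo.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

args = DPOConfig(
    output_dir="zephyr-7b-uf-rlced-conifer-dpo-2e",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # "train_batch_size" above is per device
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 GPUs x 8 per device x 4 steps = 256 effective
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,
)

# Stand-in preference dataset with prompt/chosen/rejected columns; the card
# only says the model was trained on "an unknown dataset".
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # TRL builds a frozen reference copy when omitted
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,     # newer TRL releases take `processing_class` instead
)
trainer.train()
```
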
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1587 | 1.3879 | 1000 | 0.2471 | -3.9472 | -12.0338 | 0.8910 | 8.0866 | -1613.5016 | -779.3055 | 4.5606 | 2.4398 |

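The single evaluation row reflects `eval_steps: 1000` from the trainer state: at an effective batch size of 256, step 1000 corresponds to 256,000 of the 184,443 training samples, i.e. epoch ≈ 1.39, matching the value reported above.
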
### Framework versions

- Transformers 4.44.1
- Pytorch 2.1.2+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 1.9986120749479528,
    "total_flos": 0.0,
    "train_loss": 0.25082930790053476,
    "train_runtime": 42170.4729,
    "train_samples": 184443,
    "train_samples_per_second": 8.747,
    "train_steps_per_second": 0.034
}
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "transformers_version": "4.44.1"
}
train_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 1.9986120749479528,
    "total_flos": 0.0,
    "train_loss": 0.25082930790053476,
    "train_runtime": 42170.4729,
    "train_samples": 184443,
    "train_samples_per_second": 8.747,
    "train_steps_per_second": 0.034
}
trainer_state.json ADDED
@@ -0,0 +1,2233 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.9986120749479528,
5
+ "eval_steps": 1000,
6
+ "global_step": 1440,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0013879250520471894,
13
+ "grad_norm": 7.310771886732375,
14
+ "learning_rate": 3.4722222222222217e-09,
15
+ "logits/chosen": -2.658149242401123,
16
+ "logits/rejected": -2.6729652881622314,
17
+ "logps/chosen": -310.6693115234375,
18
+ "logps/rejected": -336.3360595703125,
19
+ "loss": 0.6931,
20
+ "rewards/accuracies": 0.0,
21
+ "rewards/chosen": 0.0,
22
+ "rewards/margins": 0.0,
23
+ "rewards/rejected": 0.0,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.013879250520471894,
28
+ "grad_norm": 6.989456718606319,
29
+ "learning_rate": 3.472222222222222e-08,
30
+ "logits/chosen": -2.7207493782043457,
31
+ "logits/rejected": -2.678452968597412,
32
+ "logps/chosen": -329.1288146972656,
33
+ "logps/rejected": -334.9566650390625,
34
+ "loss": 0.6932,
35
+ "rewards/accuracies": 0.4236111044883728,
36
+ "rewards/chosen": -0.00011655675189103931,
37
+ "rewards/margins": -0.00021387077867984772,
38
+ "rewards/rejected": 9.731399040902033e-05,
39
+ "step": 10
40
+ },
41
+ {
42
+ "epoch": 0.027758501040943788,
43
+ "grad_norm": 7.28858517103727,
44
+ "learning_rate": 6.944444444444444e-08,
45
+ "logits/chosen": -2.690325975418091,
46
+ "logits/rejected": -2.6607680320739746,
47
+ "logps/chosen": -317.3731384277344,
48
+ "logps/rejected": -331.0643005371094,
49
+ "loss": 0.6928,
50
+ "rewards/accuracies": 0.5375000238418579,
51
+ "rewards/chosen": 0.00018671144789550453,
52
+ "rewards/margins": 0.00033597589936107397,
53
+ "rewards/rejected": -0.00014926446601748466,
54
+ "step": 20
55
+ },
56
+ {
57
+ "epoch": 0.041637751561415685,
58
+ "grad_norm": 7.1293734693123785,
59
+ "learning_rate": 1.0416666666666667e-07,
60
+ "logits/chosen": -2.6790878772735596,
61
+ "logits/rejected": -2.645470142364502,
62
+ "logps/chosen": -351.1817321777344,
63
+ "logps/rejected": -351.5977478027344,
64
+ "loss": 0.6914,
65
+ "rewards/accuracies": 0.690625011920929,
66
+ "rewards/chosen": 0.0023157999385148287,
67
+ "rewards/margins": 0.004029616713523865,
68
+ "rewards/rejected": -0.0017138172406703234,
69
+ "step": 30
70
+ },
71
+ {
72
+ "epoch": 0.055517002081887576,
73
+ "grad_norm": 7.102945109434136,
74
+ "learning_rate": 1.3888888888888888e-07,
75
+ "logits/chosen": -2.698319435119629,
76
+ "logits/rejected": -2.6377086639404297,
77
+ "logps/chosen": -352.13153076171875,
78
+ "logps/rejected": -337.7492370605469,
79
+ "loss": 0.6867,
80
+ "rewards/accuracies": 0.762499988079071,
81
+ "rewards/chosen": 0.010404938831925392,
82
+ "rewards/margins": 0.012554061599075794,
83
+ "rewards/rejected": -0.0021491218358278275,
84
+ "step": 40
85
+ },
86
+ {
87
+ "epoch": 0.06939625260235947,
88
+ "grad_norm": 6.472817798536912,
89
+ "learning_rate": 1.736111111111111e-07,
90
+ "logits/chosen": -2.6700968742370605,
91
+ "logits/rejected": -2.656541347503662,
92
+ "logps/chosen": -325.9363708496094,
93
+ "logps/rejected": -359.7778625488281,
94
+ "loss": 0.6778,
95
+ "rewards/accuracies": 0.7906249761581421,
96
+ "rewards/chosen": 0.026792842894792557,
97
+ "rewards/margins": 0.031667567789554596,
98
+ "rewards/rejected": -0.004874727688729763,
99
+ "step": 50
100
+ },
101
+ {
102
+ "epoch": 0.08327550312283137,
103
+ "grad_norm": 7.812961434750235,
104
+ "learning_rate": 2.0833333333333333e-07,
105
+ "logits/chosen": -2.7250404357910156,
106
+ "logits/rejected": -2.680410861968994,
107
+ "logps/chosen": -325.12628173828125,
108
+ "logps/rejected": -345.4599914550781,
109
+ "loss": 0.6575,
110
+ "rewards/accuracies": 0.815625011920929,
111
+ "rewards/chosen": 0.05238135904073715,
112
+ "rewards/margins": 0.06940947473049164,
113
+ "rewards/rejected": -0.017028113827109337,
114
+ "step": 60
115
+ },
116
+ {
117
+ "epoch": 0.09715475364330327,
118
+ "grad_norm": 8.252964521813333,
119
+ "learning_rate": 2.4305555555555555e-07,
120
+ "logits/chosen": -2.6680197715759277,
121
+ "logits/rejected": -2.6488897800445557,
122
+ "logps/chosen": -336.755615234375,
123
+ "logps/rejected": -372.3600158691406,
124
+ "loss": 0.6231,
125
+ "rewards/accuracies": 0.809374988079071,
126
+ "rewards/chosen": 0.04352443665266037,
127
+ "rewards/margins": 0.14054368436336517,
128
+ "rewards/rejected": -0.0970192551612854,
129
+ "step": 70
130
+ },
131
+ {
132
+ "epoch": 0.11103400416377515,
133
+ "grad_norm": 10.907322040862939,
134
+ "learning_rate": 2.7777777777777776e-07,
135
+ "logits/chosen": -2.6945643424987793,
136
+ "logits/rejected": -2.635817050933838,
137
+ "logps/chosen": -345.4059143066406,
138
+ "logps/rejected": -395.4034423828125,
139
+ "loss": 0.5649,
140
+ "rewards/accuracies": 0.8374999761581421,
141
+ "rewards/chosen": -0.05337335914373398,
142
+ "rewards/margins": 0.348626971244812,
143
+ "rewards/rejected": -0.402000367641449,
144
+ "step": 80
145
+ },
146
+ {
147
+ "epoch": 0.12491325468424705,
148
+ "grad_norm": 14.954457613666033,
149
+ "learning_rate": 3.1249999999999997e-07,
150
+ "logits/chosen": -2.7117972373962402,
151
+ "logits/rejected": -2.659916877746582,
152
+ "logps/chosen": -357.34344482421875,
153
+ "logps/rejected": -424.046875,
154
+ "loss": 0.4833,
155
+ "rewards/accuracies": 0.815625011920929,
156
+ "rewards/chosen": -0.24065232276916504,
157
+ "rewards/margins": 0.6489619016647339,
158
+ "rewards/rejected": -0.8896142840385437,
159
+ "step": 90
160
+ },
161
+ {
162
+ "epoch": 0.13879250520471895,
163
+ "grad_norm": 18.8809663980661,
164
+ "learning_rate": 3.472222222222222e-07,
165
+ "logits/chosen": -2.691723108291626,
166
+ "logits/rejected": -2.676143169403076,
167
+ "logps/chosen": -429.1051330566406,
168
+ "logps/rejected": -508.2496032714844,
169
+ "loss": 0.4648,
170
+ "rewards/accuracies": 0.8062499761581421,
171
+ "rewards/chosen": -0.836562991142273,
172
+ "rewards/margins": 0.7794925570487976,
173
+ "rewards/rejected": -1.6160557270050049,
174
+ "step": 100
175
+ },
176
+ {
177
+ "epoch": 0.15267175572519084,
178
+ "grad_norm": 17.153622207375392,
179
+ "learning_rate": 3.819444444444444e-07,
180
+ "logits/chosen": -2.7029433250427246,
181
+ "logits/rejected": -2.6785738468170166,
182
+ "logps/chosen": -463.09588623046875,
183
+ "logps/rejected": -551.4292602539062,
184
+ "loss": 0.4393,
185
+ "rewards/accuracies": 0.78125,
186
+ "rewards/chosen": -1.1502994298934937,
187
+ "rewards/margins": 0.9367601275444031,
188
+ "rewards/rejected": -2.087059736251831,
189
+ "step": 110
190
+ },
191
+ {
192
+ "epoch": 0.16655100624566274,
193
+ "grad_norm": 17.322428984395128,
194
+ "learning_rate": 4.1666666666666667e-07,
195
+ "logits/chosen": -2.740248203277588,
196
+ "logits/rejected": -2.715928554534912,
197
+ "logps/chosen": -466.61614990234375,
198
+ "logps/rejected": -588.0739135742188,
199
+ "loss": 0.416,
200
+ "rewards/accuracies": 0.746874988079071,
201
+ "rewards/chosen": -1.314789891242981,
202
+ "rewards/margins": 1.0913175344467163,
203
+ "rewards/rejected": -2.4061074256896973,
204
+ "step": 120
205
+ },
206
+ {
207
+ "epoch": 0.18043025676613464,
208
+ "grad_norm": 17.727320951964273,
209
+ "learning_rate": 4.513888888888889e-07,
210
+ "logits/chosen": -2.732884168624878,
211
+ "logits/rejected": -2.693039894104004,
212
+ "logps/chosen": -470.36712646484375,
213
+ "logps/rejected": -637.1500854492188,
214
+ "loss": 0.3922,
215
+ "rewards/accuracies": 0.831250011920929,
216
+ "rewards/chosen": -1.2835638523101807,
217
+ "rewards/margins": 1.5643529891967773,
218
+ "rewards/rejected": -2.847916841506958,
219
+ "step": 130
220
+ },
221
+ {
222
+ "epoch": 0.19430950728660654,
223
+ "grad_norm": 23.308808504878304,
224
+ "learning_rate": 4.861111111111111e-07,
225
+ "logits/chosen": -2.410512924194336,
226
+ "logits/rejected": -2.2582736015319824,
227
+ "logps/chosen": -466.9541931152344,
228
+ "logps/rejected": -714.3003540039062,
229
+ "loss": 0.355,
230
+ "rewards/accuracies": 0.846875011920929,
231
+ "rewards/chosen": -1.4372889995574951,
232
+ "rewards/margins": 2.2647476196289062,
233
+ "rewards/rejected": -3.7020363807678223,
234
+ "step": 140
235
+ },
236
+ {
237
+ "epoch": 0.2081887578070784,
238
+ "grad_norm": 15.792878048667099,
239
+ "learning_rate": 4.999735579817769e-07,
240
+ "logits/chosen": -1.8029206991195679,
241
+ "logits/rejected": -1.4160174131393433,
242
+ "logps/chosen": -489.02960205078125,
243
+ "logps/rejected": -732.9075927734375,
244
+ "loss": 0.3505,
245
+ "rewards/accuracies": 0.856249988079071,
246
+ "rewards/chosen": -1.6447805166244507,
247
+ "rewards/margins": 2.2803826332092285,
248
+ "rewards/rejected": -3.9251632690429688,
249
+ "step": 150
250
+ },
251
+ {
252
+ "epoch": 0.2220680083275503,
253
+ "grad_norm": 27.767234596146235,
254
+ "learning_rate": 4.998119881260575e-07,
255
+ "logits/chosen": -1.729731559753418,
256
+ "logits/rejected": -1.0307753086090088,
257
+ "logps/chosen": -489.806396484375,
258
+ "logps/rejected": -750.17333984375,
259
+ "loss": 0.3419,
260
+ "rewards/accuracies": 0.809374988079071,
261
+ "rewards/chosen": -1.7482877969741821,
262
+ "rewards/margins": 2.432042121887207,
263
+ "rewards/rejected": -4.1803297996521,
264
+ "step": 160
265
+ },
266
+ {
267
+ "epoch": 0.2359472588480222,
268
+ "grad_norm": 16.170525819937215,
269
+ "learning_rate": 4.995036332451857e-07,
270
+ "logits/chosen": -1.9091663360595703,
271
+ "logits/rejected": -0.9273399114608765,
272
+ "logps/chosen": -495.46661376953125,
273
+ "logps/rejected": -758.0303955078125,
274
+ "loss": 0.3289,
275
+ "rewards/accuracies": 0.831250011920929,
276
+ "rewards/chosen": -1.4501034021377563,
277
+ "rewards/margins": 2.5703396797180176,
278
+ "rewards/rejected": -4.020443439483643,
279
+ "step": 170
280
+ },
281
+ {
282
+ "epoch": 0.2498265093684941,
283
+ "grad_norm": 19.933464291596422,
284
+ "learning_rate": 4.990486745229364e-07,
285
+ "logits/chosen": -1.4646329879760742,
286
+ "logits/rejected": -0.523992121219635,
287
+ "logps/chosen": -475.5484924316406,
288
+ "logps/rejected": -712.8101806640625,
289
+ "loss": 0.3238,
290
+ "rewards/accuracies": 0.8218749761581421,
291
+ "rewards/chosen": -1.4877039194107056,
292
+ "rewards/margins": 2.2666311264038086,
293
+ "rewards/rejected": -3.7543349266052246,
294
+ "step": 180
295
+ },
296
+ {
297
+ "epoch": 0.263705759888966,
298
+ "grad_norm": 13.788976405207086,
299
+ "learning_rate": 4.984473792848607e-07,
300
+ "logits/chosen": -1.0008184909820557,
301
+ "logits/rejected": 0.3863092064857483,
302
+ "logps/chosen": -459.58367919921875,
303
+ "logps/rejected": -734.6851196289062,
304
+ "loss": 0.3131,
305
+ "rewards/accuracies": 0.84375,
306
+ "rewards/chosen": -1.3406341075897217,
307
+ "rewards/margins": 2.7795493602752686,
308
+ "rewards/rejected": -4.120182991027832,
309
+ "step": 190
310
+ },
311
+ {
312
+ "epoch": 0.2775850104094379,
313
+ "grad_norm": 20.384550977715232,
314
+ "learning_rate": 4.977001008412112e-07,
315
+ "logits/chosen": -0.7814763188362122,
316
+ "logits/rejected": 0.5796900987625122,
317
+ "logps/chosen": -522.8018798828125,
318
+ "logps/rejected": -799.6663208007812,
319
+ "loss": 0.333,
320
+ "rewards/accuracies": 0.828125,
321
+ "rewards/chosen": -1.9407739639282227,
322
+ "rewards/margins": 2.682741165161133,
323
+ "rewards/rejected": -4.6235151290893555,
324
+ "step": 200
325
+ },
326
+ {
327
+ "epoch": 0.2914642609299098,
328
+ "grad_norm": 18.232716128762664,
329
+ "learning_rate": 4.968072782793435e-07,
330
+ "logits/chosen": -0.9184268116950989,
331
+ "logits/rejected": 0.4480651319026947,
332
+ "logps/chosen": -499.62127685546875,
333
+ "logps/rejected": -834.462890625,
334
+ "loss": 0.3089,
335
+ "rewards/accuracies": 0.846875011920929,
336
+ "rewards/chosen": -1.7784448862075806,
337
+ "rewards/margins": 3.2359156608581543,
338
+ "rewards/rejected": -5.0143609046936035,
339
+ "step": 210
340
+ },
341
+ {
342
+ "epoch": 0.3053435114503817,
343
+ "grad_norm": 19.076527739229473,
344
+ "learning_rate": 4.957694362057149e-07,
345
+ "logits/chosen": -0.9672171473503113,
346
+ "logits/rejected": 0.16848711669445038,
347
+ "logps/chosen": -560.64453125,
348
+ "logps/rejected": -855.81591796875,
349
+ "loss": 0.3149,
350
+ "rewards/accuracies": 0.846875011920929,
351
+ "rewards/chosen": -1.9354803562164307,
352
+ "rewards/margins": 3.0875911712646484,
353
+ "rewards/rejected": -5.0230712890625,
354
+ "step": 220
355
+ },
356
+ {
357
+ "epoch": 0.3192227619708536,
358
+ "grad_norm": 16.258599026908875,
359
+ "learning_rate": 4.945871844376368e-07,
360
+ "logits/chosen": -1.0649207830429077,
361
+ "logits/rejected": 0.31253132224082947,
362
+ "logps/chosen": -538.4542846679688,
363
+ "logps/rejected": -874.3626098632812,
364
+ "loss": 0.3188,
365
+ "rewards/accuracies": 0.8968750238418579,
366
+ "rewards/chosen": -1.9632246494293213,
367
+ "rewards/margins": 3.402304172515869,
368
+ "rewards/rejected": -5.365528583526611,
369
+ "step": 230
370
+ },
371
+ {
372
+ "epoch": 0.3331020124913255,
373
+ "grad_norm": 18.705612187457067,
374
+ "learning_rate": 4.932612176449559e-07,
375
+ "logits/chosen": -0.9302694201469421,
376
+ "logits/rejected": 0.36828285455703735,
377
+ "logps/chosen": -505.27606201171875,
378
+ "logps/rejected": -816.0523681640625,
379
+ "loss": 0.3025,
380
+ "rewards/accuracies": 0.859375,
381
+ "rewards/chosen": -1.567478895187378,
382
+ "rewards/margins": 2.9575300216674805,
383
+ "rewards/rejected": -4.525008678436279,
384
+ "step": 240
385
+ },
386
+ {
387
+ "epoch": 0.3469812630117974,
388
+ "grad_norm": 24.629743659903227,
389
+ "learning_rate": 4.917923149418791e-07,
390
+ "logits/chosen": 0.6533899307250977,
391
+ "logits/rejected": 1.9370393753051758,
392
+ "logps/chosen": -599.1996459960938,
393
+ "logps/rejected": -975.5636596679688,
394
+ "loss": 0.2998,
395
+ "rewards/accuracies": 0.84375,
396
+ "rewards/chosen": -2.498326301574707,
397
+ "rewards/margins": 3.716355562210083,
398
+ "rewards/rejected": -6.214681625366211,
399
+ "step": 250
400
+ },
401
+ {
402
+ "epoch": 0.3608605135322693,
403
+ "grad_norm": 17.05308334694298,
404
+ "learning_rate": 4.901813394291801e-07,
405
+ "logits/chosen": -0.2267475426197052,
406
+ "logits/rejected": 1.061628818511963,
407
+ "logps/chosen": -510.58990478515625,
408
+ "logps/rejected": -825.64892578125,
409
+ "loss": 0.3068,
410
+ "rewards/accuracies": 0.8343750238418579,
411
+ "rewards/chosen": -1.8009761571884155,
412
+ "rewards/margins": 3.0452382564544678,
413
+ "rewards/rejected": -4.846214771270752,
414
+ "step": 260
415
+ },
416
+ {
417
+ "epoch": 0.3747397640527412,
418
+ "grad_norm": 16.458651892780324,
419
+ "learning_rate": 4.884292376870567e-07,
420
+ "logits/chosen": -0.5495095252990723,
421
+ "logits/rejected": 0.9298044443130493,
422
+ "logps/chosen": -518.9110107421875,
423
+ "logps/rejected": -838.7022705078125,
424
+ "loss": 0.3019,
425
+ "rewards/accuracies": 0.8218749761581421,
426
+ "rewards/chosen": -1.7417328357696533,
427
+ "rewards/margins": 3.202885866165161,
428
+ "rewards/rejected": -4.944618225097656,
429
+ "step": 270
430
+ },
431
+ {
432
+ "epoch": 0.3886190145732131,
433
+ "grad_norm": 17.412372040942056,
434
+ "learning_rate": 4.865370392189376e-07,
435
+ "logits/chosen": -1.044090986251831,
436
+ "logits/rejected": 0.6990992426872253,
437
+ "logps/chosen": -521.0147094726562,
438
+ "logps/rejected": -908.5431518554688,
439
+ "loss": 0.3086,
440
+ "rewards/accuracies": 0.8374999761581421,
441
+ "rewards/chosen": -1.7731910943984985,
442
+ "rewards/margins": 3.855863571166992,
443
+ "rewards/rejected": -5.629055023193359,
444
+ "step": 280
445
+ },
446
+ {
447
+ "epoch": 0.4024982650936849,
448
+ "grad_norm": 18.962541699001296,
449
+ "learning_rate": 4.845058558465645e-07,
450
+ "logits/chosen": 0.2687569260597229,
451
+ "logits/rejected": 1.7490098476409912,
452
+ "logps/chosen": -562.8821411132812,
453
+ "logps/rejected": -911.21142578125,
454
+ "loss": 0.2941,
455
+ "rewards/accuracies": 0.862500011920929,
456
+ "rewards/chosen": -2.1526246070861816,
457
+ "rewards/margins": 3.554647445678711,
458
+ "rewards/rejected": -5.707272052764893,
459
+ "step": 290
460
+ },
461
+ {
462
+ "epoch": 0.4163775156141568,
463
+ "grad_norm": 18.283817174579646,
464
+ "learning_rate": 4.823368810567056e-07,
465
+ "logits/chosen": 0.27320951223373413,
466
+ "logits/rejected": 1.6727936267852783,
467
+ "logps/chosen": -508.94744873046875,
468
+ "logps/rejected": -807.9446411132812,
469
+ "loss": 0.3081,
470
+ "rewards/accuracies": 0.862500011920929,
471
+ "rewards/chosen": -1.861353874206543,
472
+ "rewards/margins": 3.0898642539978027,
473
+ "rewards/rejected": -4.951218605041504,
474
+ "step": 300
475
+ },
476
+ {
477
+ "epoch": 0.4302567661346287,
478
+ "grad_norm": 15.422967764865048,
479
+ "learning_rate": 4.800313892998847e-07,
480
+ "logits/chosen": -0.3293236196041107,
481
+ "logits/rejected": 1.3189566135406494,
482
+ "logps/chosen": -509.6499938964844,
483
+ "logps/rejected": -856.3865966796875,
484
+ "loss": 0.2919,
485
+ "rewards/accuracies": 0.856249988079071,
486
+ "rewards/chosen": -1.683760404586792,
487
+ "rewards/margins": 3.3120670318603516,
488
+ "rewards/rejected": -4.995827674865723,
489
+ "step": 310
490
+ },
491
+ {
492
+ "epoch": 0.4441360166551006,
493
+ "grad_norm": 18.804961569512887,
494
+ "learning_rate": 4.775907352415367e-07,
495
+ "logits/chosen": 0.22868101298809052,
496
+ "logits/rejected": 1.9417445659637451,
497
+ "logps/chosen": -556.9364013671875,
498
+ "logps/rejected": -950.5734252929688,
499
+ "loss": 0.2808,
500
+ "rewards/accuracies": 0.824999988079071,
501
+ "rewards/chosen": -1.9100916385650635,
502
+ "rewards/margins": 4.022894859313965,
503
+ "rewards/rejected": -5.932986259460449,
504
+ "step": 320
505
+ },
506
+ {
507
+ "epoch": 0.4580152671755725,
508
+ "grad_norm": 17.099019764148846,
509
+ "learning_rate": 4.7501635296603025e-07,
510
+ "logits/chosen": 1.049659013748169,
511
+ "logits/rejected": 2.4073307514190674,
512
+ "logps/chosen": -531.3202514648438,
513
+ "logps/rejected": -922.6915893554688,
514
+ "loss": 0.292,
515
+ "rewards/accuracies": 0.8500000238418579,
516
+ "rewards/chosen": -1.8814361095428467,
517
+ "rewards/margins": 3.822434902191162,
518
+ "rewards/rejected": -5.7038702964782715,
519
+ "step": 330
520
+ },
521
+ {
522
+ "epoch": 0.4718945176960444,
523
+ "grad_norm": 18.898740164063316,
524
+ "learning_rate": 4.723097551340265e-07,
525
+ "logits/chosen": 1.2799204587936401,
526
+ "logits/rejected": 2.3121683597564697,
527
+ "logps/chosen": -494.0704040527344,
528
+ "logps/rejected": -854.8629760742188,
529
+ "loss": 0.292,
530
+ "rewards/accuracies": 0.8656250238418579,
531
+ "rewards/chosen": -1.7092489004135132,
532
+ "rewards/margins": 3.4398162364959717,
533
+ "rewards/rejected": -5.149065971374512,
534
+ "step": 340
535
+ },
536
+ {
537
+ "epoch": 0.4857737682165163,
538
+ "grad_norm": 20.5834993963944,
539
+ "learning_rate": 4.6947253209366613e-07,
540
+ "logits/chosen": 1.0228136777877808,
541
+ "logits/rejected": 2.397416114807129,
542
+ "logps/chosen": -545.7423706054688,
543
+ "logps/rejected": -912.7175903320312,
544
+ "loss": 0.2827,
545
+ "rewards/accuracies": 0.8500000238418579,
546
+ "rewards/chosen": -2.025628089904785,
547
+ "rewards/margins": 3.6154162883758545,
548
+ "rewards/rejected": -5.6410441398620605,
549
+ "step": 350
550
+ },
551
+ {
552
+ "epoch": 0.4996530187369882,
553
+ "grad_norm": 14.253483616771307,
554
+ "learning_rate": 4.6650635094610966e-07,
555
+ "logits/chosen": 1.0244512557983398,
556
+ "logits/rejected": 2.2805395126342773,
557
+ "logps/chosen": -517.4966430664062,
558
+ "logps/rejected": -885.3656005859375,
559
+ "loss": 0.2767,
560
+ "rewards/accuracies": 0.893750011920929,
561
+ "rewards/chosen": -1.9133589267730713,
562
+ "rewards/margins": 3.5970351696014404,
563
+ "rewards/rejected": -5.510394096374512,
564
+ "step": 360
565
+ },
566
+ {
567
+ "epoch": 0.5135322692574601,
568
+ "grad_norm": 23.570388915292295,
569
+ "learning_rate": 4.6341295456597906e-07,
570
+ "logits/chosen": 0.6538206338882446,
571
+ "logits/rejected": 1.800736665725708,
572
+ "logps/chosen": -486.4225158691406,
573
+ "logps/rejected": -818.6160278320312,
574
+ "loss": 0.2972,
575
+ "rewards/accuracies": 0.840624988079071,
576
+ "rewards/chosen": -1.574295997619629,
577
+ "rewards/margins": 3.247091770172119,
578
+ "rewards/rejected": -4.821388244628906,
579
+ "step": 370
580
+ },
581
+ {
582
+ "epoch": 0.527411519777932,
583
+ "grad_norm": 22.92056805222348,
584
+ "learning_rate": 4.6019416057727577e-07,
585
+ "logits/chosen": 0.5617297291755676,
586
+ "logits/rejected": 1.9416172504425049,
587
+ "logps/chosen": -554.2542724609375,
588
+ "logps/rejected": -1017.8567504882812,
589
+ "loss": 0.2817,
590
+ "rewards/accuracies": 0.856249988079071,
591
+ "rewards/chosen": -1.9359180927276611,
592
+ "rewards/margins": 4.510493278503418,
593
+ "rewards/rejected": -6.4464111328125,
594
+ "step": 380
595
+ },
596
+ {
597
+ "epoch": 0.5412907702984039,
598
+ "grad_norm": 21.96046117437538,
599
+ "learning_rate": 4.5685186028537756e-07,
600
+ "logits/chosen": 0.08615957945585251,
601
+ "logits/rejected": 1.9250423908233643,
602
+ "logps/chosen": -521.3770751953125,
603
+ "logps/rejected": -1026.9183349609375,
604
+ "loss": 0.2752,
605
+ "rewards/accuracies": 0.903124988079071,
606
+ "rewards/chosen": -1.688780426979065,
607
+ "rewards/margins": 4.929998397827148,
608
+ "rewards/rejected": -6.618779182434082,
609
+ "step": 390
610
+ },
611
+ {
612
+ "epoch": 0.5551700208188758,
613
+ "grad_norm": 16.35709631938306,
614
+ "learning_rate": 4.5338801756574185e-07,
615
+ "logits/chosen": 0.8631726503372192,
616
+ "logits/rejected": 2.426805019378662,
617
+ "logps/chosen": -550.8397216796875,
618
+ "logps/rejected": -960.1754760742188,
619
+ "loss": 0.2975,
620
+ "rewards/accuracies": 0.8500000238418579,
621
+ "rewards/chosen": -1.955361008644104,
622
+ "rewards/margins": 4.027670860290527,
623
+ "rewards/rejected": -5.983031749725342,
624
+ "step": 400
625
+ },
626
+ {
627
+ "epoch": 0.5690492713393477,
628
+ "grad_norm": 14.715456208460227,
629
+ "learning_rate": 4.498046677099674e-07,
630
+ "logits/chosen": 0.137635737657547,
631
+ "logits/rejected": 1.9270877838134766,
632
+ "logps/chosen": -470.42767333984375,
633
+ "logps/rejected": -795.9421997070312,
634
+ "loss": 0.2826,
635
+ "rewards/accuracies": 0.8374999761581421,
636
+ "rewards/chosen": -1.4737566709518433,
637
+ "rewards/margins": 3.3337607383728027,
638
+ "rewards/rejected": -4.807517051696777,
639
+ "step": 410
640
+ },
641
+ {
642
+ "epoch": 0.5829285218598196,
643
+ "grad_norm": 15.380893236552625,
644
+ "learning_rate": 4.461039162298939e-07,
645
+ "logits/chosen": 1.0668681859970093,
646
+ "logits/rejected": 2.6232194900512695,
647
+ "logps/chosen": -546.1395874023438,
648
+ "logps/rejected": -970.1839599609375,
649
+ "loss": 0.271,
650
+ "rewards/accuracies": 0.8687499761581421,
651
+ "rewards/chosen": -1.9840848445892334,
652
+ "rewards/margins": 4.236409664154053,
653
+ "rewards/rejected": -6.220494270324707,
654
+ "step": 420
655
+ },
656
+ {
657
+ "epoch": 0.5968077723802915,
658
+ "grad_norm": 20.12909286909629,
659
+ "learning_rate": 4.4228793762044126e-07,
660
+ "logits/chosen": 1.8645107746124268,
661
+ "logits/rejected": 3.282402515411377,
662
+ "logps/chosen": -556.1341552734375,
663
+ "logps/rejected": -999.4012451171875,
664
+ "loss": 0.286,
665
+ "rewards/accuracies": 0.859375,
666
+ "rewards/chosen": -2.260993719100952,
667
+ "rewards/margins": 4.435044288635254,
668
+ "rewards/rejected": -6.696038246154785,
669
+ "step": 430
670
+ },
671
+ {
672
+ "epoch": 0.6106870229007634,
673
+ "grad_norm": 16.373631758502654,
674
+ "learning_rate": 4.3835897408191513e-07,
675
+ "logits/chosen": 1.3662347793579102,
676
+ "logits/rejected": 2.8811306953430176,
677
+ "logps/chosen": -512.532958984375,
678
+ "logps/rejected": -906.1589965820312,
679
+ "loss": 0.2862,
680
+ "rewards/accuracies": 0.875,
681
+ "rewards/chosen": -1.7359873056411743,
682
+ "rewards/margins": 3.9433860778808594,
683
+ "rewards/rejected": -5.679372787475586,
684
+ "step": 440
685
+ },
686
+ {
687
+ "epoch": 0.6245662734212353,
688
+ "grad_norm": 20.352147452552153,
689
+ "learning_rate": 4.34319334202531e-07,
690
+ "logits/chosen": 1.8553016185760498,
691
+ "logits/rejected": 3.245082139968872,
692
+ "logps/chosen": -553.4808349609375,
693
+ "logps/rejected": -1043.968505859375,
694
+ "loss": 0.2671,
695
+ "rewards/accuracies": 0.8374999761581421,
696
+ "rewards/chosen": -2.185727834701538,
697
+ "rewards/margins": 4.84250545501709,
698
+ "rewards/rejected": -7.028233528137207,
699
+ "step": 450
700
+ },
701
+ {
702
+ "epoch": 0.6384455239417072,
703
+ "grad_norm": 14.530740198443842,
704
+ "learning_rate": 4.301713916019286e-07,
705
+ "logits/chosen": 1.8998327255249023,
706
+ "logits/rejected": 3.121591091156006,
707
+ "logps/chosen": -545.904296875,
708
+ "logps/rejected": -980.6796875,
709
+ "loss": 0.277,
710
+ "rewards/accuracies": 0.8500000238418579,
711
+ "rewards/chosen": -1.9527307748794556,
712
+ "rewards/margins": 4.239319324493408,
713
+ "rewards/rejected": -6.192049980163574,
714
+ "step": 460
715
+ },
716
+ {
717
+ "epoch": 0.6523247744621791,
718
+ "grad_norm": 14.543324773787445,
719
+ "learning_rate": 4.2591758353647643e-07,
720
+ "logits/chosen": 1.0112075805664062,
721
+ "logits/rejected": 2.648533821105957,
722
+ "logps/chosen": -551.0354614257812,
723
+ "logps/rejected": -958.7569580078125,
724
+ "loss": 0.2778,
725
+ "rewards/accuracies": 0.8656250238418579,
726
+ "rewards/chosen": -1.892492651939392,
727
+ "rewards/margins": 4.073439598083496,
728
+ "rewards/rejected": -5.965932846069336,
729
+ "step": 470
730
+ },
731
+ {
732
+ "epoch": 0.666204024982651,
733
+ "grad_norm": 20.665751040711644,
734
+ "learning_rate": 4.2156040946718343e-07,
735
+ "logits/chosen": 1.5330281257629395,
736
+ "logits/rejected": 3.1015706062316895,
737
+ "logps/chosen": -588.2686157226562,
738
+ "logps/rejected": -1030.510498046875,
739
+ "loss": 0.2843,
740
+ "rewards/accuracies": 0.878125011920929,
741
+ "rewards/chosen": -2.3880248069763184,
742
+ "rewards/margins": 4.504977226257324,
743
+ "rewards/rejected": -6.893001556396484,
744
+ "step": 480
745
+ },
746
+ {
747
+ "epoch": 0.6800832755031229,
748
+ "grad_norm": 15.05601087138497,
749
+ "learning_rate": 4.1710242959106056e-07,
750
+ "logits/chosen": 0.5176582336425781,
751
+ "logits/rejected": 2.4557127952575684,
752
+ "logps/chosen": -507.008544921875,
753
+ "logps/rejected": -886.51416015625,
754
+ "loss": 0.294,
755
+ "rewards/accuracies": 0.846875011920929,
756
+ "rewards/chosen": -1.5845837593078613,
757
+ "rewards/margins": 3.810807704925537,
758
+ "rewards/rejected": -5.395391464233398,
759
+ "step": 490
760
+ },
761
+ {
762
+ "epoch": 0.6939625260235948,
763
+ "grad_norm": 16.718450087742482,
764
+ "learning_rate": 4.125462633367959e-07,
765
+ "logits/chosen": 1.4314930438995361,
766
+ "logits/rejected": 2.9996209144592285,
767
+ "logps/chosen": -532.1755981445312,
768
+ "logps/rejected": -1021.0213623046875,
769
+ "loss": 0.266,
770
+ "rewards/accuracies": 0.878125011920929,
771
+ "rewards/chosen": -2.0497827529907227,
772
+ "rewards/margins": 4.718562126159668,
773
+ "rewards/rejected": -6.768344879150391,
774
+ "step": 500
775
+ },
776
+ {
777
+ "epoch": 0.7078417765440667,
778
+ "grad_norm": 19.85458205442182,
779
+ "learning_rate": 4.0789458782562435e-07,
780
+ "logits/chosen": 1.293492317199707,
781
+ "logits/rejected": 2.812124013900757,
782
+ "logps/chosen": -551.9437866210938,
783
+ "logps/rejected": -1100.38720703125,
784
+ "loss": 0.2609,
785
+ "rewards/accuracies": 0.887499988079071,
786
+ "rewards/chosen": -2.3401436805725098,
787
+ "rewards/margins": 5.148820400238037,
788
+ "rewards/rejected": -7.4889631271362305,
789
+ "step": 510
790
+ },
791
+ {
792
+ "epoch": 0.7217210270645386,
793
+ "grad_norm": 14.477360180818952,
794
+ "learning_rate": 4.031501362983007e-07,
795
+ "logits/chosen": 0.7156480550765991,
796
+ "logits/rejected": 2.6717166900634766,
797
+ "logps/chosen": -510.4054260253906,
798
+ "logps/rejected": -1019.923828125,
799
+ "loss": 0.2805,
800
+ "rewards/accuracies": 0.887499988079071,
801
+ "rewards/chosen": -1.7749239206314087,
802
+ "rewards/margins": 4.909750938415527,
803
+ "rewards/rejected": -6.6846747398376465,
804
+ "step": 520
805
+ },
806
+ {
807
+ "epoch": 0.7356002775850105,
808
+ "grad_norm": 17.058981872372126,
809
+ "learning_rate": 3.9831569650909553e-07,
810
+ "logits/chosen": 0.01914766989648342,
811
+ "logits/rejected": 2.122251510620117,
812
+ "logps/chosen": -551.1067504882812,
813
+ "logps/rejected": -933.5851440429688,
814
+ "loss": 0.2736,
815
+ "rewards/accuracies": 0.846875011920929,
816
+ "rewards/chosen": -2.0334556102752686,
817
+ "rewards/margins": 3.904972791671753,
818
+ "rewards/rejected": -5.9384284019470215,
819
+ "step": 530
820
+ },
821
+ {
822
+ "epoch": 0.7494795281054824,
823
+ "grad_norm": 19.755034017893955,
824
+ "learning_rate": 3.933941090877615e-07,
825
+ "logits/chosen": -0.34956273436546326,
826
+ "logits/rejected": 2.1510236263275146,
827
+ "logps/chosen": -553.6760864257812,
828
+ "logps/rejected": -1035.0694580078125,
829
+ "loss": 0.2598,
830
+ "rewards/accuracies": 0.8500000238418579,
831
+ "rewards/chosen": -2.195675849914551,
832
+ "rewards/margins": 4.830143928527832,
833
+ "rewards/rejected": -7.025820732116699,
834
+ "step": 540
835
+ },
836
+ {
837
+ "epoch": 0.7633587786259542,
838
+ "grad_norm": 15.194278158742483,
839
+ "learning_rate": 3.883882658704306e-07,
840
+ "logits/chosen": -0.1008995771408081,
841
+ "logits/rejected": 2.1417980194091797,
842
+ "logps/chosen": -562.4782104492188,
843
+ "logps/rejected": -1068.9832763671875,
844
+ "loss": 0.2769,
845
+ "rewards/accuracies": 0.8656250238418579,
846
+ "rewards/chosen": -2.3539323806762695,
847
+ "rewards/margins": 5.010300636291504,
848
+ "rewards/rejected": -7.364233493804932,
849
+ "step": 550
850
+ },
851
+ {
852
+ "epoch": 0.7772380291464261,
853
+ "grad_norm": 17.553015958549693,
854
+ "learning_rate": 3.833011082004228e-07,
855
+ "logits/chosen": -0.8681015968322754,
856
+ "logits/rejected": 1.4696462154388428,
857
+ "logps/chosen": -573.9002685546875,
858
+ "logps/rejected": -1092.654296875,
859
+ "loss": 0.2772,
860
+ "rewards/accuracies": 0.828125,
861
+ "rewards/chosen": -2.370056390762329,
862
+ "rewards/margins": 5.011393070220947,
863
+ "rewards/rejected": -7.381448268890381,
864
+ "step": 560
865
+ },
866
+ {
867
+ "epoch": 0.7911172796668979,
868
+ "grad_norm": 15.00893179617878,
869
+ "learning_rate": 3.781356251999663e-07,
870
+ "logits/chosen": -1.1506744623184204,
871
+ "logits/rejected": 0.8982523679733276,
872
+ "logps/chosen": -535.85595703125,
873
+ "logps/rejected": -979.8224487304688,
874
+ "loss": 0.2904,
875
+ "rewards/accuracies": 0.862500011920929,
876
+ "rewards/chosen": -2.1066572666168213,
877
+ "rewards/margins": 4.214187145233154,
878
+ "rewards/rejected": -6.320844650268555,
879
+ "step": 570
880
+ },
881
+ {
882
+ "epoch": 0.8049965301873698,
883
+ "grad_norm": 17.833191557593317,
884
+ "learning_rate": 3.728948520138426e-07,
885
+ "logits/chosen": -0.45408257842063904,
886
+ "logits/rejected": 1.915353775024414,
887
+ "logps/chosen": -540.576171875,
888
+ "logps/rejected": -994.28564453125,
889
+ "loss": 0.2725,
890
+ "rewards/accuracies": 0.859375,
891
+ "rewards/chosen": -2.1601402759552,
892
+ "rewards/margins": 4.492788314819336,
893
+ "rewards/rejected": -6.652928829193115,
894
+ "step": 580
895
+ },
896
+ {
897
+ "epoch": 0.8188757807078417,
898
+ "grad_norm": 16.37664073810551,
899
+ "learning_rate": 3.6758186802599064e-07,
900
+ "logits/chosen": 0.19857454299926758,
901
+ "logits/rejected": 2.2812840938568115,
902
+ "logps/chosen": -550.7555541992188,
903
+ "logps/rejected": -1052.941162109375,
904
+ "loss": 0.2681,
905
+ "rewards/accuracies": 0.878125011920929,
906
+ "rewards/chosen": -2.186647415161133,
907
+ "rewards/margins": 4.857481956481934,
908
+ "rewards/rejected": -7.044129848480225,
909
+ "step": 590
910
+ },
911
+ {
912
+ "epoch": 0.8327550312283136,
913
+ "grad_norm": 15.489252159692187,
914
+ "learning_rate": 3.6219979505011555e-07,
915
+ "logits/chosen": 0.13169285655021667,
916
+ "logits/rejected": 2.4862585067749023,
917
+ "logps/chosen": -545.8336181640625,
918
+ "logps/rejected": -1015.2984619140625,
919
+ "loss": 0.2599,
920
+ "rewards/accuracies": 0.8843749761581421,
921
+ "rewards/chosen": -2.0449271202087402,
922
+ "rewards/margins": 4.806704521179199,
923
+ "rewards/rejected": -6.851631164550781,
924
+ "step": 600
925
+ },
926
+ {
927
+ "epoch": 0.8466342817487855,
928
+ "grad_norm": 16.034098237394367,
929
+ "learning_rate": 3.5675179549536786e-07,
930
+ "logits/chosen": 1.026650309562683,
931
+ "logits/rejected": 2.891331672668457,
932
+ "logps/chosen": -562.756103515625,
933
+ "logps/rejected": -1112.7884521484375,
934
+ "loss": 0.2655,
935
+ "rewards/accuracies": 0.871874988079071,
936
+ "rewards/chosen": -2.344050884246826,
937
+ "rewards/margins": 5.390646934509277,
938
+ "rewards/rejected": -7.7346978187561035,
939
+ "step": 610
940
+ },
941
+ {
942
+ "epoch": 0.8605135322692574,
943
+ "grad_norm": 15.964353907588587,
944
+ "learning_rate": 3.512410705081684e-07,
945
+ "logits/chosen": 0.5978251695632935,
946
+ "logits/rejected": 2.6756691932678223,
947
+ "logps/chosen": -596.0946655273438,
948
+ "logps/rejected": -1164.9000244140625,
949
+ "loss": 0.2654,
950
+ "rewards/accuracies": 0.890625,
951
+ "rewards/chosen": -2.3005499839782715,
952
+ "rewards/margins": 5.751313209533691,
953
+ "rewards/rejected": -8.051862716674805,
954
+ "step": 620
955
+ },
956
+ {
957
+ "epoch": 0.8743927827897293,
958
+ "grad_norm": 15.534990200131912,
959
+ "learning_rate": 3.4567085809127245e-07,
960
+ "logits/chosen": 0.7654634714126587,
961
+ "logits/rejected": 2.563817262649536,
962
+ "logps/chosen": -553.0380859375,
963
+ "logps/rejected": -1015.8709106445312,
964
+ "loss": 0.2837,
965
+ "rewards/accuracies": 0.8343750238418579,
966
+ "rewards/chosen": -2.105071544647217,
967
+ "rewards/margins": 4.519524097442627,
968
+ "rewards/rejected": -6.624594688415527,
969
+ "step": 630
970
+ },
971
+ {
972
+ "epoch": 0.8882720333102012,
973
+ "grad_norm": 15.473999576589652,
974
+ "learning_rate": 3.400444312011776e-07,
975
+ "logits/chosen": 1.6810909509658813,
976
+ "logits/rejected": 3.29026460647583,
977
+ "logps/chosen": -547.7821044921875,
978
+ "logps/rejected": -979.19580078125,
979
+ "loss": 0.2808,
980
+ "rewards/accuracies": 0.871874988079071,
981
+ "rewards/chosen": -2.2526745796203613,
982
+ "rewards/margins": 4.376776695251465,
983
+ "rewards/rejected": -6.629450798034668,
984
+ "step": 640
985
+ },
986
+ {
987
+ "epoch": 0.9021512838306731,
988
+ "grad_norm": 18.65677766855292,
989
+ "learning_rate": 3.343650958249935e-07,
990
+ "logits/chosen": 1.9508390426635742,
991
+ "logits/rejected": 3.384152889251709,
992
+ "logps/chosen": -611.1751708984375,
993
+ "logps/rejected": -1097.154541015625,
994
+ "loss": 0.2717,
995
+ "rewards/accuracies": 0.875,
996
+ "rewards/chosen": -2.623706340789795,
997
+ "rewards/margins": 4.866621494293213,
998
+ "rewards/rejected": -7.49032735824585,
999
+ "step": 650
1000
+ },
1001
+ {
1002
+ "epoch": 0.916030534351145,
1003
+ "grad_norm": 15.172341090722101,
1004
+ "learning_rate": 3.286361890379034e-07,
1005
+ "logits/chosen": 1.418536901473999,
1006
+ "logits/rejected": 2.9168481826782227,
1007
+ "logps/chosen": -542.9044799804688,
1008
+ "logps/rejected": -998.8190307617188,
1009
+ "loss": 0.2649,
1010
+ "rewards/accuracies": 0.878125011920929,
1011
+ "rewards/chosen": -2.128884792327881,
1012
+ "rewards/margins": 4.480809688568115,
1013
+ "rewards/rejected": -6.6096954345703125,
1014
+ "step": 660
1015
+ },
1016
+ {
1017
+ "epoch": 0.9299097848716169,
1018
+ "grad_norm": 21.03897492647871,
1019
+ "learning_rate": 3.2286107704235875e-07,
1020
+ "logits/chosen": 0.6511715650558472,
1021
+ "logits/rejected": 2.4751362800598145,
1022
+ "logps/chosen": -516.9407348632812,
1023
+ "logps/rejected": -994.5736083984375,
1024
+ "loss": 0.2687,
1025
+ "rewards/accuracies": 0.856249988079071,
1026
+ "rewards/chosen": -1.8717918395996094,
1027
+ "rewards/margins": 4.618001937866211,
1028
+ "rewards/rejected": -6.489793300628662,
1029
+ "step": 670
1030
+ },
1031
+ {
1032
+ "epoch": 0.9437890353920888,
1033
+ "grad_norm": 23.147160211659624,
1034
+ "learning_rate": 3.1704315319015936e-07,
1035
+ "logits/chosen": 0.42685967683792114,
1036
+ "logits/rejected": 2.4200854301452637,
1037
+ "logps/chosen": -569.0222778320312,
1038
+ "logps/rejected": -1023.0499877929688,
1039
+ "loss": 0.2637,
1040
+ "rewards/accuracies": 0.856249988079071,
1041
+ "rewards/chosen": -2.241736650466919,
1042
+ "rewards/margins": 4.569226264953613,
1043
+ "rewards/rejected": -6.8109636306762695,
1044
+ "step": 680
1045
+ },
1046
+ {
1047
+ "epoch": 0.9576682859125607,
1048
+ "grad_norm": 19.074650599712797,
1049
+ "learning_rate": 3.1118583598858094e-07,
1050
+ "logits/chosen": 0.3251928687095642,
1051
+ "logits/rejected": 2.361527442932129,
1052
+ "logps/chosen": -536.3818969726562,
1053
+ "logps/rejected": -1084.29150390625,
1054
+ "loss": 0.269,
1055
+ "rewards/accuracies": 0.8999999761581421,
1056
+ "rewards/chosen": -2.159126043319702,
1057
+ "rewards/margins": 5.291468620300293,
1058
+ "rewards/rejected": -7.450594425201416,
1059
+ "step": 690
1060
+ },
1061
+ {
1062
+ "epoch": 0.9715475364330326,
1063
+ "grad_norm": 16.46319504999634,
1064
+ "learning_rate": 3.052925670917219e-07,
1065
+ "logits/chosen": 0.3188991844654083,
1066
+ "logits/rejected": 2.1961517333984375,
1067
+ "logps/chosen": -590.7283935546875,
1068
+ "logps/rejected": -1070.7264404296875,
1069
+ "loss": 0.2754,
1070
+ "rewards/accuracies": 0.8500000238418579,
1071
+ "rewards/chosen": -2.466348171234131,
1072
+ "rewards/margins": 4.865278720855713,
1073
+ "rewards/rejected": -7.33162784576416,
1074
+ "step": 700
1075
+ },
1076
+ {
1077
+ "epoch": 0.9854267869535045,
1078
+ "grad_norm": 22.275614803003936,
1079
+ "learning_rate": 2.9936680927824935e-07,
1080
+ "logits/chosen": 0.7669705152511597,
1081
+ "logits/rejected": 2.4150214195251465,
1082
+ "logps/chosen": -574.384033203125,
1083
+ "logps/rejected": -1084.837646484375,
1084
+ "loss": 0.2543,
1085
+ "rewards/accuracies": 0.856249988079071,
1086
+ "rewards/chosen": -2.393491268157959,
1087
+ "rewards/margins": 5.025046348571777,
1088
+ "rewards/rejected": -7.4185380935668945,
1089
+ "step": 710
1090
+ },
1091
+ {
1092
+ "epoch": 0.9993060374739764,
1093
+ "grad_norm": 22.856869601641986,
1094
+ "learning_rate": 2.934120444167326e-07,
1095
+ "logits/chosen": 0.018332133069634438,
1096
+ "logits/rejected": 1.9890985488891602,
1097
+ "logps/chosen": -554.123291015625,
1098
+ "logps/rejected": -1072.498046875,
1099
+ "loss": 0.2651,
1100
+ "rewards/accuracies": 0.8656250238418579,
1101
+ "rewards/chosen": -2.201577663421631,
1102
+ "rewards/margins": 5.162604331970215,
1103
+ "rewards/rejected": -7.3641815185546875,
1104
+ "step": 720
1105
+ },
1106
+ {
1107
+ "epoch": 1.0131852879944483,
1108
+ "grad_norm": 16.323336520132766,
1109
+ "learning_rate": 2.8743177141975993e-07,
1110
+ "logits/chosen": 0.2875978350639343,
1111
+ "logits/rejected": 2.0843756198883057,
1112
+ "logps/chosen": -598.0647583007812,
1113
+ "logps/rejected": -1214.6568603515625,
1114
+ "loss": 0.1848,
1115
+ "rewards/accuracies": 0.921875,
1116
+ "rewards/chosen": -2.478050708770752,
1117
+ "rewards/margins": 6.1301589012146,
1118
+ "rewards/rejected": -8.608209609985352,
1119
+ "step": 730
1120
+ },
1121
+ {
1122
+ "epoch": 1.0270645385149202,
1123
+ "grad_norm": 17.717617068699322,
1124
+ "learning_rate": 2.814295041880407e-07,
1125
+ "logits/chosen": 1.5749400854110718,
1126
+ "logits/rejected": 3.2800605297088623,
1127
+ "logps/chosen": -720.4405517578125,
1128
+ "logps/rejected": -1425.5921630859375,
1129
+ "loss": 0.1738,
1130
+ "rewards/accuracies": 0.9156249761581421,
1131
+ "rewards/chosen": -3.6376540660858154,
1132
+ "rewards/margins": 6.913857460021973,
1133
+ "rewards/rejected": -10.55151081085205,
1134
+ "step": 740
1135
+ },
1136
+ {
1137
+ "epoch": 1.040943789035392,
1138
+ "grad_norm": 16.138966847361562,
1139
+ "learning_rate": 2.754087695457005e-07,
1140
+ "logits/chosen": 1.3446929454803467,
1141
+ "logits/rejected": 3.647613525390625,
1142
+ "logps/chosen": -660.2525024414062,
1143
+ "logps/rejected": -1274.7076416015625,
1144
+ "loss": 0.1695,
1145
+ "rewards/accuracies": 0.9125000238418579,
1146
+ "rewards/chosen": -3.0320277214050293,
1147
+ "rewards/margins": 6.136539936065674,
1148
+ "rewards/rejected": -9.168566703796387,
1149
+ "step": 750
1150
+ },
1151
+ {
1152
+ "epoch": 1.054823039555864,
1153
+ "grad_norm": 15.364782345427345,
1154
+ "learning_rate": 2.6937310516798275e-07,
1155
+ "logits/chosen": 1.2250503301620483,
1156
+ "logits/rejected": 3.3649864196777344,
1157
+ "logps/chosen": -629.1475830078125,
1158
+ "logps/rejected": -1302.7933349609375,
1159
+ "loss": 0.1799,
1160
+ "rewards/accuracies": 0.9312499761581421,
1161
+ "rewards/chosen": -2.8058159351348877,
1162
+ "rewards/margins": 6.627597808837891,
1163
+ "rewards/rejected": -9.4334135055542,
1164
+ "step": 760
1165
+ },
1166
+ {
1167
+ "epoch": 1.0687022900763359,
1168
+ "grad_norm": 18.530915715706502,
1169
+ "learning_rate": 2.6332605750257456e-07,
1170
+ "logits/chosen": 0.9841400384902954,
1171
+ "logits/rejected": 3.341764450073242,
1172
+ "logps/chosen": -656.33251953125,
1173
+ "logps/rejected": -1349.439697265625,
1174
+ "loss": 0.1607,
1175
+ "rewards/accuracies": 0.925000011920929,
1176
+ "rewards/chosen": -3.064764976501465,
1177
+ "rewards/margins": 6.942727565765381,
1178
+ "rewards/rejected": -10.007492065429688,
1179
+ "step": 770
1180
+ },
1181
+ {
1182
+ "epoch": 1.0825815405968078,
1183
+ "grad_norm": 18.346652449927596,
1184
+ "learning_rate": 2.5727117968577785e-07,
1185
+ "logits/chosen": 1.088732361793518,
1186
+ "logits/rejected": 3.094304323196411,
1187
+ "logps/chosen": -656.88134765625,
1188
+ "logps/rejected": -1397.916259765625,
1189
+ "loss": 0.1514,
1190
+ "rewards/accuracies": 0.918749988079071,
1191
+ "rewards/chosen": -3.20575213432312,
1192
+ "rewards/margins": 7.160604000091553,
1193
+ "rewards/rejected": -10.366357803344727,
1194
+ "step": 780
1195
+ },
1196
+ {
1197
+ "epoch": 1.0964607911172797,
1198
+ "grad_norm": 17.314542797248876,
1199
+ "learning_rate": 2.5121202945475043e-07,
1200
+ "logits/chosen": 1.5640583038330078,
1201
+ "logits/rejected": 3.803560256958008,
1202
+ "logps/chosen": -630.0179443359375,
1203
+ "logps/rejected": -1349.983642578125,
1204
+ "loss": 0.178,
1205
+ "rewards/accuracies": 0.909375011920929,
1206
+ "rewards/chosen": -2.848733425140381,
1207
+ "rewards/margins": 7.226857662200928,
1208
+ "rewards/rejected": -10.075590133666992,
1209
+ "step": 790
1210
+ },
1211
+ {
1212
+ "epoch": 1.1103400416377516,
1213
+ "grad_norm": 14.955502239768675,
1214
+ "learning_rate": 2.4515216705704393e-07,
1215
+ "logits/chosen": 1.8664920330047607,
1216
+ "logits/rejected": 3.7396292686462402,
1217
+ "logps/chosen": -608.3690795898438,
1218
+ "logps/rejected": -1366.815673828125,
1219
+ "loss": 0.1637,
1220
+ "rewards/accuracies": 0.953125,
1221
+ "rewards/chosen": -2.7840514183044434,
1222
+ "rewards/margins": 7.429004669189453,
1223
+ "rewards/rejected": -10.213056564331055,
1224
+ "step": 800
1225
+ },
1226
+ {
1227
+ "epoch": 1.1242192921582235,
1228
+ "grad_norm": 18.71030295389215,
1229
+ "learning_rate": 2.39095153158666e-07,
1230
+ "logits/chosen": 1.694549560546875,
1231
+ "logits/rejected": 3.747912645339966,
1232
+ "logps/chosen": -657.016845703125,
1233
+ "logps/rejected": -1396.2716064453125,
1234
+ "loss": 0.1581,
1235
+ "rewards/accuracies": 0.9312499761581421,
1236
+ "rewards/chosen": -3.1420130729675293,
1237
+ "rewards/margins": 7.294529914855957,
1238
+ "rewards/rejected": -10.436542510986328,
1239
+ "step": 810
1240
+ },
1241
+ {
1242
+ "epoch": 1.1380985426786954,
1243
+ "grad_norm": 14.829984041140372,
1244
+ "learning_rate": 2.330445467518977e-07,
1245
+ "logits/chosen": 0.7256888747215271,
1246
+ "logits/rejected": 2.9225716590881348,
1247
+ "logps/chosen": -660.5631103515625,
1248
+ "logps/rejected": -1347.63134765625,
1249
+ "loss": 0.1619,
1250
+ "rewards/accuracies": 0.918749988079071,
1251
+ "rewards/chosen": -3.208239793777466,
1252
+ "rewards/margins": 6.785590171813965,
1253
+ "rewards/rejected": -9.993829727172852,
1254
+ "step": 820
1255
+ },
1256
+ {
1257
+ "epoch": 1.1519777931991673,
1258
+ "grad_norm": 22.137973289587368,
1259
+ "learning_rate": 2.270039030640931e-07,
1260
+ "logits/chosen": 1.578162431716919,
1261
+ "logits/rejected": 3.47196888923645,
1262
+ "logps/chosen": -628.019287109375,
1263
+ "logps/rejected": -1320.9600830078125,
1264
+ "loss": 0.1749,
1265
+ "rewards/accuracies": 0.893750011920929,
1266
+ "rewards/chosen": -2.9641802310943604,
1267
+ "rewards/margins": 6.771336555480957,
1268
+ "rewards/rejected": -9.735516548156738,
1269
+ "step": 830
1270
+ },
1271
+ {
1272
+ "epoch": 1.1658570437196392,
1273
+ "grad_norm": 19.216307710411925,
1274
+ "learning_rate": 2.209767714686924e-07,
1275
+ "logits/chosen": 1.5978351831436157,
1276
+ "logits/rejected": 3.7053027153015137,
1277
+ "logps/chosen": -653.354736328125,
1278
+ "logps/rejected": -1406.637451171875,
1279
+ "loss": 0.1606,
1280
+ "rewards/accuracies": 0.925000011920929,
1281
+ "rewards/chosen": -3.2107303142547607,
1282
+ "rewards/margins": 7.3466925621032715,
1283
+ "rewards/rejected": -10.557422637939453,
1284
+ "step": 840
1285
+ },
1286
+ {
1287
+ "epoch": 1.179736294240111,
1288
+ "grad_norm": 21.415640762520873,
1289
+ "learning_rate": 2.1496669339967344e-07,
1290
+ "logits/chosen": 1.8785479068756104,
1291
+ "logits/rejected": 3.8594677448272705,
1292
+ "logps/chosen": -668.572265625,
1293
+ "logps/rejected": -1460.521240234375,
1294
+ "loss": 0.1611,
1295
+ "rewards/accuracies": 0.9156249761581421,
1296
+ "rewards/chosen": -3.3157646656036377,
1297
+ "rewards/margins": 7.8289475440979,
1298
+ "rewards/rejected": -11.144712448120117,
1299
+ "step": 850
1300
+ },
1301
+ {
1302
+ "epoch": 1.193615544760583,
1303
+ "grad_norm": 19.68985482074639,
1304
+ "learning_rate": 2.0897720027066897e-07,
1305
+ "logits/chosen": 1.5298289060592651,
1306
+ "logits/rejected": 3.88330078125,
1307
+ "logps/chosen": -660.7781982421875,
1308
+ "logps/rejected": -1415.609130859375,
1309
+ "loss": 0.1637,
1310
+ "rewards/accuracies": 0.9437500238418579,
1311
+ "rewards/chosen": -3.227867603302002,
1312
+ "rewards/margins": 7.6040802001953125,
1313
+ "rewards/rejected": -10.831949234008789,
1314
+ "step": 860
1315
+ },
1316
+ {
1317
+ "epoch": 1.2074947952810549,
1318
+ "grad_norm": 19.76316476703645,
1319
+ "learning_rate": 2.0301181139997202e-07,
1320
+ "logits/chosen": 1.5017510652542114,
1321
+ "logits/rejected": 3.601763963699341,
1322
+ "logps/chosen": -664.7364501953125,
1323
+ "logps/rejected": -1365.153564453125,
1324
+ "loss": 0.1716,
1325
+ "rewards/accuracies": 0.9312499761581421,
1326
+ "rewards/chosen": -3.1189022064208984,
1327
+ "rewards/margins": 6.980868339538574,
1328
+ "rewards/rejected": -10.099771499633789,
1329
+ "step": 870
1330
+ },
1331
+ {
1332
+ "epoch": 1.2213740458015268,
1333
+ "grad_norm": 15.231393479477973,
1334
+ "learning_rate": 1.970740319426474e-07,
1335
+ "logits/chosen": 2.0277457237243652,
1336
+ "logits/rejected": 3.8836147785186768,
1337
+ "logps/chosen": -615.454833984375,
1338
+ "logps/rejected": -1219.456298828125,
1339
+ "loss": 0.1803,
1340
+ "rewards/accuracies": 0.9156249761581421,
1341
+ "rewards/chosen": -2.8338332176208496,
1342
+ "rewards/margins": 5.970818519592285,
1343
+ "rewards/rejected": -8.804651260375977,
1344
+ "step": 880
1345
+ },
1346
+ {
1347
+ "epoch": 1.2352532963219987,
1348
+ "grad_norm": 23.68508206139152,
1349
+ "learning_rate": 1.911673508309656e-07,
1350
+ "logits/chosen": 1.9813998937606812,
1351
+ "logits/rejected": 3.9024734497070312,
1352
+ "logps/chosen": -628.2916259765625,
1353
+ "logps/rejected": -1264.0699462890625,
1354
+ "loss": 0.1643,
1355
+ "rewards/accuracies": 0.9437500238418579,
1356
+ "rewards/chosen": -2.8454182147979736,
1357
+ "rewards/margins": 6.396851539611816,
1358
+ "rewards/rejected": -9.242269515991211,
1359
+ "step": 890
1360
+ },
1361
+ {
1362
+ "epoch": 1.2491325468424705,
1363
+ "grad_norm": 19.952403887659834,
1364
+ "learning_rate": 1.8529523872436977e-07,
1365
+ "logits/chosen": 1.032080054283142,
1366
+ "logits/rejected": 3.210120677947998,
1367
+ "logps/chosen": -671.1250610351562,
1368
+ "logps/rejected": -1549.0615234375,
1369
+ "loss": 0.16,
1370
+ "rewards/accuracies": 0.956250011920929,
1371
+ "rewards/chosen": -3.1327102184295654,
1372
+ "rewards/margins": 8.625667572021484,
1373
+ "rewards/rejected": -11.758378982543945,
1374
+ "step": 900
1375
+ },
1376
+ {
1377
+ "epoch": 1.2630117973629424,
1378
+ "grad_norm": 22.651112148813382,
1379
+ "learning_rate": 1.7946114597017808e-07,
1380
+ "logits/chosen": 1.6984176635742188,
1381
+ "logits/rejected": 3.7526144981384277,
1382
+ "logps/chosen": -660.7144165039062,
1383
+ "logps/rejected": -1447.436767578125,
1384
+ "loss": 0.1619,
1385
+ "rewards/accuracies": 0.940625011920929,
1386
+ "rewards/chosen": -3.2229180335998535,
1387
+ "rewards/margins": 7.7218732833862305,
1388
+ "rewards/rejected": -10.944790840148926,
1389
+ "step": 910
1390
+ },
1391
+ {
1392
+ "epoch": 1.2768910478834143,
1393
+ "grad_norm": 22.126302318317872,
1394
+ "learning_rate": 1.7366850057622172e-07,
1395
+ "logits/chosen": 1.6554454565048218,
1396
+ "logits/rejected": 3.758930206298828,
1397
+ "logps/chosen": -654.619140625,
1398
+ "logps/rejected": -1429.7637939453125,
1399
+ "loss": 0.1653,
1400
+ "rewards/accuracies": 0.9468749761581421,
1401
+ "rewards/chosen": -3.282761812210083,
1402
+ "rewards/margins": 7.560622215270996,
1403
+ "rewards/rejected": -10.8433837890625,
1404
+ "step": 920
1405
+ },
1406
+ {
1407
+ "epoch": 1.2907702984038862,
1408
+ "grad_norm": 17.175640354603786,
1409
+ "learning_rate": 1.6792070619660974e-07,
1410
+ "logits/chosen": 1.6677768230438232,
1411
+ "logits/rejected": 3.8167777061462402,
1412
+ "logps/chosen": -654.5197143554688,
1413
+ "logps/rejected": -1449.1417236328125,
1414
+ "loss": 0.1654,
1415
+ "rewards/accuracies": 0.9375,
1416
+ "rewards/chosen": -3.2044639587402344,
1417
+ "rewards/margins": 7.7717132568359375,
1418
+ "rewards/rejected": -10.976176261901855,
1419
+ "step": 930
1420
+ },
1421
+ {
1422
+ "epoch": 1.3046495489243581,
1423
+ "grad_norm": 19.342272367773113,
1424
+ "learning_rate": 1.622211401318028e-07,
1425
+ "logits/chosen": 1.567354440689087,
1426
+ "logits/rejected": 4.129142761230469,
1427
+ "logps/chosen": -657.1278686523438,
1428
+ "logps/rejected": -1453.904541015625,
1429
+ "loss": 0.1595,
1430
+ "rewards/accuracies": 0.9437500238418579,
1431
+ "rewards/chosen": -3.066843271255493,
1432
+ "rewards/margins": 7.886987209320068,
1433
+ "rewards/rejected": -10.953829765319824,
1434
+ "step": 940
1435
+ },
1436
+ {
1437
+ "epoch": 1.31852879944483,
1438
+ "grad_norm": 23.040092913699233,
1439
+ "learning_rate": 1.5657315134417244e-07,
1440
+ "logits/chosen": 1.8149755001068115,
1441
+ "logits/rejected": 4.2323408126831055,
1442
+ "logps/chosen": -688.6448364257812,
1443
+ "logps/rejected": -1585.87841796875,
1444
+ "loss": 0.1665,
1445
+ "rewards/accuracies": 0.9125000238418579,
1446
+ "rewards/chosen": -3.4780020713806152,
1447
+ "rewards/margins": 8.797907829284668,
1448
+ "rewards/rejected": -12.275908470153809,
1449
+ "step": 950
1450
+ },
1451
+ {
1452
+ "epoch": 1.332408049965302,
1453
+ "grad_norm": 17.336766543398397,
1454
+ "learning_rate": 1.5098005849021078e-07,
1455
+ "logits/chosen": 1.7025401592254639,
1456
+ "logits/rejected": 4.534233570098877,
1457
+ "logps/chosen": -703.167724609375,
1458
+ "logps/rejected": -1630.665283203125,
1459
+ "loss": 0.1672,
1460
+ "rewards/accuracies": 0.9281250238418579,
1461
+ "rewards/chosen": -3.5511913299560547,
1462
+ "rewards/margins": 9.27750301361084,
1463
+ "rewards/rejected": -12.828694343566895,
1464
+ "step": 960
1465
+ },
1466
+ {
1467
+ "epoch": 1.3462873004857738,
1468
+ "grad_norm": 20.603249005514577,
1469
+ "learning_rate": 1.454451479705484e-07,
1470
+ "logits/chosen": 2.531970500946045,
1471
+ "logits/rejected": 4.76376485824585,
1472
+ "logps/chosen": -658.04931640625,
1473
+ "logps/rejected": -1445.8599853515625,
1474
+ "loss": 0.1611,
1475
+ "rewards/accuracies": 0.9312499761581421,
1476
+ "rewards/chosen": -3.303406238555908,
1477
+ "rewards/margins": 7.701096534729004,
1478
+ "rewards/rejected": -11.00450325012207,
1479
+ "step": 970
1480
+ },
1481
+ {
1482
+ "epoch": 1.3601665510062457,
1483
+ "grad_norm": 26.637520706621387,
1484
+ "learning_rate": 1.3997167199892385e-07,
1485
+ "logits/chosen": 2.412917375564575,
1486
+ "logits/rejected": 4.9495530128479,
1487
+ "logps/chosen": -706.7518310546875,
1488
+ "logps/rejected": -1550.7310791015625,
1489
+ "loss": 0.16,
1490
+ "rewards/accuracies": 0.925000011920929,
1491
+ "rewards/chosen": -3.5566086769104004,
1492
+ "rewards/margins": 8.430598258972168,
1493
+ "rewards/rejected": -11.987205505371094,
1494
+ "step": 980
1495
+ },
1496
+ {
1497
+ "epoch": 1.3740458015267176,
1498
+ "grad_norm": 22.552075685705777,
1499
+ "learning_rate": 1.3456284669124157e-07,
1500
+ "logits/chosen": 2.5810890197753906,
1501
+ "logits/rejected": 4.767507076263428,
1502
+ "logps/chosen": -692.7928466796875,
1503
+ "logps/rejected": -1483.0191650390625,
1504
+ "loss": 0.1661,
1505
+ "rewards/accuracies": 0.9437500238418579,
1506
+ "rewards/chosen": -3.5196609497070312,
1507
+ "rewards/margins": 7.722577095031738,
1508
+ "rewards/rejected": -11.24223804473877,
1509
+ "step": 990
1510
+ },
1511
+ {
1512
+ "epoch": 1.3879250520471895,
1513
+ "grad_norm": 18.63062778053758,
1514
+ "learning_rate": 1.2922185017584036e-07,
1515
+ "logits/chosen": 2.9480197429656982,
1516
+ "logits/rejected": 5.796414375305176,
1517
+ "logps/chosen": -692.9706420898438,
1518
+ "logps/rejected": -1486.6146240234375,
1519
+ "loss": 0.1587,
1520
+ "rewards/accuracies": 0.949999988079071,
1521
+ "rewards/chosen": -3.4646847248077393,
1522
+ "rewards/margins": 7.979840278625488,
1523
+ "rewards/rejected": -11.444524765014648,
1524
+ "step": 1000
1525
+ },
1526
+ {
1527
+ "epoch": 1.3879250520471895,
1528
+ "eval_logits/chosen": 2.4397730827331543,
1529
+ "eval_logits/rejected": 4.560609817504883,
1530
+ "eval_logps/chosen": -779.3055419921875,
1531
+ "eval_logps/rejected": -1613.5015869140625,
1532
+ "eval_loss": 0.24712695181369781,
1533
+ "eval_rewards/accuracies": 0.8909774422645569,
1534
+ "eval_rewards/chosen": -3.947213649749756,
1535
+ "eval_rewards/margins": 8.086578369140625,
1536
+ "eval_rewards/rejected": -12.033791542053223,
1537
+ "eval_runtime": 385.5049,
1538
+ "eval_samples_per_second": 22.026,
1539
+ "eval_steps_per_second": 0.345,
1540
+ "step": 1000
1541
+ },
1542
+ {
1543
+ "epoch": 1.4018043025676614,
1544
+ "grad_norm": 17.720503294892175,
1545
+ "learning_rate": 1.2395182072608245e-07,
1546
+ "logits/chosen": 2.783371925354004,
1547
+ "logits/rejected": 5.293555736541748,
1548
+ "logps/chosen": -693.8554077148438,
1549
+ "logps/rejected": -1455.6026611328125,
1550
+ "loss": 0.1578,
1551
+ "rewards/accuracies": 0.918749988079071,
1552
+ "rewards/chosen": -3.711820125579834,
1553
+ "rewards/margins": 7.577448844909668,
1554
+ "rewards/rejected": -11.289270401000977,
1555
+ "step": 1010
1556
+ },
1557
+ {
1558
+ "epoch": 1.4156835530881333,
1559
+ "grad_norm": 39.790575003507485,
1560
+ "learning_rate": 1.1875585491635998e-07,
1561
+ "logits/chosen": 3.120913028717041,
1562
+ "logits/rejected": 5.793875217437744,
1563
+ "logps/chosen": -757.794921875,
1564
+ "logps/rejected": -1674.984375,
1565
+ "loss": 0.1529,
1566
+ "rewards/accuracies": 0.9375,
1567
+ "rewards/chosen": -4.312003135681152,
1568
+ "rewards/margins": 8.977596282958984,
1569
+ "rewards/rejected": -13.289599418640137,
1570
+ "step": 1020
1571
+ },
1572
+ {
1573
+ "epoch": 1.4295628036086052,
1574
+ "grad_norm": 69.28439112059148,
1575
+ "learning_rate": 1.1363700580260438e-07,
1576
+ "logits/chosen": 3.0012755393981934,
1577
+ "logits/rejected": 5.11987829208374,
1578
+ "logps/chosen": -737.9568481445312,
1579
+ "logps/rejected": -1678.4951171875,
1580
+ "loss": 0.161,
1581
+ "rewards/accuracies": 0.8968750238418579,
1582
+ "rewards/chosen": -4.191954612731934,
1583
+ "rewards/margins": 9.063277244567871,
1584
+ "rewards/rejected": -13.255231857299805,
1585
+ "step": 1030
1586
+ },
1587
+ {
1588
+ "epoch": 1.4434420541290771,
1589
+ "grad_norm": 18.312529268747,
1590
+ "learning_rate": 1.0859828112836539e-07,
1591
+ "logits/chosen": 2.285982847213745,
1592
+ "logits/rejected": 4.8581390380859375,
1593
+ "logps/chosen": -739.9261474609375,
1594
+ "logps/rejected": -1586.6966552734375,
1595
+ "loss": 0.1639,
1596
+ "rewards/accuracies": 0.9312499761581421,
1597
+ "rewards/chosen": -3.9794273376464844,
1598
+ "rewards/margins": 8.331348419189453,
1599
+ "rewards/rejected": -12.310776710510254,
1600
+ "step": 1040
1601
+ },
1602
+ {
1603
+ "epoch": 1.457321304649549,
1604
+ "grad_norm": 25.436043150499486,
1605
+ "learning_rate": 1.0364264155751487e-07,
1606
+ "logits/chosen": 2.474147081375122,
1607
+ "logits/rejected": 4.805792331695557,
1608
+ "logps/chosen": -723.0252685546875,
1609
+ "logps/rejected": -1613.719970703125,
1610
+ "loss": 0.1599,
1611
+ "rewards/accuracies": 0.9281250238418579,
1612
+ "rewards/chosen": -4.043066501617432,
1613
+ "rewards/margins": 8.693578720092773,
1614
+ "rewards/rejected": -12.736645698547363,
1615
+ "step": 1050
1616
+ },
1617
+ {
1618
+ "epoch": 1.4712005551700207,
1619
+ "grad_norm": 18.578653218988183,
1620
+ "learning_rate": 9.877299893461455e-08,
1621
+ "logits/chosen": 2.1861484050750732,
1622
+ "logits/rejected": 5.042906761169434,
1623
+ "logps/chosen": -748.6861572265625,
1624
+ "logps/rejected": -1565.2005615234375,
1625
+ "loss": 0.1539,
1626
+ "rewards/accuracies": 0.9468749761581421,
1627
+ "rewards/chosen": -4.071117401123047,
1628
+ "rewards/margins": 8.12061595916748,
1629
+ "rewards/rejected": -12.191734313964844,
1630
+ "step": 1060
1631
+ },
1632
+ {
1633
+ "epoch": 1.4850798056904928,
1634
+ "grad_norm": 21.104448029863153,
1635
+ "learning_rate": 9.39922145739683e-08,
1636
+ "logits/chosen": 2.260671854019165,
1637
+ "logits/rejected": 4.919951438903809,
1638
+ "logps/chosen": -785.3671875,
1639
+ "logps/rejected": -1578.3946533203125,
1640
+ "loss": 0.1559,
1641
+ "rewards/accuracies": 0.9437500238418579,
1642
+ "rewards/chosen": -4.341281414031982,
1643
+ "rewards/margins": 7.95664119720459,
1644
+ "rewards/rejected": -12.29792308807373,
1645
+ "step": 1070
1646
+ },
1647
+ {
1648
+ "epoch": 1.4989590562109645,
1649
+ "grad_norm": 30.412482027711867,
1650
+ "learning_rate": 8.930309757836516e-08,
1651
+ "logits/chosen": 2.812704563140869,
1652
+ "logits/rejected": 5.714043140411377,
1653
+ "logps/chosen": -803.9837036132812,
1654
+ "logps/rejected": -1662.1617431640625,
1655
+ "loss": 0.1624,
1656
+ "rewards/accuracies": 0.9312499761581421,
1657
+ "rewards/chosen": -4.532739162445068,
1658
+ "rewards/margins": 8.574421882629395,
1659
+ "rewards/rejected": -13.107160568237305,
1660
+ "step": 1080
1661
+ },
1662
+ {
1663
+ "epoch": 1.5128383067314366,
1664
+ "grad_norm": 27.560035648494992,
1665
+ "learning_rate": 8.470840318850168e-08,
1666
+ "logits/chosen": 2.2375552654266357,
1667
+ "logits/rejected": 5.399850368499756,
1668
+ "logps/chosen": -772.5816650390625,
1669
+ "logps/rejected": -1577.226806640625,
1670
+ "loss": 0.1576,
1671
+ "rewards/accuracies": 0.953125,
1672
+ "rewards/chosen": -4.188956260681152,
1673
+ "rewards/margins": 8.190386772155762,
1674
+ "rewards/rejected": -12.379343032836914,
1675
+ "step": 1090
1676
+ },
1677
+ {
1678
+ "epoch": 1.5267175572519083,
1679
+ "grad_norm": 21.688406462998472,
1680
+ "learning_rate": 8.021083116405173e-08,
1681
+ "logits/chosen": 2.3147189617156982,
1682
+ "logits/rejected": 5.235360145568848,
1683
+ "logps/chosen": -789.7681884765625,
1684
+ "logps/rejected": -1536.5394287109375,
1685
+ "loss": 0.1576,
1686
+ "rewards/accuracies": 0.9281250238418579,
1687
+ "rewards/chosen": -4.277130126953125,
1688
+ "rewards/margins": 7.6214165687561035,
1689
+ "rewards/rejected": -11.898547172546387,
1690
+ "step": 1100
1691
+ },
1692
+ {
1693
+ "epoch": 1.5405968077723804,
1694
+ "grad_norm": 17.348108637258065,
1695
+ "learning_rate": 7.581302419733632e-08,
1696
+ "logits/chosen": 2.7001595497131348,
1697
+ "logits/rejected": 5.277278900146484,
1698
+ "logps/chosen": -735.4856567382812,
1699
+ "logps/rejected": -1611.0947265625,
1700
+ "loss": 0.1519,
1701
+ "rewards/accuracies": 0.925000011920929,
1702
+ "rewards/chosen": -4.2074875831604,
1703
+ "rewards/margins": 8.55141830444336,
1704
+ "rewards/rejected": -12.758907318115234,
1705
+ "step": 1110
1706
+ },
1707
+ {
1708
+ "epoch": 1.554476058292852,
1709
+ "grad_norm": 26.00795087450143,
1710
+ "learning_rate": 7.151756636052527e-08,
1711
+ "logits/chosen": 2.411917209625244,
1712
+ "logits/rejected": 5.364395618438721,
1713
+ "logps/chosen": -790.9634399414062,
1714
+ "logps/rejected": -1729.780517578125,
1715
+ "loss": 0.1535,
1716
+ "rewards/accuracies": 0.949999988079071,
1717
+ "rewards/chosen": -4.578307151794434,
1718
+ "rewards/margins": 9.278319358825684,
1719
+ "rewards/rejected": -13.856626510620117,
1720
+ "step": 1120
1721
+ },
1722
+ {
1723
+ "epoch": 1.5683553088133242,
1724
+ "grad_norm": 19.597840266227173,
1725
+ "learning_rate": 6.732698158728315e-08,
1726
+ "logits/chosen": 2.3103692531585693,
1727
+ "logits/rejected": 5.478982448577881,
1728
+ "logps/chosen": -766.3836059570312,
1729
+ "logps/rejected": -1625.908203125,
1730
+ "loss": 0.1485,
1731
+ "rewards/accuracies": 0.934374988079071,
1732
+ "rewards/chosen": -4.306478977203369,
1733
+ "rewards/margins": 8.519509315490723,
1734
+ "rewards/rejected": -12.82598876953125,
1735
+ "step": 1130
1736
+ },
1737
+ {
1738
+ "epoch": 1.5822345593337959,
1739
+ "grad_norm": 37.32265853585954,
1740
+ "learning_rate": 6.324373218975104e-08,
1741
+ "logits/chosen": 2.6391475200653076,
1742
+ "logits/rejected": 5.588366508483887,
1743
+ "logps/chosen": -738.453125,
1744
+ "logps/rejected": -1503.0093994140625,
1745
+ "loss": 0.1743,
1746
+ "rewards/accuracies": 0.9125000238418579,
1747
+ "rewards/chosen": -4.1589436531066895,
1748
+ "rewards/margins": 7.595011234283447,
1749
+ "rewards/rejected": -11.75395393371582,
1750
+ "step": 1140
1751
+ },
1752
+ {
1753
+ "epoch": 1.596113809854268,
1754
+ "grad_norm": 34.02286786789855,
1755
+ "learning_rate": 5.927021741173624e-08,
1756
+ "logits/chosen": 2.2584009170532227,
1757
+ "logits/rejected": 5.177404403686523,
1758
+ "logps/chosen": -716.9951171875,
1759
+ "logps/rejected": -1559.7662353515625,
1760
+ "loss": 0.1604,
1761
+ "rewards/accuracies": 0.9468749761581421,
1762
+ "rewards/chosen": -4.111263275146484,
1763
+ "rewards/margins": 8.235260963439941,
1764
+ "rewards/rejected": -12.346525192260742,
1765
+ "step": 1150
1766
+ },
1767
+ {
1768
+ "epoch": 1.6099930603747397,
1769
+ "grad_norm": 20.932494952987984,
1770
+ "learning_rate": 5.5408772018959996e-08,
1771
+ "logits/chosen": 2.0977630615234375,
1772
+ "logits/rejected": 5.0792131423950195,
1773
+ "logps/chosen": -744.5228271484375,
1774
+ "logps/rejected": -1543.290283203125,
1775
+ "loss": 0.1549,
1776
+ "rewards/accuracies": 0.9156249761581421,
1777
+ "rewards/chosen": -3.9662253856658936,
1778
+ "rewards/margins": 7.838561058044434,
1779
+ "rewards/rejected": -11.80478572845459,
1780
+ "step": 1160
1781
+ },
1782
+ {
1783
+ "epoch": 1.6238723108952118,
1784
+ "grad_norm": 18.156628188332558,
1785
+ "learning_rate": 5.166166492719124e-08,
1786
+ "logits/chosen": 2.717636823654175,
1787
+ "logits/rejected": 5.455983638763428,
1788
+ "logps/chosen": -789.8887939453125,
1789
+ "logps/rejected": -1659.1243896484375,
1790
+ "loss": 0.1431,
1791
+ "rewards/accuracies": 0.9312499761581421,
1792
+ "rewards/chosen": -4.5077619552612305,
1793
+ "rewards/margins": 8.738015174865723,
1794
+ "rewards/rejected": -13.245776176452637,
1795
+ "step": 1170
1796
+ },
1797
+ {
1798
+ "epoch": 1.6377515614156835,
1799
+ "grad_norm": 31.62818641679718,
1800
+ "learning_rate": 4.8031097869072225e-08,
1801
+ "logits/chosen": 2.479032516479492,
1802
+ "logits/rejected": 5.483838081359863,
1803
+ "logps/chosen": -843.8903198242188,
1804
+ "logps/rejected": -1682.486328125,
1805
+ "loss": 0.1808,
1806
+ "rewards/accuracies": 0.9312499761581421,
1807
+ "rewards/chosen": -4.988056182861328,
1808
+ "rewards/margins": 8.428102493286133,
1809
+ "rewards/rejected": -13.416158676147461,
1810
+ "step": 1180
1811
+ },
1812
+ {
1813
+ "epoch": 1.6516308119361556,
1814
+ "grad_norm": 23.367654380160257,
1815
+ "learning_rate": 4.451920410042048e-08,
1816
+ "logits/chosen": 2.4087483882904053,
1817
+ "logits/rejected": 5.190215110778809,
1818
+ "logps/chosen": -769.288818359375,
1819
+ "logps/rejected": -1569.325927734375,
1820
+ "loss": 0.1421,
1821
+ "rewards/accuracies": 0.956250011920929,
1822
+ "rewards/chosen": -4.326132297515869,
1823
+ "rewards/margins": 7.9340925216674805,
1824
+ "rewards/rejected": -12.260224342346191,
1825
+ "step": 1190
1826
+ },
1827
+ {
1828
+ "epoch": 1.6655100624566272,
1829
+ "grad_norm": 19.705530443812812,
1830
+ "learning_rate": 4.112804714676593e-08,
1831
+ "logits/chosen": 2.707061529159546,
1832
+ "logits/rejected": 5.535910606384277,
1833
+ "logps/chosen": -801.08935546875,
1834
+ "logps/rejected": -1702.814453125,
1835
+ "loss": 0.1442,
1836
+ "rewards/accuracies": 0.953125,
1837
+ "rewards/chosen": -4.545718193054199,
1838
+ "rewards/margins": 8.991861343383789,
1839
+ "rewards/rejected": -13.537579536437988,
1840
+ "step": 1200
1841
+ },
1842
+ {
1843
+ "epoch": 1.6793893129770994,
1844
+ "grad_norm": 18.481993519274262,
1845
+ "learning_rate": 3.785961959086026e-08,
1846
+ "logits/chosen": 2.372300624847412,
1847
+ "logits/rejected": 4.822690010070801,
1848
+ "logps/chosen": -803.9580688476562,
1849
+ "logps/rejected": -1608.513671875,
1850
+ "loss": 0.1649,
1851
+ "rewards/accuracies": 0.9281250238418579,
1852
+ "rewards/chosen": -4.596296787261963,
1853
+ "rewards/margins": 7.976349830627441,
1854
+ "rewards/rejected": -12.572647094726562,
1855
+ "step": 1210
1856
+ },
1857
+ {
1858
+ "epoch": 1.693268563497571,
1859
+ "grad_norm": 17.963706056533137,
1860
+ "learning_rate": 3.4715841901871545e-08,
1861
+ "logits/chosen": 2.230515956878662,
1862
+ "logits/rejected": 4.730217456817627,
1863
+ "logps/chosen": -780.9340209960938,
1864
+ "logps/rejected": -1548.3511962890625,
1865
+ "loss": 0.1478,
1866
+ "rewards/accuracies": 0.956250011920929,
1867
+ "rewards/chosen": -4.49829626083374,
1868
+ "rewards/margins": 7.652662754058838,
1869
+ "rewards/rejected": -12.150960922241211,
1870
+ "step": 1220
1871
+ },
1872
+ {
1873
+ "epoch": 1.7071478140180432,
1874
+ "grad_norm": 19.452721114097585,
1875
+ "learning_rate": 3.169856130695106e-08,
1876
+ "logits/chosen": 2.7935097217559814,
1877
+ "logits/rejected": 5.348397254943848,
1878
+ "logps/chosen": -837.3346557617188,
1879
+ "logps/rejected": -1694.122314453125,
1880
+ "loss": 0.1637,
1881
+ "rewards/accuracies": 0.90625,
1882
+ "rewards/chosen": -4.844791412353516,
1883
+ "rewards/margins": 8.404356002807617,
1884
+ "rewards/rejected": -13.249147415161133,
1885
+ "step": 1230
1886
+ },
1887
+ {
1888
+ "epoch": 1.7210270645385148,
1889
+ "grad_norm": 24.207735428419646,
1890
+ "learning_rate": 2.8809550705835546e-08,
1891
+ "logits/chosen": 2.0396385192871094,
1892
+ "logits/rejected": 5.347299098968506,
1893
+ "logps/chosen": -809.9459838867188,
1894
+ "logps/rejected": -1699.661865234375,
1895
+ "loss": 0.1595,
1896
+ "rewards/accuracies": 0.9468749761581421,
1897
+ "rewards/chosen": -4.592886447906494,
1898
+ "rewards/margins": 8.931785583496094,
1899
+ "rewards/rejected": -13.52467155456543,
1900
+ "step": 1240
1901
+ },
1902
+ {
1903
+ "epoch": 1.734906315058987,
1904
+ "grad_norm": 24.611504297665018,
1905
+ "learning_rate": 2.6050507629123724e-08,
1906
+ "logits/chosen": 2.3445913791656494,
1907
+ "logits/rejected": 5.292250633239746,
1908
+ "logps/chosen": -763.6495361328125,
1909
+ "logps/rejected": -1591.069091796875,
1910
+ "loss": 0.1746,
1911
+ "rewards/accuracies": 0.921875,
1912
+ "rewards/chosen": -4.202335834503174,
1913
+ "rewards/margins": 8.237665176391602,
1914
+ "rewards/rejected": -12.440000534057617,
1915
+ "step": 1250
1916
+ },
1917
+ {
1918
+ "epoch": 1.7487855655794586,
1919
+ "grad_norm": 17.351441576745916,
1920
+ "learning_rate": 2.3423053240837514e-08,
1921
+ "logits/chosen": 2.1456151008605957,
1922
+ "logits/rejected": 5.197157859802246,
1923
+ "logps/chosen": -775.988525390625,
1924
+ "logps/rejected": -1635.9970703125,
1925
+ "loss": 0.1602,
1926
+ "rewards/accuracies": 0.949999988079071,
1927
+ "rewards/chosen": -4.428194999694824,
1928
+ "rewards/margins": 8.511189460754395,
1929
+ "rewards/rejected": -12.939386367797852,
1930
+ "step": 1260
1931
+ },
1932
+ {
1933
+ "epoch": 1.7626648160999308,
1934
+ "grad_norm": 23.373280732240797,
1935
+ "learning_rate": 2.0928731385855548e-08,
1936
+ "logits/chosen": 2.3203883171081543,
1937
+ "logits/rejected": 5.307148456573486,
1938
+ "logps/chosen": -750.8080444335938,
1939
+ "logps/rejected": -1596.9986572265625,
1940
+ "loss": 0.1503,
1941
+ "rewards/accuracies": 0.921875,
1942
+ "rewards/chosen": -4.260045051574707,
1943
+ "rewards/margins": 8.443410873413086,
1944
+ "rewards/rejected": -12.703454971313477,
1945
+ "step": 1270
1946
+ },
1947
+ {
1948
+ "epoch": 1.7765440666204024,
1949
+ "grad_norm": 21.91095019107269,
1950
+ "learning_rate": 1.8569007682777415e-08,
1951
+ "logits/chosen": 2.3937430381774902,
1952
+ "logits/rejected": 5.018430233001709,
1953
+ "logps/chosen": -775.3260498046875,
1954
+ "logps/rejected": -1653.3775634765625,
1955
+ "loss": 0.1517,
1956
+ "rewards/accuracies": 0.9468749761581421,
1957
+ "rewards/chosen": -4.4354681968688965,
1958
+ "rewards/margins": 8.597478866577148,
1959
+ "rewards/rejected": -13.032946586608887,
1960
+ "step": 1280
1961
+ },
1962
+ {
1963
+ "epoch": 1.7904233171408745,
1964
+ "grad_norm": 18.785456105009327,
1965
+ "learning_rate": 1.6345268662752904e-08,
1966
+ "logits/chosen": 2.1837244033813477,
1967
+ "logits/rejected": 4.996502876281738,
1968
+ "logps/chosen": -797.1134643554688,
1969
+ "logps/rejected": -1595.029541015625,
1970
+ "loss": 0.1469,
1971
+ "rewards/accuracies": 0.934374988079071,
1972
+ "rewards/chosen": -4.397826671600342,
1973
+ "rewards/margins": 7.9338555335998535,
1974
+ "rewards/rejected": -12.331681251525879,
1975
+ "step": 1290
1976
+ },
1977
+ {
1978
+ "epoch": 1.8043025676613462,
1979
+ "grad_norm": 22.286605985901687,
1980
+ "learning_rate": 1.4258820954781037e-08,
1981
+ "logits/chosen": 2.1744418144226074,
1982
+ "logits/rejected": 5.030113697052002,
1983
+ "logps/chosen": -767.9185180664062,
1984
+ "logps/rejected": -1573.216552734375,
1985
+ "loss": 0.1686,
1986
+ "rewards/accuracies": 0.925000011920929,
1987
+ "rewards/chosen": -4.429121971130371,
1988
+ "rewards/margins": 7.945558071136475,
1989
+ "rewards/rejected": -12.37468147277832,
1990
+ "step": 1300
1991
+ },
1992
+ {
1993
+ "epoch": 1.8181818181818183,
1994
+ "grad_norm": 25.01416006471772,
1995
+ "learning_rate": 1.2310890517958389e-08,
1996
+ "logits/chosen": 2.1604392528533936,
1997
+ "logits/rejected": 5.2982378005981445,
1998
+ "logps/chosen": -799.98046875,
1999
+ "logps/rejected": -1684.6910400390625,
2000
+ "loss": 0.1451,
2001
+ "rewards/accuracies": 0.918749988079071,
2002
+ "rewards/chosen": -4.48465633392334,
2003
+ "rewards/margins": 8.867886543273926,
2004
+ "rewards/rejected": -13.352543830871582,
2005
+ "step": 1310
2006
+ },
2007
+ {
2008
+ "epoch": 1.83206106870229,
2009
+ "grad_norm": 22.26904532535015,
2010
+ "learning_rate": 1.0502621921127774e-08,
2011
+ "logits/chosen": 2.3922975063323975,
2012
+ "logits/rejected": 5.1035685539245605,
2013
+ "logps/chosen": -778.9490966796875,
2014
+ "logps/rejected": -1598.516845703125,
2015
+ "loss": 0.1596,
2016
+ "rewards/accuracies": 0.909375011920929,
2017
+ "rewards/chosen": -4.48614501953125,
2018
+ "rewards/margins": 8.004111289978027,
2019
+ "rewards/rejected": -12.490256309509277,
2020
+ "step": 1320
2021
+ },
2022
+ {
2023
+ "epoch": 1.845940319222762,
2024
+ "grad_norm": 20.889996927509948,
2025
+ "learning_rate": 8.83507767035016e-09,
2026
+ "logits/chosen": 2.3073320388793945,
2027
+ "logits/rejected": 4.977648735046387,
2028
+ "logps/chosen": -797.042236328125,
2029
+ "logps/rejected": -1626.1595458984375,
2030
+ "loss": 0.1657,
2031
+ "rewards/accuracies": 0.918749988079071,
2032
+ "rewards/chosen": -4.542098045349121,
2033
+ "rewards/margins": 8.27629280090332,
2034
+ "rewards/rejected": -12.818391799926758,
2035
+ "step": 1330
2036
+ },
2037
+ {
2038
+ "epoch": 1.8598195697432338,
2039
+ "grad_norm": 26.353814591753384,
2040
+ "learning_rate": 7.309237584595007e-09,
2041
+ "logits/chosen": 2.40840220451355,
2042
+ "logits/rejected": 5.275250434875488,
2043
+ "logps/chosen": -784.8826904296875,
2044
+ "logps/rejected": -1593.8211669921875,
2045
+ "loss": 0.1604,
2046
+ "rewards/accuracies": 0.934374988079071,
2047
+ "rewards/chosen": -4.507115364074707,
2048
+ "rewards/margins": 8.047722816467285,
2049
+ "rewards/rejected": -12.554839134216309,
2050
+ "step": 1340
2051
+ },
2052
+ {
2053
+ "epoch": 1.8736988202637057,
2054
+ "grad_norm": 27.69480128946826,
2055
+ "learning_rate": 5.925998220016659e-09,
2056
+ "logits/chosen": 2.0922293663024902,
2057
+ "logits/rejected": 5.0452680587768555,
2058
+ "logps/chosen": -795.8992919921875,
2059
+ "logps/rejected": -1614.81298828125,
2060
+ "loss": 0.1657,
2061
+ "rewards/accuracies": 0.940625011920929,
2062
+ "rewards/chosen": -4.478159427642822,
2063
+ "rewards/margins": 8.272706985473633,
2064
+ "rewards/rejected": -12.75086498260498,
2065
+ "step": 1350
2066
+ },
2067
+ {
2068
+ "epoch": 1.8875780707841776,
2069
+ "grad_norm": 21.698890899471028,
2070
+ "learning_rate": 4.6861723431538265e-09,
2071
+ "logits/chosen": 2.527801275253296,
2072
+ "logits/rejected": 5.509335517883301,
2073
+ "logps/chosen": -762.1702270507812,
2074
+ "logps/rejected": -1630.4300537109375,
2075
+ "loss": 0.16,
2076
+ "rewards/accuracies": 0.9281250238418579,
2077
+ "rewards/chosen": -4.3918232917785645,
2078
+ "rewards/margins": 8.475740432739258,
2079
+ "rewards/rejected": -12.867563247680664,
2080
+ "step": 1360
2081
+ },
2082
+ {
2083
+ "epoch": 1.9014573213046495,
2084
+ "grad_norm": 23.606565024567274,
2085
+ "learning_rate": 3.5904884533627113e-09,
2086
+ "logits/chosen": 2.040306568145752,
2087
+ "logits/rejected": 5.0976457595825195,
2088
+ "logps/chosen": -809.3186645507812,
2089
+ "logps/rejected": -1723.1839599609375,
2090
+ "loss": 0.1661,
2091
+ "rewards/accuracies": 0.956250011920929,
2092
+ "rewards/chosen": -4.630982398986816,
2093
+ "rewards/margins": 9.00139331817627,
2094
+ "rewards/rejected": -13.63237476348877,
2095
+ "step": 1370
2096
+ },
2097
+ {
2098
+ "epoch": 1.9153365718251214,
2099
+ "grad_norm": 23.741237778017844,
2100
+ "learning_rate": 2.639590354763882e-09,
2101
+ "logits/chosen": 2.073582410812378,
2102
+ "logits/rejected": 4.958869934082031,
2103
+ "logps/chosen": -795.91162109375,
2104
+ "logps/rejected": -1612.056396484375,
2105
+ "loss": 0.1613,
2106
+ "rewards/accuracies": 0.940625011920929,
2107
+ "rewards/chosen": -4.596648693084717,
2108
+ "rewards/margins": 8.110963821411133,
2109
+ "rewards/rejected": -12.707611083984375,
2110
+ "step": 1380
2111
+ },
2112
+ {
2113
+ "epoch": 1.9292158223455933,
2114
+ "grad_norm": 18.578608062996363,
2115
+ "learning_rate": 1.8340367779545452e-09,
2116
+ "logits/chosen": 2.297240972518921,
2117
+ "logits/rejected": 5.08568811416626,
2118
+ "logps/chosen": -794.4322509765625,
2119
+ "logps/rejected": -1613.2269287109375,
2120
+ "loss": 0.1486,
2121
+ "rewards/accuracies": 0.9375,
2122
+ "rewards/chosen": -4.632008075714111,
2123
+ "rewards/margins": 8.13344955444336,
2124
+ "rewards/rejected": -12.765457153320312,
2125
+ "step": 1390
2126
+ },
2127
+ {
2128
+ "epoch": 1.9430950728660652,
2129
+ "grad_norm": 18.67065103551978,
2130
+ "learning_rate": 1.1743010517085427e-09,
2131
+ "logits/chosen": 2.2125773429870605,
2132
+ "logits/rejected": 5.183515548706055,
2133
+ "logps/chosen": -828.9417724609375,
2134
+ "logps/rejected": -1649.573974609375,
2135
+ "loss": 0.1548,
2136
+ "rewards/accuracies": 0.9437500238418579,
2137
+ "rewards/chosen": -4.853725433349609,
2138
+ "rewards/margins": 8.179043769836426,
2139
+ "rewards/rejected": -13.032768249511719,
2140
+ "step": 1400
2141
+ },
2142
+ {
2143
+ "epoch": 1.956974323386537,
2144
+ "grad_norm": 22.95032078584552,
2145
+ "learning_rate": 6.607708248569377e-10,
2146
+ "logits/chosen": 2.1667537689208984,
2147
+ "logits/rejected": 4.85316276550293,
2148
+ "logps/chosen": -793.8030395507812,
2149
+ "logps/rejected": -1551.1268310546875,
2150
+ "loss": 0.1672,
2151
+ "rewards/accuracies": 0.918749988079071,
2152
+ "rewards/chosen": -4.6431074142456055,
2153
+ "rewards/margins": 7.463442325592041,
2154
+ "rewards/rejected": -12.106550216674805,
2155
+ "step": 1410
2156
+ },
2157
+ {
2158
+ "epoch": 1.970853573907009,
2159
+ "grad_norm": 18.974426521602602,
2160
+ "learning_rate": 2.9374783851240923e-10,
2161
+ "logits/chosen": 2.2130208015441895,
2162
+ "logits/rejected": 5.2726945877075195,
2163
+ "logps/chosen": -810.5875244140625,
2164
+ "logps/rejected": -1586.91552734375,
2165
+ "loss": 0.1663,
2166
+ "rewards/accuracies": 0.9156249761581421,
2167
+ "rewards/chosen": -4.543966770172119,
2168
+ "rewards/margins": 7.790687561035156,
2169
+ "rewards/rejected": -12.334654808044434,
2170
+ "step": 1420
2171
+ },
2172
+ {
2173
+ "epoch": 1.984732824427481,
2174
+ "grad_norm": 22.46417636362631,
2175
+ "learning_rate": 7.34477487716878e-11,
2176
+ "logits/chosen": 1.7181364297866821,
2177
+ "logits/rejected": 4.7832465171813965,
2178
+ "logps/chosen": -800.3704223632812,
2179
+ "logps/rejected": -1517.8531494140625,
2180
+ "loss": 0.1627,
2181
+ "rewards/accuracies": 0.9375,
2182
+ "rewards/chosen": -4.434643745422363,
2183
+ "rewards/margins": 7.220560550689697,
2184
+ "rewards/rejected": -11.655204772949219,
2185
+ "step": 1430
2186
+ },
2187
+ {
2188
+ "epoch": 1.9986120749479528,
2189
+ "grad_norm": 31.109687143592964,
2190
+ "learning_rate": 0.0,
2191
+ "logits/chosen": 2.243917942047119,
2192
+ "logits/rejected": 5.361481666564941,
2193
+ "logps/chosen": -816.5538940429688,
2194
+ "logps/rejected": -1620.42529296875,
2195
+ "loss": 0.1599,
2196
+ "rewards/accuracies": 0.925000011920929,
2197
+ "rewards/chosen": -4.733948707580566,
2198
+ "rewards/margins": 8.088279724121094,
2199
+ "rewards/rejected": -12.822227478027344,
2200
+ "step": 1440
2201
+ },
2202
+ {
2203
+ "epoch": 1.9986120749479528,
2204
+ "step": 1440,
2205
+ "total_flos": 0.0,
2206
+ "train_loss": 0.25082930790053476,
2207
+ "train_runtime": 42170.4729,
2208
+ "train_samples_per_second": 8.747,
2209
+ "train_steps_per_second": 0.034
2210
+ }
2211
+ ],
2212
+ "logging_steps": 10,
2213
+ "max_steps": 1440,
2214
+ "num_input_tokens_seen": 0,
2215
+ "num_train_epochs": 2,
2216
+ "save_steps": 100,
2217
+ "stateful_callbacks": {
2218
+ "TrainerControl": {
2219
+ "args": {
2220
+ "should_epoch_stop": false,
2221
+ "should_evaluate": false,
2222
+ "should_log": false,
2223
+ "should_save": true,
2224
+ "should_training_stop": true
2225
+ },
2226
+ "attributes": {}
2227
+ }
2228
+ },
2229
+ "total_flos": 0.0,
2230
+ "train_batch_size": 8,
2231
+ "trial_name": null,
2232
+ "trial_params": null
2233
+ }
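
The records added above are the trainer's step-by-step log for this DPO run (training loss, reward margins, accuracies, grad norms, and the cosine-decayed learning rate at each logging step, plus the step-1000 evaluation entry). As a minimal sketch of how these records could be inspected offline, the snippet below assumes the standard `transformers` trainer-state layout, i.e. that the list shown above sits under a `log_history` key in a local `trainer_state.json`; the file path, the metric choices, and the use of matplotlib are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: load the trainer-state log added in this commit and plot
# two of the logged DPO metrics. Assumes the record list lives under the
# usual "log_history" key of a local trainer_state.json (an assumption,
# not something this diff guarantees).
import json

import matplotlib.pyplot as plt

with open("trainer_state.json") as f:
    state = json.load(f)

# Keep only per-step training records; evaluation entries log "eval_loss"
# instead of "loss", and the final summary record logs "train_loss".
train_logs = [r for r in state["log_history"] if "loss" in r]

steps = [r["step"] for r in train_logs]
loss = [r["loss"] for r in train_logs]
margins = [r["rewards/margins"] for r in train_logs]

fig, (ax_loss, ax_margin) = plt.subplots(1, 2, figsize=(10, 4))
ax_loss.plot(steps, loss)
ax_loss.set_xlabel("step")
ax_loss.set_ylabel("train loss")
ax_margin.plot(steps, margins)
ax_margin.set_xlabel("step")
ax_margin.set_ylabel("rewards/margins")
fig.tight_layout()
plt.show()
```

Filtering on the presence of the `"loss"` key is enough here because, in the log shown above, evaluation and end-of-training summary entries use different key names; if a different trainer version changes that convention, the filter would need adjusting.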