AmberYifan committed
Commit
5653f47
1 Parent(s): 30c42cb

Model save

README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ license: apache-2.0
+ base_model: alignment-handbook/zephyr-7b-sft-full
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: spin-v-diverse
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # spin-v-diverse
+
+ This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full); the training dataset is not specified in this card.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0027
+ - Rewards/real: -2.6757
+ - Rewards/generated: -21.8763
+ - Rewards/accuracies: 1.0
+ - Rewards/margins: 19.2006
+ - Logps/generated: -346.5988
+ - Logps/real: -161.4224
+ - Logits/generated: -2.5880
+ - Logits/real: -2.4315
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/real | Rewards/generated | Rewards/accuracies | Rewards/margins | Logps/generated | Logps/real | Logits/generated | Logits/real |
+ |:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------------:|:------------------:|:---------------:|:---------------:|:----------:|:----------------:|:-----------:|
+ | 0.0257        | 0.06  | 100  | 0.0288          | 1.0058       | -5.7769           | 0.9928             | 6.7828          | -185.6055       | -124.6072  | -2.8843          | -2.6520     |
+ | 0.0096        | 0.13  | 200  | 0.0126          | -0.1554      | -12.6258          | 0.9984             | 12.4704         | -254.0941       | -136.2193  | -2.5945          | -2.2413     |
+ | 0.024         | 0.19  | 300  | 0.0126          | 0.1173       | -11.0946          | 0.9968             | 11.2119         | -238.7820       | -133.4925  | -2.7227          | -2.5040     |
+ | 0.0065        | 0.26  | 400  | 0.0082          | -0.1964      | -13.6305          | 0.9984             | 13.4341         | -264.1411       | -136.6298  | -2.7028          | -2.4738     |
+ | 0.0073        | 0.32  | 500  | 0.0081          | 0.0850       | -13.4368          | 0.9984             | 13.5218         | -262.2040       | -133.8156  | -2.6477          | -2.4285     |
+ | 0.0035        | 0.38  | 600  | 0.0071          | -2.8739      | -18.4641          | 1.0                | 15.5902         | -312.4772       | -163.4043  | -2.5956          | -2.3811     |
+ | 0.0097        | 0.45  | 700  | 0.0077          | -2.2908      | -16.9898          | 0.9984             | 14.6989         | -297.7338       | -157.5739  | -2.5210          | -2.2045     |
+ | 0.0052        | 0.51  | 800  | 0.0065          | -1.6983      | -19.8323          | 0.9992             | 18.1340         | -326.1593       | -151.6484  | -2.7183          | -2.5409     |
+ | 0.0037        | 0.58  | 900  | 0.0067          | -1.2826      | -16.6590          | 0.9984             | 15.3763         | -294.4258       | -147.4920  | -2.6881          | -2.5334     |
+ | 0.0023        | 0.64  | 1000 | 0.0047          | -1.9423      | -19.2263          | 1.0                | 17.2840         | -320.0990       | -154.0886  | -2.6404          | -2.4694     |
+ | 0.0041        | 0.7   | 1100 | 0.0050          | -2.4756      | -19.3047          | 1.0                | 16.8290         | -320.8827       | -159.4218  | -2.6368          | -2.4329     |
+ | 0.0033        | 0.77  | 1200 | 0.0037          | -2.8600      | -20.2625          | 1.0                | 17.4025         | -330.4614       | -163.2654  | -2.6240          | -2.4681     |
+ | 0.0042        | 0.83  | 1300 | 0.0032          | -2.6738      | -20.7669          | 1.0                | 18.0931         | -335.5057       | -161.4039  | -2.5974          | -2.4463     |
+ | 0.0031        | 0.9   | 1400 | 0.0030          | -2.1767      | -20.6456          | 0.9992             | 18.4690         | -334.2925       | -156.4323  | -2.6144          | -2.4595     |
+ | 0.0015        | 0.96  | 1500 | 0.0027          | -2.6757      | -21.8763          | 1.0                | 19.2006         | -346.5988       | -161.4224  | -2.5880          | -2.4315     |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.0
+ - PyTorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
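The batch-size and warmup figures in the hyperparameter list above are mutually consistent and can be sanity-checked with a short sketch. The 50,000 training samples come from the training logs below; the gradient-accumulation factor of 1 is an assumption (none is listed in the card), and exact step counts can differ slightly depending on how the last partial batch is handled:

```python
import math

# Values reported in the model card / training logs
per_device_batch = 8     # train_batch_size
num_devices = 4
grad_accum = 1           # assumption: no gradient accumulation is listed
train_samples = 50_000   # from train_results.json
warmup_ratio = 0.1

total_batch = per_device_batch * num_devices * grad_accum
steps_per_epoch = math.ceil(train_samples / total_batch)
warmup_steps = int(warmup_ratio * steps_per_epoch)

print(total_batch)      # 32, matching total_train_batch_size
print(steps_per_epoch)  # 1563
print(warmup_steps)     # 156
```

This also agrees with the results table: the last evaluation at step 1500 corresponds to epoch 1500 / 1563 ≈ 0.96.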
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "epoch": 1.0,
+   "train_loss": 0.016542167173306324,
+   "train_runtime": 14061.5551,
+   "train_samples": 50000,
+   "train_samples_per_second": 3.556,
+   "train_steps_per_second": 0.111
+ }
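The throughput figures in this file are internally consistent: samples per second is just `train_samples` divided by `train_runtime`, and steps per second follows from the effective batch size of 32. A minimal check:

```python
# Figures copied from all_results.json above
train_samples = 50_000
train_runtime = 14061.5551  # seconds
total_batch = 32            # total_train_batch_size from the card

samples_per_sec = train_samples / train_runtime
steps_per_sec = samples_per_sec / total_batch

print(round(samples_per_sec, 3))  # 3.556
print(round(steps_per_sec, 3))    # 0.111
```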
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.37.0"
+ }
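The token ids here (BOS = 1, EOS = 2) follow the convention of the Mistral-family tokenizer that the Zephyr base model uses. The file is plain JSON, so it can be inspected with the standard library; the inline string below mirrors the file contents above:

```python
import json

# Inline copy of generation_config.json shown above
config_text = """
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.37.0"
}
"""

config = json.loads(config_text)
print(config["bos_token_id"], config["eos_token_id"])  # 1 2
```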
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56cb71ddcb82aaee6a7b8ca0e622e15f87933222d26d313cfb74155a94715dc0
+ size 4943162336
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9212294504f321d5cf8d4d54ce64a645123bc51dbc6c404b45f17baded9edc46
+ size 4999819336
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:997ee064fd5a3903dd738bde7c1398dd31a6921738adf00ddabfcdd082671a8f
+ size 4540516344
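The shard weights are not stored in the Git repository directly; each is checked in as a Git LFS pointer file with a fixed three-line key/value format (version, oid, size). A small parser, using the first shard's pointer above as input:

```python
# Parse a Git LFS pointer file (three "key value" lines: version, oid, size)
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:56cb71ddcb82aaee6a7b8ca0e622e15f87933222d26d313cfb74155a94715dc0
size 4943162336
"""

fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

print(algo)                 # sha256
print(int(fields["size"]))  # 4943162336 bytes, roughly 4.9 GB
```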
model.safetensors.index.json ADDED
@@ -0,0 +1,298 @@
+ {
+   "metadata": {
+     "total_size": 14483464192
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00003-of-00003.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+     "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+     "model.norm.weight": "model-00003-of-00003.safetensors"
+   }
+ }
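The index file routes each tensor name to the shard that stores it: a loader first looks up the filename in `weight_map`, then reads the tensor from that shard. Note that a layer can straddle shards (layer 10 above splits between shards 1 and 2, layer 22 between shards 2 and 3). A minimal sketch over a hand-copied fragment of the map (only a fragment is reproduced here):

```python
# Fragment of the weight_map from model.safetensors.index.json above
weight_map = {
    "lm_head.weight": "model-00003-of-00003.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
}

def shard_for(tensor_name: str) -> str:
    """Return the shard file that stores a given tensor."""
    return weight_map[tensor_name]

print(shard_for("lm_head.weight"))  # model-00003-of-00003.safetensors

# Layer 10 straddles two shards, so loading that layer alone
# would require opening more than one file.
shards = {shard_for(name) for name in weight_map if ".layers.10." in name}
print(len(shards))  # 2
```

The `total_size` in the metadata (14,483,464,192 bytes) is the tensor payload; the three shard files on disk sum slightly higher because each safetensors file also carries its own JSON header.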
runs/Jul16_10-10-53_gilbreth-j001.rcac.purdue.edu/events.out.tfevents.1721139233.gilbreth-j001.rcac.purdue.edu.10968.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:27f50e45fd0bcbda8c9417ab67381e83d339a0b8306b3709ee456def59bde1d7
- size 110749
+ oid sha256:2dc06727fbae979fe8089fccb75eb71db9bca8bd8496106a14da39170d4792ac
+ size 114889
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "epoch": 1.0,
+   "train_loss": 0.016542167173306324,
+   "train_runtime": 14061.5551,
+   "train_samples": 50000,
+   "train_samples_per_second": 3.556,
+   "train_steps_per_second": 0.111
+ }
trainer_state.json ADDED
@@ -0,0 +1,2468 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.0,
5
+ "eval_steps": 100,
6
+ "global_step": 1563,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 3.1847133757961784e-09,
14
+ "logits/generated": -3.1013035774230957,
15
+ "logits/real": -2.9071245193481445,
16
+ "logps/generated": -110.83032989501953,
17
+ "logps/real": -115.27798461914062,
18
+ "loss": 0.6931,
19
+ "rewards/accuracies": 0.0,
20
+ "rewards/generated": 0.0,
21
+ "rewards/margins": 0.0,
22
+ "rewards/real": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.01,
27
+ "learning_rate": 3.184713375796178e-08,
28
+ "logits/generated": -3.0959415435791016,
29
+ "logits/real": -2.8981242179870605,
30
+ "logps/generated": -138.69400024414062,
31
+ "logps/real": -131.35594177246094,
32
+ "loss": 0.6776,
33
+ "rewards/accuracies": 0.6944444179534912,
34
+ "rewards/generated": -0.02448221482336521,
35
+ "rewards/margins": 0.035542234778404236,
36
+ "rewards/real": 0.011060023680329323,
37
+ "step": 10
38
+ },
39
+ {
40
+ "epoch": 0.01,
41
+ "learning_rate": 6.369426751592356e-08,
42
+ "logits/generated": -3.099088430404663,
43
+ "logits/real": -2.867945671081543,
44
+ "logps/generated": -140.5944366455078,
45
+ "logps/real": -127.7799301147461,
46
+ "loss": 0.4267,
47
+ "rewards/accuracies": 0.9750000238418579,
48
+ "rewards/generated": -0.5366789102554321,
49
+ "rewards/margins": 0.748847484588623,
50
+ "rewards/real": 0.21216857433319092,
51
+ "step": 20
52
+ },
53
+ {
54
+ "epoch": 0.02,
55
+ "learning_rate": 9.554140127388536e-08,
56
+ "logits/generated": -3.0423760414123535,
57
+ "logits/real": -2.886434316635132,
58
+ "logps/generated": -136.33938598632812,
59
+ "logps/real": -135.77671813964844,
60
+ "loss": 0.1966,
61
+ "rewards/accuracies": 1.0,
62
+ "rewards/generated": -1.3359780311584473,
63
+ "rewards/margins": 1.8330228328704834,
64
+ "rewards/real": 0.4970448613166809,
65
+ "step": 30
66
+ },
67
+ {
68
+ "epoch": 0.03,
69
+ "learning_rate": 1.2738853503184713e-07,
70
+ "logits/generated": -3.0136947631835938,
71
+ "logits/real": -2.7968850135803223,
72
+ "logps/generated": -149.3777618408203,
73
+ "logps/real": -126.96885681152344,
74
+ "loss": 0.1114,
75
+ "rewards/accuracies": 1.0,
76
+ "rewards/generated": -1.9948108196258545,
77
+ "rewards/margins": 2.8178465366363525,
78
+ "rewards/real": 0.8230355978012085,
79
+ "step": 40
80
+ },
81
+ {
82
+ "epoch": 0.03,
83
+ "learning_rate": 1.592356687898089e-07,
84
+ "logits/generated": -2.986863851547241,
85
+ "logits/real": -2.745471715927124,
86
+ "logps/generated": -158.6234588623047,
87
+ "logps/real": -118.46070861816406,
88
+ "loss": 0.0724,
89
+ "rewards/accuracies": 1.0,
90
+ "rewards/generated": -2.7238974571228027,
91
+ "rewards/margins": 3.6493821144104004,
92
+ "rewards/real": 0.9254843592643738,
93
+ "step": 50
94
+ },
95
+ {
96
+ "epoch": 0.04,
97
+ "learning_rate": 1.9108280254777072e-07,
98
+ "logits/generated": -2.930187702178955,
99
+ "logits/real": -2.784818172454834,
100
+ "logps/generated": -167.3131103515625,
101
+ "logps/real": -126.73246765136719,
102
+ "loss": 0.067,
103
+ "rewards/accuracies": 1.0,
104
+ "rewards/generated": -3.0826990604400635,
105
+ "rewards/margins": 4.143509864807129,
106
+ "rewards/real": 1.0608108043670654,
107
+ "step": 60
108
+ },
109
+ {
110
+ "epoch": 0.04,
111
+ "learning_rate": 2.2292993630573247e-07,
112
+ "logits/generated": -2.9472126960754395,
113
+ "logits/real": -2.7313473224639893,
114
+ "logps/generated": -166.8314666748047,
115
+ "logps/real": -132.07711791992188,
116
+ "loss": 0.0529,
117
+ "rewards/accuracies": 1.0,
118
+ "rewards/generated": -3.4993481636047363,
119
+ "rewards/margins": 4.573017597198486,
120
+ "rewards/real": 1.0736693143844604,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 0.05,
125
+ "learning_rate": 2.5477707006369425e-07,
126
+ "logits/generated": -2.943176746368408,
127
+ "logits/real": -2.6685433387756348,
128
+ "logps/generated": -169.8344268798828,
129
+ "logps/real": -117.3721923828125,
130
+ "loss": 0.043,
131
+ "rewards/accuracies": 1.0,
132
+ "rewards/generated": -3.954730272293091,
133
+ "rewards/margins": 4.957295894622803,
134
+ "rewards/real": 1.0025657415390015,
135
+ "step": 80
136
+ },
137
+ {
138
+ "epoch": 0.06,
139
+ "learning_rate": 2.86624203821656e-07,
140
+ "logits/generated": -2.9318885803222656,
141
+ "logits/real": -2.675168991088867,
142
+ "logps/generated": -176.53231811523438,
143
+ "logps/real": -127.081787109375,
144
+ "loss": 0.0301,
145
+ "rewards/accuracies": 1.0,
146
+ "rewards/generated": -4.720462322235107,
147
+ "rewards/margins": 5.908375263214111,
148
+ "rewards/real": 1.1879123449325562,
149
+ "step": 90
150
+ },
151
+ {
152
+ "epoch": 0.06,
153
+ "learning_rate": 3.184713375796178e-07,
154
+ "logits/generated": -2.906761884689331,
155
+ "logits/real": -2.6541061401367188,
156
+ "logps/generated": -189.25946044921875,
157
+ "logps/real": -116.45599365234375,
158
+ "loss": 0.0257,
159
+ "rewards/accuracies": 1.0,
160
+ "rewards/generated": -5.607656955718994,
161
+ "rewards/margins": 6.690671443939209,
162
+ "rewards/real": 1.083013892173767,
163
+ "step": 100
164
+ },
165
+ {
166
+ "epoch": 0.06,
167
+ "eval_logits/generated": -2.884312152862549,
168
+ "eval_logits/real": -2.651992082595825,
169
+ "eval_logps/generated": -185.6055450439453,
170
+ "eval_logps/real": -124.60718536376953,
171
+ "eval_loss": 0.02876817248761654,
172
+ "eval_rewards/accuracies": 0.9928343892097473,
173
+ "eval_rewards/generated": -5.7769317626953125,
174
+ "eval_rewards/margins": 6.782776355743408,
175
+ "eval_rewards/real": 1.0058448314666748,
176
+ "eval_runtime": 355.6798,
177
+ "eval_samples_per_second": 14.058,
178
+ "eval_steps_per_second": 0.441,
179
+ "step": 100
180
+ },
181
+ {
182
+ "epoch": 0.07,
183
+ "learning_rate": 3.5031847133757957e-07,
184
+ "logits/generated": -2.8794760704040527,
185
+ "logits/real": -2.660065174102783,
186
+ "logps/generated": -193.64273071289062,
187
+ "logps/real": -135.76461791992188,
188
+ "loss": 0.0255,
189
+ "rewards/accuracies": 1.0,
190
+ "rewards/generated": -6.355246543884277,
191
+ "rewards/margins": 7.2249555587768555,
192
+ "rewards/real": 0.869709312915802,
193
+ "step": 110
194
+ },
195
+ {
196
+ "epoch": 0.08,
197
+ "learning_rate": 3.8216560509554143e-07,
198
+ "logits/generated": -2.8529515266418457,
199
+ "logits/real": -2.640996217727661,
200
+ "logps/generated": -197.51763916015625,
201
+ "logps/real": -123.4830322265625,
202
+ "loss": 0.0231,
203
+ "rewards/accuracies": 1.0,
204
+ "rewards/generated": -6.6658616065979,
205
+ "rewards/margins": 7.497047424316406,
206
+ "rewards/real": 0.8311867713928223,
207
+ "step": 120
208
+ },
209
+ {
210
+ "epoch": 0.08,
211
+ "learning_rate": 4.140127388535032e-07,
212
+ "logits/generated": -2.842136859893799,
213
+ "logits/real": -2.6427853107452393,
214
+ "logps/generated": -211.53878784179688,
215
+ "logps/real": -134.64431762695312,
216
+ "loss": 0.0202,
217
+ "rewards/accuracies": 1.0,
218
+ "rewards/generated": -7.736472129821777,
219
+ "rewards/margins": 8.344034194946289,
220
+ "rewards/real": 0.6075613498687744,
221
+ "step": 130
222
+ },
223
+ {
224
+ "epoch": 0.09,
225
+ "learning_rate": 4.4585987261146494e-07,
226
+ "logits/generated": -2.7967801094055176,
227
+ "logits/real": -2.5978636741638184,
228
+ "logps/generated": -211.2340087890625,
229
+ "logps/real": -119.82623291015625,
230
+ "loss": 0.0118,
231
+ "rewards/accuracies": 1.0,
232
+ "rewards/generated": -8.243621826171875,
233
+ "rewards/margins": 8.861018180847168,
234
+ "rewards/real": 0.6173967123031616,
235
+ "step": 140
236
+ },
237
+ {
238
+ "epoch": 0.1,
239
+ "learning_rate": 4.777070063694267e-07,
240
+ "logits/generated": -2.7344508171081543,
241
+ "logits/real": -2.5488593578338623,
242
+ "logps/generated": -224.51187133789062,
243
+ "logps/real": -135.88230895996094,
244
+ "loss": 0.0161,
245
+ "rewards/accuracies": 1.0,
246
+ "rewards/generated": -9.361867904663086,
247
+ "rewards/margins": 9.38014030456543,
248
+ "rewards/real": 0.01827339455485344,
249
+ "step": 150
250
+ },
251
+ {
252
+ "epoch": 0.1,
253
+ "learning_rate": 4.989331436699858e-07,
254
+ "logits/generated": -2.591235399246216,
255
+ "logits/real": -2.2508230209350586,
256
+ "logps/generated": -239.1712188720703,
257
+ "logps/real": -137.87863159179688,
258
+ "loss": 0.0104,
259
+ "rewards/accuracies": 1.0,
260
+ "rewards/generated": -11.244747161865234,
261
+ "rewards/margins": 10.78372859954834,
262
+ "rewards/real": -0.46101751923561096,
263
+ "step": 160
264
+ },
265
+ {
266
+ "epoch": 0.11,
267
+ "learning_rate": 4.953769559032717e-07,
268
+ "logits/generated": -2.6561527252197266,
269
+ "logits/real": -2.3267903327941895,
270
+ "logps/generated": -236.52676391601562,
271
+ "logps/real": -128.25636291503906,
272
+ "loss": 0.0127,
273
+ "rewards/accuracies": 1.0,
274
+ "rewards/generated": -10.687575340270996,
275
+ "rewards/margins": 10.42524528503418,
276
+ "rewards/real": -0.26232948899269104,
277
+ "step": 170
278
+ },
279
+ {
280
+ "epoch": 0.12,
281
+ "learning_rate": 4.918207681365576e-07,
282
+ "logits/generated": -2.689512014389038,
283
+ "logits/real": -2.4082565307617188,
284
+ "logps/generated": -224.41653442382812,
285
+ "logps/real": -143.75363159179688,
286
+ "loss": 0.0083,
287
+ "rewards/accuracies": 1.0,
288
+ "rewards/generated": -9.677858352661133,
289
+ "rewards/margins": 10.139410018920898,
290
+ "rewards/real": 0.4615510106086731,
291
+ "step": 180
292
+ },
293
+ {
294
+ "epoch": 0.12,
295
+ "learning_rate": 4.882645803698435e-07,
296
+ "logits/generated": -2.6560425758361816,
297
+ "logits/real": -2.3908610343933105,
298
+ "logps/generated": -238.35336303710938,
299
+ "logps/real": -136.33726501464844,
300
+ "loss": 0.0092,
301
+ "rewards/accuracies": 1.0,
302
+ "rewards/generated": -10.994186401367188,
303
+ "rewards/margins": 11.490355491638184,
304
+ "rewards/real": 0.49616795778274536,
305
+ "step": 190
306
+ },
307
+ {
308
+ "epoch": 0.13,
309
+ "learning_rate": 4.847083926031294e-07,
310
+ "logits/generated": -2.649038791656494,
311
+ "logits/real": -2.2306861877441406,
312
+ "logps/generated": -275.4481201171875,
313
+ "logps/real": -132.113037109375,
314
+ "loss": 0.0096,
315
+ "rewards/accuracies": 1.0,
316
+ "rewards/generated": -13.323870658874512,
317
+ "rewards/margins": 12.914576530456543,
318
+ "rewards/real": -0.4092935025691986,
319
+ "step": 200
320
+ },
321
+ {
322
+ "epoch": 0.13,
323
+ "eval_logits/generated": -2.594467878341675,
324
+ "eval_logits/real": -2.2412638664245605,
325
+ "eval_logps/generated": -254.09414672851562,
326
+ "eval_logps/real": -136.21932983398438,
327
+ "eval_loss": 0.012603986077010632,
328
+ "eval_rewards/accuracies": 0.9984076619148254,
329
+ "eval_rewards/generated": -12.62579345703125,
330
+ "eval_rewards/margins": 12.470422744750977,
331
+ "eval_rewards/real": -0.1553698629140854,
332
+ "eval_runtime": 354.1472,
333
+ "eval_samples_per_second": 14.118,
334
+ "eval_steps_per_second": 0.443,
335
+ "step": 200
336
+ },
337
+ {
338
+ "epoch": 0.13,
339
+ "learning_rate": 4.811522048364154e-07,
340
+ "logits/generated": -2.56303334236145,
341
+ "logits/real": -2.194331169128418,
342
+ "logps/generated": -241.0314178466797,
343
+ "logps/real": -127.5682373046875,
344
+ "loss": 0.0299,
345
+ "rewards/accuracies": 1.0,
346
+ "rewards/generated": -12.397547721862793,
347
+ "rewards/margins": 11.594755172729492,
348
+ "rewards/real": -0.802793025970459,
349
+ "step": 210
350
+ },
351
+ {
352
+ "epoch": 0.14,
353
+ "learning_rate": 4.775960170697012e-07,
354
+ "logits/generated": -2.7050158977508545,
355
+ "logits/real": -2.284231424331665,
356
+ "logps/generated": -245.81307983398438,
357
+ "logps/real": -135.11415100097656,
358
+ "loss": 0.0145,
359
+ "rewards/accuracies": 1.0,
360
+ "rewards/generated": -11.418529510498047,
361
+ "rewards/margins": 9.91020393371582,
362
+ "rewards/real": -1.5083262920379639,
363
+ "step": 220
364
+ },
365
+ {
366
+ "epoch": 0.15,
367
+ "learning_rate": 4.7403982930298717e-07,
368
+ "logits/generated": -2.615196704864502,
369
+ "logits/real": -2.214932918548584,
370
+ "logps/generated": -277.31927490234375,
371
+ "logps/real": -153.6757049560547,
372
+ "loss": 0.0137,
373
+ "rewards/accuracies": 1.0,
374
+ "rewards/generated": -14.367179870605469,
375
+ "rewards/margins": 11.812820434570312,
376
+ "rewards/real": -2.5543580055236816,
377
+ "step": 230
378
+ },
379
+ {
380
+ "epoch": 0.15,
381
+ "learning_rate": 4.7048364153627306e-07,
382
+ "logits/generated": -2.5809149742126465,
383
+ "logits/real": -2.3492672443389893,
384
+ "logps/generated": -271.6045227050781,
385
+ "logps/real": -177.99237060546875,
386
+ "loss": 0.0063,
387
+ "rewards/accuracies": 1.0,
388
+ "rewards/generated": -14.206598281860352,
389
+ "rewards/margins": 12.018148422241211,
390
+ "rewards/real": -2.188450336456299,
391
+ "step": 240
392
+ },
393
+ {
394
+ "epoch": 0.16,
395
+ "learning_rate": 4.66927453769559e-07,
396
+ "logits/generated": -2.5388941764831543,
397
+ "logits/real": -2.2705929279327393,
398
+ "logps/generated": -270.50408935546875,
399
+ "logps/real": -148.82215881347656,
400
+ "loss": 0.0168,
401
+ "rewards/accuracies": 1.0,
402
+ "rewards/generated": -14.393671989440918,
403
+ "rewards/margins": 11.958968162536621,
404
+ "rewards/real": -2.4347033500671387,
405
+ "step": 250
406
+ },
407
+ {
408
+ "epoch": 0.17,
409
+ "learning_rate": 4.633712660028449e-07,
410
+ "logits/generated": -2.637037754058838,
411
+ "logits/real": -2.2578437328338623,
412
+ "logps/generated": -250.59121704101562,
413
+ "logps/real": -128.23684692382812,
414
+ "loss": 0.014,
415
+ "rewards/accuracies": 1.0,
416
+ "rewards/generated": -11.980981826782227,
417
+ "rewards/margins": 11.673439025878906,
418
+ "rewards/real": -0.30754321813583374,
419
+ "step": 260
420
+ },
421
+ {
422
+ "epoch": 0.17,
423
+ "learning_rate": 4.5981507823613085e-07,
424
+ "logits/generated": -2.7627644538879395,
425
+ "logits/real": -2.487227201461792,
426
+ "logps/generated": -216.2014617919922,
427
+ "logps/real": -121.24652099609375,
428
+ "loss": 0.0131,
429
+ "rewards/accuracies": 1.0,
430
+ "rewards/generated": -8.369101524353027,
431
+ "rewards/margins": 9.171772956848145,
432
+ "rewards/real": 0.8026714324951172,
433
+ "step": 270
434
+ },
435
+ {
436
+ "epoch": 0.18,
437
+ "learning_rate": 4.562588904694168e-07,
438
+ "logits/generated": -2.71317720413208,
439
+ "logits/real": -2.4743077754974365,
440
+ "logps/generated": -222.00350952148438,
441
+ "logps/real": -134.9727325439453,
442
+ "loss": 0.0099,
443
+ "rewards/accuracies": 1.0,
444
+ "rewards/generated": -9.607930183410645,
445
+ "rewards/margins": 9.943489074707031,
446
+ "rewards/real": 0.33555930852890015,
447
+ "step": 280
448
+ },
449
+ {
450
+ "epoch": 0.19,
451
+ "learning_rate": 4.5270270270270264e-07,
452
+ "logits/generated": -2.706022024154663,
453
+ "logits/real": -2.4252126216888428,
454
+ "logps/generated": -249.82839965820312,
455
+ "logps/real": -131.52957153320312,
456
+ "loss": 0.0107,
457
+ "rewards/accuracies": 1.0,
458
+ "rewards/generated": -11.630704879760742,
459
+ "rewards/margins": 11.901201248168945,
460
+ "rewards/real": 0.270496666431427,
461
+ "step": 290
462
+ },
463
+ {
464
+ "epoch": 0.19,
465
+ "learning_rate": 4.491465149359886e-07,
466
+ "logits/generated": -2.7073264122009277,
467
+ "logits/real": -2.37262225151062,
468
+ "logps/generated": -234.38796997070312,
469
+ "logps/real": -122.046630859375,
470
+ "loss": 0.024,
471
+ "rewards/accuracies": 1.0,
472
+ "rewards/generated": -10.743330001831055,
473
+ "rewards/margins": 11.167850494384766,
474
+ "rewards/real": 0.4245213568210602,
475
+ "step": 300
476
+ },
477
+ {
478
+ "epoch": 0.19,
479
+ "eval_logits/generated": -2.7227139472961426,
480
+ "eval_logits/real": -2.5040037631988525,
481
+ "eval_logps/generated": -238.78199768066406,
482
+ "eval_logps/real": -133.49249267578125,
483
+ "eval_loss": 0.012583808973431587,
484
+ "eval_rewards/accuracies": 0.9968152642250061,
485
+ "eval_rewards/generated": -11.094578742980957,
486
+ "eval_rewards/margins": 11.211891174316406,
487
+ "eval_rewards/real": 0.11731348186731339,
488
+ "eval_runtime": 358.5504,
489
+ "eval_samples_per_second": 13.945,
490
+ "eval_steps_per_second": 0.438,
491
+ "step": 300
492
+ },
493
+ {
494
+ "epoch": 0.2,
495
+ "learning_rate": 4.4559032716927454e-07,
496
+ "logits/generated": -2.7238636016845703,
497
+ "logits/real": -2.5220375061035156,
498
+ "logps/generated": -257.8681640625,
499
+ "logps/real": -148.02816772460938,
500
+ "loss": 0.0065,
501
+ "rewards/accuracies": 1.0,
502
+ "rewards/generated": -11.876145362854004,
503
+ "rewards/margins": 11.944305419921875,
504
+ "rewards/real": 0.06815892457962036,
505
+ "step": 310
506
+ },
507
+ {
508
+ "epoch": 0.2,
509
+ "learning_rate": 4.420341394025605e-07,
510
+ "logits/generated": -2.6920430660247803,
511
+ "logits/real": -2.4902937412261963,
512
+ "logps/generated": -246.45480346679688,
513
+ "logps/real": -140.23394775390625,
514
+ "loss": 0.0118,
515
+ "rewards/accuracies": 1.0,
516
+ "rewards/generated": -11.85106372833252,
517
+ "rewards/margins": 11.594002723693848,
518
+ "rewards/real": -0.2570618987083435,
519
+ "step": 320
520
+ },
521
+ {
522
+ "epoch": 0.21,
523
+ "learning_rate": 4.384779516358463e-07,
524
+ "logits/generated": -2.7012107372283936,
525
+ "logits/real": -2.5255706310272217,
526
+ "logps/generated": -236.407470703125,
527
+ "logps/real": -141.53895568847656,
528
+ "loss": 0.014,
529
+ "rewards/accuracies": 1.0,
530
+ "rewards/generated": -10.566434860229492,
531
+ "rewards/margins": 10.503705024719238,
532
+ "rewards/real": -0.06273023784160614,
533
+ "step": 330
534
+ },
535
+ {
536
+ "epoch": 0.22,
537
+ "learning_rate": 4.3492176386913227e-07,
538
+ "logits/generated": -2.703942060470581,
539
+ "logits/real": -2.521278142929077,
540
+ "logps/generated": -258.814208984375,
541
+ "logps/real": -150.25686645507812,
542
+ "loss": 0.0056,
543
+ "rewards/accuracies": 1.0,
544
+ "rewards/generated": -12.056513786315918,
545
+ "rewards/margins": 11.703389167785645,
546
+ "rewards/real": -0.3531256914138794,
547
+ "step": 340
548
+ },
549
+ {
550
+ "epoch": 0.22,
551
+ "learning_rate": 4.313655761024182e-07,
552
+ "logits/generated": -2.696030378341675,
553
+ "logits/real": -2.461880922317505,
554
+ "logps/generated": -268.0128173828125,
555
+ "logps/real": -144.72943115234375,
556
+ "loss": 0.0049,
557
+ "rewards/accuracies": 1.0,
558
+ "rewards/generated": -13.055676460266113,
559
+ "rewards/margins": 12.351667404174805,
560
+ "rewards/real": -0.704009473323822,
561
+ "step": 350
562
+ },
563
+ {
564
+ "epoch": 0.23,
565
+ "learning_rate": 4.278093883357041e-07,
566
+ "logits/generated": -2.7000229358673096,
567
+ "logits/real": -2.4609451293945312,
568
+ "logps/generated": -257.57745361328125,
569
+ "logps/real": -132.90365600585938,
570
+ "loss": 0.0151,
571
+ "rewards/accuracies": 1.0,
572
+ "rewards/generated": -12.664046287536621,
573
+ "rewards/margins": 12.791855812072754,
574
+ "rewards/real": 0.12780967354774475,
575
+ "step": 360
576
+ },
577
+ {
578
+ "epoch": 0.24,
579
+ "learning_rate": 4.2425320056899e-07,
580
+ "logits/generated": -2.660365343093872,
581
+ "logits/real": -2.428536891937256,
582
+ "logps/generated": -251.99588012695312,
583
+ "logps/real": -153.57388305664062,
584
+ "loss": 0.0041,
585
+ "rewards/accuracies": 1.0,
586
+ "rewards/generated": -12.598872184753418,
587
+ "rewards/margins": 12.66786003112793,
588
+ "rewards/real": 0.06898676604032516,
589
+ "step": 370
590
+ },
591
+ {
592
+ "epoch": 0.24,
593
+ "learning_rate": 4.2069701280227595e-07,
594
+ "logits/generated": -2.6655094623565674,
595
+ "logits/real": -2.4059395790100098,
596
+ "logps/generated": -268.7626953125,
597
+ "logps/real": -157.2294464111328,
598
+ "loss": 0.0148,
599
+ "rewards/accuracies": 0.987500011920929,
600
+ "rewards/generated": -13.887835502624512,
601
+ "rewards/margins": 12.452804565429688,
602
+ "rewards/real": -1.4350312948226929,
603
+ "step": 380
604
+ },
605
+ {
606
+ "epoch": 0.25,
607
+ "learning_rate": 4.1714082503556185e-07,
608
+ "logits/generated": -2.717266798019409,
609
+ "logits/real": -2.5067367553710938,
610
+ "logps/generated": -264.3714599609375,
611
+ "logps/real": -148.14230346679688,
612
+ "loss": 0.0111,
613
+ "rewards/accuracies": 1.0,
614
+ "rewards/generated": -13.594515800476074,
615
+ "rewards/margins": 12.036942481994629,
616
+ "rewards/real": -1.5575740337371826,
617
+ "step": 390
618
+ },
619
+ {
620
+ "epoch": 0.26,
621
+ "learning_rate": 4.135846372688478e-07,
622
+ "logits/generated": -2.7214503288269043,
623
+ "logits/real": -2.450742244720459,
624
+ "logps/generated": -262.1417541503906,
625
+ "logps/real": -143.6311798095703,
626
+ "loss": 0.0065,
627
+ "rewards/accuracies": 1.0,
628
+ "rewards/generated": -13.615748405456543,
629
+ "rewards/margins": 13.03248119354248,
630
+ "rewards/real": -0.5832666754722595,
631
+ "step": 400
632
+ },
633
+ {
634
+ "epoch": 0.26,
635
+ "eval_logits/generated": -2.702763080596924,
636
+ "eval_logits/real": -2.4737653732299805,
637
+ "eval_logps/generated": -264.14105224609375,
638
+ "eval_logps/real": -136.62977600097656,
639
+ "eval_loss": 0.008167657069861889,
640
+ "eval_rewards/accuracies": 0.9984076619148254,
641
+ "eval_rewards/generated": -13.630484580993652,
642
+ "eval_rewards/margins": 13.434069633483887,
643
+ "eval_rewards/real": -0.19641424715518951,
644
+ "eval_runtime": 353.3453,
645
+ "eval_samples_per_second": 14.15,
646
+ "eval_steps_per_second": 0.444,
647
+ "step": 400
648
+ },
649
+ {
650
+ "epoch": 0.26,
651
+ "learning_rate": 4.100284495021337e-07,
652
+ "logits/generated": -2.676800012588501,
653
+ "logits/real": -2.4450738430023193,
654
+ "logps/generated": -282.40460205078125,
655
+ "logps/real": -134.112060546875,
656
+ "loss": 0.0029,
657
+ "rewards/accuracies": 1.0,
658
+ "rewards/generated": -14.695889472961426,
659
+ "rewards/margins": 14.696159362792969,
660
+ "rewards/real": 0.0002695709408726543,
661
+ "step": 410
662
+ },
663
+ {
664
+ "epoch": 0.27,
665
+ "learning_rate": 4.064722617354196e-07,
666
+ "logits/generated": -2.7011332511901855,
667
+ "logits/real": -2.433877468109131,
668
+ "logps/generated": -285.080078125,
669
+ "logps/real": -144.6403350830078,
670
+ "loss": 0.0042,
671
+ "rewards/accuracies": 1.0,
672
+ "rewards/generated": -14.842653274536133,
673
+ "rewards/margins": 14.158676147460938,
674
+ "rewards/real": -0.6839768290519714,
675
+ "step": 420
676
+ },
677
+ {
678
+ "epoch": 0.28,
679
+ "learning_rate": 4.0291607396870553e-07,
680
+ "logits/generated": -2.6828908920288086,
681
+ "logits/real": -2.396275043487549,
682
+ "logps/generated": -274.49346923828125,
683
+ "logps/real": -135.5173797607422,
684
+ "loss": 0.002,
685
+ "rewards/accuracies": 1.0,
686
+ "rewards/generated": -15.000213623046875,
687
+ "rewards/margins": 14.3043851852417,
688
+ "rewards/real": -0.6958280801773071,
689
+ "step": 430
690
+ },
691
+ {
692
+ "epoch": 0.28,
693
+ "learning_rate": 3.993598862019915e-07,
694
+ "logits/generated": -2.6642165184020996,
695
+ "logits/real": -2.3830320835113525,
696
+ "logps/generated": -289.18939208984375,
697
+ "logps/real": -149.1437530517578,
698
+ "loss": 0.003,
699
+ "rewards/accuracies": 1.0,
700
+ "rewards/generated": -16.230016708374023,
701
+ "rewards/margins": 14.582804679870605,
702
+ "rewards/real": -1.6472117900848389,
703
+ "step": 440
704
+ },
705
+ {
706
+ "epoch": 0.29,
707
+ "learning_rate": 3.9580369843527737e-07,
708
+ "logits/generated": -2.66624116897583,
709
+ "logits/real": -2.3914172649383545,
710
+ "logps/generated": -312.4716796875,
711
+ "logps/real": -150.9366912841797,
712
+ "loss": 0.0032,
713
+ "rewards/accuracies": 1.0,
714
+ "rewards/generated": -17.956409454345703,
715
+ "rewards/margins": 15.78361701965332,
716
+ "rewards/real": -2.1727941036224365,
717
+ "step": 450
718
+ },
719
+ {
720
+ "epoch": 0.29,
721
+ "learning_rate": 3.9224751066856327e-07,
722
+ "logits/generated": -2.5691237449645996,
723
+ "logits/real": -2.455709934234619,
724
+ "logps/generated": -310.66326904296875,
725
+ "logps/real": -183.9186553955078,
726
+ "loss": 0.0054,
727
+ "rewards/accuracies": 1.0,
728
+ "rewards/generated": -18.07615089416504,
729
+ "rewards/margins": 15.606036186218262,
730
+ "rewards/real": -2.4701130390167236,
731
+ "step": 460
732
+ },
733
+ {
734
+ "epoch": 0.3,
735
+ "learning_rate": 3.886913229018492e-07,
736
+ "logits/generated": -2.6210334300994873,
737
+ "logits/real": -2.333034038543701,
738
+ "logps/generated": -283.49578857421875,
739
+ "logps/real": -139.85386657714844,
740
+ "loss": 0.0078,
741
+ "rewards/accuracies": 1.0,
742
+ "rewards/generated": -15.575042724609375,
743
+ "rewards/margins": 15.00732707977295,
744
+ "rewards/real": -0.5677127242088318,
745
+ "step": 470
746
+ },
747
+ {
748
+ "epoch": 0.31,
749
+ "learning_rate": 3.851351351351351e-07,
750
+ "logits/generated": -2.5755233764648438,
751
+ "logits/real": -2.3314523696899414,
752
+ "logps/generated": -289.11907958984375,
753
+ "logps/real": -146.9121551513672,
754
+ "loss": 0.002,
755
+ "rewards/accuracies": 1.0,
756
+ "rewards/generated": -16.91128921508789,
757
+ "rewards/margins": 16.000783920288086,
758
+ "rewards/real": -0.9105021357536316,
759
+ "step": 480
760
+ },
761
+ {
762
+ "epoch": 0.31,
763
+ "learning_rate": 3.8157894736842105e-07,
764
+ "logits/generated": -2.615877628326416,
765
+ "logits/real": -2.3037173748016357,
766
+ "logps/generated": -293.7407531738281,
767
+ "logps/real": -138.11875915527344,
768
+ "loss": 0.0099,
769
+ "rewards/accuracies": 1.0,
770
+ "rewards/generated": -16.228160858154297,
771
+ "rewards/margins": 14.908615112304688,
772
+ "rewards/real": -1.319542646408081,
+ "step": 490
+ },
+ {
+ "epoch": 0.32,
+ "learning_rate": 3.7802275960170695e-07,
+ "logits/generated": -2.6651482582092285,
+ "logits/real": -2.4918980598449707,
+ "logps/generated": -273.33746337890625,
+ "logps/real": -137.34373474121094,
+ "loss": 0.0073,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -14.052767753601074,
+ "rewards/margins": 13.785321235656738,
+ "rewards/real": -0.26744553446769714,
+ "step": 500
+ },
+ {
+ "epoch": 0.32,
+ "eval_logits/generated": -2.6477458477020264,
+ "eval_logits/real": -2.4284629821777344,
+ "eval_logps/generated": -262.2040100097656,
+ "eval_logps/real": -133.81561279296875,
+ "eval_loss": 0.00813828781247139,
+ "eval_rewards/accuracies": 0.9984076619148254,
+ "eval_rewards/generated": -13.436781883239746,
+ "eval_rewards/margins": 13.521784782409668,
+ "eval_rewards/real": 0.08500289916992188,
+ "eval_runtime": 358.267,
+ "eval_samples_per_second": 13.956,
+ "eval_steps_per_second": 0.438,
+ "step": 500
+ },
+ {
+ "epoch": 0.33,
+ "learning_rate": 3.7446657183499284e-07,
+ "logits/generated": -2.6251749992370605,
+ "logits/real": -2.447301149368286,
+ "logps/generated": -277.0758972167969,
+ "logps/real": -132.02191162109375,
+ "loss": 0.0046,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -14.138468742370605,
+ "rewards/margins": 14.193331718444824,
+ "rewards/real": 0.05486304685473442,
+ "step": 510
+ },
+ {
+ "epoch": 0.33,
+ "learning_rate": 3.709103840682788e-07,
+ "logits/generated": -2.637619972229004,
+ "logits/real": -2.458965301513672,
+ "logps/generated": -268.1296081542969,
+ "logps/real": -129.8132781982422,
+ "loss": 0.0027,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -14.191378593444824,
+ "rewards/margins": 14.115701675415039,
+ "rewards/real": -0.07567773759365082,
+ "step": 520
+ },
+ {
+ "epoch": 0.34,
+ "learning_rate": 3.6735419630156474e-07,
+ "logits/generated": -2.642280101776123,
+ "logits/real": -2.3937056064605713,
+ "logps/generated": -273.968994140625,
+ "logps/real": -132.57595825195312,
+ "loss": 0.0076,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -14.767156600952148,
+ "rewards/margins": 13.815185546875,
+ "rewards/real": -0.9519737958908081,
+ "step": 530
+ },
+ {
+ "epoch": 0.35,
+ "learning_rate": 3.637980085348506e-07,
+ "logits/generated": -2.6142544746398926,
+ "logits/real": -2.3965210914611816,
+ "logps/generated": -291.0428161621094,
+ "logps/real": -143.89450073242188,
+ "loss": 0.0036,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -15.896871566772461,
+ "rewards/margins": 14.926864624023438,
+ "rewards/real": -0.9700061678886414,
+ "step": 540
+ },
+ {
+ "epoch": 0.35,
+ "learning_rate": 3.602418207681365e-07,
+ "logits/generated": -2.547969102859497,
+ "logits/real": -2.314603805541992,
+ "logps/generated": -292.73193359375,
+ "logps/real": -152.8965301513672,
+ "loss": 0.0038,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -16.91225242614746,
+ "rewards/margins": 15.625356674194336,
+ "rewards/real": -1.286894679069519,
+ "step": 550
+ },
+ {
+ "epoch": 0.36,
+ "learning_rate": 3.5668563300142247e-07,
+ "logits/generated": -2.5611112117767334,
+ "logits/real": -2.341109037399292,
+ "logps/generated": -326.85406494140625,
+ "logps/real": -146.52040100097656,
+ "loss": 0.0029,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.382369995117188,
+ "rewards/margins": 17.716310501098633,
+ "rewards/real": -1.6660608053207397,
+ "step": 560
+ },
+ {
+ "epoch": 0.36,
+ "learning_rate": 3.5312944523470837e-07,
+ "logits/generated": -2.522569417953491,
+ "logits/real": -2.2818217277526855,
+ "logps/generated": -307.17657470703125,
+ "logps/real": -154.59783935546875,
+ "loss": 0.0105,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -18.03821563720703,
+ "rewards/margins": 15.825765609741211,
+ "rewards/real": -2.2124505043029785,
+ "step": 570
+ },
+ {
+ "epoch": 0.37,
+ "learning_rate": 3.495732574679943e-07,
+ "logits/generated": -2.546279191970825,
+ "logits/real": -2.276088237762451,
+ "logps/generated": -326.0049743652344,
+ "logps/real": -164.81809997558594,
+ "loss": 0.0088,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.003278732299805,
+ "rewards/margins": 16.616857528686523,
+ "rewards/real": -3.3864219188690186,
+ "step": 580
+ },
+ {
+ "epoch": 0.38,
+ "learning_rate": 3.460170697012802e-07,
+ "logits/generated": -2.621000051498413,
+ "logits/real": -2.389765977859497,
+ "logps/generated": -319.06060791015625,
+ "logps/real": -164.0532684326172,
+ "loss": 0.0049,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.283437728881836,
+ "rewards/margins": 15.354583740234375,
+ "rewards/real": -2.928854465484619,
+ "step": 590
+ },
+ {
+ "epoch": 0.38,
+ "learning_rate": 3.424608819345661e-07,
+ "logits/generated": -2.6297097206115723,
+ "logits/real": -2.3201308250427246,
+ "logps/generated": -322.4084167480469,
+ "logps/real": -152.594482421875,
+ "loss": 0.0035,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.64119529724121,
+ "rewards/margins": 16.223892211914062,
+ "rewards/real": -2.417302370071411,
+ "step": 600
+ },
+ {
+ "epoch": 0.38,
+ "eval_logits/generated": -2.595550060272217,
+ "eval_logits/real": -2.381129264831543,
+ "eval_logps/generated": -312.4772033691406,
+ "eval_logps/real": -163.40432739257812,
+ "eval_loss": 0.007091078907251358,
+ "eval_rewards/accuracies": 1.0,
+ "eval_rewards/generated": -18.46409797668457,
+ "eval_rewards/margins": 15.590229034423828,
+ "eval_rewards/real": -2.873868703842163,
+ "eval_runtime": 353.487,
+ "eval_samples_per_second": 14.145,
+ "eval_steps_per_second": 0.444,
+ "step": 600
+ },
+ {
+ "epoch": 0.39,
+ "learning_rate": 3.3890469416785205e-07,
+ "logits/generated": -2.6035828590393066,
+ "logits/real": -2.3773751258850098,
+ "logps/generated": -321.2727966308594,
+ "logps/real": -163.73019409179688,
+ "loss": 0.0087,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.917009353637695,
+ "rewards/margins": 15.915303230285645,
+ "rewards/real": -3.0017073154449463,
+ "step": 610
+ },
+ {
+ "epoch": 0.4,
+ "learning_rate": 3.35348506401138e-07,
+ "logits/generated": -2.6622188091278076,
+ "logits/real": -2.389137029647827,
+ "logps/generated": -283.23095703125,
+ "logps/real": -136.14297485351562,
+ "loss": 0.0053,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -15.39533805847168,
+ "rewards/margins": 14.832684516906738,
+ "rewards/real": -0.5626530051231384,
+ "step": 620
+ },
+ {
+ "epoch": 0.4,
+ "learning_rate": 3.3179231863442384e-07,
+ "logits/generated": -2.605447292327881,
+ "logits/real": -2.3979125022888184,
+ "logps/generated": -284.0592041015625,
+ "logps/real": -152.1702880859375,
+ "loss": 0.0052,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -15.38554573059082,
+ "rewards/margins": 14.56556224822998,
+ "rewards/real": -0.8199828863143921,
+ "step": 630
+ },
+ {
+ "epoch": 0.41,
+ "learning_rate": 3.282361308677098e-07,
+ "logits/generated": -2.576028347015381,
+ "logits/real": -2.3189806938171387,
+ "logps/generated": -303.54766845703125,
+ "logps/real": -147.36514282226562,
+ "loss": 0.0039,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -16.905027389526367,
+ "rewards/margins": 15.85954475402832,
+ "rewards/real": -1.045483112335205,
+ "step": 640
+ },
+ {
+ "epoch": 0.42,
+ "learning_rate": 3.2467994310099573e-07,
+ "logits/generated": -2.55846905708313,
+ "logits/real": -2.2951676845550537,
+ "logps/generated": -308.7777404785156,
+ "logps/real": -159.1045379638672,
+ "loss": 0.003,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.19515609741211,
+ "rewards/margins": 16.93841552734375,
+ "rewards/real": -1.2567408084869385,
+ "step": 650
+ },
+ {
+ "epoch": 0.42,
+ "learning_rate": 3.211237553342817e-07,
+ "logits/generated": -2.5596654415130615,
+ "logits/real": -2.2995269298553467,
+ "logps/generated": -308.10760498046875,
+ "logps/real": -145.70848083496094,
+ "loss": 0.0013,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.918209075927734,
+ "rewards/margins": 16.569629669189453,
+ "rewards/real": -1.3485772609710693,
+ "step": 660
+ },
+ {
+ "epoch": 0.43,
+ "learning_rate": 3.175675675675675e-07,
+ "logits/generated": -2.5624568462371826,
+ "logits/real": -2.2460904121398926,
+ "logps/generated": -308.0290832519531,
+ "logps/real": -130.75479125976562,
+ "loss": 0.0102,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.573022842407227,
+ "rewards/margins": 16.77570152282715,
+ "rewards/real": -0.797319769859314,
+ "step": 670
+ },
+ {
+ "epoch": 0.44,
+ "learning_rate": 3.1401137980085347e-07,
+ "logits/generated": -2.581447124481201,
+ "logits/real": -2.313119411468506,
+ "logps/generated": -366.02130126953125,
+ "logps/real": -138.63331604003906,
+ "loss": 0.0066,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -22.266155242919922,
+ "rewards/margins": 21.87325668334961,
+ "rewards/real": -0.3929004669189453,
+ "step": 680
+ },
+ {
+ "epoch": 0.44,
+ "learning_rate": 3.104551920341394e-07,
+ "logits/generated": -2.5475454330444336,
+ "logits/real": -2.2901225090026855,
+ "logps/generated": -328.03204345703125,
+ "logps/real": -142.40859985351562,
+ "loss": 0.0009,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.810443878173828,
+ "rewards/margins": 18.20998191833496,
+ "rewards/real": -0.6004606485366821,
+ "step": 690
+ },
+ {
+ "epoch": 0.45,
+ "learning_rate": 3.068990042674253e-07,
+ "logits/generated": -2.5037403106689453,
+ "logits/real": -2.134835720062256,
+ "logps/generated": -323.8102722167969,
+ "logps/real": -141.37823486328125,
+ "loss": 0.0097,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.94583511352539,
+ "rewards/margins": 18.077741622924805,
+ "rewards/real": -1.868093490600586,
+ "step": 700
+ },
+ {
+ "epoch": 0.45,
+ "eval_logits/generated": -2.5209903717041016,
+ "eval_logits/real": -2.2045412063598633,
+ "eval_logps/generated": -297.73382568359375,
+ "eval_logps/real": -157.5738525390625,
+ "eval_loss": 0.007741523906588554,
+ "eval_rewards/accuracies": 0.9984076619148254,
+ "eval_rewards/generated": -16.989761352539062,
+ "eval_rewards/margins": 14.698938369750977,
+ "eval_rewards/real": -2.290821075439453,
+ "eval_runtime": 359.389,
+ "eval_samples_per_second": 13.913,
+ "eval_steps_per_second": 0.437,
+ "step": 700
+ },
+ {
+ "epoch": 0.45,
+ "learning_rate": 3.033428165007112e-07,
+ "logits/generated": -2.552351236343384,
+ "logits/real": -2.212099552154541,
+ "logps/generated": -309.7628479003906,
+ "logps/real": -141.3500518798828,
+ "loss": 0.0032,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.07712173461914,
+ "rewards/margins": 16.641395568847656,
+ "rewards/real": -1.4357249736785889,
+ "step": 710
+ },
+ {
+ "epoch": 0.46,
+ "learning_rate": 2.9978662873399715e-07,
+ "logits/generated": -2.5791096687316895,
+ "logits/real": -2.263129234313965,
+ "logps/generated": -321.2320251464844,
+ "logps/real": -147.78073120117188,
+ "loss": 0.0025,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.866134643554688,
+ "rewards/margins": 17.729503631591797,
+ "rewards/real": -1.1366310119628906,
+ "step": 720
+ },
+ {
+ "epoch": 0.47,
+ "learning_rate": 2.9623044096728305e-07,
+ "logits/generated": -2.6658735275268555,
+ "logits/real": -2.530928611755371,
+ "logps/generated": -284.85797119140625,
+ "logps/real": -157.20460510253906,
+ "loss": 0.0136,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -16.189687728881836,
+ "rewards/margins": 15.173242568969727,
+ "rewards/real": -1.016442894935608,
+ "step": 730
+ },
+ {
+ "epoch": 0.47,
+ "learning_rate": 2.92674253200569e-07,
+ "logits/generated": -2.7502713203430176,
+ "logits/real": -2.5832438468933105,
+ "logps/generated": -276.59503173828125,
+ "logps/real": -141.8132781982422,
+ "loss": 0.0024,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -14.831079483032227,
+ "rewards/margins": 14.12823486328125,
+ "rewards/real": -0.7028436064720154,
+ "step": 740
+ },
+ {
+ "epoch": 0.48,
+ "learning_rate": 2.8911806543385494e-07,
+ "logits/generated": -2.728147506713867,
+ "logits/real": -2.558288097381592,
+ "logps/generated": -308.59332275390625,
+ "logps/real": -154.87002563476562,
+ "loss": 0.0126,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.347309112548828,
+ "rewards/margins": 16.73641586303711,
+ "rewards/real": -0.6108967661857605,
+ "step": 750
+ },
+ {
+ "epoch": 0.49,
+ "learning_rate": 2.855618776671408e-07,
+ "logits/generated": -2.744182586669922,
+ "logits/real": -2.6283411979675293,
+ "logps/generated": -332.50518798828125,
+ "logps/real": -180.13763427734375,
+ "loss": 0.0056,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.751907348632812,
+ "rewards/margins": 17.28380012512207,
+ "rewards/real": -2.4681074619293213,
+ "step": 760
+ },
+ {
+ "epoch": 0.49,
+ "learning_rate": 2.8200568990042673e-07,
+ "logits/generated": -2.758481979370117,
+ "logits/real": -2.552919626235962,
+ "logps/generated": -315.9869689941406,
+ "logps/real": -153.39651489257812,
+ "loss": 0.007,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.343120574951172,
+ "rewards/margins": 16.596370697021484,
+ "rewards/real": -1.7467491626739502,
+ "step": 770
+ },
+ {
+ "epoch": 0.5,
+ "learning_rate": 2.784495021337127e-07,
+ "logits/generated": -2.720580816268921,
+ "logits/real": -2.4772982597351074,
+ "logps/generated": -295.3759460449219,
+ "logps/real": -141.34848022460938,
+ "loss": 0.008,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.396093368530273,
+ "rewards/margins": 15.803703308105469,
+ "rewards/real": -1.5923895835876465,
+ "step": 780
+ },
+ {
+ "epoch": 0.51,
+ "learning_rate": 2.7489331436699857e-07,
+ "logits/generated": -2.7093377113342285,
+ "logits/real": -2.5702013969421387,
+ "logps/generated": -339.5489807128906,
+ "logps/real": -177.2974853515625,
+ "loss": 0.0024,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.449642181396484,
+ "rewards/margins": 18.11000633239746,
+ "rewards/real": -3.339632034301758,
+ "step": 790
+ },
+ {
+ "epoch": 0.51,
+ "learning_rate": 2.7133712660028446e-07,
+ "logits/generated": -2.722670555114746,
+ "logits/real": -2.518655300140381,
+ "logps/generated": -336.6543884277344,
+ "logps/real": -144.3495635986328,
+ "loss": 0.0052,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -20.40723419189453,
+ "rewards/margins": 18.609041213989258,
+ "rewards/real": -1.7981945276260376,
+ "step": 800
+ },
+ {
+ "epoch": 0.51,
+ "eval_logits/generated": -2.71830677986145,
+ "eval_logits/real": -2.5409433841705322,
+ "eval_logps/generated": -326.1592712402344,
+ "eval_logps/real": -151.64842224121094,
+ "eval_loss": 0.006455760914832354,
+ "eval_rewards/accuracies": 0.9992038011550903,
+ "eval_rewards/generated": -19.832304000854492,
+ "eval_rewards/margins": 18.1340274810791,
+ "eval_rewards/real": -1.698278784751892,
+ "eval_runtime": 352.7904,
+ "eval_samples_per_second": 14.173,
+ "eval_steps_per_second": 0.445,
+ "step": 800
+ },
+ {
+ "epoch": 0.52,
+ "learning_rate": 2.677809388335704e-07,
+ "logits/generated": -2.73547625541687,
+ "logits/real": -2.518897294998169,
+ "logps/generated": -324.478271484375,
+ "logps/real": -146.60842895507812,
+ "loss": 0.0025,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.922399520874023,
+ "rewards/margins": 17.002269744873047,
+ "rewards/real": -1.9201271533966064,
+ "step": 810
+ },
+ {
+ "epoch": 0.52,
+ "learning_rate": 2.642247510668563e-07,
+ "logits/generated": -2.7113728523254395,
+ "logits/real": -2.50697922706604,
+ "logps/generated": -313.7933349609375,
+ "logps/real": -149.09490966796875,
+ "loss": 0.0155,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.239910125732422,
+ "rewards/margins": 16.581689834594727,
+ "rewards/real": -1.6582221984863281,
+ "step": 820
+ },
+ {
+ "epoch": 0.53,
+ "learning_rate": 2.6066856330014225e-07,
+ "logits/generated": -2.7217416763305664,
+ "logits/real": -2.5234363079071045,
+ "logps/generated": -287.1163024902344,
+ "logps/real": -139.886962890625,
+ "loss": 0.0066,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -15.454859733581543,
+ "rewards/margins": 15.261648178100586,
+ "rewards/real": -0.1932099163532257,
+ "step": 830
+ },
+ {
+ "epoch": 0.54,
+ "learning_rate": 2.5711237553342815e-07,
+ "logits/generated": -2.7132928371429443,
+ "logits/real": -2.5606608390808105,
+ "logps/generated": -294.3720397949219,
+ "logps/real": -152.9593963623047,
+ "loss": 0.0074,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -16.173110961914062,
+ "rewards/margins": 15.697965621948242,
+ "rewards/real": -0.47514596581459045,
+ "step": 840
+ },
+ {
+ "epoch": 0.54,
+ "learning_rate": 2.5355618776671404e-07,
+ "logits/generated": -2.6806228160858154,
+ "logits/real": -2.4857285022735596,
+ "logps/generated": -279.65948486328125,
+ "logps/real": -146.00035095214844,
+ "loss": 0.0075,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -15.32105541229248,
+ "rewards/margins": 14.74870777130127,
+ "rewards/real": -0.5723468065261841,
+ "step": 850
+ },
+ {
+ "epoch": 0.55,
+ "learning_rate": 2.5e-07,
+ "logits/generated": -2.6649889945983887,
+ "logits/real": -2.503398895263672,
+ "logps/generated": -286.626708984375,
+ "logps/real": -137.37606811523438,
+ "loss": 0.0055,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -16.276308059692383,
+ "rewards/margins": 15.699457168579102,
+ "rewards/real": -0.5768507122993469,
+ "step": 860
+ },
+ {
+ "epoch": 0.56,
+ "learning_rate": 2.4644381223328594e-07,
+ "logits/generated": -2.665027618408203,
+ "logits/real": -2.5171055793762207,
+ "logps/generated": -294.4344177246094,
+ "logps/real": -142.8474884033203,
+ "loss": 0.0025,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -16.496715545654297,
+ "rewards/margins": 16.218063354492188,
+ "rewards/real": -0.2786518335342407,
+ "step": 870
+ },
+ {
+ "epoch": 0.56,
+ "learning_rate": 2.4288762446657183e-07,
+ "logits/generated": -2.6565191745758057,
+ "logits/real": -2.5109829902648926,
+ "logps/generated": -332.05914306640625,
+ "logps/real": -152.0934600830078,
+ "loss": 0.0032,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.833471298217773,
+ "rewards/margins": 18.865116119384766,
+ "rewards/real": -0.9683563113212585,
+ "step": 880
+ },
+ {
+ "epoch": 0.57,
+ "learning_rate": 2.393314366998578e-07,
+ "logits/generated": -2.697728395462036,
+ "logits/real": -2.5354275703430176,
+ "logps/generated": -308.3212890625,
+ "logps/real": -153.72103881835938,
+ "loss": 0.0022,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.799835205078125,
+ "rewards/margins": 16.078920364379883,
+ "rewards/real": -1.7209144830703735,
+ "step": 890
+ },
+ {
+ "epoch": 0.58,
+ "learning_rate": 2.3577524893314365e-07,
+ "logits/generated": -2.679509162902832,
+ "logits/real": -2.495677947998047,
+ "logps/generated": -330.8416748046875,
+ "logps/real": -143.41543579101562,
+ "loss": 0.0037,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.74724769592285,
+ "rewards/margins": 17.45358657836914,
+ "rewards/real": -1.2936601638793945,
+ "step": 900
+ },
+ {
+ "epoch": 0.58,
+ "eval_logits/generated": -2.6881191730499268,
+ "eval_logits/real": -2.5333993434906006,
+ "eval_logps/generated": -294.42584228515625,
+ "eval_logps/real": -147.4919891357422,
+ "eval_loss": 0.006735246162861586,
+ "eval_rewards/accuracies": 0.9984076619148254,
+ "eval_rewards/generated": -16.658964157104492,
+ "eval_rewards/margins": 15.376328468322754,
+ "eval_rewards/real": -1.282636284828186,
+ "eval_runtime": 358.48,
+ "eval_samples_per_second": 13.948,
+ "eval_steps_per_second": 0.438,
+ "step": 900
+ },
+ {
+ "epoch": 0.58,
+ "learning_rate": 2.322190611664296e-07,
+ "logits/generated": -2.6863019466400146,
+ "logits/real": -2.584163188934326,
+ "logps/generated": -306.78399658203125,
+ "logps/real": -154.82162475585938,
+ "loss": 0.0055,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -16.801063537597656,
+ "rewards/margins": 16.25944709777832,
+ "rewards/real": -0.5416165590286255,
+ "step": 910
+ },
+ {
+ "epoch": 0.59,
+ "learning_rate": 2.2866287339971549e-07,
+ "logits/generated": -2.6918511390686035,
+ "logits/real": -2.498774528503418,
+ "logps/generated": -304.48968505859375,
+ "logps/real": -154.81939697265625,
+ "loss": 0.0018,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.309907913208008,
+ "rewards/margins": 15.712469100952148,
+ "rewards/real": -1.5974372625350952,
+ "step": 920
+ },
+ {
+ "epoch": 0.6,
+ "learning_rate": 2.251066856330014e-07,
+ "logits/generated": -2.6575112342834473,
+ "logits/real": -2.5105912685394287,
+ "logps/generated": -300.7004089355469,
+ "logps/real": -150.9338836669922,
+ "loss": 0.0051,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.501060485839844,
+ "rewards/margins": 14.903904914855957,
+ "rewards/real": -2.597156047821045,
+ "step": 930
+ },
+ {
+ "epoch": 0.6,
+ "learning_rate": 2.2155049786628733e-07,
+ "logits/generated": -2.672095537185669,
+ "logits/real": -2.5284764766693115,
+ "logps/generated": -307.1285705566406,
+ "logps/real": -160.99423217773438,
+ "loss": 0.002,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.466632843017578,
+ "rewards/margins": 15.776178359985352,
+ "rewards/real": -1.6904557943344116,
+ "step": 940
+ },
+ {
+ "epoch": 0.61,
+ "learning_rate": 2.1799431009957325e-07,
+ "logits/generated": -2.66410756111145,
+ "logits/real": -2.4963908195495605,
+ "logps/generated": -316.0965881347656,
+ "logps/real": -141.8169403076172,
+ "loss": 0.0032,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.660221099853516,
+ "rewards/margins": 16.873653411865234,
+ "rewards/real": -1.7865674495697021,
+ "step": 950
+ },
+ {
+ "epoch": 0.61,
+ "learning_rate": 2.1443812233285914e-07,
+ "logits/generated": -2.685979127883911,
+ "logits/real": -2.5782992839813232,
+ "logps/generated": -327.9977722167969,
+ "logps/real": -162.27938842773438,
+ "loss": 0.0018,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.976680755615234,
+ "rewards/margins": 17.09455108642578,
+ "rewards/real": -1.8821306228637695,
+ "step": 960
+ },
+ {
+ "epoch": 0.62,
+ "learning_rate": 2.108819345661451e-07,
+ "logits/generated": -2.695758104324341,
+ "logits/real": -2.485863208770752,
+ "logps/generated": -313.2763671875,
+ "logps/real": -153.49114990234375,
+ "loss": 0.0022,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.86570167541504,
+ "rewards/margins": 16.97518539428711,
+ "rewards/real": -1.8905149698257446,
+ "step": 970
+ },
+ {
+ "epoch": 0.63,
+ "learning_rate": 2.0732574679943098e-07,
+ "logits/generated": -2.6816251277923584,
+ "logits/real": -2.4063215255737305,
+ "logps/generated": -310.80975341796875,
+ "logps/real": -138.0583953857422,
+ "loss": 0.0021,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.112398147583008,
+ "rewards/margins": 16.26905632019043,
+ "rewards/real": -1.8433430194854736,
+ "step": 980
+ },
+ {
+ "epoch": 0.63,
+ "learning_rate": 2.0376955903271693e-07,
+ "logits/generated": -2.676630973815918,
+ "logits/real": -2.545987606048584,
+ "logps/generated": -324.10198974609375,
+ "logps/real": -170.13424682617188,
+ "loss": 0.0017,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.194168090820312,
+ "rewards/margins": 17.191312789916992,
+ "rewards/real": -2.0028531551361084,
+ "step": 990
+ },
+ {
+ "epoch": 0.64,
+ "learning_rate": 2.0021337126600283e-07,
+ "logits/generated": -2.6588072776794434,
+ "logits/real": -2.4335250854492188,
+ "logps/generated": -307.8160705566406,
+ "logps/real": -143.21780395507812,
+ "loss": 0.0023,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.040515899658203,
+ "rewards/margins": 16.0909423828125,
+ "rewards/real": -1.949573278427124,
+ "step": 1000
+ },
+ {
+ "epoch": 0.64,
+ "eval_logits/generated": -2.6403703689575195,
+ "eval_logits/real": -2.469404697418213,
+ "eval_logps/generated": -320.0989685058594,
+ "eval_logps/real": -154.08863830566406,
+ "eval_loss": 0.004710016772150993,
+ "eval_rewards/accuracies": 1.0,
+ "eval_rewards/generated": -19.226276397705078,
+ "eval_rewards/margins": 17.283977508544922,
+ "eval_rewards/real": -1.9422993659973145,
+ "eval_runtime": 353.54,
+ "eval_samples_per_second": 14.143,
+ "eval_steps_per_second": 0.444,
+ "step": 1000
+ },
+ {
+ "epoch": 0.65,
+ "learning_rate": 1.9665718349928875e-07,
+ "logits/generated": -2.6474626064300537,
+ "logits/real": -2.3872134685516357,
+ "logps/generated": -319.66595458984375,
+ "logps/real": -144.20675659179688,
+ "loss": 0.0091,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -19.061620712280273,
+ "rewards/margins": 17.328655242919922,
+ "rewards/real": -1.732965111732483,
+ "step": 1010
+ },
+ {
+ "epoch": 0.65,
+ "learning_rate": 1.931009957325747e-07,
+ "logits/generated": -2.6576783657073975,
+ "logits/real": -2.367976665496826,
+ "logps/generated": -314.0473937988281,
+ "logps/real": -135.40328979492188,
+ "loss": 0.0043,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.228788375854492,
+ "rewards/margins": 16.391712188720703,
+ "rewards/real": -1.837074637413025,
+ "step": 1020
+ },
+ {
+ "epoch": 0.66,
+ "learning_rate": 1.895448079658606e-07,
+ "logits/generated": -2.6377809047698975,
+ "logits/real": -2.456001043319702,
+ "logps/generated": -304.30194091796875,
+ "logps/real": -158.57046508789062,
+ "loss": 0.0017,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.988388061523438,
+ "rewards/margins": 16.181509017944336,
+ "rewards/real": -1.8068807125091553,
+ "step": 1030
+ },
+ {
+ "epoch": 0.67,
+ "learning_rate": 1.859886201991465e-07,
+ "logits/generated": -2.713982343673706,
+ "logits/real": -2.5400590896606445,
+ "logps/generated": -319.47222900390625,
+ "logps/real": -171.1156463623047,
+ "loss": 0.004,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.150558471679688,
+ "rewards/margins": 15.5328369140625,
+ "rewards/real": -2.61772084236145,
+ "step": 1040
+ },
+ {
+ "epoch": 0.67,
+ "learning_rate": 1.8243243243243243e-07,
+ "logits/generated": -2.6828341484069824,
+ "logits/real": -2.451007604598999,
+ "logps/generated": -323.1172790527344,
+ "logps/real": -165.8328399658203,
+ "loss": 0.0023,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.114700317382812,
+ "rewards/margins": 16.214031219482422,
+ "rewards/real": -2.900669813156128,
+ "step": 1050
+ },
+ {
+ "epoch": 0.68,
+ "learning_rate": 1.7887624466571835e-07,
+ "logits/generated": -2.6970200538635254,
+ "logits/real": -2.5064215660095215,
+ "logps/generated": -334.9453125,
+ "logps/real": -161.0951385498047,
+ "loss": 0.0024,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.451995849609375,
+ "rewards/margins": 16.473833084106445,
+ "rewards/real": -2.9781641960144043,
+ "step": 1060
+ },
+ {
+ "epoch": 0.68,
+ "learning_rate": 1.7532005689900424e-07,
+ "logits/generated": -2.680431842803955,
+ "logits/real": -2.45689058303833,
+ "logps/generated": -318.7887268066406,
+ "logps/real": -160.87933349609375,
+ "loss": 0.0019,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -18.85040855407715,
+ "rewards/margins": 16.028974533081055,
+ "rewards/real": -2.821434497833252,
+ "step": 1070
+ },
+ {
+ "epoch": 0.69,
+ "learning_rate": 1.717638691322902e-07,
+ "logits/generated": -2.631726026535034,
+ "logits/real": -2.430089235305786,
+ "logps/generated": -325.7797546386719,
+ "logps/real": -160.8832244873047,
+ "loss": 0.005,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.576419830322266,
+ "rewards/margins": 16.21274185180664,
+ "rewards/real": -3.363679885864258,
+ "step": 1080
+ },
+ {
+ "epoch": 0.7,
+ "learning_rate": 1.6820768136557609e-07,
+ "logits/generated": -2.6365928649902344,
+ "logits/real": -2.410539388656616,
+ "logps/generated": -320.75250244140625,
+ "logps/real": -152.6123504638672,
+ "loss": 0.0011,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.101604461669922,
+ "rewards/margins": 16.658384323120117,
+ "rewards/real": -2.4432196617126465,
+ "step": 1090
+ },
+ {
+ "epoch": 0.7,
+ "learning_rate": 1.64651493598862e-07,
+ "logits/generated": -2.627570629119873,
+ "logits/real": -2.4656529426574707,
+ "logps/generated": -295.3066101074219,
+ "logps/real": -164.18795776367188,
+ "loss": 0.0041,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -17.335086822509766,
+ "rewards/margins": 15.275428771972656,
+ "rewards/real": -2.0596542358398438,
+ "step": 1100
+ },
+ {
+ "epoch": 0.7,
+ "eval_logits/generated": -2.636800527572632,
+ "eval_logits/real": -2.432948112487793,
+ "eval_logps/generated": -320.8827209472656,
+ "eval_logps/real": -159.4218292236328,
+ "eval_loss": 0.005006026476621628,
+ "eval_rewards/accuracies": 1.0,
+ "eval_rewards/generated": -19.304651260375977,
+ "eval_rewards/margins": 16.82903289794922,
+ "eval_rewards/real": -2.4756181240081787,
+ "eval_runtime": 358.0601,
+ "eval_samples_per_second": 13.964,
+ "eval_steps_per_second": 0.438,
+ "step": 1100
+ },
+ {
+ "epoch": 0.71,
+ "learning_rate": 1.6109530583214793e-07,
+ "logits/generated": -2.6352360248565674,
+ "logits/real": -2.3814148902893066,
+ "logps/generated": -325.02911376953125,
+ "logps/real": -155.18861389160156,
+ "loss": 0.0015,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.062488555908203,
+ "rewards/margins": 16.672191619873047,
+ "rewards/real": -2.390298366546631,
+ "step": 1110
+ },
+ {
+ "epoch": 0.72,
+ "learning_rate": 1.5753911806543385e-07,
+ "logits/generated": -2.5947158336639404,
+ "logits/real": -2.370400905609131,
+ "logps/generated": -328.9488220214844,
+ "logps/real": -161.29063415527344,
+ "loss": 0.0027,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.087726593017578,
+ "rewards/margins": 17.328855514526367,
+ "rewards/real": -2.7588706016540527,
+ "step": 1120
+ },
+ {
+ "epoch": 0.72,
+ "learning_rate": 1.5398293029871974e-07,
+ "logits/generated": -2.5837631225585938,
+ "logits/real": -2.3980956077575684,
+ "logps/generated": -340.8889465332031,
1775
+ "logps/real": -171.73155212402344,
1776
+ "loss": 0.0046,
1777
+ "rewards/accuracies": 1.0,
1778
+ "rewards/generated": -21.824243545532227,
1779
+ "rewards/margins": 18.651147842407227,
1780
+ "rewards/real": -3.173096179962158,
1781
+ "step": 1130
1782
+ },
1783
+ {
1784
+ "epoch": 0.73,
1785
+ "learning_rate": 1.504267425320057e-07,
1786
+ "logits/generated": -2.6092448234558105,
1787
+ "logits/real": -2.384089469909668,
1788
+ "logps/generated": -353.5443420410156,
1789
+ "logps/real": -164.05426025390625,
1790
+ "loss": 0.0054,
1791
+ "rewards/accuracies": 1.0,
1792
+ "rewards/generated": -22.162696838378906,
1793
+ "rewards/margins": 18.883724212646484,
1794
+ "rewards/real": -3.2789711952209473,
1795
+ "step": 1140
1796
+ },
1797
+ {
1798
+ "epoch": 0.74,
1799
+ "learning_rate": 1.4687055476529158e-07,
1800
+ "logits/generated": -2.604997158050537,
1801
+ "logits/real": -2.4028961658477783,
1802
+ "logps/generated": -345.52691650390625,
1803
+ "logps/real": -178.81590270996094,
1804
+ "loss": 0.0016,
1805
+ "rewards/accuracies": 1.0,
1806
+ "rewards/generated": -21.2681827545166,
1807
+ "rewards/margins": 17.40066909790039,
1808
+ "rewards/real": -3.8675148487091064,
1809
+ "step": 1150
1810
+ },
1811
+ {
1812
+ "epoch": 0.74,
1813
+ "learning_rate": 1.4331436699857753e-07,
1814
+ "logits/generated": -2.5864603519439697,
1815
+ "logits/real": -2.4394125938415527,
1816
+ "logps/generated": -358.2093200683594,
1817
+ "logps/real": -176.05535888671875,
1818
+ "loss": 0.0007,
1819
+ "rewards/accuracies": 1.0,
1820
+ "rewards/generated": -22.399456024169922,
1821
+ "rewards/margins": 18.71761703491211,
1822
+ "rewards/real": -3.6818385124206543,
1823
+ "step": 1160
1824
+ },
+ {
+ "epoch": 0.75,
+ "learning_rate": 1.3975817923186345e-07,
+ "logits/generated": -2.622929096221924,
+ "logits/real": -2.4487674236297607,
+ "logps/generated": -359.487060546875,
+ "logps/real": -182.25320434570312,
+ "loss": 0.002,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.96976089477539,
+ "rewards/margins": 19.018558502197266,
+ "rewards/real": -2.951202392578125,
+ "step": 1170
+ },
+ {
+ "epoch": 0.75,
+ "learning_rate": 1.3620199146514935e-07,
+ "logits/generated": -2.5927932262420654,
+ "logits/real": -2.4133658409118652,
+ "logps/generated": -350.9914855957031,
+ "logps/real": -157.60354614257812,
+ "loss": 0.0032,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.829925537109375,
+ "rewards/margins": 18.090991973876953,
+ "rewards/real": -3.738931655883789,
+ "step": 1180
+ },
+ {
+ "epoch": 0.76,
+ "learning_rate": 1.326458036984353e-07,
+ "logits/generated": -2.6194968223571777,
+ "logits/real": -2.442230463027954,
+ "logps/generated": -328.1695556640625,
+ "logps/real": -166.50439453125,
+ "loss": 0.0101,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -20.360904693603516,
+ "rewards/margins": 16.797086715698242,
+ "rewards/real": -3.563816785812378,
+ "step": 1190
+ },
+ {
+ "epoch": 0.77,
+ "learning_rate": 1.290896159317212e-07,
+ "logits/generated": -2.6396727561950684,
+ "logits/real": -2.4529004096984863,
+ "logps/generated": -351.5899658203125,
+ "logps/real": -158.91500854492188,
+ "loss": 0.0033,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.666501998901367,
+ "rewards/margins": 18.48183822631836,
+ "rewards/real": -3.1846625804901123,
+ "step": 1200
+ },
+ {
+ "epoch": 0.77,
+ "eval_logits/generated": -2.62396502494812,
+ "eval_logits/real": -2.4681177139282227,
+ "eval_logps/generated": -330.46142578125,
+ "eval_logps/real": -163.26539611816406,
+ "eval_loss": 0.0037050044629722834,
+ "eval_rewards/accuracies": 1.0,
+ "eval_rewards/generated": -20.262523651123047,
+ "eval_rewards/margins": 17.402549743652344,
+ "eval_rewards/real": -2.8599753379821777,
+ "eval_runtime": 353.2592,
+ "eval_samples_per_second": 14.154,
+ "eval_steps_per_second": 0.444,
+ "step": 1200
+ },
+ {
+ "epoch": 0.77,
+ "learning_rate": 1.255334281650071e-07,
+ "logits/generated": -2.603823184967041,
+ "logits/real": -2.468867540359497,
+ "logps/generated": -342.6956787109375,
+ "logps/real": -169.12026977539062,
+ "loss": 0.0096,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -20.63424301147461,
+ "rewards/margins": 17.795860290527344,
+ "rewards/real": -2.8383822441101074,
+ "step": 1210
+ },
+ {
+ "epoch": 0.78,
+ "learning_rate": 1.2197724039829303e-07,
+ "logits/generated": -2.6127591133117676,
+ "logits/real": -2.378269672393799,
+ "logps/generated": -333.45458984375,
+ "logps/real": -153.81045532226562,
+ "loss": 0.0035,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.970136642456055,
+ "rewards/margins": 17.384471893310547,
+ "rewards/real": -2.5856640338897705,
+ "step": 1220
+ },
+ {
+ "epoch": 0.79,
+ "learning_rate": 1.1842105263157894e-07,
+ "logits/generated": -2.555279016494751,
+ "logits/real": -2.393095016479492,
+ "logps/generated": -334.4615478515625,
+ "logps/real": -148.1243438720703,
+ "loss": 0.0043,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.898094177246094,
+ "rewards/margins": 17.973209381103516,
+ "rewards/real": -2.9248833656311035,
+ "step": 1230
+ },
+ {
+ "epoch": 0.79,
+ "learning_rate": 1.1486486486486487e-07,
+ "logits/generated": -2.6060502529144287,
+ "logits/real": -2.4234375953674316,
+ "logps/generated": -325.2197570800781,
+ "logps/real": -164.59268188476562,
+ "loss": 0.0074,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.13725471496582,
+ "rewards/margins": 17.43795394897461,
+ "rewards/real": -2.6992993354797363,
+ "step": 1240
+ },
+ {
+ "epoch": 0.8,
+ "learning_rate": 1.1130867709815078e-07,
+ "logits/generated": -2.5801854133605957,
+ "logits/real": -2.407710313796997,
+ "logps/generated": -346.99639892578125,
+ "logps/real": -163.65025329589844,
+ "loss": 0.0023,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.688512802124023,
+ "rewards/margins": 18.730518341064453,
+ "rewards/real": -2.9579977989196777,
+ "step": 1250
+ },
+ {
+ "epoch": 0.81,
+ "learning_rate": 1.077524893314367e-07,
+ "logits/generated": -2.569446086883545,
+ "logits/real": -2.3492586612701416,
+ "logps/generated": -362.0098571777344,
+ "logps/real": -155.21597290039062,
+ "loss": 0.0054,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -22.53670883178711,
+ "rewards/margins": 19.49285316467285,
+ "rewards/real": -3.043856143951416,
+ "step": 1260
+ },
+ {
+ "epoch": 0.81,
+ "learning_rate": 1.0419630156472262e-07,
+ "logits/generated": -2.5712552070617676,
+ "logits/real": -2.49360728263855,
+ "logps/generated": -345.88848876953125,
+ "logps/real": -187.5894775390625,
+ "loss": 0.0026,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.826547622680664,
+ "rewards/margins": 18.033016204833984,
+ "rewards/real": -3.793531894683838,
+ "step": 1270
+ },
+ {
+ "epoch": 0.82,
+ "learning_rate": 1.0064011379800854e-07,
+ "logits/generated": -2.605649471282959,
+ "logits/real": -2.4789321422576904,
+ "logps/generated": -365.0675048828125,
+ "logps/real": -179.69142150878906,
+ "loss": 0.0059,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -22.86709213256836,
+ "rewards/margins": 19.44388198852539,
+ "rewards/real": -3.423208236694336,
+ "step": 1280
+ },
+ {
+ "epoch": 0.83,
+ "learning_rate": 9.708392603129445e-08,
+ "logits/generated": -2.6148712635040283,
+ "logits/real": -2.455827236175537,
+ "logps/generated": -312.61138916015625,
+ "logps/real": -163.89895629882812,
+ "loss": 0.0043,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -18.890113830566406,
+ "rewards/margins": 15.756782531738281,
+ "rewards/real": -3.1333322525024414,
+ "step": 1290
+ },
+ {
+ "epoch": 0.83,
+ "learning_rate": 9.352773826458037e-08,
+ "logits/generated": -2.61708927154541,
+ "logits/real": -2.4340155124664307,
+ "logps/generated": -358.7867126464844,
+ "logps/real": -162.8968505859375,
+ "loss": 0.0042,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.204402923583984,
+ "rewards/margins": 18.589527130126953,
+ "rewards/real": -2.614877223968506,
+ "step": 1300
+ },
+ {
+ "epoch": 0.83,
+ "eval_logits/generated": -2.5973832607269287,
+ "eval_logits/real": -2.4462833404541016,
+ "eval_logps/generated": -335.50567626953125,
+ "eval_logps/real": -161.40390014648438,
+ "eval_loss": 0.003191521856933832,
+ "eval_rewards/accuracies": 1.0,
+ "eval_rewards/generated": -20.76694679260254,
+ "eval_rewards/margins": 18.093122482299805,
+ "eval_rewards/real": -2.673825740814209,
+ "eval_runtime": 357.3744,
+ "eval_samples_per_second": 13.991,
+ "eval_steps_per_second": 0.439,
+ "step": 1300
+ },
+ {
+ "epoch": 0.84,
+ "learning_rate": 8.997155049786629e-08,
+ "logits/generated": -2.6018290519714355,
+ "logits/real": -2.384018659591675,
+ "logps/generated": -346.05120849609375,
+ "logps/real": -154.76527404785156,
+ "loss": 0.006,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.367633819580078,
+ "rewards/margins": 18.317487716674805,
+ "rewards/real": -3.0501456260681152,
+ "step": 1310
+ },
+ {
+ "epoch": 0.84,
+ "learning_rate": 8.64153627311522e-08,
+ "logits/generated": -2.591632604598999,
+ "logits/real": -2.4401702880859375,
+ "logps/generated": -376.4816589355469,
+ "logps/real": -161.6133575439453,
+ "loss": 0.001,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -24.407400131225586,
+ "rewards/margins": 21.652812957763672,
+ "rewards/real": -2.7545878887176514,
+ "step": 1320
+ },
+ {
+ "epoch": 0.85,
+ "learning_rate": 8.285917496443812e-08,
+ "logits/generated": -2.614602565765381,
+ "logits/real": -2.4444077014923096,
+ "logps/generated": -353.73712158203125,
+ "logps/real": -166.6267547607422,
+ "loss": 0.0016,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.954816818237305,
+ "rewards/margins": 19.12883758544922,
+ "rewards/real": -2.825981378555298,
+ "step": 1330
+ },
+ {
+ "epoch": 0.86,
+ "learning_rate": 7.930298719772404e-08,
+ "logits/generated": -2.584188461303711,
+ "logits/real": -2.4484755992889404,
+ "logps/generated": -343.3823547363281,
+ "logps/real": -176.88575744628906,
+ "loss": 0.0014,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.353042602539062,
+ "rewards/margins": 18.155269622802734,
+ "rewards/real": -3.1977739334106445,
+ "step": 1340
+ },
+ {
+ "epoch": 0.86,
+ "learning_rate": 7.574679943100994e-08,
+ "logits/generated": -2.6062331199645996,
+ "logits/real": -2.4200310707092285,
+ "logps/generated": -346.0461120605469,
+ "logps/real": -147.83078002929688,
+ "loss": 0.003,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.630901336669922,
+ "rewards/margins": 18.697572708129883,
+ "rewards/real": -2.9333269596099854,
+ "step": 1350
+ },
+ {
+ "epoch": 0.87,
+ "learning_rate": 7.219061166429587e-08,
+ "logits/generated": -2.5920839309692383,
+ "logits/real": -2.4139137268066406,
+ "logps/generated": -330.27569580078125,
+ "logps/real": -157.5228729248047,
+ "loss": 0.0019,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.386878967285156,
+ "rewards/margins": 17.642425537109375,
+ "rewards/real": -2.7444536685943604,
+ "step": 1360
+ },
+ {
+ "epoch": 0.88,
+ "learning_rate": 6.863442389758179e-08,
+ "logits/generated": -2.588639974594116,
+ "logits/real": -2.4485459327697754,
+ "logps/generated": -358.29486083984375,
+ "logps/real": -170.88619995117188,
+ "loss": 0.0004,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.756534576416016,
+ "rewards/margins": 19.36898422241211,
+ "rewards/real": -2.3875479698181152,
+ "step": 1370
+ },
+ {
+ "epoch": 0.88,
+ "learning_rate": 6.507823613086771e-08,
+ "logits/generated": -2.616661548614502,
+ "logits/real": -2.473728895187378,
+ "logps/generated": -344.1673583984375,
+ "logps/real": -171.72756958007812,
+ "loss": 0.0003,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.39995574951172,
+ "rewards/margins": 18.465641021728516,
+ "rewards/real": -2.9343149662017822,
+ "step": 1380
+ },
+ {
+ "epoch": 0.89,
+ "learning_rate": 6.152204836415363e-08,
+ "logits/generated": -2.623744010925293,
+ "logits/real": -2.4601359367370605,
+ "logps/generated": -356.36572265625,
+ "logps/real": -165.5369110107422,
+ "loss": 0.0017,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.57801055908203,
+ "rewards/margins": 18.742273330688477,
+ "rewards/real": -2.8357362747192383,
+ "step": 1390
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 5.796586059743954e-08,
+ "logits/generated": -2.58418345451355,
+ "logits/real": -2.4554524421691895,
+ "logps/generated": -342.3634338378906,
+ "logps/real": -161.27557373046875,
+ "loss": 0.0031,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.252824783325195,
+ "rewards/margins": 18.831134796142578,
+ "rewards/real": -2.4216885566711426,
+ "step": 1400
+ },
+ {
+ "epoch": 0.9,
+ "eval_logits/generated": -2.6144187450408936,
+ "eval_logits/real": -2.459505081176758,
+ "eval_logps/generated": -334.29248046875,
+ "eval_logps/real": -156.4322509765625,
+ "eval_loss": 0.003032037289813161,
+ "eval_rewards/accuracies": 0.9992038011550903,
+ "eval_rewards/generated": -20.64562225341797,
+ "eval_rewards/margins": 18.468963623046875,
+ "eval_rewards/real": -2.1766605377197266,
+ "eval_runtime": 353.8921,
+ "eval_samples_per_second": 14.129,
+ "eval_steps_per_second": 0.444,
+ "step": 1400
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 5.4409672830725456e-08,
+ "logits/generated": -2.6259565353393555,
+ "logits/real": -2.446854591369629,
+ "logps/generated": -327.33294677734375,
+ "logps/real": -157.96328735351562,
+ "loss": 0.0007,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -19.752155303955078,
+ "rewards/margins": 17.652597427368164,
+ "rewards/real": -2.0995583534240723,
+ "step": 1410
+ },
+ {
+ "epoch": 0.91,
+ "learning_rate": 5.0853485064011376e-08,
+ "logits/generated": -2.6113741397857666,
+ "logits/real": -2.3581416606903076,
+ "logps/generated": -337.38336181640625,
+ "logps/real": -144.90785217285156,
+ "loss": 0.0075,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -20.918914794921875,
+ "rewards/margins": 18.755264282226562,
+ "rewards/real": -2.163649082183838,
+ "step": 1420
+ },
+ {
+ "epoch": 0.91,
+ "learning_rate": 4.72972972972973e-08,
+ "logits/generated": -2.5806427001953125,
+ "logits/real": -2.4139533042907715,
+ "logps/generated": -328.04327392578125,
+ "logps/real": -159.30142211914062,
+ "loss": 0.0044,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.404008865356445,
+ "rewards/margins": 18.004352569580078,
+ "rewards/real": -2.3996574878692627,
+ "step": 1430
+ },
+ {
+ "epoch": 0.92,
+ "learning_rate": 4.374110953058322e-08,
+ "logits/generated": -2.5779836177825928,
+ "logits/real": -2.41233229637146,
+ "logps/generated": -331.53558349609375,
+ "logps/real": -157.76052856445312,
+ "loss": 0.0011,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.964191436767578,
+ "rewards/margins": 18.361709594726562,
+ "rewards/real": -2.6024842262268066,
+ "step": 1440
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 4.018492176386913e-08,
+ "logits/generated": -2.5819339752197266,
+ "logits/real": -2.4180991649627686,
+ "logps/generated": -335.5973205566406,
+ "logps/real": -161.56442260742188,
+ "loss": 0.0023,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.901308059692383,
+ "rewards/margins": 18.16925811767578,
+ "rewards/real": -2.7320468425750732,
+ "step": 1450
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 3.6628733997155046e-08,
+ "logits/generated": -2.565924644470215,
+ "logits/real": -2.343358278274536,
+ "logps/generated": -367.3075256347656,
+ "logps/real": -140.11587524414062,
+ "loss": 0.0036,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -24.341123580932617,
+ "rewards/margins": 21.56112289428711,
+ "rewards/real": -2.7799980640411377,
+ "step": 1460
+ },
+ {
+ "epoch": 0.94,
+ "learning_rate": 3.3072546230440967e-08,
+ "logits/generated": -2.5737624168395996,
+ "logits/real": -2.3942620754241943,
+ "logps/generated": -330.3951721191406,
+ "logps/real": -156.66238403320312,
+ "loss": 0.0037,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/generated": -20.762723922729492,
+ "rewards/margins": 18.088510513305664,
+ "rewards/real": -2.674212694168091,
+ "step": 1470
+ },
+ {
+ "epoch": 0.95,
+ "learning_rate": 2.9516358463726884e-08,
+ "logits/generated": -2.5995941162109375,
+ "logits/real": -2.4209532737731934,
+ "logps/generated": -342.66656494140625,
+ "logps/real": -166.10543823242188,
+ "loss": 0.001,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.383831024169922,
+ "rewards/margins": 18.954071044921875,
+ "rewards/real": -2.4297597408294678,
+ "step": 1480
+ },
+ {
+ "epoch": 0.95,
+ "learning_rate": 2.59601706970128e-08,
+ "logits/generated": -2.5837836265563965,
+ "logits/real": -2.4423680305480957,
+ "logps/generated": -344.68670654296875,
+ "logps/real": -168.974365234375,
+ "loss": 0.0062,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.8641300201416,
+ "rewards/margins": 19.55858612060547,
+ "rewards/real": -2.3055408000946045,
+ "step": 1490
+ },
+ {
+ "epoch": 0.96,
+ "learning_rate": 2.240398293029872e-08,
+ "logits/generated": -2.5561881065368652,
+ "logits/real": -2.378162384033203,
+ "logps/generated": -345.0732421875,
+ "logps/real": -157.55331420898438,
+ "loss": 0.0015,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.966922760009766,
+ "rewards/margins": 19.243404388427734,
+ "rewards/real": -2.7235145568847656,
+ "step": 1500
+ },
+ {
+ "epoch": 0.96,
+ "eval_logits/generated": -2.5879950523376465,
+ "eval_logits/real": -2.431485414505005,
+ "eval_logps/generated": -346.59881591796875,
+ "eval_logps/real": -161.42237854003906,
+ "eval_loss": 0.002722462872043252,
+ "eval_rewards/accuracies": 1.0,
+ "eval_rewards/generated": -21.876266479492188,
+ "eval_rewards/margins": 19.200592041015625,
+ "eval_rewards/real": -2.6756742000579834,
+ "eval_runtime": 358.839,
+ "eval_samples_per_second": 13.934,
+ "eval_steps_per_second": 0.438,
+ "step": 1500
+ },
+ {
+ "epoch": 0.97,
+ "learning_rate": 1.8847795163584636e-08,
+ "logits/generated": -2.5720112323760986,
+ "logits/real": -2.4139506816864014,
+ "logps/generated": -347.1139221191406,
+ "logps/real": -159.2230224609375,
+ "loss": 0.0019,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.823551177978516,
+ "rewards/margins": 19.27743911743164,
+ "rewards/real": -2.546113967895508,
+ "step": 1510
+ },
+ {
+ "epoch": 0.97,
+ "learning_rate": 1.5291607396870554e-08,
+ "logits/generated": -2.59587025642395,
+ "logits/real": -2.453565835952759,
+ "logps/generated": -342.2265930175781,
+ "logps/real": -155.0762481689453,
+ "loss": 0.0003,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.84916877746582,
+ "rewards/margins": 19.360273361206055,
+ "rewards/real": -2.4888949394226074,
+ "step": 1520
+ },
+ {
+ "epoch": 0.98,
+ "learning_rate": 1.1735419630156473e-08,
+ "logits/generated": -2.597951650619507,
+ "logits/real": -2.443702459335327,
+ "logps/generated": -356.6807556152344,
+ "logps/real": -161.6763458251953,
+ "loss": 0.0005,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -22.68019676208496,
+ "rewards/margins": 19.755842208862305,
+ "rewards/real": -2.9243524074554443,
+ "step": 1530
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 8.179231863442388e-09,
+ "logits/generated": -2.577643871307373,
+ "logits/real": -2.454272985458374,
+ "logps/generated": -328.8896484375,
+ "logps/real": -161.75326538085938,
+ "loss": 0.0066,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -20.900354385375977,
+ "rewards/margins": 18.582683563232422,
+ "rewards/real": -2.31767201423645,
+ "step": 1540
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 4.623044096728307e-09,
+ "logits/generated": -2.6040985584259033,
+ "logits/real": -2.4326038360595703,
+ "logps/generated": -353.8281555175781,
+ "logps/real": -167.1890869140625,
+ "loss": 0.0004,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -22.231428146362305,
+ "rewards/margins": 19.598224639892578,
+ "rewards/real": -2.6332058906555176,
+ "step": 1550
+ },
+ {
+ "epoch": 1.0,
+ "learning_rate": 1.0668563300142248e-09,
+ "logits/generated": -2.5751712322235107,
+ "logits/real": -2.382331132888794,
+ "logps/generated": -337.31011962890625,
+ "logps/real": -157.5955047607422,
+ "loss": 0.0014,
+ "rewards/accuracies": 1.0,
+ "rewards/generated": -21.168819427490234,
+ "rewards/margins": 18.588611602783203,
+ "rewards/real": -2.5802083015441895,
+ "step": 1560
+ },
+ {
+ "epoch": 1.0,
+ "step": 1563,
+ "total_flos": 0.0,
+ "train_loss": 0.016542167173306324,
+ "train_runtime": 14061.5551,
+ "train_samples_per_second": 3.556,
+ "train_steps_per_second": 0.111
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 1563,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 100,
+ "total_flos": 0.0,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }