tonyassi committed on
Commit
05fce59
1 Parent(s): 92f6129

Upload folder using huggingface_hub

Files changed (8)
  1. README.md +95 -0
  2. metadata.json +8 -0
  3. model.safetensors +3 -0
  4. optimizer.pt +3 -0
  5. rng_state.pth +3 -0
  6. scheduler.pt +3 -0
  7. trainer_state.json +772 -0
  8. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,95 @@
+ ---
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224
+ tags:
+ - Image Regression
+ datasets:
+ - "tonyassi/sales1"
+ metrics:
+ - accuracy
+ model-index:
+ - name: "sales-prediction13"
+   results: []
+ ---
+
+ # sales-prediction13
+ ## Image Regression Model
+
+ This model was trained with [Image Regression Model Trainer](https://github.com/TonyAssi/ImageRegression/tree/main). It takes an image as input and outputs a float value.
+
+ ```python
+ from ImageRegression import predict
+ predict(repo_id='tonyassi/sales-prediction13', image_path='image.jpg')
+ ```
+
+ ---
+
+ ## Dataset
+ Dataset: tonyassi/sales1\
+ Value Column: 'sales'\
+ Train Test Split: 0.2
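+
+ For reference, the raw data can be loaded and split outside the trainer with the standard 🤗 Datasets API. This is a minimal sketch, assuming the dataset has a 'train' split with an image column and a 'sales' value column; it is not part of ImageRegression:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset from the 🤗 Hub and reproduce a 0.2 train/test split.
+ dataset = load_dataset('tonyassi/sales1')['train']
+ splits = dataset.train_test_split(test_size=0.2)
+
+ print(splits['train'].num_rows, splits['test'].num_rows)
+ print(splits['train'][0]['sales'])  # the regression target used as the value column
+ ```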
+
+ ---
+
+ ## Training
+ Base Model: [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)\
+ Epochs: 10\
+ Learning Rate: 0.0001
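+
+ The base model is a ViT image classifier; for regression it is presumably adapted to emit a single float. A rough sketch of how such a head is typically set up with 🤗 Transformers (an assumption about the implementation, not the trainer's exact code):
+
+ ```python
+ from transformers import ViTForImageClassification, ViTImageProcessor
+
+ # A single-output head with problem_type='regression' makes the model
+ # predict one float per image and train with an MSE loss.
+ model = ViTForImageClassification.from_pretrained(
+     'google/vit-base-patch16-224',
+     num_labels=1,
+     problem_type='regression',
+     ignore_mismatched_sizes=True,  # the original 1000-class head is replaced
+ )
+ processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
+ ```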
+
+ ---
+
+ ## Usage
+
+ ### Download
+ ```bash
+ git clone https://github.com/TonyAssi/ImageRegression.git
+ cd ImageRegression
+ ```
+
+ ### Installation
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### Import
+ ```python
+ from ImageRegression import train_model, upload_model, predict
+ ```
+
+ ### Inference (Prediction)
+ - **repo_id** 🤗 Hub repo id of the model
+ - **image_path** path to the input image
+ ```python
+ predict(repo_id='tonyassi/sales-prediction13',
+         image_path='image.jpg')
+ ```
+ The first time this function is called, it downloads the safetensors model from the 🤗 Hub; subsequent calls reuse the cached model and run faster.
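+
+ If you want to pre-fetch the weights (for example in a build step), the same file can be downloaded explicitly with huggingface_hub. A short sketch of the standard API; predict() manages the download on its own:
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Downloads model.safetensors into the local 🤗 cache and returns its path.
+ path = hf_hub_download(repo_id='tonyassi/sales-prediction13',
+                        filename='model.safetensors')
+ print(path)
+ ```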
+
+ ### Train Model
+ - **dataset_id** 🤗 dataset id
+ - **value_column_name** name of the column in the dataset that holds the prediction values
+ - **test_split** fraction of the dataset held out as the test set in the train/test split
+ - **output_dir** directory where the checkpoints will be saved
+ - **num_train_epochs** number of training epochs
+ - **learning_rate** learning rate
+ ```python
+ train_model(dataset_id='tonyassi/sales1',
+             value_column_name='sales',
+             test_split=0.2,
+             output_dir='./results',
+             num_train_epochs=10,
+             learning_rate=0.0001)
+ ```
+ The trainer saves the checkpoints in the output_dir location. The model.safetensors file contains the trained weights you'll use for inference (prediction).
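+
+ To sanity-check a checkpoint, the saved weights can be opened directly with the safetensors library. A minimal sketch; the checkpoint path is the one used for this repo and may differ in your run:
+
+ ```python
+ from safetensors.torch import load_file
+
+ # Load the raw tensors from a Trainer checkpoint and inspect a few shapes.
+ state_dict = load_file('./results/checkpoint-940/model.safetensors')
+ for name, tensor in list(state_dict.items())[:5]:
+     print(name, tuple(tensor.shape))
+ ```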
+
+ ### Upload Model
+ This function uploads your model to the 🤗 Hub.
+ - **model_id** name to publish the model under
+ - **token** your 🤗 access token; go [here](https://huggingface.co/settings/tokens) to create one
+ - **checkpoint_dir** checkpoint folder that will be uploaded
+ ```python
+ upload_model(model_id='sales-prediction13',
+              token='YOUR_HF_TOKEN',
+              checkpoint_dir='./results/checkpoint-940')
+ ```
metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "dataset_id": "tonyassi/sales1",
+ "value_column_name": "sales",
+ "test_split": 0.2,
+ "num_train_epochs": 10,
+ "learning_rate": 0.0001,
+ "max_value": 100000
+ }
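
These fields record the training configuration and can be read back programmatically. A short sketch using standard APIs; note that max_value presumably records the scale used to normalize the target values, which is an assumption rather than something documented here:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch and read the training configuration stored alongside the weights.
path = hf_hub_download(repo_id='tonyassi/sales-prediction13', filename='metadata.json')
with open(path) as f:
    metadata = json.load(f)

print(metadata['dataset_id'], metadata['num_train_epochs'], metadata['learning_rate'])
print('max_value:', metadata['max_value'])  # presumably the normalization scale for 'sales' (assumption)
```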
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72a6c88a47ebcb74e8939d24ad8ae8da57e0795764fcf42e0cfb2f52f7c9138f
+ size 345583444
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce8b52c476317198da63f354c4c3369b917e71f7d9a4ba4647237024de507bcf
+ size 686557178
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da1db5c227c2000e391e1d225e13a38eda71746be2164bab198c44af9ae0882b
+ size 13990
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c8eaa2a8b1b5ec96261497120f675979fe119748d8017a7cec0ad5b8bab9d4e
+ size 1064
trainer_state.json ADDED
@@ -0,0 +1,772 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 10.0,
5
+ "eval_steps": 500,
6
+ "global_step": 940,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.10638297872340426,
13
+ "grad_norm": 10.06272029876709,
14
+ "learning_rate": 9.893617021276596e-05,
15
+ "loss": 0.2002,
16
+ "step": 10
17
+ },
18
+ {
19
+ "epoch": 0.2127659574468085,
20
+ "grad_norm": 9.355937004089355,
21
+ "learning_rate": 9.787234042553192e-05,
22
+ "loss": 0.1577,
23
+ "step": 20
24
+ },
25
+ {
26
+ "epoch": 0.3191489361702128,
27
+ "grad_norm": 6.437077522277832,
28
+ "learning_rate": 9.680851063829788e-05,
29
+ "loss": 0.0686,
30
+ "step": 30
31
+ },
32
+ {
33
+ "epoch": 0.425531914893617,
34
+ "grad_norm": 7.390567302703857,
35
+ "learning_rate": 9.574468085106384e-05,
36
+ "loss": 0.0587,
37
+ "step": 40
38
+ },
39
+ {
40
+ "epoch": 0.5319148936170213,
41
+ "grad_norm": 1.0643635988235474,
42
+ "learning_rate": 9.468085106382978e-05,
43
+ "loss": 0.0391,
44
+ "step": 50
45
+ },
46
+ {
47
+ "epoch": 0.6382978723404256,
48
+ "grad_norm": 4.998205184936523,
49
+ "learning_rate": 9.361702127659576e-05,
50
+ "loss": 0.0309,
51
+ "step": 60
52
+ },
53
+ {
54
+ "epoch": 0.7446808510638298,
55
+ "grad_norm": 3.10650372505188,
56
+ "learning_rate": 9.25531914893617e-05,
57
+ "loss": 0.0189,
58
+ "step": 70
59
+ },
60
+ {
61
+ "epoch": 0.851063829787234,
62
+ "grad_norm": 3.8945138454437256,
63
+ "learning_rate": 9.148936170212766e-05,
64
+ "loss": 0.036,
65
+ "step": 80
66
+ },
67
+ {
68
+ "epoch": 0.9574468085106383,
69
+ "grad_norm": 1.1039351224899292,
70
+ "learning_rate": 9.042553191489363e-05,
71
+ "loss": 0.009,
72
+ "step": 90
73
+ },
74
+ {
75
+ "epoch": 1.0,
76
+ "eval_loss": 0.013753031380474567,
77
+ "eval_mse": 0.013753034174442291,
78
+ "eval_runtime": 14.0344,
79
+ "eval_samples_per_second": 13.324,
80
+ "eval_steps_per_second": 1.71,
81
+ "step": 94
82
+ },
83
+ {
84
+ "epoch": 1.0638297872340425,
85
+ "grad_norm": 1.4354372024536133,
86
+ "learning_rate": 8.936170212765958e-05,
87
+ "loss": 0.0189,
88
+ "step": 100
89
+ },
90
+ {
91
+ "epoch": 1.1702127659574468,
92
+ "grad_norm": 2.4736692905426025,
93
+ "learning_rate": 8.829787234042553e-05,
94
+ "loss": 0.0203,
95
+ "step": 110
96
+ },
97
+ {
98
+ "epoch": 1.2765957446808511,
99
+ "grad_norm": 4.297308921813965,
100
+ "learning_rate": 8.723404255319149e-05,
101
+ "loss": 0.0142,
102
+ "step": 120
103
+ },
104
+ {
105
+ "epoch": 1.3829787234042552,
106
+ "grad_norm": 0.9237638711929321,
107
+ "learning_rate": 8.617021276595745e-05,
108
+ "loss": 0.0196,
109
+ "step": 130
110
+ },
111
+ {
112
+ "epoch": 1.4893617021276595,
113
+ "grad_norm": 0.5327073335647583,
114
+ "learning_rate": 8.510638297872341e-05,
115
+ "loss": 0.005,
116
+ "step": 140
117
+ },
118
+ {
119
+ "epoch": 1.5957446808510638,
120
+ "grad_norm": 4.415370464324951,
121
+ "learning_rate": 8.404255319148937e-05,
122
+ "loss": 0.0154,
123
+ "step": 150
124
+ },
125
+ {
126
+ "epoch": 1.702127659574468,
127
+ "grad_norm": 4.288142681121826,
128
+ "learning_rate": 8.297872340425533e-05,
129
+ "loss": 0.0079,
130
+ "step": 160
131
+ },
132
+ {
133
+ "epoch": 1.8085106382978724,
134
+ "grad_norm": 3.239485025405884,
135
+ "learning_rate": 8.191489361702128e-05,
136
+ "loss": 0.0193,
137
+ "step": 170
138
+ },
139
+ {
140
+ "epoch": 1.9148936170212765,
141
+ "grad_norm": 4.977686882019043,
142
+ "learning_rate": 8.085106382978723e-05,
143
+ "loss": 0.0073,
144
+ "step": 180
145
+ },
146
+ {
147
+ "epoch": 2.0,
148
+ "eval_loss": 0.012815679423511028,
149
+ "eval_mse": 0.012815679423511028,
150
+ "eval_runtime": 13.7185,
151
+ "eval_samples_per_second": 13.631,
152
+ "eval_steps_per_second": 1.749,
153
+ "step": 188
154
+ },
155
+ {
156
+ "epoch": 2.021276595744681,
157
+ "grad_norm": 0.5058400630950928,
158
+ "learning_rate": 7.978723404255319e-05,
159
+ "loss": 0.0292,
160
+ "step": 190
161
+ },
162
+ {
163
+ "epoch": 2.127659574468085,
164
+ "grad_norm": 0.8780695199966431,
165
+ "learning_rate": 7.872340425531916e-05,
166
+ "loss": 0.0057,
167
+ "step": 200
168
+ },
169
+ {
170
+ "epoch": 2.2340425531914896,
171
+ "grad_norm": 1.6473358869552612,
172
+ "learning_rate": 7.76595744680851e-05,
173
+ "loss": 0.0038,
174
+ "step": 210
175
+ },
176
+ {
177
+ "epoch": 2.3404255319148937,
178
+ "grad_norm": 0.6567038297653198,
179
+ "learning_rate": 7.659574468085106e-05,
180
+ "loss": 0.0064,
181
+ "step": 220
182
+ },
183
+ {
184
+ "epoch": 2.4468085106382977,
185
+ "grad_norm": 0.7227131724357605,
186
+ "learning_rate": 7.553191489361703e-05,
187
+ "loss": 0.0274,
188
+ "step": 230
189
+ },
190
+ {
191
+ "epoch": 2.5531914893617023,
192
+ "grad_norm": 2.78585147857666,
193
+ "learning_rate": 7.446808510638298e-05,
194
+ "loss": 0.0153,
195
+ "step": 240
196
+ },
197
+ {
198
+ "epoch": 2.6595744680851063,
199
+ "grad_norm": 0.9734132885932922,
200
+ "learning_rate": 7.340425531914894e-05,
201
+ "loss": 0.0051,
202
+ "step": 250
203
+ },
204
+ {
205
+ "epoch": 2.7659574468085104,
206
+ "grad_norm": 0.26271963119506836,
207
+ "learning_rate": 7.23404255319149e-05,
208
+ "loss": 0.0178,
209
+ "step": 260
210
+ },
211
+ {
212
+ "epoch": 2.872340425531915,
213
+ "grad_norm": 0.6783913373947144,
214
+ "learning_rate": 7.127659574468085e-05,
215
+ "loss": 0.0084,
216
+ "step": 270
217
+ },
218
+ {
219
+ "epoch": 2.978723404255319,
220
+ "grad_norm": 2.631406307220459,
221
+ "learning_rate": 7.021276595744681e-05,
222
+ "loss": 0.0054,
223
+ "step": 280
224
+ },
225
+ {
226
+ "epoch": 3.0,
227
+ "eval_loss": 0.013233263976871967,
228
+ "eval_mse": 0.013233264908194542,
229
+ "eval_runtime": 13.8981,
230
+ "eval_samples_per_second": 13.455,
231
+ "eval_steps_per_second": 1.727,
232
+ "step": 282
233
+ },
234
+ {
235
+ "epoch": 3.0851063829787235,
236
+ "grad_norm": 0.7301953434944153,
237
+ "learning_rate": 6.914893617021277e-05,
238
+ "loss": 0.0032,
239
+ "step": 290
240
+ },
241
+ {
242
+ "epoch": 3.1914893617021276,
243
+ "grad_norm": 0.15255877375602722,
244
+ "learning_rate": 6.808510638297873e-05,
245
+ "loss": 0.0035,
246
+ "step": 300
247
+ },
248
+ {
249
+ "epoch": 3.297872340425532,
250
+ "grad_norm": 1.292937994003296,
251
+ "learning_rate": 6.702127659574469e-05,
252
+ "loss": 0.0097,
253
+ "step": 310
254
+ },
255
+ {
256
+ "epoch": 3.404255319148936,
257
+ "grad_norm": 0.5148919224739075,
258
+ "learning_rate": 6.595744680851063e-05,
259
+ "loss": 0.0042,
260
+ "step": 320
261
+ },
262
+ {
263
+ "epoch": 3.5106382978723403,
264
+ "grad_norm": 0.4015097916126251,
265
+ "learning_rate": 6.489361702127659e-05,
266
+ "loss": 0.0086,
267
+ "step": 330
268
+ },
269
+ {
270
+ "epoch": 3.617021276595745,
271
+ "grad_norm": 3.0170764923095703,
272
+ "learning_rate": 6.382978723404256e-05,
273
+ "loss": 0.007,
274
+ "step": 340
275
+ },
276
+ {
277
+ "epoch": 3.723404255319149,
278
+ "grad_norm": 3.584163188934326,
279
+ "learning_rate": 6.276595744680851e-05,
280
+ "loss": 0.0144,
281
+ "step": 350
282
+ },
283
+ {
284
+ "epoch": 3.829787234042553,
285
+ "grad_norm": 1.1259922981262207,
286
+ "learning_rate": 6.170212765957447e-05,
287
+ "loss": 0.0083,
288
+ "step": 360
289
+ },
290
+ {
291
+ "epoch": 3.9361702127659575,
292
+ "grad_norm": 0.7375078797340393,
293
+ "learning_rate": 6.063829787234043e-05,
294
+ "loss": 0.0212,
295
+ "step": 370
296
+ },
297
+ {
298
+ "epoch": 4.0,
299
+ "eval_loss": 0.017101282253861427,
300
+ "eval_mse": 0.017101280391216278,
301
+ "eval_runtime": 13.872,
302
+ "eval_samples_per_second": 13.48,
303
+ "eval_steps_per_second": 1.73,
304
+ "step": 376
305
+ },
306
+ {
307
+ "epoch": 4.042553191489362,
308
+ "grad_norm": 2.1846752166748047,
309
+ "learning_rate": 5.9574468085106384e-05,
310
+ "loss": 0.0195,
311
+ "step": 380
312
+ },
313
+ {
314
+ "epoch": 4.148936170212766,
315
+ "grad_norm": 5.545344829559326,
316
+ "learning_rate": 5.851063829787234e-05,
317
+ "loss": 0.0105,
318
+ "step": 390
319
+ },
320
+ {
321
+ "epoch": 4.25531914893617,
322
+ "grad_norm": 0.2653953731060028,
323
+ "learning_rate": 5.744680851063831e-05,
324
+ "loss": 0.0025,
325
+ "step": 400
326
+ },
327
+ {
328
+ "epoch": 4.361702127659575,
329
+ "grad_norm": 1.925662875175476,
330
+ "learning_rate": 5.638297872340426e-05,
331
+ "loss": 0.0086,
332
+ "step": 410
333
+ },
334
+ {
335
+ "epoch": 4.468085106382979,
336
+ "grad_norm": 1.8210370540618896,
337
+ "learning_rate": 5.531914893617022e-05,
338
+ "loss": 0.0043,
339
+ "step": 420
340
+ },
341
+ {
342
+ "epoch": 4.574468085106383,
343
+ "grad_norm": 0.3224211037158966,
344
+ "learning_rate": 5.425531914893617e-05,
345
+ "loss": 0.0032,
346
+ "step": 430
347
+ },
348
+ {
349
+ "epoch": 4.680851063829787,
350
+ "grad_norm": 1.967247486114502,
351
+ "learning_rate": 5.319148936170213e-05,
352
+ "loss": 0.0019,
353
+ "step": 440
354
+ },
355
+ {
356
+ "epoch": 4.787234042553192,
357
+ "grad_norm": 0.4594058692455292,
358
+ "learning_rate": 5.212765957446809e-05,
359
+ "loss": 0.003,
360
+ "step": 450
361
+ },
362
+ {
363
+ "epoch": 4.8936170212765955,
364
+ "grad_norm": 1.7685928344726562,
365
+ "learning_rate": 5.1063829787234044e-05,
366
+ "loss": 0.0098,
367
+ "step": 460
368
+ },
369
+ {
370
+ "epoch": 5.0,
371
+ "grad_norm": 1.2480486631393433,
372
+ "learning_rate": 5e-05,
373
+ "loss": 0.0037,
374
+ "step": 470
375
+ },
376
+ {
377
+ "epoch": 5.0,
378
+ "eval_loss": 0.012585950084030628,
379
+ "eval_mse": 0.012585948221385479,
380
+ "eval_runtime": 13.9988,
381
+ "eval_samples_per_second": 13.358,
382
+ "eval_steps_per_second": 1.714,
383
+ "step": 470
384
+ },
385
+ {
386
+ "epoch": 5.1063829787234045,
387
+ "grad_norm": 0.3565947413444519,
388
+ "learning_rate": 4.893617021276596e-05,
389
+ "loss": 0.0022,
390
+ "step": 480
391
+ },
392
+ {
393
+ "epoch": 5.212765957446808,
394
+ "grad_norm": 1.9558695554733276,
395
+ "learning_rate": 4.787234042553192e-05,
396
+ "loss": 0.0024,
397
+ "step": 490
398
+ },
399
+ {
400
+ "epoch": 5.319148936170213,
401
+ "grad_norm": 2.0850021839141846,
402
+ "learning_rate": 4.680851063829788e-05,
403
+ "loss": 0.0017,
404
+ "step": 500
405
+ },
406
+ {
407
+ "epoch": 5.425531914893617,
408
+ "grad_norm": 0.6376081705093384,
409
+ "learning_rate": 4.574468085106383e-05,
410
+ "loss": 0.0024,
411
+ "step": 510
412
+ },
413
+ {
414
+ "epoch": 5.531914893617021,
415
+ "grad_norm": 0.6883834600448608,
416
+ "learning_rate": 4.468085106382979e-05,
417
+ "loss": 0.0027,
418
+ "step": 520
419
+ },
420
+ {
421
+ "epoch": 5.638297872340425,
422
+ "grad_norm": 0.7575547695159912,
423
+ "learning_rate": 4.3617021276595746e-05,
424
+ "loss": 0.0089,
425
+ "step": 530
426
+ },
427
+ {
428
+ "epoch": 5.74468085106383,
429
+ "grad_norm": 0.9643301367759705,
430
+ "learning_rate": 4.2553191489361704e-05,
431
+ "loss": 0.0019,
432
+ "step": 540
433
+ },
434
+ {
435
+ "epoch": 5.851063829787234,
436
+ "grad_norm": 0.4256587326526642,
437
+ "learning_rate": 4.148936170212766e-05,
438
+ "loss": 0.0076,
439
+ "step": 550
440
+ },
441
+ {
442
+ "epoch": 5.957446808510638,
443
+ "grad_norm": 0.6945656538009644,
444
+ "learning_rate": 4.0425531914893614e-05,
445
+ "loss": 0.0015,
446
+ "step": 560
447
+ },
448
+ {
449
+ "epoch": 6.0,
450
+ "eval_loss": 0.012150809168815613,
451
+ "eval_mse": 0.012150808237493038,
452
+ "eval_runtime": 13.8305,
453
+ "eval_samples_per_second": 13.521,
454
+ "eval_steps_per_second": 1.735,
455
+ "step": 564
456
+ },
457
+ {
458
+ "epoch": 6.0638297872340425,
459
+ "grad_norm": 0.319540798664093,
460
+ "learning_rate": 3.936170212765958e-05,
461
+ "loss": 0.0019,
462
+ "step": 570
463
+ },
464
+ {
465
+ "epoch": 6.170212765957447,
466
+ "grad_norm": 1.3060803413391113,
467
+ "learning_rate": 3.829787234042553e-05,
468
+ "loss": 0.0078,
469
+ "step": 580
470
+ },
471
+ {
472
+ "epoch": 6.276595744680851,
473
+ "grad_norm": 0.2700423300266266,
474
+ "learning_rate": 3.723404255319149e-05,
475
+ "loss": 0.0011,
476
+ "step": 590
477
+ },
478
+ {
479
+ "epoch": 6.382978723404255,
480
+ "grad_norm": 0.3229953348636627,
481
+ "learning_rate": 3.617021276595745e-05,
482
+ "loss": 0.0012,
483
+ "step": 600
484
+ },
485
+ {
486
+ "epoch": 6.48936170212766,
487
+ "grad_norm": 1.5781389474868774,
488
+ "learning_rate": 3.5106382978723407e-05,
489
+ "loss": 0.0016,
490
+ "step": 610
491
+ },
492
+ {
493
+ "epoch": 6.595744680851064,
494
+ "grad_norm": 1.4765293598175049,
495
+ "learning_rate": 3.4042553191489365e-05,
496
+ "loss": 0.0022,
497
+ "step": 620
498
+ },
499
+ {
500
+ "epoch": 6.702127659574468,
501
+ "grad_norm": 0.3169589340686798,
502
+ "learning_rate": 3.2978723404255317e-05,
503
+ "loss": 0.0017,
504
+ "step": 630
505
+ },
506
+ {
507
+ "epoch": 6.808510638297872,
508
+ "grad_norm": 0.05266076698899269,
509
+ "learning_rate": 3.191489361702128e-05,
510
+ "loss": 0.0012,
511
+ "step": 640
512
+ },
513
+ {
514
+ "epoch": 6.914893617021277,
515
+ "grad_norm": 1.669859766960144,
516
+ "learning_rate": 3.085106382978723e-05,
517
+ "loss": 0.0044,
518
+ "step": 650
519
+ },
520
+ {
521
+ "epoch": 7.0,
522
+ "eval_loss": 0.01300265546888113,
523
+ "eval_mse": 0.01300265546888113,
524
+ "eval_runtime": 13.9251,
525
+ "eval_samples_per_second": 13.429,
526
+ "eval_steps_per_second": 1.724,
527
+ "step": 658
528
+ },
529
+ {
530
+ "epoch": 7.0212765957446805,
531
+ "grad_norm": 0.5370873808860779,
532
+ "learning_rate": 2.9787234042553192e-05,
533
+ "loss": 0.0013,
534
+ "step": 660
535
+ },
536
+ {
537
+ "epoch": 7.127659574468085,
538
+ "grad_norm": 1.7552562952041626,
539
+ "learning_rate": 2.8723404255319154e-05,
540
+ "loss": 0.0097,
541
+ "step": 670
542
+ },
543
+ {
544
+ "epoch": 7.23404255319149,
545
+ "grad_norm": 0.7278550863265991,
546
+ "learning_rate": 2.765957446808511e-05,
547
+ "loss": 0.0021,
548
+ "step": 680
549
+ },
550
+ {
551
+ "epoch": 7.340425531914893,
552
+ "grad_norm": 0.3339664041996002,
553
+ "learning_rate": 2.6595744680851064e-05,
554
+ "loss": 0.0013,
555
+ "step": 690
556
+ },
557
+ {
558
+ "epoch": 7.446808510638298,
559
+ "grad_norm": 0.2945927381515503,
560
+ "learning_rate": 2.5531914893617022e-05,
561
+ "loss": 0.0006,
562
+ "step": 700
563
+ },
564
+ {
565
+ "epoch": 7.553191489361702,
566
+ "grad_norm": 0.47596001625061035,
567
+ "learning_rate": 2.446808510638298e-05,
568
+ "loss": 0.0016,
569
+ "step": 710
570
+ },
571
+ {
572
+ "epoch": 7.659574468085106,
573
+ "grad_norm": 0.6078076362609863,
574
+ "learning_rate": 2.340425531914894e-05,
575
+ "loss": 0.0007,
576
+ "step": 720
577
+ },
578
+ {
579
+ "epoch": 7.76595744680851,
580
+ "grad_norm": 0.16935944557189941,
581
+ "learning_rate": 2.2340425531914894e-05,
582
+ "loss": 0.0005,
583
+ "step": 730
584
+ },
585
+ {
586
+ "epoch": 7.872340425531915,
587
+ "grad_norm": 0.29044824838638306,
588
+ "learning_rate": 2.1276595744680852e-05,
589
+ "loss": 0.0006,
590
+ "step": 740
591
+ },
592
+ {
593
+ "epoch": 7.9787234042553195,
594
+ "grad_norm": 0.8321860432624817,
595
+ "learning_rate": 2.0212765957446807e-05,
596
+ "loss": 0.0005,
597
+ "step": 750
598
+ },
599
+ {
600
+ "epoch": 8.0,
601
+ "eval_loss": 0.012086642906069756,
602
+ "eval_mse": 0.012086641043424606,
603
+ "eval_runtime": 13.9355,
604
+ "eval_samples_per_second": 13.419,
605
+ "eval_steps_per_second": 1.722,
606
+ "step": 752
607
+ },
608
+ {
609
+ "epoch": 8.085106382978724,
610
+ "grad_norm": 0.5752009153366089,
611
+ "learning_rate": 1.9148936170212766e-05,
612
+ "loss": 0.0003,
613
+ "step": 760
614
+ },
615
+ {
616
+ "epoch": 8.191489361702128,
617
+ "grad_norm": 0.2549976408481598,
618
+ "learning_rate": 1.8085106382978724e-05,
619
+ "loss": 0.0002,
620
+ "step": 770
621
+ },
622
+ {
623
+ "epoch": 8.297872340425531,
624
+ "grad_norm": 0.41198980808258057,
625
+ "learning_rate": 1.7021276595744682e-05,
626
+ "loss": 0.0015,
627
+ "step": 780
628
+ },
629
+ {
630
+ "epoch": 8.404255319148936,
631
+ "grad_norm": 0.25298696756362915,
632
+ "learning_rate": 1.595744680851064e-05,
633
+ "loss": 0.0037,
634
+ "step": 790
635
+ },
636
+ {
637
+ "epoch": 8.51063829787234,
638
+ "grad_norm": 0.38285136222839355,
639
+ "learning_rate": 1.4893617021276596e-05,
640
+ "loss": 0.0008,
641
+ "step": 800
642
+ },
643
+ {
644
+ "epoch": 8.617021276595745,
645
+ "grad_norm": 0.2713870704174042,
646
+ "learning_rate": 1.3829787234042554e-05,
647
+ "loss": 0.0003,
648
+ "step": 810
649
+ },
650
+ {
651
+ "epoch": 8.72340425531915,
652
+ "grad_norm": 0.038877371698617935,
653
+ "learning_rate": 1.2765957446808511e-05,
654
+ "loss": 0.0016,
655
+ "step": 820
656
+ },
657
+ {
658
+ "epoch": 8.829787234042554,
659
+ "grad_norm": 0.5373714566230774,
660
+ "learning_rate": 1.170212765957447e-05,
661
+ "loss": 0.0004,
662
+ "step": 830
663
+ },
664
+ {
665
+ "epoch": 8.936170212765958,
666
+ "grad_norm": 0.03909245505928993,
667
+ "learning_rate": 1.0638297872340426e-05,
668
+ "loss": 0.0002,
669
+ "step": 840
670
+ },
671
+ {
672
+ "epoch": 9.0,
673
+ "eval_loss": 0.01240901555866003,
674
+ "eval_mse": 0.01240901555866003,
675
+ "eval_runtime": 13.8515,
676
+ "eval_samples_per_second": 13.5,
677
+ "eval_steps_per_second": 1.733,
678
+ "step": 846
679
+ },
680
+ {
681
+ "epoch": 9.042553191489361,
682
+ "grad_norm": 0.5892852544784546,
683
+ "learning_rate": 9.574468085106383e-06,
684
+ "loss": 0.0006,
685
+ "step": 850
686
+ },
687
+ {
688
+ "epoch": 9.148936170212766,
689
+ "grad_norm": 0.5549539923667908,
690
+ "learning_rate": 8.510638297872341e-06,
691
+ "loss": 0.0002,
692
+ "step": 860
693
+ },
694
+ {
695
+ "epoch": 9.25531914893617,
696
+ "grad_norm": 0.04103722795844078,
697
+ "learning_rate": 7.446808510638298e-06,
698
+ "loss": 0.0002,
699
+ "step": 870
700
+ },
701
+ {
702
+ "epoch": 9.361702127659575,
703
+ "grad_norm": 0.3398093283176422,
704
+ "learning_rate": 6.3829787234042555e-06,
705
+ "loss": 0.0002,
706
+ "step": 880
707
+ },
708
+ {
709
+ "epoch": 9.46808510638298,
710
+ "grad_norm": 0.24325959384441376,
711
+ "learning_rate": 5.319148936170213e-06,
712
+ "loss": 0.0002,
713
+ "step": 890
714
+ },
715
+ {
716
+ "epoch": 9.574468085106384,
717
+ "grad_norm": 0.1950555145740509,
718
+ "learning_rate": 4.255319148936171e-06,
719
+ "loss": 0.0002,
720
+ "step": 900
721
+ },
722
+ {
723
+ "epoch": 9.680851063829786,
724
+ "grad_norm": 0.8445388674736023,
725
+ "learning_rate": 3.1914893617021277e-06,
726
+ "loss": 0.0005,
727
+ "step": 910
728
+ },
729
+ {
730
+ "epoch": 9.787234042553191,
731
+ "grad_norm": 0.035643890500068665,
732
+ "learning_rate": 2.1276595744680853e-06,
733
+ "loss": 0.0001,
734
+ "step": 920
735
+ },
736
+ {
737
+ "epoch": 9.893617021276595,
738
+ "grad_norm": 0.12715914845466614,
739
+ "learning_rate": 1.0638297872340427e-06,
740
+ "loss": 0.0021,
741
+ "step": 930
742
+ },
743
+ {
744
+ "epoch": 10.0,
745
+ "grad_norm": 0.24454765021800995,
746
+ "learning_rate": 0.0,
747
+ "loss": 0.0002,
748
+ "step": 940
749
+ }
750
+ ],
751
+ "logging_steps": 10,
752
+ "max_steps": 940,
753
+ "num_input_tokens_seen": 0,
754
+ "num_train_epochs": 10,
755
+ "save_steps": 10,
756
+ "stateful_callbacks": {
757
+ "TrainerControl": {
758
+ "args": {
759
+ "should_epoch_stop": false,
760
+ "should_evaluate": false,
761
+ "should_log": false,
762
+ "should_save": true,
763
+ "should_training_stop": true
764
+ },
765
+ "attributes": {}
766
+ }
767
+ },
768
+ "total_flos": 0.0,
769
+ "train_batch_size": 8,
770
+ "trial_name": null,
771
+ "trial_params": null
772
+ }
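
trainer_state.json records a training loss every 10 steps (logging_steps: 10) and an eval_loss/eval_mse at the end of each epoch. A short sketch for pulling those curves out of log_history, assuming a local copy of the file and using only json and matplotlib:

```python
import json
import matplotlib.pyplot as plt

# log_history mixes per-10-step training logs with per-epoch eval logs.
with open('trainer_state.json') as f:
    state = json.load(f)

train = [(e['step'], e['loss']) for e in state['log_history'] if 'loss' in e]
evals = [(e['step'], e['eval_loss']) for e in state['log_history'] if 'eval_loss' in e]

plt.plot(*zip(*train), label='train loss')
plt.plot(*zip(*evals), marker='o', label='eval loss (MSE)')
plt.xlabel('step')
plt.legend()
plt.show()
```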
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d92bd92e9dcc31671cbbafa6a11ceb13b7808ae7bb1316162c8acee0f92bd08
+ size 5048