zodiache committed on
Commit a1cc9b0
1 Parent(s): 4194011

Model save

Files changed (4)
  1. README.md +22 -42
  2. adapter_model.safetensors +1 -1
  3. all_results.json +6 -6
  4. train_results.json +6 -6
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0361
+- Loss: 0.0709
 
 ## Model description
 
@@ -46,52 +46,32 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps: 4096
+- training_steps: 2048
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.0852 | 0.1892 | 100 | 0.0749 |
-| 0.048 | 0.3784 | 200 | 0.0482 |
-| 0.0401 | 0.5676 | 300 | 0.0384 |
-| 0.0355 | 0.7569 | 400 | 0.0353 |
-| 0.0711 | 0.9461 | 500 | 0.0325 |
-| 0.0325 | 1.1353 | 600 | 0.0358 |
-| 0.0226 | 1.3245 | 700 | 0.0276 |
-| 0.0433 | 1.5137 | 800 | 0.0273 |
-| 0.0202 | 1.7029 | 900 | 0.0281 |
-| 0.0029 | 1.8921 | 1000 | 0.0276 |
-| 0.012 | 2.0814 | 1100 | 0.0266 |
-| 0.0052 | 2.2706 | 1200 | 0.0248 |
-| 0.0268 | 2.4598 | 1300 | 0.0273 |
-| 0.0046 | 2.6490 | 1400 | 0.0278 |
-| 0.0231 | 2.8382 | 1500 | 0.0256 |
-| 0.0034 | 3.0274 | 1600 | 0.0282 |
-| 0.0103 | 3.2167 | 1700 | 0.0253 |
-| 0.0005 | 3.4059 | 1800 | 0.0296 |
-| 0.0215 | 3.5951 | 1900 | 0.0251 |
-| 0.0213 | 3.7843 | 2000 | 0.0239 |
-| 0.0037 | 3.9735 | 2100 | 0.0284 |
-| 0.0046 | 4.1627 | 2200 | 0.0273 |
-| 0.0343 | 4.3519 | 2300 | 0.0317 |
-| 0.0009 | 4.5412 | 2400 | 0.0270 |
-| 0.0041 | 4.7304 | 2500 | 0.0282 |
-| 0.0007 | 4.9196 | 2600 | 0.0297 |
-| 0.0023 | 5.1088 | 2700 | 0.0286 |
-| 0.0281 | 5.2980 | 2800 | 0.0307 |
-| 0.0008 | 5.4872 | 2900 | 0.0335 |
-| 0.0071 | 5.6764 | 3000 | 0.0309 |
-| 0.0006 | 5.8657 | 3100 | 0.0313 |
-| 0.0021 | 6.0549 | 3200 | 0.0332 |
-| 0.0034 | 6.2441 | 3300 | 0.0345 |
-| 0.0024 | 6.4333 | 3400 | 0.0349 |
-| 0.0025 | 6.6225 | 3500 | 0.0354 |
-| 0.0003 | 6.8117 | 3600 | 0.0356 |
-| 0.0001 | 7.0009 | 3700 | 0.0360 |
-| 0.0005 | 7.1902 | 3800 | 0.0358 |
-| 0.0038 | 7.3794 | 3900 | 0.0361 |
-| 0.0203 | 7.5686 | 4000 | 0.0361 |
+| 0.1154 | 0.1110 | 100 | 0.1172 |
+| 0.092 | 0.2220 | 200 | 0.1028 |
+| 0.0462 | 0.3330 | 300 | 0.0992 |
+| 0.0482 | 0.4440 | 400 | 0.0755 |
+| 0.043 | 0.5550 | 500 | 0.0794 |
+| 0.0476 | 0.6660 | 600 | 0.0628 |
+| 0.0482 | 0.7770 | 700 | 0.0821 |
+| 0.0484 | 0.8880 | 800 | 0.0691 |
+| 0.0448 | 0.9990 | 900 | 0.0829 |
+| 0.0214 | 1.1100 | 1000 | 0.0720 |
+| 0.0439 | 1.2210 | 1100 | 0.0635 |
+| 0.0364 | 1.3320 | 1200 | 0.0713 |
+| 0.0497 | 1.4430 | 1300 | 0.0669 |
+| 0.0455 | 1.5540 | 1400 | 0.0672 |
+| 0.0614 | 1.6650 | 1500 | 0.0805 |
+| 0.0416 | 1.7761 | 1600 | 0.0669 |
+| 0.0367 | 1.8871 | 1700 | 0.0716 |
+| 0.0578 | 1.9981 | 1800 | 0.0684 |
+| 0.0358 | 2.1091 | 1900 | 0.0705 |
+| 0.0326 | 2.2201 | 2000 | 0.0709 |
 
 
 ### Framework versions
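For anyone reproducing the run, here is a minimal sketch of how the hyperparameters visible in this hunk would map onto `transformers.TrainingArguments`. The output path, learning rate, and batch sizes are not shown in the diff, so those values below are placeholders, not this repository's actual settings.

```python
# Sketch only: hyperparameters taken from the README hunk above;
# output_dir and anything not shown in the diff are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",            # placeholder, not from the diff
    max_steps=2048,              # training_steps: 2048 (was 4096)
    warmup_steps=100,            # lr_scheduler_warmup_steps: 100
    lr_scheduler_type="linear",  # lr_scheduler_type: linear
    adam_beta1=0.9,              # Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,          # epsilon=1e-08
    evaluation_strategy="steps", # matches the 100-step eval cadence
    eval_steps=100,              # in the training-results table
    logging_steps=100,
)
```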
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d39a7d9da0099ce70259812036149facc17739657421d0040d2766874b44fc44
+oid sha256:2ba67be13a2cc146e8878ad84b09a2a87ae2a5f1f1e8432e07006717de92d569
 size 2115012328
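Only the git-LFS pointer (oid and size) changes in git here; the ~2.1 GB adapter weights themselves live in LFS storage. As a hedged sketch, a PEFT adapter saved this way is typically attached to the base model as below (the adapter repo id is a placeholder; the commit page does not show the repository path):

```python
# Sketch: attaching a saved LoRA/PEFT adapter to the base model.
# "zodiache/<adapter-repo>" is hypothetical; substitute the actual repo id.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct"
)
model = PeftModel.from_pretrained(base, "zodiache/<adapter-repo>")
```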
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "epoch": 7.750236518448439,
-    "total_flos": 3.618461896742535e+18,
-    "train_loss": 0.06036830692467143,
-    "train_runtime": 34366.2607,
-    "train_samples_per_second": 7.628,
-    "train_steps_per_second": 0.119
+    "epoch": 2.273345358679062,
+    "total_flos": 3.452630161043423e+18,
+    "train_loss": 0.14040503208616428,
+    "train_runtime": 40752.135,
+    "train_samples_per_second": 3.216,
+    "train_steps_per_second": 0.05
 }
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "epoch": 7.750236518448439,
-    "total_flos": 3.618461896742535e+18,
-    "train_loss": 0.06036830692467143,
-    "train_runtime": 34366.2607,
-    "train_samples_per_second": 7.628,
-    "train_steps_per_second": 0.119
+    "epoch": 2.273345358679062,
+    "total_flos": 3.452630161043423e+18,
+    "train_loss": 0.14040503208616428,
+    "train_runtime": 40752.135,
+    "train_samples_per_second": 3.216,
+    "train_steps_per_second": 0.05
 }
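train_results.json mirrors all_results.json, and the new figures are internally consistent; a quick arithmetic check, assuming the run completed the full 2048 steps from the README:

```python
# Cross-check of the reported metrics (assumes the run finished 2048 steps).
runtime = 40752.135     # train_runtime, seconds
steps = 2048            # training_steps from the README diff
samples_per_s = 3.216   # train_samples_per_second

print(round(steps / runtime, 2))               # 0.05 -> matches train_steps_per_second
print(round(samples_per_s * runtime / steps))  # 64 -> implied effective batch size
```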