Casper0508 committed
Commit 3e9e86d
1 Parent(s): 7234337

End of training
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4797
+- Loss: 1.4554
 
 ## Model description
 
@@ -32,6 +32,20 @@ More information needed
 
 ## Training procedure
 
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- _load_in_8bit: False
+- _load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+- load_in_4bit: True
+- load_in_8bit: False
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -50,31 +64,31 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 3.365 | 1.36 | 10 | 2.0638 |
-| 1.3671 | 2.71 | 20 | 0.9814 |
-| 0.817 | 4.07 | 30 | 0.7618 |
-| 0.6648 | 5.42 | 40 | 0.7134 |
-| 0.5897 | 6.78 | 50 | 0.6871 |
-| 0.5076 | 8.14 | 60 | 0.6776 |
-| 0.4545 | 9.49 | 70 | 0.7360 |
-| 0.4059 | 10.85 | 80 | 0.7673 |
-| 0.3544 | 12.2 | 90 | 0.8158 |
-| 0.3161 | 13.56 | 100 | 0.8801 |
-| 0.2844 | 14.92 | 110 | 0.9591 |
-| 0.259 | 16.27 | 120 | 0.9817 |
-| 0.2405 | 17.63 | 130 | 1.0922 |
-| 0.2298 | 18.98 | 140 | 1.1705 |
-| 0.2125 | 20.34 | 150 | 1.1817 |
-| 0.2073 | 21.69 | 160 | 1.2862 |
-| 0.1998 | 23.05 | 170 | 1.3352 |
-| 0.1912 | 24.41 | 180 | 1.3434 |
-| 0.1883 | 25.76 | 190 | 1.4113 |
-| 0.1851 | 27.12 | 200 | 1.4113 |
-| 0.1796 | 28.47 | 210 | 1.4654 |
-| 0.1805 | 29.83 | 220 | 1.4565 |
-| 0.1768 | 31.19 | 230 | 1.4650 |
-| 0.1763 | 32.54 | 240 | 1.4769 |
-| 0.1752 | 33.9 | 250 | 1.4797 |
+| 3.4063 | 1.36 | 10 | 2.0249 |
+| 1.4234 | 2.71 | 20 | 1.1088 |
+| 0.9874 | 4.07 | 30 | 0.8900 |
+| 0.7207 | 5.42 | 40 | 0.6961 |
+| 0.5784 | 6.78 | 50 | 0.6823 |
+| 0.5088 | 8.14 | 60 | 0.6767 |
+| 0.4453 | 9.49 | 70 | 0.7067 |
+| 0.3935 | 10.85 | 80 | 0.7432 |
+| 0.3417 | 12.2 | 90 | 0.8008 |
+| 0.3026 | 13.56 | 100 | 0.9167 |
+| 0.2754 | 14.92 | 110 | 0.9432 |
+| 0.2507 | 16.27 | 120 | 0.9834 |
+| 0.2359 | 17.63 | 130 | 1.0581 |
+| 0.2213 | 18.98 | 140 | 1.1612 |
+| 0.2075 | 20.34 | 150 | 1.1553 |
+| 0.2011 | 21.69 | 160 | 1.3062 |
+| 0.1959 | 23.05 | 170 | 1.3247 |
+| 0.1891 | 24.41 | 180 | 1.3318 |
+| 0.1865 | 25.76 | 190 | 1.3603 |
+| 0.1825 | 27.12 | 200 | 1.3980 |
+| 0.1797 | 28.47 | 210 | 1.4180 |
+| 0.178 | 29.83 | 220 | 1.4311 |
+| 0.176 | 31.19 | 230 | 1.4476 |
+| 0.1748 | 32.54 | 240 | 1.4538 |
+| 0.1753 | 33.9 | 250 | 1.4554 |
 
 
 ### Framework versions
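For reference, the quantization settings listed in the updated model card map onto a `transformers` `BitsAndBytesConfig`. A minimal sketch of reconstructing it is below; the leading-underscore `_load_in_*` entries in the card are internal mirrors of `load_in_4bit`/`load_in_8bit`, so only the public arguments are passed, and `bnb_config` is a hypothetical variable name:

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch: rebuild the 4-bit NF4 config recorded in the README diff above.
# All values are taken verbatim from the model card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```

In the usual workflow such a config is passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` before attaching the LoRA adapter.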
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7baefad0749b95301d1ef5729beb829eccb0fff2ef98c4b1dfe752fecdb4b7cf
+oid sha256:0ea850e17a642a80a9f6054cba639a863c38fd5ee587fc288a24e2a510e28b46
 size 151020944
emissions.csv CHANGED
@@ -1,2 +1,2 @@
 timestamp,experiment_id,project_name,duration,emissions,energy_consumed,country_name,country_iso_code,region,on_cloud,cloud_provider,cloud_region
-2024-07-25T20:07:05,71cb2d14-a3e9-44f2-9adf-aa99d60af3f0,codecarbon,6487.100156784058,0.3647758733647877,0.5427331623606918,United Kingdom,GBR,scotland,N,,
+2024-07-29T16:59:42,0b1b335d-594e-49e2-84c9-dbc256266dc6,codecarbon,1422.8740487098694,0.08038502085419474,0.11960115720427114,United Kingdom,GBR,scotland,N,,
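The new codecarbon row records duration (seconds), emissions (kg CO2-eq), and energy consumed (kWh). A stdlib-only sketch of deriving the run's emission rate and mean power draw from that row, using the field names from the CSV header:

```python
import csv
import io

# The emissions.csv row added in this commit (codecarbon output).
EMISSIONS_CSV = """\
timestamp,experiment_id,project_name,duration,emissions,energy_consumed,country_name,country_iso_code,region,on_cloud,cloud_provider,cloud_region
2024-07-29T16:59:42,0b1b335d-594e-49e2-84c9-dbc256266dc6,codecarbon,1422.8740487098694,0.08038502085419474,0.11960115720427114,United Kingdom,GBR,scotland,N,,
"""

row = next(csv.DictReader(io.StringIO(EMISSIONS_CSV)))
hours = float(row["duration"]) / 3600               # run length in hours
kg_co2_per_hour = float(row["emissions"]) / hours    # emission rate
avg_power_kw = float(row["energy_consumed"]) / hours # mean power draw in kW

print(f"run length:    {hours:.2f} h")
print(f"emission rate: {kg_co2_per_hour:.3f} kg CO2/h")
print(f"average power: {avg_power_kw * 1000:.0f} W")
```

This roughly 24-minute run is much shorter than the previous one (6487 s), which is why the absolute emissions dropped even though the rate is similar.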
runs/Jul29_16-35-55_msc-modeltrain-pod/events.out.tfevents.1722270959.msc-modeltrain-pod.10499.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d705f844566094626f09e436f750a967f03599708c798d76a36bfc00dfd78e5
+size 17477
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fa99f7e0c9cee069a0ee479ad9d6186ca6da7c27642e43b0a7cf82a3fc09d7e6
+oid sha256:73cf781d850b973106fdd5e079cb1a7baf30c96f78fa7dac138c5e1e1cf3d9a6
 size 4984
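Each binary change in this commit is a Git LFS pointer file with the same three-line key/value layout (`version`, `oid`, `size`); only the pointer lives in the repo, while the blob is stored out of band. A small stdlib sketch of reading one such pointer (`parse_lfs_pointer` is a hypothetical helper name):

```python
# Minimal parser for the Git LFS pointer files shown in this diff:
# each line is "<key> <value>", one field per line.

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new training_args.bin pointer from this commit.
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:73cf781d850b973106fdd5e079cb1a7baf30c96f78fa7dac138c5e1e1cf3d9a6\n"
    "size 4984\n"
)
print(pointer["oid"])   # sha256 digest of the real blob
print(pointer["size"])  # blob size in bytes
```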