Casper0508 committed on
Commit 7b9a13f
1 Parent(s): 336172d

End of training
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7658
+- Loss: 0.7124
 
 ## Model description
 
@@ -32,20 +32,6 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- quant_method: bitsandbytes
-- _load_in_8bit: False
-- _load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: bfloat16
-- load_in_4bit: True
-- load_in_8bit: False
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -64,37 +50,37 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 3.7986 | 1.36 | 10 | 3.3486 |
-| 2.781 | 2.71 | 20 | 1.9851 |
-| 1.6096 | 4.07 | 30 | 1.3075 |
-| 1.2107 | 5.42 | 40 | 1.1210 |
-| 1.0597 | 6.78 | 50 | 1.0222 |
-| 0.9672 | 8.14 | 60 | 0.9562 |
-| 0.8924 | 9.49 | 70 | 0.9131 |
-| 0.8189 | 10.85 | 80 | 0.8582 |
-| 0.7393 | 12.2 | 90 | 0.7907 |
-| 0.6355 | 13.56 | 100 | 0.7136 |
-| 0.5683 | 14.92 | 110 | 0.7013 |
-| 0.533 | 16.27 | 120 | 0.7011 |
-| 0.5155 | 17.63 | 130 | 0.7049 |
-| 0.4965 | 18.98 | 140 | 0.7194 |
-| 0.4826 | 20.34 | 150 | 0.7222 |
-| 0.4617 | 21.69 | 160 | 0.7294 |
-| 0.453 | 23.05 | 170 | 0.7347 |
-| 0.439 | 24.41 | 180 | 0.7418 |
-| 0.4333 | 25.76 | 190 | 0.7473 |
-| 0.4261 | 27.12 | 200 | 0.7600 |
-| 0.4238 | 28.47 | 210 | 0.7580 |
-| 0.4163 | 29.83 | 220 | 0.7646 |
-| 0.4158 | 31.19 | 230 | 0.7659 |
-| 0.4137 | 32.54 | 240 | 0.7662 |
-| 0.4131 | 33.9 | 250 | 0.7658 |
+| 3.8363 | 1.36 | 10 | 3.5698 |
+| 3.2454 | 2.71 | 20 | 2.7356 |
+| 2.2867 | 4.07 | 30 | 1.7205 |
+| 1.4623 | 5.42 | 40 | 1.2840 |
+| 1.1723 | 6.78 | 50 | 1.0982 |
+| 1.0295 | 8.14 | 60 | 0.9766 |
+| 0.9085 | 9.49 | 70 | 0.8723 |
+| 0.784 | 10.85 | 80 | 0.7651 |
+| 0.717 | 12.2 | 90 | 0.7394 |
+| 0.6745 | 13.56 | 100 | 0.7235 |
+| 0.6402 | 14.92 | 110 | 0.7157 |
+| 0.6251 | 16.27 | 120 | 0.7089 |
+| 0.5961 | 17.63 | 130 | 0.7100 |
+| 0.5871 | 18.98 | 140 | 0.7042 |
+| 0.5714 | 20.34 | 150 | 0.7070 |
+| 0.5582 | 21.69 | 160 | 0.7062 |
+| 0.5457 | 23.05 | 170 | 0.7076 |
+| 0.5392 | 24.41 | 180 | 0.7094 |
+| 0.5354 | 25.76 | 190 | 0.7100 |
+| 0.5278 | 27.12 | 200 | 0.7105 |
+| 0.5275 | 28.47 | 210 | 0.7110 |
+| 0.5249 | 29.83 | 220 | 0.7123 |
+| 0.5204 | 31.19 | 230 | 0.7123 |
+| 0.5198 | 32.54 | 240 | 0.7123 |
+| 0.5195 | 33.9 | 250 | 0.7124 |
 
 
 ### Framework versions
 
 - PEFT 0.4.0
 - Transformers 4.38.2
-- Pytorch 2.3.1+cu121
+- Pytorch 2.4.0+cu121
 - Datasets 2.13.1
 - Tokenizers 0.15.2
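For context on the section removed from the README above: those `bitsandbytes` entries are the fields of a 4-bit quantization config. A minimal sketch of what they correspond to, written as a plain dict of the keyword arguments one would pass to `transformers.BitsAndBytesConfig` (`bnb_config_kwargs` is a hypothetical name, and the compute dtype is shown as a string where real code would use `torch.bfloat16`):

```python
# Hypothetical reconstruction of the 4-bit settings the old README listed;
# in practice these would be passed to transformers.BitsAndBytesConfig.
bnb_config_kwargs = {
    "load_in_4bit": True,
    "load_in_8bit": False,
    "bnb_4bit_quant_type": "nf4",        # NormalFloat4 quantization
    "bnb_4bit_use_double_quant": True,   # also quantize the quantization constants
    "bnb_4bit_compute_dtype": "bfloat16",  # torch.bfloat16 in real code
    "llm_int8_threshold": 6.0,           # outlier threshold (8-bit path, unused here)
}
```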
adapter_config.json CHANGED
@@ -7,11 +7,11 @@
   "init_lora_weights": true,
   "layers_pattern": null,
   "layers_to_transform": null,
-  "lora_alpha": 64,
+  "lora_alpha": 32,
   "lora_dropout": 0.3,
   "modules_to_save": null,
   "peft_type": "LORA",
-  "r": 32,
+  "r": 16,
   "revision": null,
   "target_modules": [
     "q_proj",
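The adapter_config.json change halves the LoRA rank (`r`: 32 → 16) and `lora_alpha` (64 → 32) together. Since LoRA applies `W + (lora_alpha / r) * B @ A`, halving both keeps the scaling factor unchanged while roughly halving the adapter's parameter count, consistent with adapter_model.safetensors shrinking from ~75.5 MB to ~37.8 MB in this commit. A minimal sketch of that arithmetic (variable names are illustrative):

```python
# LoRA scales its update by lora_alpha / r, so changing r and lora_alpha
# in lockstep preserves the effective scaling while the number of
# adapter parameters (proportional to r) drops by the same factor.
old_cfg = {"r": 32, "lora_alpha": 64}  # before this commit
new_cfg = {"r": 16, "lora_alpha": 32}  # after this commit

def lora_scaling(cfg):
    return cfg["lora_alpha"] / cfg["r"]

# Both configurations give the same scaling factor of 2.0.
assert lora_scaling(old_cfg) == lora_scaling(new_cfg) == 2.0
```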
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c79b518cd11b8a6534588d5307e8de14d00c864982e240d6fbb4a42c5c073fee
-size 75523312
+oid sha256:853047c7ee6f98c051a455fe53ed043a81d61b9f38d831e7f1d882e9b2d0c0a8
+size 37774528
emissions.csv CHANGED
@@ -1,2 +1,2 @@
 timestamp,experiment_id,project_name,duration,emissions,energy_consumed,country_name,country_iso_code,region,on_cloud,cloud_provider,cloud_region
-2024-07-18T15:29:54,95beb92d-f3cb-419a-bd73-35f2aa8381d9,codecarbon,1287.520273923874,0.07720597406987714,0.11487120042662032,United Kingdom,GBR,scotland,N,,
+2024-07-25T00:29:17,b81b783c-301a-439a-a0a5-4917c65bc6de,codecarbon,6290.977914571762,0.3557443410838277,0.5292955629092988,United Kingdom,GBR,scotland,N,,
runs/Jul24_22-44-22_msc-modeltrain-pod/events.out.tfevents.1721861066.msc-modeltrain-pod.678.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64674eeea02a95ded93bfb6003762d75495f8ca12b599682b5fa2540e8d4fb08
+size 17034
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5cef72bd14db9c868844ff3c9d70e303cc81b9c07a101d5f05f0fa45c6adaafe
+oid sha256:10b3b3a3d7323b4bda4c1a482867d25717c65236d1bd44bb96cd5c9ce33dd107
 size 4984