vaishnavik31 committed
Commit bf0e5c0
1 Parent(s): 9238fbd

llama-3-8b-finetuned-peft-exp
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1404
+- Loss: 0.1550
 
 ## Model description
 
@@ -52,11 +52,11 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.1223 | 0.1998 | 90 | 0.1680 |
-| 0.1614 | 0.3996 | 180 | 0.1531 |
-| 0.1621 | 0.5993 | 270 | 0.1549 |
-| 0.2369 | 0.7991 | 360 | 0.1443 |
-| 0.1496 | 0.9989 | 450 | 0.1404 |
+| 0.0902 | 0.1998 | 90 | 0.1856 |
+| 0.1458 | 0.3996 | 180 | 0.1749 |
+| 0.2055 | 0.5993 | 270 | 0.1664 |
+| 0.1414 | 0.7991 | 360 | 0.1581 |
+| 0.1347 | 0.9989 | 450 | 0.1550 |
 
 
 ### Framework versions
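The updated table can be cross-checked against the new headline number: the last row's validation loss should equal the 0.1550 reported near the top of the README. A small sanity check (the tuples below simply restate the table's rows):

```python
# Rows of the updated training table: (train_loss, epoch, step, val_loss).
rows = [
    (0.0902, 0.1998, 90, 0.1856),
    (0.1458, 0.3996, 180, 0.1749),
    (0.2055, 0.5993, 270, 0.1664),
    (0.1414, 0.7991, 360, 0.1581),
    (0.1347, 0.9989, 450, 0.1550),
]

# The README's headline "Loss: 0.1550" is the final-step validation loss.
final_val_loss = rows[-1][3]
assert final_val_loss == 0.1550

# Validation loss decreases monotonically across the five eval points.
val_losses = [r[3] for r in rows]
assert val_losses == sorted(val_losses, reverse=True)
```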
adapter_config.json CHANGED
@@ -21,12 +21,12 @@
     "revision": null,
     "target_modules": [
         "down_proj",
-        "k_proj",
+        "up_proj",
         "v_proj",
+        "o_proj",
         "q_proj",
         "gate_proj",
-        "up_proj",
-        "o_proj"
+        "k_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fd6a35221a98099f33d85c3c0a9405aa74ac080560e40a92ebe11219a9f6df8d
+oid sha256:7357ca6c884c4f37f830b269d1ab2df933fb29ed9386aaf6c6f167be5e024c33
 size 2185326944
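The adapter_model.safetensors and training_args.bin entries are git-LFS pointer files rather than the binaries themselves: each pointer records a spec version, a sha256 oid, and a byte size, and here only the oid changed while the size stayed identical. A minimal sketch of parsing this pointer format (`parse_lfs_pointer` is a hypothetical helper for illustration, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-LFS pointer file into its key/value fields.

    Each line of the pointer is "<key> <value>", e.g.
    "oid sha256:7357ca6c..." or "size 2185326944".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new adapter_model.safetensors pointer, as shown in the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7357ca6c884c4f37f830b269d1ab2df933fb29ed9386aaf6c6f167be5e024c33
size 2185326944
"""
info = parse_lfs_pointer(pointer)
```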
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bc6daa4908be123ed36b20847a1ac85a541ef074093784808a519ec03c76420f
+oid sha256:ed5e945f2800a83e0c3185e0e81196c8dc2602e1c556c52440bf497f4709ab1a
 size 5368