marcus2000 committed
Commit 4084845
1 parent: bd83dd9

saiga_demo
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [TheBloke/Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7981
+- Loss: 1.8757
 
 ## Model description
 
@@ -42,22 +42,14 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 20
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 100
+- training_steps: 20
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.7044        | 1.43  | 10   | 1.6775          |
-| 1.6147        | 2.86  | 20   | 1.6721          |
-| 1.5209        | 4.29  | 30   | 1.6832          |
-| 1.4711        | 5.71  | 40   | 1.7075          |
-| 1.4222        | 7.14  | 50   | 1.7207          |
-| 1.3594        | 8.57  | 60   | 1.7500          |
-| 1.3276        | 10.0  | 70   | 1.7686          |
-| 1.2995        | 11.43 | 80   | 1.7832          |
-| 1.2516        | 12.86 | 90   | 1.7999          |
-| 1.2647        | 14.29 | 100  | 1.7981          |
+| 1.7956        | 1.43  | 10   | 1.9073          |
+| 1.7315        | 2.86  | 20   | 1.8757          |
 
 
 ### Framework versions
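The epoch values in the results table follow from the step count and the effective batch size. A minimal sanity-check sketch, assuming a training set of roughly 140 examples (inferred from the table, not stated anywhere in the card):

```python
def epoch_at_step(step, total_train_batch_size, dataset_size):
    """Approximate epoch reached after `step` optimizer steps."""
    return step * total_train_batch_size / dataset_size

# With total_train_batch_size=20 and ~140 examples (assumption),
# the computed epochs line up with the table's Epoch column:
print(round(epoch_at_step(10, 20, 140), 2))  # 1.43
print(round(epoch_at_step(20, 20, 140), 2))  # 2.86
```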
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "q_proj",
+        "v_proj",
         "k_proj",
-        "o_proj",
-        "v_proj"
+        "q_proj",
+        "o_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2a01ed7a5c867c3b13d6ac7869e1458ba56faffcfe5a84e9ef6ecf5020461c5c
+oid sha256:142aa4aadb980f6cd78b8019e923dc97ae7e40161bf41bf0bd3f1861859ee99f
 size 33589040
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8073d0ca4ed65ea0558212eef450e90571128ff4c6963fe51228e0494450b50b
+oid sha256:a04c6a0403c6d4de48a545f873fbfab7b3a71b70eb7de9c9c181cd6b8cdeac96
 size 4920
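The two binary files above are stored as Git LFS pointers: three `key value` lines (`version`, `oid`, `size`) that stand in for the actual blob. A minimal sketch of parsing one such pointer, using the new `adapter_model.safetensors` pointer from this commit:

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:142aa4aadb980f6cd78b8019e923dc97ae7e40161bf41bf0bd3f1861859ee99f
size 33589040"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 33589040 -- the blob size in bytes, not the pointer's
```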