Shakhovak committed on
Commit
1422322
1 Parent(s): 3307175

End of training

Files changed (3)
  1. README.md +12 -17
  2. adapter_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [baffo32/decapoda-research-llama-7B-hf](https://huggingface.co/baffo32/decapoda-research-llama-7B-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0455
+ - Loss: 0.0345
 
 ## Model description
 
@@ -43,28 +43,23 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
- - training_steps: 600
+ - training_steps: 400
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
- | 0.1265 | 0.36 | 40 | 0.0377 |
- | 0.0356 | 0.72 | 80 | 0.0319 |
- | 0.0292 | 1.08 | 120 | 0.0276 |
- | 0.0199 | 1.44 | 160 | 0.0264 |
- | 0.0211 | 1.8 | 200 | 0.0277 |
- | 0.0162 | 2.16 | 240 | 0.0306 |
- | 0.012 | 2.52 | 280 | 0.0280 |
- | 0.0121 | 2.88 | 320 | 0.0301 |
- | 0.0075 | 3.24 | 360 | 0.0370 |
- | 0.0052 | 3.6 | 400 | 0.0379 |
- | 0.0058 | 3.96 | 440 | 0.0336 |
- | 0.0026 | 4.32 | 480 | 0.0484 |
- | 0.0016 | 4.68 | 520 | 0.0455 |
- | 0.0021 | 5.05 | 560 | 0.0439 |
- | 0.0008 | 5.41 | 600 | 0.0455 |
+ | 0.1281 | 0.36 | 40 | 0.0380 |
+ | 0.0358 | 0.72 | 80 | 0.0314 |
+ | 0.0296 | 1.08 | 120 | 0.0263 |
+ | 0.0211 | 1.44 | 160 | 0.0254 |
+ | 0.0203 | 1.8 | 200 | 0.0236 |
+ | 0.0163 | 2.16 | 240 | 0.0273 |
+ | 0.0115 | 2.52 | 280 | 0.0276 |
+ | 0.0105 | 2.88 | 320 | 0.0265 |
+ | 0.0081 | 3.24 | 360 | 0.0306 |
+ | 0.0046 | 3.6 | 400 | 0.0345 |
 
 
 ### Framework versions
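For context, the hyperparameters in the README hunk above map roughly onto 🤗 Transformers `TrainingArguments`. This is a minimal sketch reconstructed from the diff, not the actual training script for this commit; values not recorded in the diff (learning rate, batch size, output directory) are placeholder assumptions.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed in the README diff.
# learning_rate, batch size and output_dir are NOT in the diff -- placeholder assumptions.
training_args = TrainingArguments(
    output_dir="llama-7b-lora",      # placeholder: not shown in the diff
    max_steps=400,                   # training_steps: 400 after this commit (was 600)
    warmup_steps=2,                  # lr_scheduler_warmup_steps: 2
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    fp16=True,                       # mixed_precision_training: Native AMP
    evaluation_strategy="steps",
    eval_steps=40,                   # validation loss reported every 40 steps in the results table
    logging_steps=40,                # training loss reported every 40 steps in the results table
    learning_rate=3e-4,              # placeholder: not recorded in this diff
    per_device_train_batch_size=4,   # placeholder: not recorded in this diff
)
```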
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:f392f82d7c267f87c4cb77406f00fb45361a76971532bc7b737a3e114c42edf7
+ oid sha256:cdd1a0f65a71b0f3400daa9be46ae6777b39d3b084d7f83c657919f9fba4a6dd
 size 268528394
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:34117e73398806caa00c429d63f42be605e6b2f7eb10f4e6751733e23a819385
+ oid sha256:f2c8e7a60e2d0e02704e415f7f8e4f0f58fa01832d62e0a691607a78a5a2e58a
 size 4984
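The two LFS pointer updates above swap in the retrained binaries: `adapter_model.bin` (the file name PEFT uses for saved adapter weights, ~268 MB here) and `training_args.bin` (the pickled `TrainingArguments`). A minimal sketch of loading the adapter on top of the base model named in the README, assuming the adapter is published in this repository (the repo id is not shown on this commit page):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "baffo32/decapoda-research-llama-7B-hf"
ADAPTER = "<this-repo-id>"  # placeholder: the commit page does not show the repository name

# Load the frozen base model and tokenizer, then attach the adapter weights
# stored in adapter_model.bin from this commit.
base_model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()
```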