PaulD committed
Commit 05e7c69
Parent: aae3347

End of training

README.md CHANGED
@@ -14,18 +14,10 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/pauld/huggingface/runs/y7di7l44)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/pauld/huggingface/runs/ux1jbcmg)
  # kto-aligned-model-lora
 
  This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4990
- - Eval/rewards/chosen: 0.1561
- - Eval/logps/chosen: -0.6624
- - Eval/rewards/rejected: 0.1281
- - Eval/logps/rejected: -1.9415
- - Eval/rewards/margins: 0.0281
- - Eval/kl: 1.5643
 
  ## Model description
 
@@ -44,27 +36,19 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 1e-05
  - train_batch_size: 1
- - eval_batch_size: 4
+ - eval_batch_size: 2
  - seed: 42
- - gradient_accumulation_steps: 12
- - total_train_batch_size: 12
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 5
- - mixed_precision_training: Native AMP
+ - num_epochs: 3.0
 
  ### Training results
 
- | Training Loss | Epoch  | Step | Validation Loss | Eval KL |
- |:-------------:|:------:|:----:|:---------------:|:-------:|
- | 0.4994        | 0.9057 | 8    | 0.4997          | 0.8856  |
- | 0.5           | 1.9245 | 17   | 0.4994          | 1.5546  |
- | 0.501         | 2.9434 | 26   | 0.4992          | 1.5634  |
- | 0.5004        | 3.9623 | 35   | 0.4991          | 1.5675  |
- | 0.4999        | 4.5283 | 40   | 0.4990          | 1.5643  |
 
 
  ### Framework versions
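For context on how the hyperparameters in the diff above fit together, here is a minimal sketch of a TRL KTO training run matching the updated card. The hyperparameter values come from the README; everything else is an assumption — the dataset is a placeholder (the card lists it as None), the `output_dir` is guessed from the model name, and the exact `KTOTrainer` signature varies by TRL version.

```python
# Hedged reconstruction of the training setup described by the card.
# Hyperparameter values come from the updated README; the dataset name,
# output_dir, and exact TRL signature are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# KTO expects unpaired preference data: "prompt", "completion", bool "label".
train_dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # placeholder

args = KTOConfig(
    output_dir="kto-aligned-model-lora",
    learning_rate=1e-5,              # lowered from 1e-4 in this commit
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,   # effective train batch size: 4
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # named processing_class in newer TRL releases
)
trainer.train()
```

The Adam betas and epsilon listed on the card are the `TrainingArguments` defaults, so they need no explicit setting.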
adapter_config.json CHANGED
@@ -20,7 +20,10 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "q_proj"
+ "v_proj",
+ "q_proj",
+ "k_proj",
+ "o_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e9edc78ac5314459cade5002a9b7fbea45d0f906f16037086f352c5e657d6730
- size 8397184
+ oid sha256:3b6002063f0dc83edf2921d57aee72eb8bcc61ba9e398e081c33e5b5a8f47075
+ size 27297544
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b09854525ea44d0ece38e709aa75664028ad8e7e57216b0f5a61dfec8fa1bb4c
- size 5496
+ oid sha256:5f26ba2ab137cd14f08015c761eccd4bbcb59b34f9eb139cb7bae83015a29674
+ size 5560