---
tags:
- finetune
- fine tune
- dpo
- sft
- yi
---

Yi-34B-200K, first trained via DPO on RAWrr_v1 at ctx 200 (lora_r 4, lora_alpha 8), then via SFT on AEZAKMI_v2 at ctx 1400 (lora_r 16, lora_alpha 32).
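
For reference, here's a minimal sketch of what the two LoRA stages could look like as peft configs. Only r and lora_alpha come from the description above; the dropout, bias, and task type values are assumptions, and this is not the actual training script.

```python
from peft import LoraConfig

# Stage 1: DPO on RAWrr_v1 at ctx 200
dpo_lora = LoraConfig(
    r=4,                # lora_r from the card
    lora_alpha=8,       # lora_alpha from the card
    lora_dropout=0.05,  # assumed, not stated in the card
    bias="none",        # assumed
    task_type="CAUSAL_LM",
)

# Stage 2: SFT on AEZAKMI_v2 at ctx 1400
sft_lora = LoraConfig(
    r=16,               # lora_r from the card
    lora_alpha=32,      # lora_alpha from the card
    lora_dropout=0.05,  # assumed, not stated in the card
    bias="none",        # assumed
    task_type="CAUSAL_LM",
)
```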
It's less prone to refusals than Yi-34B-200K-AEZAKMI-v2, but it's still a work in progress - I want to redo the DPO with a higher LoRA rank and a longer ctx, and then repeat the SFT training.
I haven't tested it much yet, but from what I've seen so far, it's a good model.