tags:
- merge
license: llama3
---
# Meta-Llama-3-13B-Instruct-ft

This is a QLoRA **finetune** of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The model is based on my passthrough merge of [Llama-3-13B-Instruct](https://hug…).

This was primarily an experiment to see how a passthrough merge responds to further finetuning, though it was done on a small dataset.

The goal was to make a "mid"-sized model like Meta has released in the past, and the merge method was inspired by [mlabonne's Llama-3-120B](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct).

The model was finetuned with an **8192 context length** and is likely reliable with RoPE scaling up to 32k.
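
For reference, here is a minimal loading sketch with `transformers`; the repository id, dtype, sampling settings, and RoPE scaling factor are illustrative assumptions rather than values shipped with this card.

```python
# Minimal sketch (not an official usage snippet): load the model and stretch
# the native 8192 context toward ~32k with dynamic RoPE scaling.
# The repo id and scaling factor below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-13B-Instruct-ft"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # Dynamic NTK-aware scaling; factor 4.0 targets roughly 4 x 8192 = 32k tokens.
    rope_scaling={"type": "dynamic", "factor": 4.0},
)

# Chat-style generation with the Llama-3 chat template.
messages = [{"role": "user", "content": "Write a short scene set on a rainy pier."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```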

It still cannot do math reliably; neither can Llama-3-8B, and in my tests only Llama-3-70B passes simple arithmetic. It is, however, a better storywriter/RP model than Llama-3-8B, based on some side-by-side testing I conducted.

Further finetuning this model, or finetuning the [base model](https://huggingface.co/elinas/Llama-3-13B-Instruct) on more samples, is encouraged.

## Datasets

* [Chat-Error/Pure-dove-sharegpt](https://huggingface.co/datasets/Chat-Error/Pure-dove-sharegpt)

A small dataset was used to see how it affects performance. I originally planned to use a larger dataset (196k samples), but wanted to start with a smaller one first to see how much the model improves with some additional finetuning.
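
To take a quick look at the data, a small sketch with the `datasets` library (the split name is an assumption; check the dataset card):

```python
# Sketch: inspect the ShareGPT-style samples used for finetuning.
# The "train" split name is an assumption; check the dataset card.
from datasets import load_dataset

ds = load_dataset("Chat-Error/Pure-dove-sharegpt", split="train")
print(len(ds))   # number of samples
print(ds[0])     # one conversation record
```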

The next step would be finetuning on a larger dataset if performance improvements are observed through further testing.

## Finetuning details