Update README.md
add gguf quant info
README.md CHANGED

@@ -18,11 +18,11 @@ The model is based on a "zeroed" passthrough merge of [Llama-3-15B-Instruct-zero
 
 This was primarily an experiment to see how a passthrough merge will respond to further finetuning of all LoRA modules.
 
-The model was finetuned on **8192 context length** and
+The model was finetuned on **8192 context length** and it can possibly be extended using RoPE up to 32k.
 
-
+**v3 of the model will contain significantly more data, primarily human focused, aimed to excel at writing as well as maintaining logic, coherency, and continuity.**
 
-**
+**[GGUF Quants provided by @gelukuMLG](https://huggingface.co/gelukuMLG/Llama-3-15B-Instruct-ft-v2-GGUF)**
 
 ## Datasets
 
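The new README line says the 8192-token training context "can possibly be extended using RoPE up to 32k". The diff does not specify which scaling method is meant; one common approach is linear position interpolation, where position indices are divided by a scale factor (4x here, since 8192 × 4 = 32768) so that longer sequences reuse the rotary angle range seen in training. A minimal sketch under that assumption:

```python
def rope_angles(pos, dim=8, base=10000.0, scale=1.0):
    """Rotary embedding angles for one position index.

    Linear RoPE scaling divides the position by `scale`, so positions
    beyond the trained context map back into the trained angle range.
    `dim` and `base` follow the standard RoPE parameterization.
    """
    # One inverse frequency per pair of channels, as in standard RoPE.
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    return [(pos / scale) * f for f in inv_freq]

# With a factor-4 scale, position 32768 is rotated by the same angles
# as trained position 8192 — the model never sees out-of-range angles.
assert rope_angles(32768, scale=4.0) == rope_angles(8192)
```

This is only an illustration of the mechanism; how well quality holds up at 32k would still need to be verified empirically, as the README itself hedges with "possibly".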