mradermacher committed
Commit 87fe93f • 1 Parent(s): f2f5f41
Update README.md
README.md CHANGED
@@ -16,6 +16,8 @@ tags:
 <!-- ### vocab_type: -->
 static quants of https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1-B
 
+You should use `--override-kv tokenizer.ggml.pre=str:llama3` and a current llama.cpp version to work around a bug in llama.cpp that made these quants. (see https://old.reddit.com/r/LocalLLaMA/comments/1cg0z1i/bpe_pretokenization_support_is_now_merged_llamacpp/?share_id=5dBFB9x0cOJi8vbr-Murh)
+
 <!-- provided-files -->
 weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
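For illustration only (this is not part of the commit): the override from the added README line would typically be passed on the llama.cpp command line roughly as sketched below. The binary name, quant filename, prompt, and token count are assumptions, not taken from the repository.

```
# Hypothetical llama.cpp invocation; the quant filename and prompt are placeholders.
./main -m Llama-3-SnowyRP-8B-V1-B.Q4_K_M.gguf \
  --override-kv tokenizer.ggml.pre=str:llama3 \
  -p "Hello, how are you?" -n 128
```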