Lewdiculous committed
Commit 48d8c92 • Parent(s): 344d12f
Update README.md

README.md CHANGED
@@ -18,8 +18,7 @@ This is a very promising roleplay model cooked by the amazing Sao10K!
 > **Quantization process:** <br>
 > For future reference, these quants have been done after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
 > Imatrix data generated from the FP16-GGUF and conversions directly from the BF16-GGUF. <br>
-> This
-> Hopefully avoiding any losses in the model conversion, as has been the recently discussed topic on Llama-3 and GGUF. <br>
+> This was a bit more disk and compute intensive but hopefully avoided any losses in the model conversion. <br>
 > If you test them and notice any issues let me know in the discussions.
 
 > [!NOTE]
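The workflow the quoted note describes (imatrix data from the FP16 GGUF, quants converted directly from the BF16 GGUF) can be sketched with llama.cpp's own tools. This is a hedged sketch, not the author's exact commands: the model directory, output file names, calibration text, and the IQ4_XS target are illustrative placeholders, and the binary names (`imatrix`, `quantize`) reflect llama.cpp builds from around the time of PR #6920.

```shell
# Sketch of an imatrix-guided quantization pass with llama.cpp.
# All file names below are illustrative placeholders, not the author's paths.

# Convert the HF model to GGUF twice: FP16 for imatrix generation,
# BF16 as the direct source for the quantized files.
python convert-hf-to-gguf.py ./model-dir --outtype f16  --outfile model-f16.gguf
python convert-hf-to-gguf.py ./model-dir --outtype bf16 --outfile model-bf16.gguf

# Generate importance-matrix data from the FP16 GGUF using a calibration text.
./imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize directly from the BF16 GGUF, guided by the imatrix data.
./quantize --imatrix imatrix.dat model-bf16.gguf model-IQ4_XS.gguf IQ4_XS
```

Keeping both an FP16 and a BF16 GGUF on disk is what makes this route "more disk and compute intensive" than quantizing from a single intermediate file.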