Lewdiculous committed
Commit adea40d
1 Parent(s): 48d8c92
Update README.md

README.md CHANGED
@@ -17,9 +17,9 @@ This is a very promising roleplay model cooked by the amazing Sao10K!
 > [!IMPORTANT]
 > **Quantization process:** <br>
 > For future reference, these quants have been done after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
-> Imatrix data generated from the FP16-GGUF and conversions directly from the BF16-GGUF. <br>
+> Imatrix data was generated from the FP16-GGUF and conversions directly from the BF16-GGUF. <br>
 > This was a bit more disk and compute intensive but hopefully avoided any losses in the model conversion. <br>
-> If you
+> If you noticed any issues let me know in the discussions.

 > [!NOTE]
 > **General usage:** <br>
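The quantization workflow the note describes (imatrix data from an FP16 GGUF, conversions directly from a BF16 GGUF) can be sketched with llama.cpp's tooling. This is a minimal sketch, not the uploader's exact commands: the binary and script names follow current llama.cpp conventions, and all model paths and the calibration file are placeholders.

```shell
# Sketch of the workflow described above, assuming a llama.cpp checkout.
# Paths, model names, and the calibration file are placeholders.

# Convert the HF model to GGUF twice: BF16 for the final quantizations,
# FP16 to generate the importance-matrix data from.
python convert_hf_to_gguf.py ./model --outtype bf16 --outfile model-BF16.gguf
python convert_hf_to_gguf.py ./model --outtype f16  --outfile model-FP16.gguf

# Generate importance-matrix (imatrix) data from the FP16 GGUF.
./llama-imatrix -m model-FP16.gguf -f calibration.txt -o model.imatrix

# Quantize directly from the BF16 GGUF, applying the imatrix.
./llama-quantize --imatrix model.imatrix model-BF16.gguf model-Q4_K_M.gguf Q4_K_M
```

Converting from BF16 rather than an intermediate FP16 file is the "more disk and compute intensive" part the note mentions: it keeps a second full-precision copy on disk but avoids an extra lossy BF16-to-FP16 round-trip before quantization.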