Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ This repo is the result of quantising to 4bit and 5bit GGML for CPU inference us
 * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML).
 * [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF).
 
-## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
 
 llama.cpp recently made a breaking change to its quantisation methods.
 
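The practical effect of the change above is that files quantised with the new code will not load in older llama.cpp builds, and older files will not load in the new code. One way to check which generation a given file targets is to read its header. Below is a minimal sketch, assuming the GGJT container layout llama.cpp used at the time: a little-endian uint32 magic `0x67676a74` (`ggjt`) followed by a little-endian uint32 format version, which the May 12th change bumped from 1 to 2. Verify those constants against your llama.cpp checkout before relying on this.

```python
# Sketch: report the GGJT format version of a GGML model file.
# Assumes the header layout described above (magic uint32 + version uint32,
# both little-endian); not an official llama.cpp tool.
import struct
import sys
from typing import Optional

GGJT_MAGIC = 0x67676A74  # 'ggjt'

def ggjt_version(path: str) -> Optional[int]:
    """Return the GGJT version, or None if the file is not a GGJT container."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        return None
    magic, version = struct.unpack("<II", header)
    return version if magic == GGJT_MAGIC else None

if __name__ == "__main__":
    v = ggjt_version(sys.argv[1])
    if v is None:
        print("Not a GGJT file (very old GGML container, or not a GGML file at all)")
    elif v >= 2:
        print(f"GGJT v{v}: needs llama.cpp from May 12th 2023 (commit b9fd7ee) or later")
    else:
        print(f"GGJT v{v}: works with older llama.cpp; see the `previous_llama` branch note below")
```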
@@ -29,6 +29,8 @@ I have quantised the GGML files in this repo with the latest version. Therefore
 
 If you are currently unable to update llama.cpp, e.g. because you use a UI which hasn't updated yet, you can find GGMLs compatible with the older llama.cpp code in branch `previous_llama`.
 
+![Imgur](https://i.imgur.com/3JYbv9e.png)
+
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
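If you do need the older-format files, the `previous_llama` branch mentioned above can be targeted directly when downloading. A short sketch using the `huggingface_hub` library, whose `hf_hub_download` accepts a branch name via `revision`; the filename below is a hypothetical placeholder, so substitute a real name from the repo's file list:

```python
# Sketch: download one GGML file from the `previous_llama` branch.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGML",
    filename="Wizard-Vicuna-13B-Uncensored.ggml.q4_0.bin",  # hypothetical filename
    revision="previous_llama",  # branch with files for pre-May-12th llama.cpp
)
print(path)  # local cache path of the downloaded file
```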
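As a usage illustration for the files listed in the table above: one common way to run a GGML model from Python is the `llama-cpp-python` bindings, a separate project wrapping llama.cpp whose installed version must match the format generation of the file you downloaded. The model path and prompt template below are hypothetical placeholders:

```python
# Sketch: load a quantised GGML file and generate a short completion
# via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin")  # hypothetical path
output = llm("USER: Write a haiku about llamas.\nASSISTANT:", max_tokens=64)
print(output["choices"][0]["text"])
```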