Update README.md
README.md
CHANGED
@@ -12,6 +12,11 @@ This is GGML format quantised 4bit and 5bit models of [junlee's wizard-vicuna 13
 
 It is the result of quantising to 4bit and 5bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
+## Repositories available
+
+* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ).
+* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML).
+
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
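For context on the CPU-inference use case the diff points to, here is a minimal, illustrative sketch of loading one of the GGML files with the llama-cpp-python bindings. The model file name, prompt format, and generation parameters are assumptions for illustration, not taken from this repository, and GGML files require an older llama-cpp-python release (newer releases expect GGUF).

```python
# Minimal sketch: run a 4bit GGML file on CPU via llama-cpp-python (assumed setup).
# The model path below is hypothetical; use whichever quantised file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./wizard-vicuna-13B.ggml.q4_0.bin",  # hypothetical local path
    n_ctx=2048,   # context window size
    n_threads=8,  # CPU threads to use for inference
)

# Wizard-Vicuna models generally use a USER/ASSISTANT prompt style (assumption).
prompt = "USER: What is 4bit quantisation?\nASSISTANT:"
output = llm(prompt, max_tokens=128, stop=["USER:"])

# The completion result follows an OpenAI-style response dict.
print(output["choices"][0]["text"])
```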