Upload README.md
README.md CHANGED
@@ -92,14 +92,14 @@ All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-3bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-64g-actorder_True) | 3 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
+| [main](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-3bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-64g-actorder_True) | 3 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 29.30 GB | No | 3-bit, with group size 64g and act-order. Poor AutoGPTQ CUDA speed. |
+| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
+| [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
+| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |

<!-- README_GPTQ.md-provided-files end -->
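
Each branch in the table can be pulled by passing its name as the `revision` when loading from the Hub. Below is a minimal sketch, assuming a recent `transformers` with `optimum` and `auto-gptq` installed for GPTQ support; the branch chosen here is just one example from the table, and exact package requirements may differ for your setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/Llama-2-70B-chat-GPTQ"
# Any branch name from the table above works here, e.g. the 4-bit 128g quant.
branch = "gptq-4bit-128g-actorder_True"

# `revision` selects the branch; the GPTQ weights are loaded quantized and
# handled by transformers' GPTQ integration (optimum + auto-gptq).
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=branch)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    revision=branch,
    device_map="auto",
)

prompt = "Tell me about AI"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```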