Part of the AQLM+PV collection: official AQLM quantizations for "PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression" (https://arxiv.org/abs/2405.14852).
Official AQLM quantization of meta-llama/Meta-Llama-3.1-8B-Instruct, finetuned with PV-Tuning.
For this quantization, we used 1 codebook of 16 bits and a group size of 16.
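As a back-of-the-envelope check (our own accounting, not stated in the paper): one 16-bit code per group of 16 weights works out to about 1 bit per quantized weight, excluding codebook tables and any layers kept in higher precision (e.g., embeddings). A minimal sketch:

```python
# Rough AQLM bitrate estimate under the stated config
# (assumed accounting: one code of `codebook_bits` bits covers `group_size` weights).
num_codebooks = 1    # codebooks used per group
codebook_bits = 16   # bits per code in each codebook
group_size = 16      # weights encoded by one code

bits_per_weight = num_codebooks * codebook_bits / group_size
print(f"{bits_per_weight:.1f} bits/weight")  # 1.0, before codebook tables
                                             # and unquantized-layer overhead
```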
Results:
| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | Hellaswag | PiQA | Winogrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3.1-8B-Instruct | None | 0.6817 | 0.5162 | 0.8186 | 0.5909 | 0.8014 | 0.7364 | 16.1 |
| (this model) | 1x16g16 | 0.3800 | 0.3558 | 0.6835 | 0.4784 | 0.7388 | 0.6196 | 3.4 |
Note: We used `lm-eval=0.4.0` for evaluation.
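For reference, AQLM checkpoints load through `transformers` once the `aqlm` inference kernels are installed (`pip install aqlm[gpu]`). A minimal loading sketch; the repo id below is a placeholder, substitute this model's Hugging Face id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this model's Hugging Face repo id>"  # placeholder, not a real id

# Loading an AQLM checkpoint requires the `aqlm` package to be installed.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quick generation smoke test.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```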
UPD (09.08.2024): Uploaded a new version, finetuned on more data for longer, with better quality.