---
library_name: transformers
tags:
  - llama
  - facebook
  - meta
  - llama-3
  - conversational
  - text-generation-inference
---

Official AQLM quantization of meta-llama/Meta-Llama-3.1-8B-Instruct, fine-tuned with PV-Tuning.

For this quantization, we used one 16-bit codebook and a group size of 16 (the `1x16g16` configuration in the table below).
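With one 16-bit code per group of 16 weights, the quantized layers cost about 1 bit per weight. A back-of-the-envelope size check is sketched below; the parameter counts are assumptions for Llama-3.1-8B (roughly 7B quantized linear-layer weights, with embeddings and the LM head kept in fp16), not figures from this card, and codebooks, scales, and norms are ignored:

```python
# Rough AQLM checkpoint-size estimate. All parameter counts below are
# assumptions for illustration, not values reported in this model card.

def aqlm_size_gb(quantized_params, fp16_params, code_bits=16, group_size=16):
    """Approximate size in GB: codes for quantized weights + fp16 leftovers."""
    code_bytes = quantized_params * code_bits / group_size / 8  # ~1 bit/weight
    fp16_bytes = fp16_params * 2  # 2 bytes per fp16 parameter
    return (code_bytes + fp16_bytes) / 1e9

# ~7.0B quantized weights; ~1.05B fp16 params (embeddings + LM head).
approx = aqlm_size_gb(quantized_params=7.0e9, fp16_params=1.05e9)
print(f"{approx:.1f} GB")  # prints "3.0 GB"
```

The remaining gap to the reported 3.4 Gb is plausibly codebooks, scales, and other per-layer overheads, which this sketch leaves out.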

Results:

| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | Hellaswag | PiQA | Winogrande | Model size, Gb |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3.1-8B-Instruct | None | 0.6817 | 0.5162 | 0.8186 | 0.5909 | 0.8014 | 0.7364 | 16.1 |
| | 1x16g16 | 0.3800 | 0.3558 | 0.6835 | 0.4784 | 0.7388 | 0.6196 | 3.4 |
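For a quick summary of the accuracy cost, the benchmark scores above can be averaged across the six tasks (a rough aggregate, since the tasks differ in difficulty):

```python
# Per-task scores copied from the results table above.
fp16 = [0.6817, 0.5162, 0.8186, 0.5909, 0.8014, 0.7364]
aqlm_1x16g16 = [0.3800, 0.3558, 0.6835, 0.4784, 0.7388, 0.6196]

mean = lambda xs: sum(xs) / len(xs)
print(f"fp16 mean:    {mean(fp16):.4f}")          # prints "fp16 mean:    0.6909"
print(f"1x16g16 mean: {mean(aqlm_1x16g16):.4f}")  # prints "1x16g16 mean: 0.5427"
print(f"compression:  {16.1 / 3.4:.1f}x")         # prints "compression:  4.7x"
```

So the `1x16g16` setup trades roughly 15 points of mean accuracy for a ~4.7x reduction in model size.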

Note

We used `lm-eval=0.4.0` for evaluation.