Mixtral HQQ Quantized Models
A collection of 9 models: 4-bit and 2-bit Mixtral variants quantized with HQQ (https://github.com/mobiusml/hqq).
This is a version of the Mixtral-8x7B-v0.1 model (https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) quantized to 4-bit via Half-Quadratic Quantization (HQQ).
To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
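For example, assuming a Python environment with PyTorch already set up, the library can be installed from PyPI or directly from the repository (standard pip invocations, not specific to this model):

pip install hqq
# or, for the latest version from source:
pip install git+https://github.com/mobiusml/hqq.git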
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = 'mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-4bit_g64-HQQ'

# Load the tokenizer and the pre-quantized model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = HQQModelForCausalLM.from_quantized(model_id)

# Optional: switch to the compiled PyTorch backend for faster inference
from hqq.core.quantize import *
HQQLinear.set_backend(HQQBackend.PYTORCH_COMPILE)
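Once loaded, the quantized model can be used like a standard transformers causal language model. A minimal generation sketch (the prompt and decoding settings below are illustrative assumptions, not part of the model card):

# Illustrative example: generate a short completion with the quantized model
prompt = "Explain half-quadratic quantization in one sentence."
# Assumes from_quantized placed the model on a GPU; inputs must be on the same device
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))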