text-generation-inference documentation

Quantization


TGI offers multiple quantization schemes to run LLMs efficiently and quickly, depending on your use case. TGI supports GPTQ, AWQ, bitsandbytes, EETQ, Marlin, EXL2, and fp8 quantization.

To leverage GPTQ, AWQ, Marlin, and EXL2 quants, you must provide pre-quantized weights, whereas for bitsandbytes, EETQ, and fp8, weights are quantized by TGI on the fly.

We recommend using the official quantization scripts for creating your quants:

  1. AWQ
  2. GPTQ / Marlin
  3. EXL2

For on-the-fly quantization, you simply pass one of the supported quantization types and TGI takes care of the rest.

Quantization with bitsandbytes, EETQ & fp8

bitsandbytes is a library used to apply 8-bit and 4-bit quantization to models. Unlike GPTQ quantization, bitsandbytes doesn’t require a calibration dataset or any post-processing – weights are automatically quantized on load. However, inference with bitsandbytes is slower than GPTQ or FP16 precision.
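The absmax idea behind 8-bit quantization can be sketched in a few lines of NumPy. This is only an illustration of the general scheme, not bitsandbytes' actual implementation (which additionally handles outlier features in higher precision):

```python
import numpy as np

def absmax_quantize(w: np.ndarray):
    """Toy symmetric 8-bit quantization: scale weights by the absolute maximum."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)  # weights stored as int8
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
q, scale = absmax_quantize(w)
w_hat = dequantize(q, scale)
# The reconstruction error is bounded by half the quantization step (scale / 2).
```

The memory saving comes from storing `q` (1 byte per weight) plus a single scale per tensor or block, instead of 2 to 4 bytes per weight.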

8-bit quantization enables multi-billion parameter scale models to fit in smaller hardware without degrading performance too much. In TGI, you can use 8-bit quantization by adding --quantize bitsandbytes like below 👇

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model --quantize bitsandbytes

4-bit quantization is also possible with bitsandbytes. You can choose one of the following 4-bit data types: 4-bit float (fp4), or 4-bit NormalFloat (nf4). These data types were introduced in the context of parameter-efficient fine-tuning, but you can apply them for inference by automatically converting the model weights on load.
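Conceptually, a 4-bit data type maps each normalized weight to the nearest entry of a 16-value codebook. The uniform codebook below is purely illustrative; the real NF4 levels are instead derived from the quantiles of a normal distribution:

```python
import numpy as np

# Hypothetical 16-entry codebook in [-1, 1] (illustrative only; the actual
# nf4 constants are non-uniform so each bin carries equal normal probability mass).
codebook = np.linspace(-1.0, 1.0, 16, dtype=np.float32)

def quantize_4bit(w: np.ndarray) -> np.ndarray:
    """Map each (pre-normalized) weight to the index of its nearest codebook entry."""
    idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
    return idx.astype(np.uint8)  # only 4 bits of payload per weight are needed

def dequantize_4bit(idx: np.ndarray) -> np.ndarray:
    """Look the weights back up from their 4-bit indices."""
    return codebook[idx]

w = np.array([0.07, -0.93, 0.41, -0.2], dtype=np.float32)
idx = quantize_4bit(w)
w_hat = dequantize_4bit(idx)
```

With 16 levels, the worst-case error for a uniform grid on [-1, 1] is half the spacing between adjacent levels.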

In TGI, you can use 4-bit quantization by adding --quantize bitsandbytes-nf4 or --quantize bitsandbytes-fp4 like below 👇

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model --quantize bitsandbytes-nf4

You can get more information about 8-bit quantization by reading this blog post, and 4-bit quantization by reading this blog post.

Similarly, you can pass --quantize eetq or --quantize fp8 for the respective quantization schemes.

In addition to this, TGI allows creating GPTQ quants directly by passing the model weights and a calibration dataset.

Quantization with GPTQ

GPTQ is a post-training quantization method that makes the model smaller. It quantizes each layer by finding a compressed version of its weight matrix that yields the minimum mean squared error, like below 👇

Given a layer $l$ with weight matrix $W_{l}$ and layer input $X_{l}$, find the quantized weight $\hat{W}_{l}$:

$${\hat{W}_{l}}^{*} = \operatorname{argmin}_{\hat{W}_{l}} \lVert W_{l}X_{l} - \hat{W}_{l}X_{l} \rVert^{2}_{2}$$
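As a minimal numeric sketch of this objective, the snippet below quantizes a random weight matrix with naive round-to-nearest and evaluates the squared output error that GPTQ minimizes. This is only to show what is being optimized; the actual GPTQ solver uses second-order information and quantizes column by column rather than rounding naively:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)   # layer weight W_l
X = rng.normal(size=(8, 16)).astype(np.float32)  # layer input X_l

# Naive round-to-nearest quantization onto a symmetric 4-bit grid.
scale = float(np.abs(W).max()) / 7.0
W_hat = (np.round(W / scale).clip(-8, 7) * scale).astype(np.float32)

# The quantity GPTQ minimizes: squared error of the layer *output*,
# not of the weights themselves.
err = float(np.linalg.norm(W @ X - W_hat @ X) ** 2)
```

Measuring the error on the layer output (weighted by the calibration inputs `X`) is why GPTQ needs a calibration dataset.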

TGI allows you to both run an already GPTQ-quantized model (see available models here) or quantize a model of your choice using the quantization script. You can run a quantized model by simply passing --quantize gptq like below 👇

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model --quantize gptq

Note that TGI’s GPTQ implementation doesn’t use AutoGPTQ under the hood. However, models quantized using AutoGPTQ or Optimum can still be served by TGI.

To quantize a given model using GPTQ with a calibration dataset, simply run

text-generation-server quantize tiiuae/falcon-40b /data/falcon-40b-gptq
# Add --upload-to-model-id MYUSERNAME/falcon-40b to push the created model to the hub directly

This will create a new directory with the quantized files, which you can then use with:

text-generation-launcher --model-id /data/falcon-40b-gptq/ --sharded true --num-shard 2 --quantize gptq
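Once the server is up, you can query it over HTTP. Below is a minimal sketch using only the Python standard library; the /generate endpoint and payload shape follow TGI's HTTP API, and the URL assumes the port mapping from the Docker commands above (adjust it to your deployment):

```python
import json
from urllib import request

# Payload shape for TGI's /generate endpoint.
payload = {
    "inputs": "What is quantization?",
    "parameters": {"max_new_tokens": 64},
}

def query(url: str = "http://localhost:8080/generate") -> dict:
    """POST the payload to a running TGI server and return the decoded JSON."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())  # response contains "generated_text"
```

Whether the model was quantized on the fly or pre-quantized, the serving API is the same.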

You can learn more about the quantization options by running text-generation-server quantize --help.

If you wish to do more with GPTQ models (e.g. train an adapter on top), you can read about transformers GPTQ integration here. You can learn more about GPTQ from the paper.
