Integrations
bitsandbytes is widely integrated with many of the libraries in the Hugging Face and wider PyTorch ecosystem. This guide provides a brief overview of the integrations and how to use bitsandbytes with them. For more details, you should refer to the linked documentation for each library.
Transformers
Learn more in the bitsandbytes Transformers integration guide.
With Transformers, it’s very easy to load any model in 4-bit or 8-bit precision and quantize it on the fly. To configure the quantization parameters, specify them in the BitsAndBytesConfig class.
For example, to load and quantize a model to 4-bit and use the bfloat16 data type for compute:
bfloat16 is the ideal compute_dtype if your hardware supports it. While the default compute_dtype, float32, ensures backward compatibility (due to wide-ranging hardware support) and numerical stability, it is large and slows down computations. In contrast, float16 is smaller and faster but can lead to numerical instabilities. bfloat16 combines the best aspects of both; it offers the numerical stability of float32 and the reduced memory footprint and speed of a 16-bit data type. Check if your hardware supports bfloat16 and configure it using the bnb_4bit_compute_dtype parameter in BitsAndBytesConfig!
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

model_4bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map="auto",  # automatically place the quantized layers on the available devices
    quantization_config=quantization_config,
)
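If you want to select the compute dtype at runtime, a minimal sketch (torch.cuda.is_bf16_supported() is a standard PyTorch check; the float16 fallback and the config_4bit/config_8bit names are just illustrative choices, not part of the snippet above) could look like this. The same configuration pattern covers 8-bit loading via load_in_8bit=True:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Use bfloat16 when the GPU supports it, otherwise fall back to float16.
compute_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
config_4bit = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=compute_dtype)

# 8-bit quantization follows the same pattern with load_in_8bit=True.
config_8bit = BitsAndBytesConfig(load_in_8bit=True)
model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map="auto",
    quantization_config=config_8bit,
)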
8-bit optimizers
You can use any of the 8-bit or paged optimizers with Transformers by passing them to the Trainer class on initialization. All bitsandbytes optimizers are supported by passing the correct string in the TrainingArguments optim parameter. For example, to load a PagedAdamW32bit optimizer:
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    ...,
    optim="paged_adamw_32bit",
)
trainer = Trainer(model, training_args, ...)
trainer.train()
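Alternatively, you can construct the optimizer yourself and hand it to the Trainer through its optimizers argument, which accepts an (optimizer, scheduler) tuple. A rough sketch (model, dataset, and the learning rate are placeholders here):

import bitsandbytes as bnb
from transformers import Trainer, TrainingArguments

# Same optimizer as optim="paged_adamw_32bit", built explicitly.
optimizer = bnb.optim.PagedAdamW32bit(model.parameters(), lr=2e-5)

trainer = Trainer(
    model=model,
    args=TrainingArguments(...),
    train_dataset=dataset,
    optimizers=(optimizer, None),  # no custom scheduler
)
trainer.train()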
PEFT
Learn more in the bitsandbytes PEFT integration guide.
PEFT builds on the bitsandbytes Transformers integration, and extends it for training with a few more steps. Let’s prepare the 4-bit model from the section above for training.
Call the prepare_model_for_kbit_training() method to prepare the model for training. This only works for Transformers models!
from peft import prepare_model_for_kbit_training
model_4bit = prepare_model_for_kbit_training(model_4bit)
Set up a LoraConfig to use QLoRA:
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules="all-linear",
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
Now call the get_peft_model() function on your model and config to create a trainable PeftModel.
from peft import get_peft_model
model = get_peft_model(model_4bit, config)
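From here, the PeftModel trains like any other Transformers model. A rough sketch (the dataset and training arguments are placeholders) that reuses the paged optimizer from the section above:

from transformers import Trainer, TrainingArguments

# Sanity check: only the LoRA adapter parameters should be trainable now.
model.print_trainable_parameters()

trainer = Trainer(
    model=model,
    args=TrainingArguments(..., optim="paged_adamw_32bit"),
    train_dataset=dataset,
)
trainer.train()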
Accelerate
Learn more in the bitsandbytes Accelerate integration guide.
bitsandbytes is also easily usable from Accelerate: you can quantize any PyTorch model by passing a BnbQuantizationConfig with your desired settings and then calling the load_and_quantize_model function to quantize it.
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
empty_model = GPT(model_config)
bnb_quantization_config = BnbQuantizationConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16, # optional
bnb_4bit_use_double_quant=True, # optional
bnb_4bit_quant_type="nf4" # optional
)
quantized_model = load_and_quantize_model(
empty_model,
weights_location=weights_location,
bnb_quantization_config=bnb_quantization_config,
device_map = "auto"
)
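As a quick smoke test, you can run a forward pass on the quantized model. This is only a sketch: the dummy batch below is made up for illustration, and it assumes minGPT's forward signature, which takes token indices and returns a (logits, loss) tuple.

import torch

# Dummy batch of token indices; real inputs would come from your tokenizer/dataset.
idx = torch.randint(0, model_config.vocab_size, (1, 16), device="cuda")
logits, loss = quantized_model(idx)
print(logits.shape)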
PyTorch Lightning and Lightning Fabric
bitsandbytes is available from:
- PyTorch Lightning, a deep learning framework for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale.
- Lightning Fabric, a fast and lightweight way to scale PyTorch models without boilerplate.
Learn more in the bitsandbytes PyTorch Lightning integration guide.
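For example, one common pattern with PyTorch Lightning is to return a bitsandbytes 8-bit optimizer from configure_optimizers. This is only a sketch (the LitModel class, wrapped model, and learning rate are placeholders, and the rest of the LightningModule is omitted):

import bitsandbytes as bnb
import lightning as L

class LitModel(L.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def configure_optimizers(self):
        # Drop-in replacement for torch.optim.AdamW with 8-bit optimizer states.
        return bnb.optim.AdamW8bit(self.parameters(), lr=2e-5)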
Lit-GPT
bitsandbytes is integrated with Lit-GPT, a hackable implementation of state-of-the-art open-source large language models. Lit-GPT is based on Lightning Fabric, and it can be used for quantization during training, finetuning, and inference.
Learn more in the bitsandbytes Lit-GPT integration guide.
Blog posts
To learn more about some of the bitsandbytes integrations, take a look at the following blog posts:
- Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
- A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes