Stable Diffusion text-to-image fine-tuning
The train_text_to_image.py script shows how to fine-tune the Stable Diffusion model on your own dataset.
Note: This script is experimental. The script fine-tunes the whole model, and the model often overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset.
Running locally with PyTorch
Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
Important
To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the installation up to date, since we update the example scripts frequently and some examples have their own requirements. To do this, execute the following steps in a new virtual environment:
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
Then cd into the example folder and run
pip install -r requirements.txt
And initialize an 🤗 Accelerate environment with:
accelerate config
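If your environment doesn't support an interactive shell (for example, a notebook or an automated job), you can create a default 🤗 Accelerate configuration instead, either from the CLI:

accelerate config default

or from Python (write_basic_config is part of the accelerate utilities):

from accelerate.utils import write_basic_config

# Write a default accelerate config file for the current machine
write_basic_config()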
Pokemon example
You need to accept the model license before downloading or using the weights. In this example we'll use model version v1-4, so you'll need to visit its card, read the license, and tick the checkbox if you agree.
You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation.
Run the following command to authenticate your token:
huggingface-cli login
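If an interactive prompt is inconvenient, you can also log in programmatically with the huggingface_hub library; a minimal sketch (the token string below is a placeholder, replace it with your own access token):

from huggingface_hub import login

# Authenticate with your Hugging Face access token (placeholder shown here)
login(token="hf_your_token_here")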
If you have already cloned the repo, then you won't need to go through these steps.
Hardware
With gradient_checkpointing and mixed_precision, it should be possible to fine-tune the model on a single 24GB GPU. For a higher batch_size and faster training, it's better to use GPUs with more than 30GB of memory.
Note: Change the resolution to 768 if you are using the stable-diffusion-2 768x768 model.
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"
accelerate launch --mixed_precision="fp16" train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-pokemon-model"
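A run of this length can be interrupted, so it is useful to save intermediate checkpoints and resume from them. Recent versions of train_text_to_image.py expose --checkpointing_steps and --resume_from_checkpoint for this; check your script's --help to confirm it supports them before relying on this sketch:

accelerate launch --mixed_precision="fp16" train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --checkpointing_steps=500 \
  --resume_from_checkpoint="latest" \
  --output_dir="sd-pokemon-model"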
To run on your own training files, prepare the dataset according to the format required by datasets; you can find the instructions for how to do that in this document. If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
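For example, one layout that datasets understands out of the box is an image folder with a metadata.jsonl file mapping each image to its caption. The folder structure, file names, and captions below are illustrative; the "text" column just has to match whatever you pass as --caption_column:

train/
  metadata.jsonl
  0001.png
  0002.png

where each line of metadata.jsonl is a JSON object with the required file_name key plus your caption column:

{"file_name": "0001.png", "text": "a drawing of a green pokemon with red eyes"}
{"file_name": "0002.png", "text": "a cartoon dragon flying over a forest"}

You would then point --train_data_dir at the train folder.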
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export TRAIN_DIR="path_to_your_dataset"
accelerate launch --mixed_precision="fp16" train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--train_data_dir=$TRAIN_DIR \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-pokemon-model"
Once the training is finished, the model will be saved in the output_dir specified in the command. In this example it's sd-pokemon-model. To load the fine-tuned model for inference, just pass that path to StableDiffusionPipeline:
from diffusers import StableDiffusionPipeline
import torch

model_path = "path_to_saved_model"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(prompt="yoda").images[0]
image.save("yoda-pokemon.png")
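If you want reproducible outputs, for instance to compare checkpoints, you can pass a seeded generator to the pipeline call (standard StableDiffusionPipeline parameters, reusing pipe from above):

import torch

# Fix the random seed so repeated calls produce the same image
generator = torch.Generator("cuda").manual_seed(42)
image = pipe(prompt="yoda", generator=generator, num_inference_steps=50).images[0]
image.save("yoda-pokemon-seed42.png")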
Training with LoRA
Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in LoRA: Low-Rank Adaptation of Large Language Models by Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen.
In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and only training those newly added weights. This has a couple of advantages:
- Previous pretrained weights are kept frozen so that the model is not prone to catastrophic forgetting.
- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
- LoRA attention layers allow controlling the extent to which the model is adapted toward new training images via a scale parameter (see the sketch below).
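To make the idea concrete, here is a minimal, self-contained sketch of the low-rank update; this is illustrative only, not the diffusers implementation. A frozen weight W is augmented with two small trainable matrices A and B of rank r, and scale blends the adapted behavior with the original one:

import torch

d, r, scale = 320, 4, 1.0
W = torch.randn(d, d)           # frozen pretrained weight (d*d parameters)
A = torch.randn(r, d) * 0.01    # trainable down-projection (r*d parameters)
B = torch.zeros(d, r)           # trainable up-projection, initialized to zero

x = torch.randn(1, d)
# Original layer output plus a rank-r correction, weighted by `scale`
y = x @ W.T + scale * ((x @ A.T) @ B.T)

Because B starts at zero, training begins exactly at the pretrained model, and only A and B (2*r*d parameters instead of d*d) are updated.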
cloneofsimo was the first to try out LoRA training for Stable Diffusion in the popular lora GitHub repository.
With LoRA, it's possible to fine-tune Stable Diffusion on a custom image-caption dataset on consumer GPUs like the Tesla T4 or Tesla V100.
Training
First, you need to set up your development environment as explained in the installation section. Make sure to set the MODEL_NAME and DATASET_NAME environment variables. Here, we will use Stable Diffusion v1-4 and the Pokemon dataset.
Note: Change the resolution to 768 if you are using the stable-diffusion-2 768x768 model.
Note: It is quite useful to monitor training progress by regularly generating sample images during training. Weights and Biases is a nice solution to easily see generated images during training. All you need to do is run pip install wandb before training to automatically log images.
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"
For this example we want to directly store the trained LoRA embeddings on the Hub, so we need to be logged in and add the --push_to_hub flag.
huggingface-cli login
Now we can start training!
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME --caption_column="text" \
--resolution=512 --random_flip \
--train_batch_size=1 \
--num_train_epochs=100 --checkpointing_steps=5000 \
--learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
--seed=42 \
--output_dir="sd-pokemon-model-lora" \
--validation_prompt="cute dragon creature" --report_to="wandb"
The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
Note: When using LoRA we can use a much higher learning rate compared to non-LoRA fine-tuning. Here we use 1e-4 instead of the usual 1e-5. Also, by using LoRA, it's possible to run train_text_to_image_lora.py on consumer GPUs like the T4 or V100.
The final LoRA embedding weights have been uploaded to sayakpaul/sd-model-finetuned-lora-t4. Note: The final weights are only 3 MB in size, which is orders of magnitude smaller than the original model.
You can check some inference samples that were logged during the course of the fine-tuning process here.
Inference
Once you have trained a model using the above command, inference can be done simply using the StableDiffusionPipeline after loading the trained LoRA weights. You need to pass the output_dir for loading the LoRA weights, which in this case is sd-pokemon-model-lora.
from diffusers import StableDiffusionPipeline
import torch

model_path = "sayakpaul/sd-model-finetuned-lora-t4"
# Load the base model, then the trained LoRA attention weights on top of it
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")

prompt = "A pokemon with green eyes and red legs."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pokemon.png")
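As mentioned above, the scale parameter controls how strongly the LoRA weights are applied: 0 gives back the base model, 1 the fully adapted one. With attention-processor LoRA weights, recent diffusers versions let you set it at call time via cross_attention_kwargs (check your installed version supports this):

# Blend 50% of the LoRA adaptation into the base model
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5,
             cross_attention_kwargs={"scale": 0.5}).images[0]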
Training with Flax/JAX
For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.
Note: The Flax example doesn't yet support features like gradient checkpointing and gradient accumulation, so to use Flax for faster training we will need cards with more than 30GB of memory, or a TPU v3.
Before running the scripts, make sure to install the library's training dependencies:
pip install -U -r requirements_flax.txt
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export dataset_name="lambdalabs/pokemon-blip-captions"
python train_text_to_image_flax.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--mixed_precision="fp16" \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--output_dir="sd-pokemon-model"
To run on your own training files, prepare the dataset according to the format required by datasets; you can find the instructions for how to do that in this document. If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export TRAIN_DIR="path_to_your_dataset"
python train_text_to_image_flax.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--train_data_dir=$TRAIN_DIR \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--mixed_precision="fp16" \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--output_dir="sd-pokemon-model"
Training with xFormers
You can enable memory efficient attention by installing xFormers and passing the --enable_xformers_memory_efficient_attention argument to the script.
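For example (the package is published on PyPI; pick a build that matches your PyTorch and CUDA versions):

pip install xformers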
xFormers training is not available for Flax/JAX.
Note: According to this issue, xFormers v0.0.16 cannot be used for training on some GPUs. If you observe this problem, please install a development version as indicated in that comment.