UNet
Some training methods, like LoRA and Custom Diffusion, typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder, or into both the text encoder and UNet, use the load_lora_weights() function instead.
The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters.
To learn more about how to load LoRA weights, see the LoRA loading guide.
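The fusing helpers are worth a quick illustration. As a minimal sketch (reusing the checkpoint and LoRA repository from the examples below), fuse_lora() merges the loaded LoRA weights directly into the UNet weights so inference runs without extra adapter overhead, and unfuse_lora() restores the original weights:
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)

# Merge the LoRA weights into the UNet weights for overhead-free inference
pipeline.fuse_lora()
# ... run inference ...
# Undo the merge and restore the original UNet weights
pipeline.unfuse_lora()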
UNet2DConditionLoadersMixin
Load LoRA layers into a UNet2DConditionModel.
delete_adapters
< source >( adapter_names: Union[List[str], str] )
Delete an adapter’s LoRA layers from the UNet.
Example:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
"jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic"
)
pipeline.delete_adapters("cinematic")
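Because adapter_names also accepts a list, several adapters can be deleted in one call. A small sketch, assuming a second adapter named "pixel" was also loaded:
pipeline.delete_adapters(["cinematic", "pixel"])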
disable_lora
< source >( )
Disable the UNet’s active LoRA layers.
Example:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
"jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.disable_lora()
enable_lora
< source >( )
Enable the UNet’s active LoRA layers.
Example:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
"jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.enable_lora()
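A common pattern is to toggle these two calls to compare results with and without the adapter. A sketch continuing the example above (the prompt is illustrative):
prompt = "a cinematic photo of a lighthouse at dusk"
image_with_lora = pipeline(prompt).images[0]

# Temporarily bypass the LoRA layers to get a baseline image
pipeline.disable_lora()
image_without_lora = pipeline(prompt).images[0]
pipeline.enable_lora()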
load_attn_procs
< source >( pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]] **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:
  - A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
  - A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
  - A torch state dict.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
- mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be defined in attention_processor.py and be a torch.nn.Module class.
Example:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.unet.load_attn_procs(
"jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
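Since pretrained_model_name_or_path_or_dict also accepts a torch state dict, the weights can be read from a local file manually first. A minimal sketch, assuming a local pytorch_lora_weights.safetensors file:
from safetensors.torch import load_file

# Read the weights into a state dict and hand it to the UNet directly
state_dict = load_file("pytorch_lora_weights.safetensors")
pipeline.unet.load_attn_procs(state_dict)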
save_attn_procs
< source >( save_directory: Union[str, os.PathLike] is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs )
Parameters
- save_directory (str or os.PathLike) — Directory to save an attention processor to (will be created if it doesn’t exist).
- is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
- save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
- safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or with pickle.
Save attention processor layers to a directory so that they can be reloaded with the load_attn_procs() method.
Example:
import torch
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
torch_dtype=torch.float16,
).to("cuda")
pipeline.unet.load_attn_procs("path-to-load-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
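Since safe_serialization defaults to True, omitting weight_name is a sketch of saving in the safetensors format under a default file name:
# safe_serialization=True is the default, so this writes a safetensors file
pipeline.unet.save_attn_procs("path-to-save-model")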
set_adapters
< source >( adapter_names: Union[List[str], str] weights: Optional[Union[List[float], float]] = None )
Set the currently active adapters for use in the UNet.
Example:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
"jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5])
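Passing a single adapter name switches back to one active adapter; when no weights are given, each active adapter defaults to a weight of 1.0. For example:
pipeline.set_adapters("pixel")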