Text-to-Image Generation
StableDiffusionPipeline
The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline generates photo-realistic images from any text input using Stable Diffusion.
The original codebase can be found here:
- Stable Diffusion V1: CompVis/stable-diffusion
- Stable Diffusion v2: Stability-AI/stablediffusion
Available checkpoints:
- stable-diffusion-v1-4 (512x512 resolution): CompVis/stable-diffusion-v1-4
- stable-diffusion-v1-5 (512x512 resolution): runwayml/stable-diffusion-v1-5
- stable-diffusion-2-base (512x512 resolution): stabilityai/stable-diffusion-2-base
- stable-diffusion-2 (768x768 resolution): stabilityai/stable-diffusion-2
- stable-diffusion-2-1-base (512x512 resolution): stabilityai/stable-diffusion-2-1-base
- stable-diffusion-2-1 (768x768 resolution): stabilityai/stable-diffusion-2-1
class diffusers.StableDiffusionPipeline
< source >( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (CLIPImageProcessor) — Model that extracts features from generated images to be used as inputs for the `safety_checker`.
Pipeline for text-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
In addition, the pipeline inherits the following loading methods:
- Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion()
- LoRA: loaders.LoraLoaderMixin.load_lora_weights()
- Ckpt: loaders.FromSingleFileMixin.from_single_file()
as well as the following saving method:
- LoRA: loaders.LoraLoaderMixin.save_lora_weights()
__call__
< source >(
prompt: typing.Union[str, typing.List[str]] = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[typing.List[str], str, NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
guidance_rescale: float = 0.0
)
→
StableDiffusionPipelineOutput or tuple
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide the image generation. If not defined, you must pass `prompt_embeds` instead.
- height (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The height in pixels of the generated image.
- width (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The width in pixels of the generated image.
- num_inference_steps (`int`, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. `guidance_scale` is defined as `w` of equation 2 of the Imagen paper. Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation. If not defined, you must pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- num_images_per_prompt (`int`, optional, defaults to 1) — The number of images to generate per prompt.
- eta (`float`, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to DDIMScheduler and is ignored for other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (`torch.FloatTensor`, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (`Callable`, optional) — A function called every `callback_steps` steps during inference with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, optional, defaults to 1) — The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.cross_attention.
- guidance_rescale (`float`, optional, defaults to 0.0) — Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are Flawed. `guidance_rescale` is defined as `φ` in equation 16 of that paper. Guidance rescale should fix overexposure when using zero terminal SNR.
Returns
StableDiffusionPipelineOutput or tuple
StableDiffusionPipelineOutput if `return_dict` is `True`, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`.
Function invoked when calling the pipeline for generation.
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
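The call arguments compose freely. As a minimal sketch continuing the example above (the seed, negative prompt, and step count are arbitrary illustrations), a seeded `generator` plus a `negative_prompt` makes a run reproducible while steering it away from unwanted traits:
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> image = pipe(
...     prompt,
...     negative_prompt="blurry, low quality",  # hypothetical negative prompt
...     num_inference_steps=30,
...     generator=generator,  # fixed seed makes the output deterministic
... ).images[0]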
enable_attention_slicing
< source >( slice_size: typing.Union[str, int, NoneType] = 'auto' )
Parameters
- slice_size (`str` or `int`, optional, defaults to `"auto"`) — When `"auto"`, halves the input to the attention heads, so attention is computed in two steps. If `"max"`, the maximum amount of memory is saved by running only one slice at a time. If a number is provided, as many slices as `attention_head_dim // slice_size` are used. In this case, `attention_head_dim` must be a multiple of `slice_size`.
Enable sliced attention computation.
When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
disable_attention_slicing
Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is computed in one step.
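A minimal usage sketch, assuming the `pipe` and `prompt` from the earlier example:
>>> pipe.enable_attention_slicing()  # or pipe.enable_attention_slicing("max") for maximum memory savings
>>> image = pipe(prompt).images[0]  # attention is computed slice by slice
>>> pipe.disable_attention_slicing()  # restore single-step attention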
enable_vae_slicing
Enable sliced VAE decoding.
When this option is enabled, the VAE splits the input tensor into slices and computes decoding in several steps. This is useful to save some memory and allow larger batch sizes.
disable_vae_slicing
Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method goes back to computing decoding in one step.
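VAE slicing pays off mainly when decoding several images at once. A minimal sketch (the batch size is an arbitrary illustration):
>>> pipe.enable_vae_slicing()
>>> images = pipe(prompt, num_images_per_prompt=4).images  # latents are decoded one image at a time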
enable_xformers_memory_efficient_attention
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (`Callable`, optional) — Override the default `None` operator for use as the `op` argument to the `memory_efficient_attention()` function of xFormers.
Enable memory efficient attention from xFormers.
When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
Disable memory efficient attention from xFormers.
enable_vae_tiling
Enable tiled VAE decoding.
When this option is enabled, the VAE splits the input tensor into tiles to compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow processing larger images.
disable_vae_tiling
Disable tiled VAE decoding. If `enable_vae_tiling` was previously invoked, this method goes back to computing decoding in one step.
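Tiling pairs naturally with large output resolutions. A minimal sketch (the 1024x1024 resolution is an arbitrary illustration; quality at non-native resolutions varies by checkpoint):
>>> pipe.enable_vae_tiling()
>>> image = pipe(prompt, height=1024, width=1024).images[0]  # decoded tile by tile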
load_textual_inversion
< source >( pretrained_model_name_or_path: typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]] token: typing.Union[str, typing.List[str], NoneType] = None **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) — Can be either one of the following or a list of them:
  - A string, the model id (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a pretrained model hosted on the Hub.
  - A path to a directory (for example `./my_text_inversion_directory/`) containing the textual inversion weights.
  - A path to a file (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A torch state dict.
- token (`str` or `List[str]`, optional) — Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a list, then `token` must also be a list of equal length.
- weight_name (`str`, optional) — Name of a custom weight file. This should be used when:
  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- cache_dir (`Union[str, os.PathLike]`, optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to resume downloading the model weights and configuration files. If set to `False`, any incompletely downloaded files are deleted.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- local_files_only (`bool`, optional, defaults to `False`) — Whether to only load local model weights and configuration files. If set to `True`, the model won't be downloaded from the Hub.
- use_auth_token (`str` or `bool`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used.
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (`str`, optional, defaults to `""`) — The subfolder location of a model file within a larger model repository on the Hub or locally.
- mirror (`str`, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source; refer to the mirror site for more information.
Load textual inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).
Example:
To load a textual inversion embedding vector in 🤗 Diffusers format:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
To load a textual inversion embedding vector in Automatic1111 format, first download the vector (for example from civitAI), then load it locally:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
from_single_file
< source >( pretrained_model_link_or_path **kwargs )
Parameters
- pretrained_model_link_or_path (`str` or `os.PathLike`, optional) — Can be either:
  - A link to the `.ckpt` file (for example `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub.
  - A path to a file containing all pipeline weights.
- torch_dtype (`str` or `torch.dtype`, optional) — Override the default `torch.dtype` and load the model with another dtype. If `"auto"` is passed, the dtype is automatically derived from the model's weights.
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- cache_dir (`Union[str, os.PathLike]`, optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to resume downloading the model weights and configuration files. If set to `False`, any incompletely downloaded files are deleted.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- local_files_only (`bool`, optional, defaults to `False`) — Whether to only load local model weights and configuration files. If set to `True`, the model won't be downloaded from the Hub.
- use_auth_token (`str` or `bool`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used.
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- use_safetensors (`bool`, optional, defaults to `None`) — If set to `None`, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded.
- extract_ema (`bool`, optional, defaults to `False`) — Whether to extract the EMA weights or not. Pass `True` to extract the EMA weights, which usually yield higher quality images for inference. Non-EMA weights are usually better for continued finetuning.
- upcast_attention (`bool`, optional, defaults to `None`) — Whether the attention computation should always be upcasted.
- image_size (`int`, optional, defaults to 512) — The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
- prediction_type (`str`, optional) — The prediction type the model was trained on. Use `'epsilon'` for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use `'v_prediction'` for Stable Diffusion v2.
- num_in_channels (`int`, optional, defaults to `None`) — The number of input channels. If `None`, it is automatically inferred.
- scheduler_type (`str`, optional, defaults to `"pndm"`) — Type of scheduler to use. Should be one of `["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]`.
- load_safety_checker (`bool`, optional, defaults to `True`) — Whether to load the safety checker or not.
- text_encoder (`CLIPTextModel`, optional, defaults to `None`) — An instance of CLIPTextModel to use, specifically the clip-vit-large-patch14 variant. If this parameter is `None`, the function loads a new instance of CLIPTextModel by itself, if needed.
- tokenizer (`CLIPTokenizer`, optional, defaults to `None`) — An instance of CLIPTokenizer to use. If this parameter is `None`, the function loads a new instance of CLIPTokenizer by itself, if needed.
- kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite loadable and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline's `__init__` method. See example below for more information.
Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the `.ckpt` format. The pipeline is set in evaluation mode (`model.eval()`) by default.
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )
>>> # Load pipeline from a local file
>>> # (assumes the checkpoint was downloaded to ./v1-5-pruned-emaonly.ckpt)
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")
>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
... torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
load_lora_weights
< source >( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`) — Can be either:
  - A string, the model id (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub.
  - A path to a directory (for example `./my_model_directory`) containing the model weights saved with ModelMixin.save_pretrained().
  - A torch state dict.
- cache_dir (`Union[str, os.PathLike]`, optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to resume downloading the model weights and configuration files. If set to `False`, any incompletely downloaded files are deleted.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- local_files_only (`bool`, optional, defaults to `False`) — Whether to only load local model weights and configuration files. If set to `True`, the model won't be downloaded from the Hub.
- use_auth_token (`str` or `bool`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used.
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (`str`, optional, defaults to `""`) — The subfolder location of a model file within a larger model repository on the Hub or locally.
- mirror (`str`, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source; refer to the mirror site for more information.
Load pretrained LoRA attention processor layers into UNet2DConditionModel and CLIPTextModel.
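A minimal usage sketch (the local directory name and the `scale` value are illustrative assumptions; a Hub model id works as well):
>>> pipe.load_lora_weights("./my_lora_directory")  # hypothetical directory containing LoRA weights
>>> image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]  # scale the LoRA contribution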
save_lora_weights
< source >( save_directory: typing.Union[str, os.PathLike] unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None is_main_process: bool = True weight_name: str = None save_function: typing.Callable = None safe_serialization: bool = False )
Parameters
- save_directory (`str` or `os.PathLike`) — Directory to save LoRA parameters to. Will be created if it doesn't exist.
- unet_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) — State dict of the LoRA layers corresponding to the UNet.
- text_encoder_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) — State dict of the LoRA layers corresponding to the `text_encoder`. The text encoder LoRA state dict must be passed explicitly because it comes from 🤗 Transformers.
- is_main_process (`bool`, optional, defaults to `True`) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions.
- save_function (`Callable`) — The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`.
Save the LoRA parameters corresponding to the UNet and text encoder.
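A sketch of the call shape (assumes `unet_lora_state` and `text_encoder_lora_state` are LoRA state dicts produced elsewhere, e.g. by a training loop):
>>> StableDiffusionPipeline.save_lora_weights(
...     save_directory="./my_lora",  # created if it doesn't exist
...     unet_lora_layers=unet_lora_state,
...     text_encoder_lora_layers=text_encoder_lora_state,
... )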
enable_model_cpu_offload
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
enable_sequential_cpu_offload
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the `unet`, `text_encoder`, `vae`, and safety checker have their state dicts saved to CPU, are moved to `torch.device('meta')`, and are loaded onto the GPU only when their specific submodule has its `forward` method called. Note that offloading happens on a submodule basis. Memory savings are higher than with `enable_model_cpu_offload`, but performance is lower.
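Both offloading strategies are one-line opt-ins. A minimal sketch (requires accelerate to be installed; when offloading is enabled, do not move the pipeline to CUDA yourself):
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()  # or pipe.enable_sequential_cpu_offload() for maximum savings
>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]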
class diffusers.FlaxStableDiffusionPipeline
< source >( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim_flax.FlaxDDIMScheduler, diffusers.schedulers.scheduling_pndm_flax.FlaxPNDMScheduler, diffusers.schedulers.scheduling_lms_discrete_flax.FlaxLMSDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep_flax.FlaxDPMSolverMultistepScheduler] safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = <class 'jax.numpy.float32'> )
Parameters
- vae (FlaxAutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (FlaxCLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- unet (FlaxUNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of `FlaxDDIMScheduler`, `FlaxLMSDiscreteScheduler`, `FlaxPNDMScheduler`, or `FlaxDPMSolverMultistepScheduler`.
- safety_checker (`FlaxStableDiffusionSafetyChecker`) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (`CLIPImageProcessor`) — Model that extracts features from generated images to be used as inputs for the `safety_checker`.
Pipeline for text-to-image generation using Stable Diffusion.
This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__
< source >(
prompt_ids: array
params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict]
prng_seed: PRNGKeyArray
num_inference_steps: int = 50
height: typing.Optional[int] = None
width: typing.Optional[int] = None
guidance_scale: typing.Union[float, array] = 7.5
latents: array = None
neg_prompt_ids: array = None
return_dict: bool = True
jit: bool = False
)
→
FlaxStableDiffusionPipelineOutput
or tuple
Parameters
- prompt (`str` or `List[str]`) — The prompt or prompts to guide the image generation.
- height (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The height in pixels of the generated image.
- width (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The width in pixels of the generated image.
- num_inference_steps (`int`, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. `guidance_scale` is defined as `w` of equation 2 of the Imagen paper. Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
- latents (`jnp.array`, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling with the supplied random `prng_seed`.
- jit (`bool`, defaults to `False`) — Whether to run `pmap` versions of the generation and safety scoring functions. NOTE: This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a future release.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return a `FlaxStableDiffusionPipelineOutput` instead of a plain tuple.
Returns
`FlaxStableDiffusionPipelineOutput` or `tuple`
`FlaxStableDiffusionPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`.
Function invoked when calling the pipeline for generation.
Examples:
>>> import jax
>>> import numpy as np
>>> from flax.jax_utils import replicate
>>> from flax.training.common_utils import shard
>>> from diffusers import FlaxStableDiffusionPipeline
>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
... )
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> prng_seed = jax.random.PRNGKey(0)
>>> num_inference_steps = 50
>>> num_samples = jax.device_count()
>>> prompt = num_samples * [prompt]
>>> prompt_ids = pipeline.prepare_inputs(prompt)
>>> # shard inputs and rng
>>> params = replicate(params)
>>> prng_seed = jax.random.split(prng_seed, jax.device_count())
>>> prompt_ids = shard(prompt_ids)
>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))