Text-to-image
The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Stable Diffusion is built on latent diffusion, which was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.
The abstract from the paper is:
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!
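For example, the scheduler can be swapped for a faster one, and already-loaded components can be reused in a second pipeline instead of being downloaded twice. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint and a DPMSolverMultistepScheduler (any compatible scheduler works):
>>> import torch
>>> from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")
>>> # Swap the default scheduler for a faster one (speed/quality tradeoff)
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> # Reuse the already-loaded components in a second pipeline without re-downloading
>>> img2img = StableDiffusionImg2ImgPipeline(**pipe.components)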
StableDiffusionPipeline
class diffusers.StableDiffusionPipeline
< source >( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
- unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.
- feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.
Pipeline for text-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods:
- load_textual_inversion() for loading textual inversion embeddings
- load_lora_weights() for loading LoRA weights
- save_lora_weights() for saving LoRA weights
- from_single_file() for loading .ckpt files
__call__
< source >(
prompt: typing.Union[str, typing.List[str]] = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[typing.List[str], str, NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
guidance_rescale: float = 0.0
) → StableDiffusionPipelineOutput or tuple
Parameters
- prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
- generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function called every callback_steps steps during inference with the signature callback(step: int, timestep: int, latents: torch.FloatTensor).
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
- cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
- guidance_rescale (float, optional, defaults to 0.0) — Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are Flawed. Guidance rescale is intended to fix overexposure when using zero terminal SNR.
Returns
StableDiffusionPipelineOutput or tuple
If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
The call function to the pipeline for generation.
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
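The same call accepts the parameters documented above; a minimal sketch using a negative prompt and a seeded generator for reproducible output (the seed and prompts here are arbitrary):
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> image = pipe(
...     prompt,
...     negative_prompt="blurry, low quality",
...     num_inference_steps=30,
...     guidance_scale=7.5,
...     generator=generator,
... ).images[0]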
enable_attention_slicing
< source >( slice_size: typing.Union[str, int, NoneType] = 'auto' )
Parameters
- slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", the maximum amount of memory is saved by running only one slice at a time. If a number is provided, as many slices as attention_head_dim // slice_size are used. In this case, attention_head_dim must be a multiple of slice_size.
Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.
⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won’t need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5",
... torch_dtype=torch.float16,
... use_safetensors=True,
... )
>>> pipe = pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
disable_attention_slicing
Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.
enable_vae_slicing
Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
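A minimal usage sketch, assuming a pipeline loaded as in the examples above; with slicing enabled, a multi-image batch is decoded one image at a time:
>>> pipe.enable_vae_slicing()
>>> images = pipe(prompt, num_images_per_prompt=4).images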
disable_vae_slicing
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method goes back to computing decoding in one step.
enable_xformers_memory_efficient_attention
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
Disable memory efficient attention from xFormers.
enable_vae_tiling
Enable tiled VAE decoding. When this option is enabled, the VAE splits the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
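A minimal usage sketch, assuming a pipeline loaded as in the examples above; tiling matters most when the requested resolution is well above the 512x512 training size:
>>> pipe.enable_vae_tiling()
>>> image = pipe(prompt, height=1024, width=1024).images[0]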
disable_vae_tiling
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method goes back to computing decoding in one step.
load_textual_inversion
< source >( pretrained_model_name_or_path: typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]] token: typing.Union[str, typing.List[str], NoneType] = None **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — Can be either one of the following or a list of them:
  - A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
  - A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
  - A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
  - A torch state dict.
- token (str or List[str], optional) — Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.
- weight_name (str, optional) — Name of a custom weight file. This should be used when:
  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as text_inv.bin.
  - The saved textual inversion file is in the Automatic1111 format.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
- use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
- mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
Load textual inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).
Example:
To load a textual inversion embedding vector in 🤗 Diffusers format:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
To load a textual inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from civitAI) and then load it locally:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
from_single_file
< source >( pretrained_model_link_or_path **kwargs )
Parameters
- pretrained_model_link_or_path (str or os.PathLike, optional) — Can be either:
  - A link to the .ckpt file (for example "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
  - A path to a file containing all pipeline weights.
- torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model’s weights.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
- use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they’re available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
- extract_ema (bool, optional, defaults to False) — Whether to extract the EMA weights or not. Pass True to extract the EMA weights, which usually yield higher quality images for inference. Non-EMA weights are usually better for continuing finetuning.
- upcast_attention (bool, optional, defaults to None) — Whether the attention computation should always be upcasted.
- image_size (int, optional, defaults to 512) — The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
- prediction_type (str, optional) — The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2.
- num_in_channels (int, optional, defaults to None) — The number of input channels. If None, it is automatically inferred.
- scheduler_type (str, optional, defaults to "pndm") — Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"].
- load_safety_checker (bool, optional, defaults to True) — Whether to load the safety checker or not.
- text_encoder (CLIPTextModel, optional, defaults to None) — An instance of CLIPTextModel to use, specifically the clip-vit-large-patch14 variant. If this parameter is None, the function loads a new instance of CLIPTextModel by itself if needed.
- vae (AutoencoderKL, optional, defaults to None) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. If this parameter is None, the function loads a new instance of AutoencoderKL by itself if needed.
- tokenizer (CLIPTokenizer, optional, defaults to None) — An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance of CLIPTokenizer by itself if needed.
- kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline’s __init__ method. See example below for more information.
Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default.
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )
>>> # Load pipeline from a local file
>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")
>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
... torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
load_lora_weights
< source >( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
- kwargs (dict, optional) — See lora_state_dict().
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.
All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
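A minimal usage sketch; the repository id and weight file name below are placeholders for whichever LoRA checkpoint you want to load:
>>> pipe.load_lora_weights("some-user/some-lora-repo", weight_name="pytorch_lora_weights.bin")
>>> image = pipe(prompt).images[0]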
save_lora_weights
< source >( save_directory: typing.Union[str, os.PathLike] unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None is_main_process: bool = True weight_name: str = None save_function: typing.Callable = None safe_serialization: bool = True )
Parameters
- save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
- unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
- text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
- is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes; in this case, set is_main_process=True only on the main process to avoid race conditions.
- save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
- safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
Save the LoRA parameters corresponding to the UNet and text encoder.
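A minimal sketch of the save/load round trip; unet_lora_layers here is assumed to hold the LoRA layer state dict produced by your training loop:
>>> pipe.save_lora_weights(
...     save_directory="./my-lora",
...     unet_lora_layers=unet_lora_layers,
...     safe_serialization=True,
... )
>>> pipe.load_lora_weights("./my-lora")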
enable_model_cpu_offload
Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
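A minimal usage sketch (requires the accelerate library); note the pipeline is not moved to "cuda" manually, since offloading manages device placement itself:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()
>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]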
StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source >( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )
Parameters
- images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
- nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
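A short sketch of consuming this output; with the default return_dict=True the pipeline returns this class, while return_dict=False yields the equivalent plain tuple:
>>> output = pipe(prompt)
>>> image = output.images[0]
>>> flagged = output.nsfw_content_detected[0]
>>> # The same data as a plain tuple
>>> images, nsfw_flags = pipe(prompt, return_dict=False)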
FlaxStableDiffusionPipeline
class diffusers.FlaxStableDiffusionPipeline
< source >( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim_flax.FlaxDDIMScheduler, diffusers.schedulers.scheduling_pndm_flax.FlaxPNDMScheduler, diffusers.schedulers.scheduling_lms_discrete_flax.FlaxLMSDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep_flax.FlaxDPMSolverMultistepScheduler] safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = <class 'jax.numpy.float32'> )
Parameters
- vae (FlaxAutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (FlaxCLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
- unet (FlaxUNet2DConditionModel) — A FlaxUNet2DConditionModel to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or FlaxDPMSolverMultistepScheduler.
- safety_checker (FlaxStableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.
- feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.
Flax-based pipeline for text-to-image generation using Stable Diffusion.
This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__
< source >(
prompt_ids: array
params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict]
prng_seed: PRNGKeyArray
num_inference_steps: int = 50
height: typing.Optional[int] = None
width: typing.Optional[int] = None
guidance_scale: typing.Union[float, array] = 7.5
latents: array = None
neg_prompt_ids: array = None
return_dict: bool = True
jit: bool = False
) → FlaxStableDiffusionPipelineOutput or tuple
Parameters
- prompt (str or List[str], optional) — The prompt or prompts to guide image generation.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- latents (jnp.array, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents array is generated by sampling using the supplied random generator.
- jit (bool, defaults to False) — Whether to run pmap versions of the generation and safety scoring functions. This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release.
- return_dict (bool, optional, defaults to True) — Whether or not to return a FlaxStableDiffusionPipelineOutput instead of a plain tuple.
Returns
FlaxStableDiffusionPipelineOutput or tuple
If return_dict is True, FlaxStableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
The call function to the pipeline for generation.
Examples:
>>> import jax
>>> import numpy as np
>>> from flax.jax_utils import replicate
>>> from flax.training.common_utils import shard
>>> from diffusers import FlaxStableDiffusionPipeline
>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
... )
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> prng_seed = jax.random.PRNGKey(0)
>>> num_inference_steps = 50
>>> num_samples = jax.device_count()
>>> prompt = num_samples * [prompt]
>>> prompt_ids = pipeline.prepare_inputs(prompt)
>>> # shard inputs and rng
>>> params = replicate(params)
>>> prng_seed = jax.random.split(prng_seed, jax.device_count())
>>> prompt_ids = shard(prompt_ids)
>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
FlaxStableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput
< source >( images: ndarray nsfw_content_detected: typing.List[bool] )
Output class for Flax-based Stable Diffusion pipelines.
replace
Returns a new object replacing the specified fields with new values.