VAE Image Processor
The `VaeImageProcessor` provides a unified API for `StableDiffusionPipeline`s to prepare image inputs for VAE encoding and to postprocess outputs once they're decoded. This includes transformations such as resizing, normalization, and conversion between PIL images, PyTorch tensors, and NumPy arrays.

All pipelines with a `VaeImageProcessor` accept PIL images, PyTorch tensors, or NumPy arrays as image inputs and return outputs based on the `output_type` argument set by the user. You can pass encoded image latents directly to a pipeline, and you can have a pipeline return latents as its output with the `output_type` argument (for example, `output_type="latent"`). This allows you to take the generated latents from one pipeline and pass them to another as input without leaving the latent space. It also makes it much easier to chain multiple pipelines by passing PyTorch tensors directly between them.
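Under the hood, moving between the NumPy/PIL channels-last layout and the PyTorch channels-first layout is a simple transpose. A minimal NumPy sketch of that layout change (an illustration, not the library's code):

```python
import numpy as np

def numpy_to_pt_layout(images: np.ndarray) -> np.ndarray:
    """Rearrange a batch from NHWC (NumPy/PIL convention) to NCHW (PyTorch convention)."""
    return images.transpose(0, 3, 1, 2)

def pt_to_numpy_layout(images: np.ndarray) -> np.ndarray:
    """Rearrange a batch from NCHW back to NHWC."""
    return images.transpose(0, 2, 3, 1)

batch = np.zeros((2, 64, 64, 3), dtype=np.float32)  # 2 RGB images, 64x64
pt_like = numpy_to_pt_layout(batch)
print(pt_like.shape)  # (2, 3, 64, 64)
```

The real `numpy_to_pt` and `pt_to_numpy` methods additionally convert between `np.ndarray` and `torch.Tensor`; only the axis reordering is shown here.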
VaeImageProcessor
class diffusers.image_processor.VaeImageProcessor
< source >( do_resize: bool = True vae_scale_factor: int = 8 vae_latent_channels: int = 4 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False )
Parameters
- `do_resize` (`bool`, *optional*, defaults to `True`) — Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept `height` and `width` arguments from the `image_processor.VaeImageProcessor.preprocess()` method.
- `vae_scale_factor` (`int`, *optional*, defaults to `8`) — VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- `vae_latent_channels` (`int`, *optional*, defaults to `4`) — The number of channels in the VAE latents.
- `resample` (`str`, *optional*, defaults to `lanczos`) — Resampling filter to use when resizing the image.
- `do_normalize` (`bool`, *optional*, defaults to `True`) — Whether to normalize the image to [-1,1].
- `do_binarize` (`bool`, *optional*, defaults to `False`) — Whether to binarize the image to 0/1.
- `do_convert_rgb` (`bool`, *optional*, defaults to `False`) — Whether to convert the images to RGB format.
- `do_convert_grayscale` (`bool`, *optional*, defaults to `False`) — Whether to convert the images to grayscale format.
Image processor for VAE.
apply_overlay
< source >( mask: Image init_image: Image image: Image crop_coords: typing.Optional[typing.Tuple[int, int, int, int]] = None ) → PIL.Image.Image
Parameters
- `mask` (`PIL.Image.Image`) — The mask image that highlights regions to overlay.
- `init_image` (`PIL.Image.Image`) — The original image to which the overlay is applied.
- `image` (`PIL.Image.Image`) — The image to overlay onto the original.
- `crop_coords` (`Tuple[int, int, int, int]`, *optional*) — Coordinates to crop the image. If provided, the image is cropped accordingly.
Returns
PIL.Image.Image
The final image with the overlay applied.
Applies an overlay of the mask and the inpainted image on the original image.
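The overlay itself is standard mask compositing: where the mask is 1 the inpainted image wins, and where it is 0 the original is kept. A minimal NumPy sketch of that blend (the real method operates on PIL images and also handles crop coordinates):

```python
import numpy as np

def apply_overlay_sketch(mask: np.ndarray, init_image: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Composite `image` over `init_image` where `mask` is 1.
    Arrays are floats in [0, 1]; images have shape (H, W, C), mask has shape (H, W, 1)."""
    return init_image * (1.0 - mask) + image * mask

init = np.zeros((2, 2, 3))          # black original
over = np.ones((2, 2, 3))           # white inpainted image
mask = np.zeros((2, 2, 1))
mask[0, 0, 0] = 1.0                 # overlay only the top-left pixel
out = apply_overlay_sketch(mask, init, over)
print(out[0, 0, 0], out[1, 1, 0])   # 1.0 0.0
```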
binarize
< source >( image: Image ) → PIL.Image.Image
Create a binary mask by thresholding the image: pixel values below 0.5 become 0 and values of 0.5 or above become 1.
blur
< source >( image: Image blur_factor: int = 4 ) → PIL.Image.Image
Applies Gaussian blur to an image.
convert_to_grayscale
< source >( image: Image ) → PIL.Image.Image
Converts a given PIL image to grayscale.
convert_to_rgb
< source >( image: Image ) → PIL.Image.Image
Converts a PIL image to RGB format.
denormalize
< source >( images: typing.Union[numpy.ndarray, torch.Tensor] ) → np.ndarray or torch.Tensor

Denormalize an image array to [0,1].
get_crop_region
< source >( mask_image: Image width: int height: int pad = 0 ) → tuple
Parameters
- mask_image (PIL.Image.Image) — Mask image.
- width (int) — Width of the image to be processed.
- height (int) — Height of the image to be processed.
- pad (int, optional) — Padding to be added to the crop region. Defaults to 0.
Returns
tuple
A tuple (x1, y1, x2, y2) representing a rectangular region that contains all masked areas in the image and matches the original aspect ratio.

Finds a rectangular region that contains all masked areas in an image, and expands the region to match the aspect ratio of the original image; for example, if the user drew a mask in a 128x32 region and the dimensions for processing are 512x512, the region is expanded to 128x128.
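The first step of this search is an ordinary bounding-box computation over the mask's nonzero pixels, padded and clamped to the image bounds. A NumPy sketch of that step (the aspect-ratio expansion the method then performs is omitted here):

```python
import numpy as np

def crop_region_sketch(mask: np.ndarray, pad: int = 0):
    """Bounding box (x1, y1, x2, y2) of the nonzero pixels in a 2D mask,
    expanded by `pad` and clamped to the image bounds."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + 1 + pad, w), min(int(ys.max()) + 1 + pad, h))

mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:12, 4:20] = 1                 # masked strip: rows 8-11, columns 4-19
print(crop_region_sketch(mask))      # (4, 8, 20, 12)
```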
get_default_height_width
< source >( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor] height: typing.Optional[int] = None width: typing.Optional[int] = None ) → Tuple[int, int]
Parameters
- `image` (`Union[PIL.Image.Image, np.ndarray, torch.Tensor]`) — The image input, which can be a PIL image, NumPy array, or PyTorch tensor. A NumPy array should have shape `[batch, height, width]` or `[batch, height, width, channels]`; a PyTorch tensor should have shape `[batch, channels, height, width]`.
- `height` (`Optional[int]`, *optional*, defaults to `None`) — The height of the preprocessed image. If `None`, the height of the `image` input is used.
- `width` (`Optional[int]`, *optional*, defaults to `None`) — The width of the preprocessed image. If `None`, the width of the `image` input is used.
Returns
Tuple[int, int]

A tuple containing the height and width, both rounded down to the nearest integer multiple of `vae_scale_factor`.

Returns the height and width of the image, downscaled to the nearest integer multiple of `vae_scale_factor`.
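The rounding itself is plain modular arithmetic. A sketch, assuming the default `vae_scale_factor` of 8:

```python
def default_height_width_sketch(height: int, width: int, vae_scale_factor: int = 8):
    """Round height and width down to the nearest multiple of vae_scale_factor,
    mirroring the adjustment get_default_height_width applies to the input dimensions."""
    return (height - height % vae_scale_factor, width - width % vae_scale_factor)

print(default_height_width_sketch(515, 768))  # (512, 768)
```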
normalize
< source >( images: typing.Union[numpy.ndarray, torch.Tensor] ) → np.ndarray or torch.Tensor

Normalize an image array to [-1,1].
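`normalize` and its counterpart `denormalize` are simple affine rescalings between the [0, 1] pixel range and the [-1, 1] range the VAE expects. A NumPy sketch of both:

```python
import numpy as np

def normalize(images: np.ndarray) -> np.ndarray:
    """Map pixel values from [0, 1] to [-1, 1]."""
    return 2.0 * images - 1.0

def denormalize(images: np.ndarray) -> np.ndarray:
    """Map values from [-1, 1] back to [0, 1], clipping any decoder overshoot."""
    return np.clip(images / 2.0 + 0.5, 0.0, 1.0)

x = np.array([0.0, 0.5, 1.0])
roundtrip = denormalize(normalize(x))  # recovers [0.0, 0.5, 1.0]
```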
numpy_to_pil
< source >( images: ndarray ) → List[PIL.Image.Image]
Convert a numpy image or a batch of images to a PIL image.
numpy_to_pt
< source >( images: ndarray ) → torch.Tensor
Convert a NumPy image to a PyTorch tensor.
pil_to_numpy
< source >( images: typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image] ) → np.ndarray
Convert a PIL image or a list of PIL images to NumPy arrays.
postprocess
< source >( image: Tensor output_type: str = 'pil' do_denormalize: typing.Optional[typing.List[bool]] = None ) → PIL.Image.Image, np.ndarray or torch.Tensor
Parameters
- `image` (`torch.Tensor`) — The image input; should be a PyTorch tensor with shape `B x C x H x W`.
- `output_type` (`str`, *optional*, defaults to `pil`) — The output type of the image; can be one of `pil`, `np`, `pt`, or `latent`.
- `do_denormalize` (`List[bool]`, *optional*, defaults to `None`) — Whether to denormalize the image to [0,1]. If `None`, the value of `do_normalize` in the `VaeImageProcessor` config is used.
Returns
PIL.Image.Image, np.ndarray or torch.Tensor

The postprocessed image.

Postprocess the image output from tensor to `output_type`.
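For the `np` output type, postprocessing amounts to denormalizing and moving channels last. A simplified NumPy sketch of that path (the real method also produces `pil`, `pt`, and `latent` outputs and applies `do_denormalize` per image):

```python
import numpy as np

def postprocess_sketch(image: np.ndarray, output_type: str = "np") -> np.ndarray:
    """Sketch of the tensor -> image path: denormalize from [-1, 1] to [0, 1],
    then reorder NCHW -> NHWC."""
    image = np.clip(image / 2.0 + 0.5, 0.0, 1.0)   # denormalize
    image = image.transpose(0, 2, 3, 1)            # NCHW -> NHWC
    if output_type == "np":
        return image
    raise NotImplementedError(f"sketch only supports output_type='np', got {output_type!r}")

out = postprocess_sketch(np.zeros((1, 3, 8, 8), dtype=np.float32))
print(out.shape)  # (1, 8, 8, 3)
```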
preprocess
< source >( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] height: typing.Optional[int] = None width: typing.Optional[int] = None resize_mode: str = 'default' crops_coords: typing.Optional[typing.Tuple[int, int, int, int]] = None ) → torch.Tensor
Parameters
- `image` (`PipelineImageInput`) — The image input. Accepted formats are PIL images, NumPy arrays, and PyTorch tensors, as well as lists of any of these.
- `height` (`int`, *optional*) — The height of the preprocessed image. If `None`, the `get_default_height_width()` method is used to get the default height.
- `width` (`int`, *optional*) — The width of the preprocessed image. If `None`, the `get_default_height_width()` method is used to get the default width.
- `resize_mode` (`str`, *optional*, defaults to `default`) — The resize mode; can be one of `default`, `fill`, or `crop`. If `default`, the image is resized to fit within the specified width and height, and the original aspect ratio may not be maintained. If `fill`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions, with the empty space filled with data from the image. If `crop`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions, cropping the excess. Note that the `fill` and `crop` modes are only supported for PIL image input.
- `crops_coords` (`Tuple[int, int, int, int]`, *optional*, defaults to `None`) — The crop coordinates. If `None`, the image is not cropped.
Returns
torch.Tensor
The preprocessed image.
Preprocess the image input.
pt_to_numpy
< source >( images: Tensor ) → np.ndarray
Convert a PyTorch tensor to a NumPy image.
resize
< source >( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor] height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor
Parameters
- `image` (`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`) — The image input; can be a PIL image, NumPy array, or PyTorch tensor.
- `height` (`int`) — The height to resize to.
- `width` (`int`) — The width to resize to.
- `resize_mode` (`str`, *optional*, defaults to `default`) — The resize mode to use; can be one of `default`, `fill`, or `crop`. If `default`, the image is resized to fit within the specified width and height, and the original aspect ratio may not be maintained. If `fill`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions, with the empty space filled with data from the image. If `crop`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions, cropping the excess. Note that the `fill` and `crop` modes are only supported for PIL image input.
Returns
PIL.Image.Image, np.ndarray or torch.Tensor

The resized image.
Resize image.
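The difference between the aspect-preserving modes is easiest to see in the scale factor they pick: `fill` scales the image to fit *inside* the target box (then pads), while `crop` scales it to *cover* the box (then center-crops). A geometry-only sketch of that choice (not the PIL-based implementation):

```python
def resize_geometry_sketch(src_w: int, src_h: int, dst_w: int, dst_h: int,
                           resize_mode: str = "fill"):
    """Compute the intermediate (width, height) for an aspect-preserving resize
    into a dst_w x dst_h box."""
    scale_fit = min(dst_w / src_w, dst_h / src_h)    # image fits inside the box
    scale_cover = max(dst_w / src_w, dst_h / src_h)  # image covers the box
    scale = scale_fit if resize_mode == "fill" else scale_cover
    return round(src_w * scale), round(src_h * scale)

print(resize_geometry_sketch(400, 200, 512, 512, "fill"))  # (512, 256)
print(resize_geometry_sketch(400, 200, 512, 512, "crop"))  # (1024, 512)
```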
VaeImageProcessorLDM3D
The `VaeImageProcessorLDM3D` accepts RGB and depth inputs and returns RGB and depth outputs.
class diffusers.image_processor.VaeImageProcessorLDM3D
< source >( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True )
Parameters
- `do_resize` (`bool`, *optional*, defaults to `True`) — Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- `vae_scale_factor` (`int`, *optional*, defaults to `8`) — VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- `resample` (`str`, *optional*, defaults to `lanczos`) — Resampling filter to use when resizing the image.
- `do_normalize` (`bool`, *optional*, defaults to `True`) — Whether to normalize the image to [-1,1].
Image processor for VAE LDM3D.
depth_pil_to_numpy
< source >( images: typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image] ) → np.ndarray
Convert a PIL image or a list of PIL images to NumPy arrays.
numpy_to_depth
< source >( images: ndarray ) → List[PIL.Image.Image]
Convert a NumPy depth image or a batch of images to a list of PIL images.
numpy_to_pil
< source >( images: ndarray ) → List[PIL.Image.Image]
Convert a NumPy image or a batch of images to a list of PIL images.
preprocess
< source >( rgb: typing.Union[torch.Tensor, PIL.Image.Image, numpy.ndarray] depth: typing.Union[torch.Tensor, PIL.Image.Image, numpy.ndarray] height: typing.Optional[int] = None width: typing.Optional[int] = None target_res: typing.Optional[int] = None ) → Tuple[torch.Tensor, torch.Tensor]
Parameters
- `rgb` (`Union[torch.Tensor, PIL.Image.Image, np.ndarray]`) — The RGB input image, which can be a single image or a batch.
- `depth` (`Union[torch.Tensor, PIL.Image.Image, np.ndarray]`) — The depth input image, which can be a single image or a batch.
- `height` (`Optional[int]`, *optional*, defaults to `None`) — The desired height of the processed image. If `None`, defaults to the height of the input image.
- `width` (`Optional[int]`, *optional*, defaults to `None`) — The desired width of the processed image. If `None`, defaults to the width of the input image.
- `target_res` (`Optional[int]`, *optional*, defaults to `None`) — Target resolution for resizing the images. If specified, overrides `height` and `width`.
Returns
Tuple[torch.Tensor, torch.Tensor]
A tuple containing the processed RGB and depth images as PyTorch tensors.
Preprocess the image input. Accepted formats are PIL images, NumPy arrays, or PyTorch tensors.
rgblike_to_depthmap
< source >( image: typing.Union[numpy.ndarray, torch.Tensor] ) → Union[np.ndarray, torch.Tensor]
Convert an RGB-like depth image to a depth map.
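LDM3D packs a 16-bit depth value into two 8-bit color channels; decoding recombines them. A NumPy sketch, assuming the G channel holds the high byte and the B channel the low byte (the channel assignment is an assumption for this illustration):

```python
import numpy as np

def rgblike_to_depthmap_sketch(image: np.ndarray) -> np.ndarray:
    """Decode a 16-bit depth value from an RGB-like uint8 array of shape (H, W, 3):
    depth = G * 256 + B."""
    return image[:, :, 1].astype(np.uint16) * 256 + image[:, :, 2].astype(np.uint16)

rgb = np.zeros((1, 1, 3), dtype=np.uint8)
rgb[0, 0] = [0, 2, 5]                      # G=2, B=5 -> depth 2*256 + 5
print(rgblike_to_depthmap_sketch(rgb))     # [[517]]
```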
PixArtImageProcessor
class diffusers.image_processor.PixArtImageProcessor
< source >( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_grayscale: bool = False )
Parameters
- `do_resize` (`bool`, *optional*, defaults to `True`) — Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept `height` and `width` arguments from the `image_processor.VaeImageProcessor.preprocess()` method.
- `vae_scale_factor` (`int`, *optional*, defaults to `8`) — VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- `resample` (`str`, *optional*, defaults to `lanczos`) — Resampling filter to use when resizing the image.
- `do_normalize` (`bool`, *optional*, defaults to `True`) — Whether to normalize the image to [-1,1].
- `do_binarize` (`bool`, *optional*, defaults to `False`) — Whether to binarize the image to 0/1.
- `do_convert_rgb` (`bool`, *optional*, defaults to `False`) — Whether to convert the images to RGB format.
- `do_convert_grayscale` (`bool`, *optional*, defaults to `False`) — Whether to convert the images to grayscale format.
Image processor for PixArt image resize and crop.
classify_height_width_bin
< source >( height: int width: int ratios: dict ) → Tuple[int, int]
Returns the binned height and width based on the aspect ratio.
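Binning picks the predefined (height, width) pair whose aspect ratio is closest to the input's. A sketch assuming `ratios` maps a float height/width ratio to a bin size (the key format in the library's aspect-ratio tables may differ):

```python
def classify_bin_sketch(height: int, width: int, ratios: dict):
    """Return the (height, width) bin whose aspect ratio is closest to height/width."""
    aspect = height / width
    closest = min(ratios, key=lambda r: abs(r - aspect))
    h, w = ratios[closest]
    return int(h), int(w)

bins = {0.5: (512, 1024), 1.0: (1024, 1024), 2.0: (1024, 512)}
print(classify_bin_sketch(900, 1000, bins))  # (1024, 1024)
```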
resize_and_crop_tensor
< source >( samples: Tensor new_width: int new_height: int ) → torch.Tensor
Parameters
- `samples` (`torch.Tensor`) — A tensor of shape `(N, C, H, W)` where N is the batch size, C is the number of channels, H is the height, and W is the width.
- `new_width` (`int`) — The desired width of the output images.
- `new_height` (`int`) — The desired height of the output images.
Returns
torch.Tensor
A tensor containing the resized and cropped images.
Resizes and crops a tensor of images to the specified dimensions.
IPAdapterMaskProcessor
class diffusers.image_processor.IPAdapterMaskProcessor
< source >( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = False do_binarize: bool = True do_convert_grayscale: bool = True )
Parameters
- `do_resize` (`bool`, *optional*, defaults to `True`) — Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- `vae_scale_factor` (`int`, *optional*, defaults to `8`) — VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- `resample` (`str`, *optional*, defaults to `lanczos`) — Resampling filter to use when resizing the image.
- `do_normalize` (`bool`, *optional*, defaults to `False`) — Whether to normalize the image to [-1,1].
- `do_binarize` (`bool`, *optional*, defaults to `True`) — Whether to binarize the image to 0/1.
- `do_convert_grayscale` (`bool`, *optional*, defaults to `True`) — Whether to convert the images to grayscale format.
Image processor for IP Adapter image masks.
downsample
< source >( mask: Tensor batch_size: int num_queries: int value_embed_dim: int ) → torch.Tensor
Parameters
- `mask` (`torch.Tensor`) — The input mask tensor generated with `IPAdapterMaskProcessor.preprocess()`.
- `batch_size` (`int`) — The batch size.
- `num_queries` (`int`) — The number of queries.
- `value_embed_dim` (`int`) — The dimensionality of the value embeddings.
Returns
torch.Tensor
The downsampled mask tensor.
Downsamples the provided mask tensor to match the expected dimensions for scaled dot-product attention. If the aspect ratio of the mask does not match the aspect ratio of the output image, a warning is issued.
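The downsampling first infers a target (height, width) grid such that height × width equals `num_queries` while roughly preserving the mask's aspect ratio, then interpolates the mask to that grid. A sketch of the shape arithmetic only (an approximation of the logic, not the library's exact code):

```python
import math

def downsample_shape_sketch(mask_h: int, mask_w: int, num_queries: int):
    """Infer the (height, width) a mask should be interpolated to so that
    height * width == num_queries while roughly preserving aspect ratio."""
    ratio = mask_w / mask_h
    h = int(math.sqrt(num_queries / ratio))
    h = h + (1 if num_queries % h != 0 else 0)  # bump height if it doesn't divide evenly
    w = num_queries // h
    return h, w

# A square 512x512 mask attended by 4096 queries maps to a 64x64 grid.
print(downsample_shape_sketch(512, 512, 4096))  # (64, 64)
```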