Video Processor

The VideoProcessor provides a unified API for video pipelines to prepare inputs for VAE encoding and to post-process outputs once they're decoded. The class inherits VaeImageProcessor, so it includes transformations such as resizing, normalization, and conversion between PIL images, PyTorch tensors, and NumPy arrays.
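
A minimal round-trip sketch (not part of the reference itself), assuming an illustrative vae_scale_factor of 8 and dummy 256x256 frames, showing the two halves of the API together:

```python
import numpy as np
from PIL import Image
from diffusers.video_processor import VideoProcessor

video_processor = VideoProcessor(vae_scale_factor=8)  # scale factor is an assumption

# 16 dummy RGB frames as PIL images
frames = [
    Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
    for _ in range(16)
]

# Normalized 5D tensor ready to be passed to a VAE encoder
video_tensor = video_processor.preprocess_video(frames, height=256, width=256)

# Convert a (decoded) video tensor back into NumPy frames
video_np = video_processor.postprocess_video(video_tensor, output_type="np")
print(video_tensor.shape, video_np.shape)
```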

VideoProcessor

diffusers.video_processor.VideoProcessor.preprocess_video

( video, height: typing.Optional[int] = None, width: typing.Optional[int] = None )

Parameters

  • video (List[PIL.Image], List[List[PIL.Image]], torch.Tensor, np.array, List[torch.Tensor], List[np.array]) — The input video. It can be one of the following:
    • A list of PIL images.
    • A list of lists of PIL images.
    • A 4D PyTorch tensor (expected shape: (num_frames, num_channels, height, width)).
    • A 4D NumPy array (expected shape: (num_frames, height, width, num_channels)).
    • A list of 4D PyTorch tensors (expected shape for each tensor: (num_frames, num_channels, height, width)).
    • A list of 4D NumPy arrays (expected shape for each array: (num_frames, height, width, num_channels)).
    • A 5D NumPy array (expected shape: (batch_size, num_frames, height, width, num_channels)).
    • A 5D PyTorch tensor (expected shape: (batch_size, num_frames, num_channels, height, width)).
  • height (int, optional, defaults to None) — The height of the preprocessed video frames. If None, the default height is obtained from get_default_height_width().
  • width (int, optional, defaults to None) — The width of the preprocessed video frames. If None, the default width is obtained from get_default_height_width().

Preprocesses input video(s).
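
As a hedged sketch of the input flexibility listed above, a 5D NumPy array can be passed directly; with height and width left as None, get_default_height_width() derives them from the input size (the array dimensions and vae_scale_factor here are illustrative):

```python
import numpy as np
from diffusers.video_processor import VideoProcessor

video_processor = VideoProcessor(vae_scale_factor=8)  # scale factor is an assumption

# One clip of 8 frames, float values in [0, 1],
# shape (batch_size, num_frames, height, width, num_channels)
np_video = np.random.rand(1, 8, 240, 320, 3).astype(np.float32)

# height/width default to the input size (adjusted to the VAE scale factor)
video_tensor = video_processor.preprocess_video(np_video)
print(video_tensor.shape)  # expected: (1, 3, 8, 240, 320)
```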

diffusers.video_processor.VideoProcessor.postprocess_video

( video: Tensor, output_type: str = 'np' )

Parameters

  • video (torch.Tensor) — The video as a tensor.
  • output_type (str, defaults to "np") — Output type of the postprocessed video tensor.

Converts a video tensor to a list of frames for export.
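
A minimal post-processing sketch, assuming a decoded tensor shaped (batch_size, num_channels, num_frames, height, width) with values in [-1, 1]; the random tensor and output file name are illustrative, and export_to_video comes from diffusers.utils:

```python
import torch
from diffusers.video_processor import VideoProcessor
from diffusers.utils import export_to_video

video_processor = VideoProcessor(vae_scale_factor=8)  # scale factor is an assumption

# Stand-in for a VAE-decoded video: (batch_size, num_channels, num_frames, height, width)
decoded = torch.rand(1, 3, 16, 256, 256) * 2 - 1

# output_type="pil" returns one list of PIL frames per batch item
frames = video_processor.postprocess_video(decoded, output_type="pil")[0]
export_to_video(frames, "sample.mp4", fps=8)  # file name is illustrative
```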
