---
base_model:
  - rain1011/pyramid-flow-sd3
pipeline_tag: text-to-video
library_name: diffusers
---

Converted to bfloat16 from rain1011/pyramid-flow-sd3. Use the text encoders and tokenizers from that repo (or from SD3); there is no point re-uploading them unchanged.
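For reference, here is a rough sketch of assembling a local checkpoint directory that combines the bf16 weights here with the text encoders and tokenizers from the original repo. The repo id of this conversion and the subfolder names in `ignore_patterns` are assumptions; check the file listings of both repos before running.

```python
from huggingface_hub import snapshot_download

local_dir = "pyramid-flow-sd3"

# Text encoders, tokenizers, VAE, scheduler config, etc. from the original repo.
# The transformer folder names below are assumptions -- skipped here to avoid
# downloading the full-precision weights you won't use.
snapshot_download(
    "rain1011/pyramid-flow-sd3",
    local_dir=local_dir,
    ignore_patterns=["diffusion_transformer_384p/*", "diffusion_transformer_768p/*"],
)

# The bf16 transformer weights, dropped into the same directory
# (replace with this repo's actual id).
snapshot_download(
    "SeanScripts/pyramid-flow-sd3-bf16",
    local_dir=local_dir,
)
```

Point the Pyramid-Flow inference code at `local_dir` as the checkpoint path.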

Inference code is available here: [github.com/jy0205/Pyramid-Flow](https://github.com/jy0205/Pyramid-Flow).

Both 384p and 768p work on 24 GB VRAM. For 16 steps (a 5-second video), 384p takes a little over a minute on a 3090 and 768p takes about 7 minutes. For 31 steps (a 10-second video), 384p takes about 10 minutes.
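For orientation, below is a condensed text-to-video sketch adapted from the usage example in the Pyramid-Flow repo. `PyramidDiTForVideoGeneration`, `model_variant`, `temp`, and the per-stage step lists follow that example and may differ between versions, so treat this as an outline rather than the exact API.

```python
import torch
from pyramid_dit import PyramidDiTForVideoGeneration  # from github.com/jy0205/Pyramid-Flow
from diffusers.utils import export_to_video

model = PyramidDiTForVideoGeneration(
    "pyramid-flow-sd3",                          # local checkpoint directory
    model_dtype="bf16",
    model_variant="diffusion_transformer_384p",  # or "diffusion_transformer_768p"
)
model.vae.to("cuda")
model.dit.to("cuda")
model.text_encoder.to("cuda")
model.vae.enable_tiling()  # helps keep VAE decoding within 24 GB VRAM

prompt = "A slow pan across a foggy mountain lake at sunrise"
with torch.no_grad(), torch.autocast("cuda", dtype=torch.bfloat16):
    frames = model.generate(
        prompt=prompt,
        num_inference_steps=[20, 20, 20],        # per-pyramid-stage steps
        video_num_inference_steps=[10, 10, 10],
        height=384,
        width=640,
        temp=16,                                 # 16 -> ~5 s video, 31 -> ~10 s
        guidance_scale=9.0,
        video_guidance_scale=5.0,
        output_type="pil",
        save_memory=True,
    )

export_to_video(frames, "text_to_video_sample.mp4", fps=24)
```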

In `diffusion_schedulers/scheduling_flow_matching.py`, in the function `init_sigmas_for_each_stage`, one small change needs to be made.

Change this line:

```python
self.timesteps_per_stage[i_s] = torch.from_numpy(timesteps[:-1])
```

To this:

```python
self.timesteps_per_stage[i_s] = timesteps[:-1]
```

This allows the model to work with newer versions of PyTorch and other libraries than those pinned in the repo's requirements.

Tested working with torch 2.4.1+cu124.