Expected speed

#3 opened by adamo1139

Hi,

What inference speed should one expect on various GPUs? I ran it on a single A100 80 GB using a Gradio app built around the provided code and saw around one iteration per 28.5 s, which at 100 steps works out to roughly 48 minutes per single 6 s video. That's without CPU offloading. A rough sketch of what the app wraps is below.
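This is a minimal sketch, not the repo's exact script; it assumes the diffusers `AllegroPipeline` port, and the prompt and dtype choices here are mine:

```python
# Rough shape of what my Gradio app wraps (sketch, assuming the
# diffusers AllegroPipeline port rather than the repo's own script).
import time

import torch
from diffusers import AllegroPipeline

pipe = AllegroPipeline.from_pretrained(
    "rhymes-ai/Allegro", torch_dtype=torch.bfloat16
)
pipe.to("cuda")  # everything resident on the GPU: no CPU offloading

start = time.perf_counter()
video = pipe(
    prompt="A seaside town at golden hour",  # placeholder prompt
    num_inference_steps=100,
).frames[0]
print(f"{(time.perf_counter() - start) / 100:.1f} s per step")  # ~28.5 s here
```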

On an RTX 3090 / 3090 Ti, another person and I saw estimates of 100 iterations taking about 120–135 minutes to complete. That's with CPU offloading, enabled as in the snippet below.
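Continuing the sketch above, offloading is a one-line change if the diffusers port is used; these are standard diffusers calls, not anything Allegro-specific:

```python
# On 24 GB cards, drop pipe.to("cuda") and enable offloading instead.
pipe.enable_model_cpu_offload()        # moves whole submodules on/off the GPU
# pipe.enable_sequential_cpu_offload() # even lower VRAM, but much slower
```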

Do those performance numbers match your experience? Is this a bug? To be honest, since this model is fairly small, I was expecting inference speeds of a few minutes per generation, similar to CogVideoX 2B & 5B.

Rhymes.AI org

Yes, we have tested the model on an H100; it takes around 25 minutes per video (100 steps).

We plan to release multi-GPU inference code with context parallelism.
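For anyone curious what context parallelism means here: the long video token sequence is sharded across GPUs, and each rank attends its local queries over gathered keys/values. A toy sketch of the gather-based variant (illustrative only; names and layout are assumptions, not the planned release):

```python
# Toy sketch of gather-based context parallelism for attention:
# each rank keeps a slice of the sequence, all-gathers K/V, and
# attends its local queries over the full sequence.
import torch
import torch.distributed as dist
import torch.nn.functional as F

def context_parallel_attention(q, k, v):
    # q, k, v: [batch, heads, local_seq_len, head_dim] on each rank
    world = dist.get_world_size()
    k_parts = [torch.empty_like(k) for _ in range(world)]
    v_parts = [torch.empty_like(v) for _ in range(world)]
    dist.all_gather(k_parts, k)
    dist.all_gather(v_parts, v)
    k_full = torch.cat(k_parts, dim=2)  # keys for the whole sequence
    v_full = torch.cat(v_parts, dim=2)  # values for the whole sequence
    # Each rank computes attention for its local queries only, so the
    # quadratic attention cost is split roughly world_size ways.
    return F.scaled_dot_product_attention(q, k_full, v_full)
```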

hyang0511 changed discussion status to closed
