License: Apache-2.0

LongAnimateDiff

Sapir Weissbuch, Naomi Ken Korem, Daniel Shalem, Yoav HaCohen | Lightricks Research

We are pleased to release the "LongAnimateDiff" model, which has been trained to generate videos with a variable frame count, ranging from 16 to 64 frames. This model is compatible with the original AnimateDiff model.

We release two models:

  1. The LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64. For optimal results, we recommend using a motion scale of 1.28.
  2. A specialized model designed to generate 32-frame videos. It typically produces higher-quality videos than the LongAnimateDiff model supporting 16-64 frames. For best results, use a motion scale of 1.15.
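The recommendations above can be sketched as a small helper that picks a variant and motion scale for a target frame count. This is purely illustrative: the function and the returned labels are assumptions for clarity, not part of any published API.

```python
def recommend_config(num_frames: int) -> dict:
    """Suggest a LongAnimateDiff variant and motion scale for a frame count.

    Illustrative only; the variant labels and the "motion_scale" key are
    assumptions based on the release notes above, not an official API.
    """
    if not 16 <= num_frames <= 64:
        raise ValueError("LongAnimateDiff supports 16 to 64 frames")
    if num_frames == 32:
        # The specialized 32-frame model typically gives higher quality.
        return {"variant": "32-frame", "motion_scale": 1.15}
    return {"variant": "16-64-frame", "motion_scale": 1.28}
```

For example, `recommend_config(32)` returns the specialized 32-frame variant with motion scale 1.15, while any other supported length falls back to the general model with motion scale 1.28.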

The original AnimateDiff model can be found here: https://huggingface.co/guoyww/animatediff