# FlowingFrames

FlowingFrames is a text-to-video model that leverages past frames for conditioning, enabling the generation of infinite-length videos. It supports flexible resolutions, various configurations for frames and inference steps, and prompt interpolation for creating smooth scene changes.
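The prompt-interpolation idea can be illustrated with a short sketch. This is not FlowingFrames' actual implementation; the function names and the per-chunk schedule are illustrative assumptions. The idea is to blend the embedding of the current prompt into the embedding of the next prompt across successive video chunks, so the scene changes gradually rather than cutting.

```python
# Illustrative sketch of prompt interpolation for smooth scene changes.
# Names (lerp, interpolate_prompts) and the chunk-based schedule are
# hypothetical, not FlowingFrames' real API.

def lerp(a, b, t):
    """Linearly interpolate between two embedding vectors (lists of floats)."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

def interpolate_prompts(emb_a, emb_b, num_chunks):
    """Blend prompt embedding emb_a into emb_b across num_chunks video chunks."""
    if num_chunks == 1:
        return [emb_a]
    return [lerp(emb_a, emb_b, i / (num_chunks - 1)) for i in range(num_chunks)]

# Toy example: blend a "forest" embedding into a "city" embedding over 5 chunks.
forest = [1.0, 0.0]
city = [0.0, 1.0]
schedule = interpolate_prompts(forest, city, 5)
# schedule[0] == [1.0, 0.0]; schedule[2] == [0.5, 0.5]; schedule[-1] == [0.0, 1.0]
```

In practice the embeddings would be the text encoder's outputs rather than toy 2-vectors, but the blending logic is the same.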

## Installation

### Clone the Repository

```bash
git clone https://github.com/motexture/FlowingFrames.git
cd FlowingFrames
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
python run.py
```

Visit the provided URL in your browser to interact with the interface and start generating videos.

## Extras

You can use the Image Prompter GPT to generate highly detailed image prompts.

## Additional Info

The spatial layers are taken from Stable Diffusion XL 1.0.
