Mora: Enabling Generalist Video Generation via A Multi-Agent Framework
Abstract
Sora is the first large-scale generalist video generation model to garner significant attention across society. Since its launch by OpenAI in February 2024, no other video generation model has paralleled Sora's performance or its capacity to support a broad spectrum of video generation tasks. Moreover, only a few video generation models are fully published, with the majority being closed-source. To address this gap, this paper proposes Mora, a new multi-agent framework that incorporates several advanced visual AI agents to replicate the generalist video generation demonstrated by Sora. In particular, Mora can orchestrate multiple visual agents to successfully mimic Sora's video generation capabilities across various tasks, such as (1) text-to-video generation, (2) text-conditional image-to-video generation, (3) extending generated videos, (4) video-to-video editing, (5) connecting videos, and (6) simulating digital worlds. Our extensive experimental results show that Mora achieves performance close to that of Sora on various tasks; however, an obvious performance gap remains between our work and Sora when assessed holistically. In summary, we hope this project can guide the future trajectory of video generation through collaborative AI agents.
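The abstract describes Mora as realizing different video tasks by composing multiple visual agents. The sketch below illustrates that compositional idea in miniature; all class names, the agent interface, and the pipeline table are hypothetical illustrations for this comment, not the paper's actual API.

```python
# Minimal sketch of a multi-agent video pipeline in the spirit of Mora.
# Every class and method name here is a hypothetical stand-in: real agents
# would wrap text-to-image and image-to-video diffusion models.

class Agent:
    """Base class: each agent transforms an intermediate artifact dict."""
    def run(self, payload: dict) -> dict:
        raise NotImplementedError

class PromptEnhancer(Agent):
    def run(self, payload):
        # e.g. rewrite a terse user prompt into a detailed scene description
        payload["prompt"] = payload["prompt"] + ", cinematic, highly detailed"
        return payload

class TextToImage(Agent):
    def run(self, payload):
        payload["image"] = f"<image for: {payload['prompt']}>"
        return payload

class ImageToVideo(Agent):
    def run(self, payload):
        payload["video"] = f"<video from: {payload['image']}>"
        return payload

# Different tasks are realized by chaining the same agents in different orders.
PIPELINES = {
    "text-to-video": [PromptEnhancer(), TextToImage(), ImageToVideo()],
    "image-to-video": [ImageToVideo()],
}

def generate(task: str, payload: dict) -> dict:
    for agent in PIPELINES[task]:
        payload = agent.run(payload)
    return payload

result = generate("text-to-video", {"prompt": "a dog surfing"})
```

The point of the sketch is the design choice: each task is just a different ordering of reusable agents, so supporting a new task (e.g. video extension) means adding one pipeline entry rather than training a new end-to-end model.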
Community
We will release it shortly. Thank you very much!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- WorldGPT: A Sora-Inspired Video AI Agent as Rich World Models from Text and Image Inputs (2024)
- Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models (2024)
- AesopAgent: Agent-driven Evolutionary System on Story-to-Video Production (2024)
- EffiVED: Efficient Video Editing via Text-instruction Diffusion Models (2024)
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
Hello, there is a typo in the Mora paper.
In the conclusion, you have repeated the following sentence twice:
"Our thorough evaluation reveals that Mora not only competes with but also exceeds the capabilities of current leading models in certain areas. Our thorough evaluation reveals that Mora not only competes with but also exceeds the capabilities of current leading models in certain areas."
Noticed. Thank you for sharing.
Awesome, Can't wait to try this out! When is the full release coming out?
Revolutionizing Video Generation: Mora's Multi-Agent Framework Explained
Links:
- Subscribe: https://www.youtube.com/@Arxflix
- Twitter: https://x.com/arxflix
- LMNT (Partner): https://lmnt.com/