Generalized Predictive Model for Autonomous Driving
Abstract
In this paper, we introduce the first large-scale video prediction model for the autonomous driving domain. To remove the restriction of high-cost data collection and strengthen the generalization ability of our model, we acquire massive data from the web and pair it with diverse, high-quality text descriptions. The resultant dataset accumulates over 2000 hours of driving videos, spanning regions all over the world with diverse weather conditions and traffic scenarios. Inheriting the merits of recent latent diffusion models, our model, dubbed GenAD, handles the challenging dynamics in driving scenes with novel temporal reasoning blocks. We show that it generalizes to various unseen driving datasets in a zero-shot manner, surpassing general and driving-specific video prediction counterparts. Furthermore, GenAD can be adapted into an action-conditioned prediction model or a motion planner, holding great potential for real-world driving applications.
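As a rough illustration of how temporal reasoning blocks are commonly inserted into a latent video diffusion backbone, the sketch below implements a generic temporal self-attention layer in PyTorch. The block design, the class name `TemporalSelfAttention`, and all tensor shapes are assumptions made for illustration; the abstract above does not specify GenAD's actual block architecture.

```python
# Minimal sketch of a temporal reasoning block, assuming it resembles temporal
# self-attention interleaved with the spatial layers of a latent diffusion UNet.
# This is NOT the paper's implementation; names and shapes are hypothetical.
import torch
import torch.nn as nn


class TemporalSelfAttention(nn.Module):
    """Self-attention over the time axis of a latent video tensor (B, T, C, H, W)."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Fold spatial positions into the batch so attention mixes only across time.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(self.norm(tokens), self.norm(tokens), self.norm(tokens))
        out = out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        # Residual connection keeps the pretrained spatial layers' behavior intact.
        return x + out


if __name__ == "__main__":
    feats = torch.randn(2, 8, 64, 16, 16)  # (batch, frames, channels, height, width)
    block = TemporalSelfAttention(channels=64)
    print(block(feats).shape)  # torch.Size([2, 8, 64, 16, 16])
```

The residual form lets such a block be added on top of an image diffusion model without disturbing its pretrained spatial weights, which is a common design choice when extending image generators to video.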
Community
The OpenDV dataset is available here: https://github.com/OpenDriveLab/DriveAGI
Our follow-up work:
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
arXiv: https://arxiv.org/abs/2405.17398
Open release: https://github.com/OpenDriveLab/Vista
Video demo: https://vista-demo.github.io/