---
license: apache-2.0
task_categories:
  - text-to-3d
  - image-to-3d
language:
  - en
tags:
  - 4d
  - 3d
  - text-to-4d
  - image-to-4d
  - 3d-to-4d
size_categories:
  - 1M<n<10M
---

# Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models

[Project Page] | [arXiv](https://arxiv.org/abs/2405.16645) | [Code]

## News

- 2024.6.28: Released rendered data from the curated Objaverse-XL dataset.
- 2024.6.4: Released rendered data from the curated Objaverse-1.0 dataset, including orbital videos of dynamic 3D assets, orbital videos of static 3D assets, and front-view monocular videos.
- 2024.5.27: Released metadata for the curated objects!

## Overview

We collect a large-scale, high-quality dynamic 3D (4D) dataset sourced from the vast 3D corpus of Objaverse-1.0 and Objaverse-XL, and apply a series of empirical rules to filter it; see our paper for details. In this repository, we release the selected 4D assets, including:

1. The IDs of the selected high-quality 4D objects.
2. A rendering script using Blender, with optional settings for rendering your own customized data (see the sketch after this list).
3. 4D images rendered by our team, to save your GPU time. With 8 GPUs and a total of 16 threads, rendering the curated Objaverse-1.0 dataset took 5.5 days.
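
As a rough illustration of how such a Blender script is typically launched, the sketch below runs Blender headless over a list of object IDs. The script name `render.py`, its arguments, and the `ids.txt` file are hypothetical placeholders, not the actual interface of the released script; only Blender's own `-b`/`-P` flags are standard.

```python
# Minimal sketch: launch headless Blender renders over a list of object IDs.
# NOTE: "render.py", "--object_id", "--output_dir", and "ids.txt" are
# hypothetical placeholders; consult the released script for the real interface.
import subprocess
from pathlib import Path

OUTPUT_DIR = Path("renders")
OUTPUT_DIR.mkdir(exist_ok=True)

with open("ids.txt") as f:  # one curated object ID per line (placeholder file)
    object_ids = [line.strip() for line in f if line.strip()]

for oid in object_ids:
    # "-b" runs Blender without a GUI; "-P" executes a Python script;
    # everything after "--" is forwarded to that script as its own argv.
    subprocess.run(
        ["blender", "-b", "-P", "render.py", "--",
         "--object_id", oid, "--output_dir", str(OUTPUT_DIR / oid)],
        check=True,
    )
```

To use all 8 GPUs as described above, one would typically shard `ids.txt` and run one such loop per GPU.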

## 4D Dataset ID/Metadata

We collect 365k dynamic 3D assets from Objaverse-1.0 (42k) and Objaverse-XL (323k), and then curate a high-quality subset to train our models.

Metadata for the 323k animated objects from Objaverse-XL can be found in `meta_xl_animation_tot.csv`. We also release metadata for all successfully rendered objects from Objaverse-XL's GitHub subset in `meta_xl_tot.csv`.
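
For a quick look at these files, the following sketch loads them with pandas. The join key `object_id` is a hypothetical placeholder; the actual column names should be taken from the CSV headers.

```python
# Minimal sketch: inspect the released metadata CSVs with pandas.
# NOTE: the column name "object_id" is a hypothetical placeholder;
# check the CSV headers for the actual schema.
import pandas as pd

animated = pd.read_csv("meta_xl_animation_tot.csv")  # 323k animated objects
rendered = pd.read_csv("meta_xl_tot.csv")            # all successfully rendered objects

print(animated.shape, rendered.shape)
print(animated.columns.tolist())  # discover the real schema

# Example: keep only animated objects that were also rendered successfully.
common = animated.merge(rendered, on="object_id", how="inner")
print(f"{len(common)} objects are both animated and rendered")
```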

For text-to-4D generation, the captions are obtained from Cap3D (a sketch of joining them to the metadata is shown below).
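
Cap3D distributes its captions as a separate release. The sketch below assumes a two-column (ID, caption) CSV named `cap3d_captions.csv` and a shared `object_id` key; both are assumptions for illustration, not the actual Cap3D format.

```python
# Minimal sketch: attach Cap3D captions to the curated object metadata.
# NOTE: "cap3d_captions.csv", its (id, caption) layout, and the "object_id"
# join key are assumptions; see the Cap3D release for the actual format.
import pandas as pd

captions = pd.read_csv("cap3d_captions.csv", header=None,
                       names=["object_id", "caption"])
curated = pd.read_csv("meta_xl_animation_tot.csv")

# Left join keeps every curated object, with a caption where Cap3D covers it.
captioned = curated.merge(captions, on="object_id", how="left")
print(captioned[["object_id", "caption"]].head())
```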

## Citation

If you find this repository/work/dataset helpful in your research, please consider citing the paper and starring the repo ⭐.

```bibtex
@article{liang2024diffusion4d,
  title={Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models},
  author={Liang, Hanwen and Yin, Yuyang and Xu, Dejia and Liang, Hanxue and Wang, Zhangyang and Plataniotis, Konstantinos N and Zhao, Yao and Wei, Yunchao},
  journal={arXiv preprint arXiv:2405.16645},
  year={2024}
}
```