SirRa1zel's Collections
Text to 3D and Motion Papers (updated)
Text-Guided 3D Face Synthesis -- From Generation to Editing • Paper 2312.00375 • Published • 8
X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Generation • Paper 2312.00085 • Published • 6
GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs • Paper 2312.00093 • Published • 14
MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture • Paper 2311.10123 • Published • 15
LRM: Large Reconstruction Model for Single Image to 3D • Paper 2311.04400 • Published • 47
Single-Image 3D Human Digitization with Shape-Guided Diffusion • Paper 2311.09221 • Published • 20
Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model • Paper 2311.06214 • Published • 29
One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion • Paper 2311.07885 • Published • 39
PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction • Paper 2311.12024 • Published • 18
Story-to-Motion: Synthesizing Infinite and Controllable Character Animation from Long Text • Paper 2311.07446 • Published • 28
GPT4Point: A Unified Framework for Point-Language Understanding and Generation • Paper 2312.02980 • Published • 7
StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D • Paper 2312.02189 • Published • 8
ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation • Paper 2312.02201 • Published • 31
MVHumanNet: A Large-scale Dataset of Multi-view Daily Dressing Human Captures • Paper 2312.02963 • Published • 9
Generating Fine-Grained Human Motions Using ChatGPT-Refined Descriptions • Paper 2312.02772 • Published • 6
LooseControl: Lifting ControlNet for Generalized Depth Conditioning • Paper 2312.03079 • Published • 12
Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians • Paper 2312.03029 • Published • 23
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation • Paper 2312.03641 • Published • 20
HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image • Paper 2312.04543 • Published • 21
DreamComposer: Controllable 3D Object Generation via Multi-View Conditions • Paper 2312.03611 • Published • 7
Controllable Human-Object Interaction Synthesis • Paper 2312.03913 • Published • 22
Alpha-CLIP: A CLIP Model Focusing on Wherever You Want • Paper 2312.03818 • Published • 32
Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors • Paper 2312.04963 • Published • 16
MVDD: Multi-View Depth Diffusion Models • Paper 2312.04875 • Published • 9
Sherpa3D: Boosting High-Fidelity Text-to-3D Generation via Coarse 3D Prior • Paper 2312.06655 • Published • 23
Fast Training of Diffusion Transformer with Extreme Masking for 3D Point Clouds Generation • Paper 2312.07231 • Published • 6
FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects • Paper 2312.08344 • Published • 9
Holodeck: Language Guided Generation of 3D Embodied AI Environments • Paper 2312.09067 • Published • 13
SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance • Paper 2312.08889 • Published • 11
Mosaic-SDF for 3D Generative Models • Paper 2312.09222 • Published • 15
UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation • Paper 2312.08754 • Published • 6
SHAP-EDITOR: Instruction-guided Latent 3D Editing in Seconds • Paper 2312.09246 • Published • 5
GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning • Paper 2312.11461 • Published • 18
HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles • Paper 2312.11666 • Published • 12
Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting • Paper 2312.13271 • Published • 4
Splatter Image: Ultra-Fast Single-View 3D Reconstruction • Paper 2312.13150 • Published • 14
Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model • Paper 2312.13252 • Published • 27
Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning • Paper 2312.13980 • Published • 13
DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis • Paper 2312.13016 • Published • 6
ControlRoom3D: Room Generation using Semantic Proxy Rooms • Paper 2312.05208 • Published • 8
Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models • Paper 2312.13913 • Published • 22
ZeroShape: Regression-based Zero-shot Shape Reconstruction • Paper 2312.14198 • Published • 7
HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D • Paper 2312.15980 • Published • 10
Make-A-Character: High Quality Text-to-3D Character Generation within Minutes • Paper 2312.15430 • Published • 28
DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaption by Combining 3D GANs and Diffusion Priors • Paper 2312.16837 • Published • 5
SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity • Paper 2401.00604 • Published • 4
From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations • Paper 2401.01885 • Published • 27
HexaGen3D: StableDiffusion is just one step away from Fast and Diverse Text-to-3D Generation • Paper 2401.07727 • Published • 8
LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation • Paper 2402.05054 • Published • 25
GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting • Paper 2402.07207 • Published • 7