Region-Aware Text-to-Image Generation via Hard Binding and Soft Refinement
Abstract
In this paper, we present RAG, a Region-Aware text-to-image Generation method conditioned on regional descriptions for precise layout composition. Regional prompting, or compositional generation, which enables fine-grained spatial control, has gained increasing attention for its practicality in real-world applications. However, previous methods either introduce additional trainable modules and are therefore applicable only to specific models, or manipulate score maps within cross-attention layers using attention masks, resulting in limited control strength as the number of regions increases. To address these limitations, we decouple multi-region generation into two sub-tasks: the construction of individual regions (Regional Hard Binding), which ensures that each regional prompt is properly executed, and overall detail refinement (Regional Soft Refinement) across regions, which dissolves visual boundaries and enhances interactions between adjacent regions. Furthermore, RAG makes repainting feasible: users can modify specific unsatisfactory regions of the previous generation while keeping all other regions unchanged, without relying on additional inpainting models. Our approach is tuning-free and applicable to other frameworks as an enhancement of their prompt-following ability. Quantitative and qualitative experiments demonstrate that RAG achieves superior performance in attribute binding and object relationships compared with previous tuning-free methods.
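The two sub-tasks can be illustrated with a toy sketch: hard binding denoises each region under its own prompt and pastes the results together via binary masks, while soft refinement blends the fused latent with a globally conditioned one. Everything below (the `denoise_step` stand-in, the blend weight `delta`, the 1-channel "latents") is a hypothetical simplification for illustration, not the paper's actual DiT-based implementation.

```python
import numpy as np

H = W = 8

def denoise_step(latent, prompt_strength):
    # Stand-in for one denoising step conditioned on a prompt;
    # a real model would run the diffusion transformer here.
    return latent * 0.9 + prompt_strength * 0.1

def hard_binding(latent, region_masks, region_strengths):
    # Regional Hard Binding: denoise each region with its own prompt,
    # then write the result back only inside that region's mask.
    fused = latent.copy()
    for mask, strength in zip(region_masks, region_strengths):
        regional = denoise_step(latent, strength)
        fused = np.where(mask, regional, fused)
    return fused

def soft_refinement(hard_latent, global_latent, delta=0.3):
    # Regional Soft Refinement: blend with a globally conditioned latent
    # so region boundaries are smoothed (delta is a hypothetical weight).
    return (1 - delta) * hard_latent + delta * global_latent

latent = np.zeros((H, W))
left = np.zeros((H, W), dtype=bool)
left[:, : W // 2] = True
right = ~left

bound = hard_binding(latent, [left, right], [1.0, -1.0])
refined = soft_refinement(bound, denoise_step(latent, 0.0))
print(bound[0, 0], bound[0, -1])  # each half follows its own prompt
```

The key property the sketch demonstrates is that hard binding keeps each regional prompt's effect strictly inside its mask, while the refinement pass mixes information across the whole canvas.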
Community
Precise Regional Control: Generates more precise complex layouts than the powerful FLUX.1-dev and RPG.
Repainting Capability: Modify specific regions without affecting others.
Tuning-Free: Works seamlessly with existing DiT-based frameworks.
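The repainting capability can be sketched as mask-conditioned latent blending: at each denoising step, latents outside the user's edit mask are taken from the previous generation, while latents inside it continue denoising under the new regional prompt. This is a generic illustration of the idea under assumed toy dynamics, not RAG's exact procedure; `repaint_step` and `prompt_strength` are hypothetical names.

```python
import numpy as np

def repaint_step(prev_latent, current_latent, edit_mask, prompt_strength):
    # Inside the mask: advance denoising toward the new regional prompt.
    # Outside the mask: keep the previous generation's latent unchanged.
    updated = current_latent * 0.9 + prompt_strength * 0.1
    return np.where(edit_mask, updated, prev_latent)

H = W = 4
prev = np.full((H, W), 0.5)   # latent of the earlier, mostly satisfactory image
mask = np.zeros((H, W), dtype=bool)
mask[:2, :2] = True           # only this region should change

out = prev.copy()
for _ in range(10):           # toy denoising loop
    out = repaint_step(prev, out, mask, prompt_strength=1.0)
print(out[3, 3], out[0, 0])   # untouched region stays at 0.5; masked region moves
```

Because unmasked latents are re-imposed at every step rather than once at the end, the edited region stays consistent with its frozen surroundings, which is why no separate inpainting model is needed.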
Paper summary is here: https://www.aimodels.fyi/papers/arxiv/region-aware-text-to-image-generation-via
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Training-free Regional Prompting for Diffusion Transformers (2024)
- 3DIS: Depth-Driven Decoupled Instance Synthesis for Text-to-Image Generation (2024)
- IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation (2024)
- Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis (2024)
- HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation (2024)
A simplified demo of our RAG-Diffusion is available on Hugging Face: https://huggingface.co/spaces/NJU/RAG-Diffusion
For more complex layouts, please run our code directly.