NeuroPrompts: An Adaptive Framework to Optimize Prompts for Text-to-Image Generation
Abstract
Despite impressive recent advances in text-to-image diffusion models, obtaining high-quality images often requires prompt engineering by humans who have developed expertise in using them. In this work, we present NeuroPrompts, an adaptive framework that automatically enhances a user's prompt to improve the quality of generations produced by text-to-image models. Our framework utilizes constrained text decoding with a pre-trained language model that has been adapted to generate prompts similar to those produced by human prompt engineers. This approach enables higher-quality text-to-image generations and provides user control over stylistic features via constraint set specification. We demonstrate the utility of our framework by creating an interactive application for prompt enhancement and image generation using Stable Diffusion. Additionally, we conduct experiments utilizing a large dataset of human-engineered prompts for text-to-image generation and show that our approach automatically produces enhanced prompts that result in superior image quality. We make our code, a screencast video demo and a live demo instance of NeuroPrompts publicly available.
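The sketch below illustrates the overall pipeline the abstract describes: an adapted language model expands a plain user prompt with the kinds of stylistic modifiers human prompt engineers add, and the enhanced prompt is then passed to Stable Diffusion. It is an assumption-laden illustration, not the authors' released code: the model names, the helper function `enhance_prompt`, and the use of plain beam search in place of the paper's constrained decoding are all stand-ins.

```python
# Illustrative sketch only. NeuroPrompts adapts a pre-trained LM to human-engineered
# prompts and applies constrained text decoding; here an off-the-shelf GPT-2 and
# ordinary beam search stand in for that, purely to show the two-stage pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from diffusers import StableDiffusionPipeline

# In the real system this would be the LM fine-tuned on human prompt-engineering data.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def enhance_prompt(user_prompt: str, max_new_tokens: int = 40) -> str:
    """Extend the user's plain prompt with additional descriptive/style tokens."""
    inputs = tokenizer(user_prompt, return_tensors="pt")
    output = lm.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_beams=4,                 # stand-in for the paper's constrained decoding
        no_repeat_ngram_size=2,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Generate an image from the enhanced prompt with Stable Diffusion.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(enhance_prompt("a lighthouse on a cliff at sunset")).images[0]
image.save("enhanced_generation.png")
```

In the framework itself, user-specified constraint sets (e.g., required style keywords) steer the decoding step, which is what gives users control over stylistic features of the final image.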
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- BeautifulPrompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis (2023)
- Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting (2023)
- Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack (2023)
- Improving Compositional Text-to-image Generation with Large Vision-Language Models (2023)
- Text-to-Sticker: Style Tailoring Latent Diffusion Models for Human Expression (2023)