Text-to-Image

Text-to-image is the task of generating images from input text. These pipelines can also be used to modify and edit images based on text prompts.

Example input prompt: "A city above clouds, pastel colors, Victorian style"

About Text-to-Image

Use Cases

Data Generation

Businesses can generate image data for their use cases by providing text prompts that describe the images they need.

Immersive Conversational Chatbots

Chatbots can be made more immersive if they provide contextual images based on text provided by the user.

Creative Ideas for Fashion Industry

Different patterns can be generated to obtain unique pieces of fashion. Text-to-image models make it easier for designers to conceptualize a design before actually implementing it.

Architecture Industry

Architects can use these models to construct an environment based on the requirements of a floor plan. This can also include the furniture that has to be placed in that environment.

Task Variants

Image Editing

Image editing with text-to-image models involves modifying an image following edit instructions provided in a text prompt.
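
One way to do this with the diffusers library is the InstructPix2Pix pipeline, sketched below; the input image URL and the edit instruction are placeholders, and the timbrooks/instruct-pix2pix checkpoint is one publicly available option.

import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Load an instruction-following image editing pipeline
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Placeholder input image and edit instruction
image = load_image("https://example.com/input.png")
edited = pipe("turn the sky into a sunset", image=image).images[0]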

Personalization

Personalization refers to techniques used to customize text-to-image models. We introduce new subjects or concepts to the model, which the model can then generate when we refer to them with a text prompt.

For example, you can use these techniques to generate images of your dog in imaginary settings, after you have taught the model using a few reference images of the subject (or just one in some cases). Teaching the model a new concept can be achieved through fine-tuning, or by using training-free techniques.
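
Once a concept has been learned, one lightweight way to use it in diffusers is to load it as a textual inversion embedding. The sketch below assumes the sd-concepts-library/cat-toy concept from the Hub and a Stable Diffusion v1.5 checkpoint; both identifiers are examples, and the concept's placeholder token is <cat-toy>.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a learned concept; it becomes usable in prompts via its placeholder token
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a <cat-toy> on a beach").images[0]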

Inference

You can use diffusers pipelines to run inference with text-to-image models.

import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-2"

# Use the Euler scheduler shipped in the model repository
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
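
Stable Diffusion pipelines also accept common generation parameters at call time; the values below are illustrative and reuse the pipe object from above.

image = pipe(
    prompt,
    negative_prompt="blurry, low quality",  # traits to steer away from
    guidance_scale=7.5,                     # how strongly to follow the prompt
    num_inference_steps=25,                 # denoising steps (speed/quality trade-off)
).images[0]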

You can use huggingface.js to run inference with text-to-image models on the Hugging Face Hub.

import { HfInference } from "@huggingface/inference";

const inference = new HfInference(HF_TOKEN);
await inference.textToImage({
    model: "stabilityai/stable-diffusion-2",
    inputs: "award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]",
    parameters: {
        negative_prompt: "blurry",
    },
});
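
The textToImage call resolves to a Blob containing the generated image, which can then be saved to disk or displayed in the browser.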

Useful Resources

Model Inference

Model Fine-tuning

This page was made possible thanks to the efforts of Ishan Dutta, Enrique Elias Ubaldo and Oğuz Akif.



Metrics for Text-to-Image
IS
The Inception Score (IS) assesses the diversity and meaningfulness of generated images. It feeds generated samples to an Inception classifier to predict their labels; a higher score signifies more diverse and meaningful images.
FID
The Fréchet Inception Distance (FID) measures the distance between the feature distributions of real and generated samples. A lower FID score indicates that the distribution of generated images is closer to that of real images.
R-Precision
R-Precision assesses how well generated images align with the provided text descriptions. Each generated image is used as a query to rank candidate text descriptions; R-Precision is computed as r/R, where R is the number of ground-truth descriptions associated with the image and r is how many of them appear among the top R retrieved results. A higher R-Precision value indicates a better model.
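
As a minimal sketch of computing one of these metrics, the snippet below uses the FrechetInceptionDistance implementation from the torchmetrics package (an assumption; other implementations exist) on dummy image batches.

import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Dummy uint8 image batches of shape (N, 3, H, W); replace with real data
real_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(fid.compute())  # lower is better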