SimianLuo committed
Commit c7f9b67
1 Parent(s): 2adbf2a

Update README.md

Files changed (1)
  1. README.md +10 -3
README.md CHANGED
@@ -9,10 +9,17 @@ tags:

# Latent Consistency Models

- Official Repository of the paper: *[Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378)*.
+ Official Repository of the paper: *[Latent Consistency Models](https://arxiv.org/abs/2310.04378)*.

Project Page: https://latent-consistency-models.github.io

+ ## Try our Hugging Face demos:
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model)
+
+ ## Model Descriptions:
+ Distilled from [Dreamshaper v7](https://huggingface.co/Lykon/dreamshaper-7) fine-tune of [Stable-Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with only 4,000 training iterations (~32 A100 GPU Hours).
+
+ ## Generation Results:

<p align="center">
<img src="teaser.png">
@@ -40,7 +47,7 @@ pip install diffusers transformers accelerate
from diffusers import DiffusionPipeline
import torch

- pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img")
+ pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
@@ -50,7 +57,7 @@ prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps.
num_inference_steps = 4

- images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil", custom_revision=main).images
+ images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```

## BibTeX
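
For reference, the usage snippet reads roughly as follows after this commit. This is a sketch assembled from the hunks above; README lines outside the shown hunks are assumed unchanged, and the final `save` call is an illustrative addition rather than part of the file.

```python
from diffusers import DiffusionPipeline
import torch

# Load the LCM Dreamshaper v7 weights with the community latent-consistency text-to-image pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    custom_pipeline="latent_consistency_txt2img",
    custom_revision="main",
)

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Can be set to 1~50 steps; LCM supports fast inference even at <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4

images = pipe(
    prompt=prompt,
    num_inference_steps=num_inference_steps,
    guidance_scale=8.0,
    lcm_origin_steps=50,
    output_type="pil",
).images

# Illustrative addition (not in the README): persist the first generated image.
images[0].save("lcm_output.png")
```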