# Real-Time Latent Consistency Model
This demo showcases [Latent Consistency Model (LCM)](https://latent-consistency-models.github.io/) using [Diffusers](https://huggingface.co/docs/diffusers/using-diffusers/lcm) with an MJPEG stream server. You can read more about LCM + LoRAs with diffusers [here](https://huggingface.co/blog/lcm_lora).
You need a webcam to run this demo. 🤗
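
Under the hood, the streamed video is just an HTTP response of type `multipart/x-mixed-replace` that keeps pushing JPEG frames. Here is a minimal sketch of such an MJPEG endpoint using FastAPI's `StreamingResponse`; it is illustrative only, and `next_frame()` stands in for the app's real frame source:

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def next_frame() -> bytes:
    # Stand-in frame source: in the real app this would be the latest
    # LCM-generated image, already encoded as JPEG bytes.
    await asyncio.sleep(1 / 30)
    return b"\xff\xd8\xff\xd9"  # placeholder JPEG bytes

async def mjpeg_stream():
    # multipart/x-mixed-replace tells the browser to swap in each new part,
    # so a plain <img src="/stream"> tag renders as live video.
    while True:
        frame = await next_frame()
        yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame + b"\r\n"

@app.get("/stream")
async def stream():
    return StreamingResponse(
        mjpeg_stream(), media_type="multipart/x-mixed-replace; boundary=frame"
    )
```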
## Running Locally

You need CUDA and Python 3.10, Node > 19, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.

## Install

```bash
python -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
cd frontend && npm install && npm run build && cd ..
python run.py --reload --pipeline controlnet
```

# Pipelines

You can build your own pipeline following the examples [here](pipelines). Don't forget to build the frontend first:

```bash
cd frontend && npm install && npm run build && cd ..
```
40 |
|
41 |
# LCM
|
42 |
### Image to Image
|
43 |
|
44 |
```bash
|
45 |
+
python run.py --reload --pipeline img2img
|
46 |
```
|
47 |
|
48 |
+
# LCM
|
49 |
+
### Text to Image
|

```bash
python run.py --reload --pipeline txt2img
```

### Image to Image ControlNet Canny

```bash
python run.py --reload --pipeline controlnet
```
61 |
|
62 |
+
|
63 |
# LCM + LoRa
|
64 |
|
65 |
Using LCM-LoRA, giving it the super power of doing inference in as little as 4 steps. [Learn more here](https://huggingface.co/blog/lcm_lora) or [technical report](https://huggingface.co/papers/2311.05556)
|
|

### Image to Image ControlNet Canny LoRA

```bash
python run.py --reload --pipeline controlnetLoraSD15
```
74 |
+
or SDXL, note that SDXL is slower than SD15 since the inference runs on 1024x1024 images
|
75 |
|
76 |
```bash
|
77 |
+
python run.py --reload --pipeline controlnetLoraSDXL
|
78 |
```
|
79 |
|
80 |
### Text to Image
|
81 |
|
82 |
```bash
|
83 |
+
python run.py --reload --pipeline txt2imgLora
|
84 |
+
```
|
85 |
+
|
86 |
+
or
|
87 |
+
|
88 |
+
```bash
|
89 |
+
python run.py --reload --pipeline txt2imgLoraSDXL
|
90 |
```
|
91 |
|
92 |
|
93 |
### Setting environment variables
|
94 |
|
95 |
+
|
96 |
+
`TIMEOUT`: limit user session timeout
|
97 |
+
`SAFETY_CHECKER`: disabled if you want NSFW filter off
|
98 |
+
`MAX_QUEUE_SIZE`: limit number of users on current app instance
|
99 |
+
`TORCH_COMPILE`: enable if you want to use torch compile for faster inference works well on A100 GPUs
|
100 |
+
`USE_TAESD`: enable if you want to use Autoencoder Tiny
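
For reference, `TORCH_COMPILE` and `USE_TAESD` correspond to standard diffusers techniques, roughly as sketched below; the model id is an illustrative assumption, not necessarily what the app loads:

```python
import torch
from diffusers import AutoencoderTiny, DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# USE_TAESD: swap the full VAE for the tiny autoencoder to speed up decoding
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to("cuda")

# TORCH_COMPILE: compile the UNet; the warmup cost pays off on A100-class GPUs
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```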

If you run via `bash build-run.sh`, you can set the `PIPELINE` variable to choose which pipeline to run:

```bash
PIPELINE=txt2imgLoraSDXL bash build-run.sh
```

and you can set environment variables inline:

```bash
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python run.py --reload --pipeline txt2imgLoraSDXL
```

If you're running locally and want to test it on Mobile Safari, the webserver needs to be served over HTTPS; alternatively, follow the instructions in this [comment](https://github.com/radames/Real-Time-Latent-Consistency-Model/issues/17#issuecomment-1811957196).

```bash
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
python run.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
```

## Docker

You need the NVIDIA Container Toolkit for Docker; the image defaults to the `controlnet` pipeline.

```bash
docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
```

or with environment variables:

```bash
docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live
```

# Development Mode