Add v2 models description

README.md (changed)
This repository provides a collection of ControlNet checkpoints for the
[FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.

![Example Picture 1](./assets/depth_v2_res1.png?raw=true)

[See our GitHub](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.
![Example Picture 1](https://github.com/XLabs-AI/x-flux-comfyui/blob/main/assets/image1.png?raw=true)

[See our GitHub](https://github.com/XLabs-AI/x-flux) for the training script, training configs, and a demo inference script.

# Models

Our collection supports 3 models:

- Canny
- HED
- Depth (Midas)

Each ControlNet is trained at 1024x1024 resolution and is intended for use at 1024x1024 resolution.
We release **v2 versions**: improved, more realistic models that can be used directly in ComfyUI!

Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).

# Examples

See examples of our models' results below.
Also, some generation results with input images are provided in "Files and versions".

# Inference
To try our models, you have two options:

1. Use main.py from our [official repo](https://github.com/XLabs-AI/x-flux)
2. Use our custom nodes for ComfyUI and test them with the provided workflows
See examples of how to launch our models below.

## Canny ControlNet (version 2)

1. Clone our [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui) custom nodes
2. Launch ComfyUI
3. Try our canny_workflow.json

![Example Picture 1](./assets/canny_v2_res1.png?raw=true)
![Example Picture 2](./assets/canny_v2_res2.png?raw=true)
![Example Picture 3](./assets/canny_v2_res3.png?raw=true)

## Canny ControlNet (version 1)

1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install the requirements
2. Launch main.py from the command line with your parameters

```bash
python3 main.py \
--prompt "a viking man with white hair looking, cinematic, MM full HD" \
...
```

![Example Picture 1](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/canny_example_1.png?raw=true)
## Depth ControlNet (version 2)

1. Clone our [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui) custom nodes
2. Launch ComfyUI
3. Try our depth_workflow.json

![Example Picture 1](./assets/depth_v2_res1.png?raw=true)
![Example Picture 2](./assets/depth_v2_res2.png?raw=true)
## Depth ControlNet (version 1)

1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install the requirements
2. Launch main.py from the command line with your parameters

```bash
python3 main.py \
--prompt "Photo of the bold man with beard and laptop, full hd, cinematic photo" \
...
```

![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_3.png?raw=true)
## HED ControlNet (version 1)

```bash
python3 main.py \
--prompt "2d art of a sitting african rich woman, full hd, cinematic photo" \
...
```