wanghaofan committed
Commit 9f87f0b (1 parent: 7af662d)

Update README.md

Files changed (1): README.md (+75, -5)
README.md CHANGED
@@ -1,5 +1,75 @@
- ---
- license: other
- license_name: flux-1-dev-non-commercial-license
- license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
- ---
+ ---
+ license: other
+ license_name: flux-1-dev-non-commercial-license
+ license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
+
+ language:
+ - en
+ library_name: diffusers
+ pipeline_tag: text-to-image
+
+ tags:
+ - Text-to-Image
+ - ControlNet
+ - Diffusers
+ - Flux.1-dev
+ - image-generation
+ - Stable Diffusion
+ base_model: black-forest-labs/FLUX.1-dev
+ ---
+
+ # FLUX.1-dev-ControlNet-Depth
+
+ This repository contains a Depth ControlNet for the FLUX.1-dev model, jointly trained by researchers from the [InstantX Team](https://huggingface.co/InstantX) and [Shakker Labs](https://huggingface.co/Shakker-Labs).
+
+ # Model Cards
+ - The model consists of 4 FluxTransformerBlocks and 1 FluxSingleTransformerBlock (see the configuration check after this list).
+ - This checkpoint was trained on both real and generated image datasets with 16×A800 GPUs for 50K steps, at a resolution of 1024 and a total batch size of 16×4=64. The learning rate was set to 5e-6.
+ - The recommended `controlnet_conditioning_scale` is 0.3 to 0.7.
+
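Once the checkpoint is downloaded, the block counts listed above can be read back from the diffusers model configuration. This is a minimal sketch; the `num_layers` and `num_single_layers` attribute names follow diffusers' `FluxControlNetModel` configuration and are worth checking against your installed version:

```python
import torch
from diffusers import FluxControlNetModel

# Load only the ControlNet to inspect its architecture.
controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Depth", torch_dtype=torch.bfloat16
)

# Per the model card, this should report 4 double-stream blocks
# (FluxTransformerBlock) and 1 single-stream block (FluxSingleTransformerBlock).
print(controlnet.config.num_layers)
print(controlnet.config.num_single_layers)
```
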
+ # Showcases
+
+ <div class="container">
+ <img src="./assets/teaser1.png" width="1024"/>
+ </div>
+
+ <div class="container">
+ <img src="./assets/teaser2.png" width="1024"/>
+ </div>
+
+ <div class="container">
+ <img src="./assets/teaser3.png" width="1024"/>
+ </div>
+
+ # Inference
+ ```python
+ import torch
+ from diffusers.utils import load_image
+ from diffusers import FluxControlNetPipeline, FluxControlNetModel
+
+ # The Depth ControlNet checkpoint and the FLUX.1-dev base model it conditions.
+ controlnet_model = "Shakker-Labs/FLUX.1-dev-ControlNet-Depth"
+ base_model = "black-forest-labs/FLUX.1-dev"
+
+ controlnet = FluxControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
+ pipe = FluxControlNetPipeline.from_pretrained(
+     base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
+ )
+ pipe.to("cuda")
+
+ # A precomputed depth map from this repository serves as the control image.
+ control_image = load_image("https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth/resolve/main/assets/cond1.png")
+ prompt = "an old man with white hair"
+
+ image = pipe(prompt,
+              control_image=control_image,
+              controlnet_conditioning_scale=0.5,
+              width=control_image.size[0],
+              height=control_image.size[1],
+              num_inference_steps=24,
+              guidance_scale=3.5,
+ ).images[0]
+ ```
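
The example above conditions on a depth map that ships with this repository. For your own photos, a depth map can be produced with any monocular depth estimator; the sketch below uses the `transformers` depth-estimation pipeline, where the `Intel/dpt-large` checkpoint and the `your_photo.png` path are illustrative assumptions rather than part of this repository:

```python
from transformers import pipeline
from diffusers.utils import load_image

# Any monocular depth estimator works here; Intel/dpt-large is only an example.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

# "your_photo.png" is a placeholder for your own source image.
source_image = load_image("your_photo.png")

# The pipeline returns a PIL depth image under the "depth" key; convert it
# to RGB so it matches the 3-channel conditioning input used above.
depth_map = depth_estimator(source_image)["depth"].convert("RGB")
depth_map.save("depth_control.png")
```

The resulting `depth_map` can then be passed as `control_image` to the pipeline call above, keeping `controlnet_conditioning_scale` within the recommended 0.3 to 0.7 range.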
+
+ # Acknowledgements
+ This project is sponsored by [Shakker AI](https://www.shakker.ai/). All rights reserved.