---
base_model:
- black-forest-labs/FLUX.1-dev
library_name: diffusers
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
pipeline_tag: image-to-image
tags:
- ControlNet
- super-resolution
- upscaler
---
# ⚡ Flux.1-dev: Upscaler ControlNet ⚡

This is a [Flux.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) ControlNet for upscaling low-resolution images, developed by the Jasper research team.

<p align="center">
   <img style="width:700px;" src="examples/showcase.jpg">
</p>

# How to use
This model can be used directly with the `diffusers` library.

```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetModel
from diffusers.pipelines import FluxControlNetPipeline

# Load pipeline
controlnet = FluxControlNetModel.from_pretrained(
  "jasperai/Flux.1-dev-Controlnet-Upscaler",
  torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
  "black-forest-labs/FLUX.1-dev",
  controlnet=controlnet,
  torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load a control image (the low-resolution input)
control_image = load_image(
  "https://huggingface.co/jasperai/Flux.1-dev-Controlnet-Upscaler/resolve/main/examples/depth.jpg"
)

w, h = control_image.size

# Upscale x4
control_image = control_image.resize((w * 4, h * 4))

image = pipe(
    "", 
    control_image=control_image,
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28, 
    guidance_scale=3.5,
    height=control_image.size[1],
    width=control_image.size[0]
).images[0]
image
```

<p align="center">
   <img style="width:500px;" src="examples/output.jpg">
</p>
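💡 Note: the Flux pipeline expects `height` and `width` to be divisible by 16; `diffusers` otherwise rounds them down with a warning. A minimal sketch of pre-aligning the upscaled size before calling the pipeline (the helper name `align` is an illustrative choice, not part of this repository):

```python
def align(x: int, multiple: int = 16) -> int:
    """Round down to the nearest multiple accepted by the Flux pipeline."""
    return (x // multiple) * multiple

# A 4x upscale of a 250x177 input would give 1000x708,
# which aligns down to 992x704.
w, h = align(250 * 4), align(177 * 4)
print(w, h)  # 992 704
```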


# Training
This model was trained with a synthetic data-degradation scheme: a *real-life* image is taken as input and artificially degraded by combining several degradations, among others image noising (Gaussian and Poisson), image blurring, and JPEG compression, in a spirit similar to [1].

[1] Wang, Xintao, et al. "Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
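For illustration, such a degradation chain can be sketched as follows. The blur radius, noise levels, downscale factor, and JPEG quality below are hypothetical placeholders, not the actual training parameters:

```python
import io

import numpy as np
from PIL import Image, ImageFilter


def degrade(img: Image.Image, scale: int = 4, seed: int = 0) -> Image.Image:
    """Toy degradation: blur -> downscale -> Poisson + Gaussian noise -> JPEG."""
    rng = np.random.default_rng(seed)
    # Blur, then downsample to the low-resolution size
    img = img.filter(ImageFilter.GaussianBlur(radius=1.5))
    img = img.resize((img.width // scale, img.height // scale), Image.BICUBIC)
    arr = np.asarray(img).astype(np.float32)
    # Poisson (shot) noise followed by additive Gaussian noise
    arr = rng.poisson(arr).astype(np.float32)
    arr = arr + rng.normal(0.0, 5.0, arr.shape)
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    # JPEG compression artifacts
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=30)
    return Image.open(io.BytesIO(buf.getvalue()))


lowres = degrade(Image.new("RGB", (256, 256), (128, 64, 32)))
print(lowres.size)  # (64, 64)
```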

# License
The license of the Flux.1-dev model applies to this model.