A little more info
#2
by sayakpaul (HF staff) - opened

README.md CHANGED
@@ -32,6 +32,14 @@ prompt: a tornado hitting grass field, 1980's film grain. overcast, muted colors
 
 ## Usage
 
+Make sure to first install the libraries:
+
+```bash
+pip install accelerate transformers opencv-python diffusers
+```
+
+And then we're ready to go:
+
 ```python
 from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
 from diffusers.utils import load_image

@@ -74,8 +82,12 @@ images[0].save(f"hug_lab.png")
 
 ![images_10)](./out_hug_lab_7.png)
 
+For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).
+
 ### Training
 
+Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
+
 #### Training data
 This checkpoint was first trained for 20,000 steps on laion 6a resized to a max minimum dimension of 384.
 It was then further trained for 20,000 steps on laion 6a resized to a max minimum dimension of 1024 and
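
The first hunk truncates the Python snippet right after its imports. As a reference, here is a minimal sketch of how those pieces typically fit together with a Canny conditioning image; the checkpoint ids (`diffusers/controlnet-canny-sdxl-1.0`, `madebyollin/sdxl-vae-fp16-fix`), the input image URL, and the Canny thresholds are illustrative assumptions, not values taken from this diff:

```python
# Minimal sketch; the model ids, image URL, and thresholds are assumptions for illustration.
import cv2
import numpy as np
import torch
from PIL import Image

from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image

# Load the ControlNet and an fp16-safe VAE, then build the SDXL pipeline around them.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM manageable on a single GPU

# Turn any input image into a 3-channel Canny edge map for conditioning.
source = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)
edges = cv2.Canny(np.array(source), 100, 200)  # single-channel edge map
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

images = pipe(
    "a tornado hitting grass field, 1980's film grain. overcast, muted colors",
    image=control_image,
    controlnet_conditioning_scale=0.5,  # how strongly the edge map constrains generation
).images
images[0].save("out.png")
```

Lowering `controlnet_conditioning_scale` relaxes how tightly the output follows the edge map.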
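The training note links to the README for the official `train_controlnet_sdxl.py` example without showing an invocation. A hedged sketch of how such a run is typically launched; every path and hyperparameter below is a placeholder, not a setting used to train this checkpoint:

```bash
# Illustrative launch of the official SDXL ControlNet training example;
# all values below are placeholders, not this checkpoint's settings.
accelerate launch train_controlnet_sdxl.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --dataset_name="fusing/fill50k" \
  --resolution=1024 \
  --learning_rate=1e-5 \
  --max_train_steps=20000 \
  --mixed_precision="fp16" \
  --output_dir="controlnet-sdxl-out"
```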