dverdu-freepik committed
Commit e07acca • Parent(s): e45409a

feat: Update README.md

README.md CHANGED
@@ -21,6 +21,18 @@ Our goal is to further reduce FLUX.1-dev transformer parameters up to 24Gb to ma
 
 ![Flux.1 Lite vs FLUX.1-dev](./sample_images/models_comparison.png)
 
+## Motivation
+
+As noted by other members of the community, such as [Ostris](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/), the blocks of the Flux1.dev transformer do not contribute equally to the final image generation. To confirm this hypothesis, we can measure the MSE between the input and the output of each block. As the images below show, the MSE differs considerably across blocks.
+
+![Flux.1 Lite generated image](./sample_images/skip_blocks/generated_img.png)
+![MSE MMDIT](./sample_images/skip_blocks/mse_mmdit_img.png)
+![MSE DIT](./sample_images/skip_blocks/mse_dit_img.png)
+
+Furthermore, as the following images show, the model's performance is severely affected only when one of the first MMDIT blocks is skipped.
+![Skip one MMDIT block](./sample_images/skip_blocks/skip_one_MMDIT_block.png)
+![Skip one DIT block](./sample_images/skip_blocks/skip_one_DIT_block.png)
+
 ## Text-to-Image Usage
 
 It is recommended to use a `guidance_scale` of 3.5 and an `n_steps` between 22 and 30 for best results.
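For reference, the per-block MSE probe described in the added Motivation section can be sketched with plain PyTorch forward hooks. This is an illustration, not the authors' evaluation code: the attribute names `pipe.transformer.transformer_blocks` (double-stream/MMDIT) and `pipe.transformer.single_transformer_blocks` (single-stream/DIT), and the keyword calling convention, are assumptions based on diffusers' Flux implementation.

```python
import torch
from diffusers import FluxPipeline

# Load the pipeline (dtype/device are illustrative choices).
pipe = FluxPipeline.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha", torch_dtype=torch.bfloat16
).to("cuda")

mse_per_block = {}

def make_hook(name):
    def hook(module, args, kwargs, output):
        # diffusers calls Flux blocks with keyword arguments; fall back to
        # positional args just in case.
        x_in = kwargs.get("hidden_states", args[0] if args else None)
        # Double-stream (MMDIT) blocks return (encoder_hidden_states,
        # hidden_states); single-stream (DIT) blocks return a single tensor.
        x_out = output[1] if isinstance(output, tuple) else output
        if x_in is not None and x_in.shape == x_out.shape:
            # Overwritten at each denoising step; the last step's value remains.
            mse_per_block[name] = torch.mean((x_out - x_in).float() ** 2).item()
    return hook

handles = [
    block.register_forward_hook(make_hook(f"mmdit_{i}"), with_kwargs=True)
    for i, block in enumerate(pipe.transformer.transformer_blocks)
] + [
    block.register_forward_hook(make_hook(f"dit_{i}"), with_kwargs=True)
    for i, block in enumerate(pipe.transformer.single_transformer_blocks)
]

pipe("A cat", guidance_scale=3.5, num_inference_steps=8)

for h in handles:
    h.remove()
for name, mse in sorted(mse_per_block.items(), key=lambda kv: kv[1]):
    print(f"{name}: {mse:.6f}")
```

The skip experiment can be emulated in the same spirit by turning one double-stream block into an identity over its two token streams (again assuming diffusers' keyword calling convention; the block index is hypothetical):

```python
k = 3  # hypothetical block index to skip
pipe.transformer.transformer_blocks[k].forward = (
    lambda hidden_states, encoder_hidden_states, *a, **kw: (
        encoder_hidden_states,  # returned unchanged, i.e. the block is skipped
        hidden_states,
    )
)
```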
@@ -40,7 +52,7 @@ pipe = FluxPipeline.from_pretrained(
 ).to(device)
 
 # Inference
-prompt = "
+prompt = "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest"
 
 guidance_scale = 3.5  # Important to keep guidance_scale at 3.5
 n_steps = 28
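Assembled for reference, the usage fragments above correspond to a snippet along these lines. This is a sketch around the standard `FluxPipeline` API: the `torch_dtype`, device selection, and seed are assumptions, not lines from this diff.

```python
import torch
from diffusers import FluxPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Model id from this repo; bfloat16 is an assumed dtype choice.
pipe = FluxPipeline.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha", torch_dtype=torch.bfloat16
).to(device)

# Inference
prompt = "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest"
guidance_scale = 3.5  # Important to keep guidance_scale at 3.5
n_steps = 28
seed = 11  # illustrative seed

image = pipe(
    prompt=prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=n_steps,
    generator=torch.Generator(device="cpu").manual_seed(seed),
).images[0]
image.save("output.png")
```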
@@ -64,11 +76,20 @@ image.save("output.png")
 * `transformers/`: Contains the distilled 8B transformer model, in diffusers format.
 
 ## Try our Hugging Face demos:
-Flux.1 Lite demo host on [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite)
+Flux.1 Lite demo hosted on [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite)
 
 ## News🔥🔥🔥
 * Oct.18, 2024. The Alpha 8B checkpoint and comparison demo 🤗 (i.e. [Flux.1 Lite](https://huggingface.co/spaces/Freepik/flux.1-lite)) are publicly available on the [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).
 
+## Citation
+If you find our work helpful, please cite it!
+
+@article{flux1-lite,
+  title={Flux.1 Lite: Distilling Flux1.dev for Efficient Text-to-Image Generation},
+  author={Daniel Verdú and Javier Martín},
+  email={[email protected], [email protected]},
+  year={2024},
+}
sample_images/skip_blocks/generated_img.png ADDED (Git LFS)
sample_images/skip_blocks/mse_dit_img.png ADDED (Git LFS)
sample_images/skip_blocks/mse_mmdit_img.png ADDED (Git LFS)
sample_images/skip_blocks/skip_one_DIT_block.png ADDED (Git LFS)
sample_images/skip_blocks/skip_one_MMDIT_block.png ADDED (Git LFS)