
Brief introduction:

Also on CivitAI

This may be the best-quality fast-generation (6-10 steps) Flux fine-tuned model currently available: it follows the original Flux.1 Dev style, has strong prompt adherence, and in some details surpasses the Flux.1 Dev model, coming closest to Flux.1 Pro among base models.

Based on Flux-Fusion-V2, merged with flux-dev-de-distill, and fine-tuned using ComfyUI, Block_Patcher_ComfyUI, ComfyUI_essentials, and other tools. Recommended 6-10 steps. Quality is greatly improved compared to other Flux.1 models.

The GGUF Q8_0 quantized model file has been tested for several days with no problems. Its image quality is the same as the fp8 format and, in some details, slightly better. How to load GGUF model files and how to convert models are described below. In addition, at users' request, a GGUF Q4_1 model has been converted and uploaded; a comparison with Q8_0 is shown below:

Preliminary tests show that the Q4_1 model performs well on larger subjects such as people, but loses noticeably more detail when fine detail is required. So if your hardware allows, choose the fp8/Q8_0 model. The Q4_1 model has been uploaded after this preliminary testing; the NF4 format will not be provided, as it is no longer supported.

Over-quantization throws away the advantages of this high-precision model, so no quantized versions other than Q4_1 will be provided. If you need one, you can download the fp8 model and quantize it yourself following the tips below.
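Why a 4-bit quantization loses more detail than 8-bit can be illustrated with a small sketch. This is a toy demonstration on random weights using GGML-style per-block schemes (Q8_0: 32-value blocks with a scale; Q4_1: 32-value blocks with a scale and a minimum); it is not the converter used for this model, just an illustration of the precision gap.

```python
import numpy as np

BLOCK = 32  # ggml quantizes weights in blocks of 32 values

def q8_0_roundtrip(w):
    """Quantize to int8 with a per-block scale, then dequantize."""
    blocks = w.reshape(-1, BLOCK)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0
    q = np.clip(np.round(blocks / scale), -127, 127)
    return (q * scale).reshape(w.shape)

def q4_1_roundtrip(w):
    """Quantize to 4 bits with a per-block scale and minimum, then dequantize."""
    blocks = w.reshape(-1, BLOCK)
    lo = blocks.min(axis=1, keepdims=True)
    scale = (blocks.max(axis=1, keepdims=True) - lo) / 15.0
    scale[scale == 0] = 1.0
    q = np.clip(np.round((blocks - lo) / scale), 0, 15)
    return (q * scale + lo).reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(4096,)).astype(np.float32)  # toy weight tensor

err8 = np.sqrt(np.mean((w - q8_0_roundtrip(w)) ** 2))
err4 = np.sqrt(np.mean((w - q4_1_roundtrip(w)) ** 2))
print(f"RMS error Q8_0: {err8:.6f}  Q4_1: {err4:.6f}")
```

On typical weight distributions the Q4_1 round-trip error is an order of magnitude larger than Q8_0, which matches the observed loss of fine detail.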

Recommendations:

UNET versions (model only) need text encoders and a VAE. I recommend the CLIP and text encoder models below for better prompt guidance:

Sample workflow: a very simple workflow is shown below; it needs no extra ComfyUI custom nodes (for the GGUF version, use city96's UNET Loader (GGUF) node):

Thanks to:

https://huggingface.co/Anibaaal, Flux-Fusion is a very good mixed and tuned model.

https://huggingface.co/nyanko7, Flux-dev-de-distill is a great experimental project! Thanks for the inference.py script.

https://huggingface.co/MonsterMMORPG, Furkan shares a lot of Flux.1 model testing and tuning tutorials, including some special tests of the de-distill model.

https://github.com/cubiq/Block_Patcher_ComfyUI, cubiq's Flux blocks patcher sampler let me run many tests to learn how Flux.1 block parameter values change image generation. His ComfyUI_essentials has a FluxBlocksBuster node that makes adjusting block values easy. Great work!

https://huggingface.co/twodgirl, shared the model quantization script and the test dataset.

https://huggingface.co/John6666, shared the model conversion script and the model collections.

https://github.com/city96/ComfyUI-GGUF, native support for GGUF quantized models.

https://github.com/leejet/stable-diffusion.cpp, provides pure C/C++ GGUF model conversion scripts.

Note: to convert to GGUF Q5/Q4 easily, you can use the https://github.com/ruSauron/to-gguf-bat script. Download it, put it in the same directory as the sd.exe file, then drag my fp8 .safetensors model file onto the .bat file in Explorer. A CMD window will pop up; follow the menu to convert to the format you want.

LICENSE

The weights fall under the FLUX.1 [dev] Non-Commercial License.
