---
license: cc-by-nc-sa-4.0
---
Pre-trained models and output samples of ControlNet-LLLite from bdsqlsz.

Inference with ComfyUI: https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI

For AUTOMATIC1111's Web UI, the [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) extension supports ControlNet-LLLite.

Training: https://github.com/kohya-ss/sd-scripts/blob/sdxl/docs/train_lllite_README.md

The recommended preprocessor for the animeface model is [Anime-Face-Segmentation](https://github.com/siyeong0/Anime-Face-Segmentation).
# Models
## Trained on an anime base model
AnimeFaceSegment, Normal, T2i-Color/Shuffle, Lineart_Anime_Denoise, Recolor_Luminance

Base model: [Kohaku-XL](https://civitai.com/models/136389?modelVersionId=150441)

MLSD

Base model: [ProtoVision XL - High Fidelity 3D](https://civitai.com/models/125703?modelVersionId=144229)
# Samples
## AnimeFaceSegmentV1
![source 1](./sample/00000-1254802172.png) ![sample 1-1](./sample/00153-1415397694.png)
![sample 1-2](./sample/00155-541628598.png) ![sample 1-3](./sample/00156-3563138011.png)
![source 2](./sample/00013-1254802185.png) ![sample 2-1](./sample/00157-172216875.png)
![sample 2-2](./sample/00161-125697048.png) ![sample 2-3](./sample/00163-3802019239.png)
## AnimeFaceSegmentV2
![source 1](./sample/00015-882327104.png)
![sample 1](./sample/grid-0000-656896882.png)
![source 2](./sample/00081-882327170.png)
![sample 2](./sample/grid-0000-2857388239.png)
## MLSDV2
![source 1](./sample/0-73.png)
![preprocess 1](./sample/mlsd-0000.png)
![sample 1](./sample/grid-0001-496872924.png)
![source 2](./sample/0-151.png)
![preprocess 2](./sample/mlsd-0001.png)
![sample 2](./sample/grid-0002-906633402.png)
## Normal
![source 1](./sample/test.png)
![preprocess 1](./sample/normal_bae-0004.png)
![sample 1](./sample/grid-0007-2668683255.png)
![source 2](./sample/zelda_rgba.png)
![preprocess 2](./sample/normal_bae-0005.png)
![sample 2](./sample/grid-0008-2191923130.png)
## T2i-Color/Shuffle
![source 1](./sample/sample_0_525_c9a3a20fa609fe4bbf04.png)
![preprocess 1](./sample/color-0008.png)
![sample 1](./sample/grid-0017-751452001.jpg)
![source 2](./sample/F8LQ75WXoAETQg3.jpg)
![preprocess 2](./sample/color-0009.png)
![sample 2](./sample/grid-0018-2976518185.jpg)
## Lineart_Anime_Denoise
![source 1](./sample/20230826131545.png)
![preprocess 1](./sample/lineart_anime_denoise-1308.png)
![sample 1](./sample/grid-0028-1461058306.png)
![source 2](./sample/Snipaste_2023-08-10_23-33-53.png)
![preprocess 2](./sample/lineart_anime_denoise-1309.png)
![sample 2](./sample/grid-0030-1612754720.png)
## Recolor_Luminance
![source 1](./sample/F8LQ75WXoAETQg3.jpg)
![preprocess 1](./sample/recolor_luminance-0014.png)
![sample 1](./sample/grid-0060-2359545755.png)
![source 2](./sample/Snipaste_2023-08-15_02-38-05.png)
![preprocess 2](./sample/recolor_luminance-0016.png)
![sample 2](./sample/grid-0061-448628292.png)
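The recolor model works from a grayscale luminance map of the source image, which it then recolors. A minimal sketch of that conversion, assuming the common Rec.601 luma weights — the actual recolor_luminance preprocessor may use a different formula:

```python
def luminance(r: int, g: int, b: int) -> int:
    # Rec.601 luma weights (an assumption; the real preprocessor may differ).
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def to_luminance_map(pixels):
    # pixels: iterable of (r, g, b) tuples -> flat list of 0-255 luminance values.
    return [luminance(r, g, b) for (r, g, b) in pixels]

print(to_luminance_map([(255, 0, 0), (255, 255, 255)]))  # pure red -> 76, white -> 255
```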
## Canny
![source 1](./sample/Snipaste_2023-08-10_23-33-53.png)
![preprocess 1](./sample/canny-0034.png)
![sample 1](./sample/grid-0100-2599077425.png)
![source 2](./sample/00021-210474367.jpeg)
![preprocess 2](./sample/canny-0021.png)
![sample 2](./sample/grid-0084-938772089.png)
## DW_OpenPose
![preprocess 1](./sample/dw_openpose_full-0015.png)
![sample 1](./sample/grid-0015-4163265662.png)
![preprocess 2](./sample/dw_openpose_full-0030.png)
![sample 2](./sample/grid-0030-2839828192.png)
## Tile_Anime
![source 1](./sample/03476-424776255.png)
![sample 1](./sample/grid-0008-3461355229.png)
![sample 2](./sample/grid-0016-1162724588.png)
![sample 3](./sample/00094-188618111.png)
Unlike the other models, the tile model needs a brief explanation. It has three main uses:
1. With no prompt at all, it reproduces the approximate look of the reference image while slightly regenerating local details; this can be used for V2V (Figure 2).
2. At a weight of 0.55–0.75, it keeps the original composition and pose while accepting changes from prompts and LoRAs (Figure 3).
3. Combined with upscaling, it adds detail to each tile while keeping the tiles consistent with each other (Figure 4).

Because the training dataset was generated with anime models, repainting photorealistic images does not work well yet; this will be addressed in the final version.
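For the second use case (weight 0.55–0.75 with a prompt), the sd-webui-controlnet extension can be driven through the Web UI's txt2img API. The sketch below only builds the request payload; the model filename and prompt are placeholders, and the exact field names depend on the extension version:

```python
import base64
import json

def build_tile_payload(image_b64: str, prompt: str, weight: float = 0.6) -> dict:
    # Use case 2: a weight in 0.55-0.75 keeps composition/pose while
    # letting the prompt and LoRAs restyle the image.
    assert 0.55 <= weight <= 0.75, "recommended range for prompt-guided tile use"
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": image_b64,
                    "module": "none",  # the tile model takes the raw image, no preprocessor
                    # Placeholder filename, not necessarily the exact name in this repo:
                    "model": "bdsqlsz_controlllite_xl_tile_anime",
                    "weight": weight,
                }]
            }
        },
    }

img = base64.b64encode(b"...png bytes...").decode()
payload = build_tile_payload(img, "1girl, watercolor style")
print(json.dumps(payload)[:60])
```

For the first use case (plain V2V restoration), the same payload works with an empty prompt and the default weight of 1.0 (drop the range assertion).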