Linaqruf committed
Commit 409f854
1 Parent(s): 9f0dafc

Update README.md

## How to Use

- Download the `hitokomoru-v2.ckpt` checkpoint [here](https://huggingface.co/Linaqruf/hitokomoru-diffusion-v2/resolve/main/hitokomoru-v2.ckpt), or download the safetensors version [here](https://huggingface.co/Linaqruf/hitokomoru-diffusion-v2/resolve/main/hitokomoru-v2.safetensors).
- This model is fine-tuned from [waifu-diffusion-v1-4-epoch-2](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e2.ckpt), which is itself fine-tuned from [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). To run this model in [`AUTOMATIC1111's Stable Diffusion WebUI`](https://github.com/AUTOMATIC1111/stable-diffusion-webui), you need to put the inference config `.yaml` file next to the model; you can find it [here](https://huggingface.co/Linaqruf/hitokomoru-diffusion-v2/resolve/main/hitokomoru-v2.yaml).
- You need to adjust your prompt using aesthetic tags. Based on the [official Waifu Diffusion 1.4 release notes](https://gist.github.com/harubaru/8581e780a1cf61352a739f2ec2eef09b#prompting), an ideal negative prompt to guide the model towards high-aesthetic generations would look like:
  ```
  worst quality, low quality, medium quality, deleted, lowres, comic, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry
  ```
- The following should also be prepended to prompts to get high-aesthetic results:
  ```
  masterpiece, best quality, high quality, absurdres
  ```

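For scripted use, the two tag lists above can be combined into final prompts before they are sent to any generation backend. This is a minimal sketch, not part of the model card; `build_prompts` is a hypothetical helper name:

```python
# Sketch (assumption, not from the model card): assemble the recommended
# aesthetic tags into a (prompt, negative_prompt) pair.

QUALITY_PREFIX = "masterpiece, best quality, high quality, absurdres"
NEGATIVE_PROMPT = (
    "worst quality, low quality, medium quality, deleted, lowres, comic, "
    "bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry"
)

def build_prompts(subject: str) -> tuple[str, str]:
    """Prepend the quality tags to the subject and pair it with the negative prompt."""
    return f"{QUALITY_PREFIX}, {subject}", NEGATIVE_PROMPT

prompt, negative = build_prompts("1girl, white hair, golden eyes, flower meadow")
print(prompt)
# → masterpiece, best quality, high quality, absurdres, 1girl, white hair, golden eyes, flower meadow
```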
## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
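As a hedged sketch, loading the checkpoint through `StableDiffusionPipeline` might look like the following. The repo id `Linaqruf/hitokomoru-diffusion-v2` hosting diffusers-format weights, and the availability of a CUDA GPU, are assumptions, not confirmed by this README:

```python
# Sketch (assumptions noted above): text-to-image generation with 🧨 diffusers.

def generate(prompt: str, negative_prompt: str = "", out_path: str = "out.png") -> None:
    # Imports are local so this sketch can be read (and the helper defined)
    # without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    # Assumed repo id; downloads the weights from the Hub on first use.
    pipe = StableDiffusionPipeline.from_pretrained(
        "Linaqruf/hitokomoru-diffusion-v2", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt, negative_prompt=negative_prompt).images[0]
    image.save(out_path)

# Example call, using the aesthetic tags recommended above:
# generate(
#     "masterpiece, best quality, high quality, absurdres, 1girl, white hair, golden eyes",
#     negative_prompt="worst quality, low quality, lowres, bad anatomy, bad hands",
# )
```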