Reimu Hakurei committed on
Commit 3802058
1 Parent(s): 60ddcb5

redo readme

Files changed (1):
  1. README.md (+5 -9)
README.md CHANGED
@@ -20,13 +20,11 @@ waifu-diffusion is a latent text-to-image diffusion model that has been conditio
 
 The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
 
-The current model is based from [Yasu Seno](https://twitter.com/naclbbr)'s [TrinArt Stable Diffusion](https://huggingface.co/naclbit/trinart_stable_diffusion) which has been fine-tuned on 30,000 high-resolution manga/anime-style images for 3.5 epochs.
-
-With [Textual Inversion](https://github.com/rinongal/textual_inversion), the embeddings for the text encoder has been trained to align more with anime-styled images, reducing excessive prompting.
+The current model has been fine-tuned with a learning rate of 5.0e-5 for 1 epoch on 56k Danbooru text-image pairs which all have an aesthetic rating greater than `6.0`.
 
 ## Training Data & Annotative Prompting
 
-The data used for Textual Inversion has come from a random sample of 25k Danbooru images, which were then filtered based on [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) where only images with an aesthetic score greater than `6.0` were used.
+The data used for fine-tuning has come from a random sample of 56k Danbooru images, which were filtered based on [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) where only images with an aesthetic score greater than `6.0` were used.
 
 Captions are Danbooru-style captions.
 
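For context, the CLIP aesthetic filter referenced in this hunk scores each image with a small regression head on top of CLIP image embeddings and keeps only those scoring above the threshold. A minimal sketch, assuming OpenAI's `clip` package and a trained head in the spirit of the linked improved-aesthetic-predictor repo; the single linear head and the `aesthetic_head.pt` checkpoint below are placeholders, not files from that repo:

```python
# Sketch: keep only images whose predicted aesthetic score exceeds 6.0.
# The real predictor lives in the linked improved-aesthetic-predictor
# repo; the Linear(768, 1) head and checkpoint name are stand-ins.
import clip  # https://github.com/openai/CLIP
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)  # 768-dim image embeddings

head = torch.nn.Linear(768, 1).to(device)  # placeholder aesthetic head
head.load_state_dict(torch.load("aesthetic_head.pt"))  # hypothetical checkpoint

def aesthetic_score(path: str) -> float:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        emb = model.encode_image(image).float()
        emb = emb / emb.norm(dim=-1, keepdim=True)  # predictor expects unit-norm embeddings
        return head(emb).item()

paths = ["danbooru/0001.jpg", "danbooru/0002.jpg"]  # example inputs
kept = [p for p in paths if aesthetic_score(p) > 6.0]
```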
@@ -45,10 +43,10 @@ model_id = "hakurei/waifu-diffusion"
 device = "cuda"
 
 
-pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=True)
+pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')
 pipe = pipe.to(device)
 
-prompt = "a photo of reimu hakurei. anime style"
+prompt = "touhou hakurei_reimu 1girl solo portrait"
 with autocast("cuda"):
     image = pipe(prompt, guidance_scale=7.5)["sample"][0]
 
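Assembled as a self-contained script, the post-change usage looks like the following. The three import lines are assumed (the usual ones for `StableDiffusionPipeline` and `autocast` in this era of diffusers); the remaining lines match the hunk above. The change swaps `use_auth_token=True` for half-precision weights via `revision='fp16'` and `torch_dtype=torch.float16`, and the prompt moves to Danbooru-style tags:

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "hakurei/waifu-diffusion"
device = "cuda"

# fp16 weights roughly halve memory use; this path requires a CUDA device.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')
pipe = pipe.to(device)

# Danbooru-style tags rather than natural language, matching the captions
# the model was fine-tuned on.
prompt = "touhou hakurei_reimu 1girl solo portrait"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]

image.save("reimu_hakurei.png")
```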
@@ -57,9 +55,7 @@ image.save("reimu_hakurei.png")
 
 ## Team Members and Acknowledgements
 
-This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/) and the author of the original finetuned model that this work was based upon, [Yasu Seno](https://twitter.com/naclbbr)!
-
-Additionally, the methods presented in the [Textual Inversion](https://github.com/rinongal/textual_inversion) repo was an incredible help.
+This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
 
 - [Anthony Mercurio](https://github.com/harubaru)
 - [Salt](https://github.com/sALTaccount/)
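The training code itself is not part of this commit. Purely as an illustration of what "a learning rate of 5.0e-5 for 1 epoch" means in practice, a standard latent-diffusion fine-tuning step with those hyperparameters could look like the sketch below; the base checkpoint choice and the tiny stand-in dataloader are assumptions, not the team's actual setup:

```python
# Illustrative only: a standard Stable Diffusion fine-tuning step using
# the hyperparameters stated in the diff (lr 5.0e-5, 1 epoch). The
# dataloader is a stand-in for the 56k Danbooru text-image pairs.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

base = "CompVis/stable-diffusion-v1-4"  # assumed starting checkpoint
vae = AutoencoderKL.from_pretrained(base, subfolder="vae").cuda()
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet").cuda()
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder").cuda()
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
noise_scheduler = DDPMScheduler(beta_start=0.00085, beta_end=0.012,
                                beta_schedule="scaled_linear",
                                num_train_timesteps=1000)

optimizer = torch.optim.AdamW(unet.parameters(), lr=5.0e-5)  # lr from the README
dataloader = [(torch.randn(1, 3, 512, 512), ["touhou hakurei_reimu 1girl solo"])]  # stand-in

for epoch in range(1):  # 1 epoch, per the README
    for images, captions in dataloader:
        with torch.no_grad():
            # Encode images into the VAE latent space (0.18215 is SD's scaling factor).
            latents = vae.encode(images.cuda()).latent_dist.sample() * 0.18215
            ids = tokenizer(captions, padding="max_length", truncation=True,
                            max_length=tokenizer.model_max_length,
                            return_tensors="pt").input_ids.cuda()
            text_emb = text_encoder(ids)[0]
        # Diffusion objective: predict the added noise at a random timestep.
        noise = torch.randn_like(latents)
        t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=latents.device)
        noisy = noise_scheduler.add_noise(latents, noise, t)
        pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
        loss = F.mse_loss(pred, noise)  # epsilon-prediction loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```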