hakurei committed
Commit d135a7f
1 Parent(s): a9e1781

update to 1.3

Files changed (1)
1. README.md +5 -5
README.md CHANGED
@@ -9,10 +9,14 @@ inference: false
 
 ---
 
-# waifu-diffusion - Diffusion for Weebs
+# waifu-diffusion v1.3 - Diffusion for Weebs
 
 waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
 
+<img src=https://i.imgur.com/Y5Tmw1S.png width=75% height=75%>
+
+[Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-3)
+
 # Gradio
 
 We also support a [Gradio](https://github.com/gradio-app/gradio) web ui with diffusers to run Waifu Diffusion:
@@ -20,10 +24,6 @@ We also support a [Gradio](https://github.com/gradio-app/gradio) web ui with dif
 
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE#scrollTo=1HaCauSq546O)
 
-<img src=https://cdn.discordapp.com/attachments/930559077170421800/1017265913231327283/unknown.png width=40% height=40%>
-
-[Original PyTorch Model Download Link](https://thisanimedoesnotexist.ai/downloads/wd-v1-2-full-ema.ckpt)
-
 ## Model Description
 
 The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
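Since the updated README points to running Waifu Diffusion through diffusers, a minimal inference sketch follows. It is not part of this commit: the `hakurei/waifu-diffusion` checkpoint name, the prompt, and the sampler settings are assumptions for illustration, and a CUDA-capable GPU is assumed.

```python
# Minimal sketch of text-to-image inference with diffusers (assumed usage, not
# taken from this commit). Assumes the `hakurei/waifu-diffusion` checkpoint on
# the Hugging Face Hub and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Stable Diffusion weights in half precision to reduce VRAM use.
pipe = StableDiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Illustrative anime-style prompt; guidance_scale and num_inference_steps are tunable.
prompt = "1girl, blue hair, looking at viewer, highly detailed"
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=50).images[0]
image.save("waifu.png")
```

The same pipeline is what the linked Gradio/Colab demo wraps in a web UI; running it directly as above is simply the script-level equivalent.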