lavaman131 committed on
Commit
6315993
1 Parent(s): 488f8f5

Update README.md

Files changed (1)
  1. README.md +13 -8
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
  - stable-diffusion-diffusers
  base_model: stabilityai/stable-diffusion-2-1-base
  inference: true
- instance_prompt: disney
+ instance_prompt: disney style
  ---

  <!-- This model card has been generated automatically according to the information the training script had access to. You
@@ -18,12 +18,11 @@ should probably proofread and complete it, then remove this comment. -->

  # DreamBooth - lavaman131/cartoonify

- This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on disney using [DreamBooth](https://dreambooth.github.io/).
- You can find some example images in the following.
+ This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5 with fine-tuning of the text encoder. The weights were trained on images from a popular animation studio using [DreamBooth](https://dreambooth.github.io/). Use the tokens **_disney style_** in your prompts for the effect.

- DreamBooth for the text encoder was enabled: True.
+ You can find some example images below:
+
+ ![](./images/king.png)

  ## Intended uses & limitations
@@ -31,13 +30,19 @@ DreamBooth for the text encoder was enabled: True.
  #### How to use

  ```python
- # TODO: add an example code snippet for running this diffusion pipeline
+ import torch
+ from diffusers import StableDiffusionPipeline
+
+ # basic usage
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ repo_id = "lavaman131/cartoonify"
+ torch_dtype = torch.float16 if device.type in ["mps", "cuda"] else torch.float32
+ pipeline = StableDiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch_dtype).to(device)
+ image = pipeline("PROMPT GOES HERE").images[0]
  ```

  #### Limitations and bias

- [TODO: provide examples of latent issues and potential remediations]
+ As with any diffusion model, experimenting with the prompt and the classifier-free guidance scale is usually required to get the results you want. For additional safety in image generation, we use the Stable Diffusion safety checker.

  ## Training details

- [TODO: describe the data used to train the model]
+ The model was fine-tuned for 3,500 steps on an RTX A5000 GPU (24 GB VRAM), using around 200 images of modern Disney characters, backgrounds, and animals (70%, 20%, and 10% of the dataset, respectively).
+
+ The training code used can be found [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py). The regularization images used for training can be found [here](https://github.com/aitrepreneur/SD-Regularization-Images-Style-Dreambooth/tree/main/style_ddim).
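As an aside, the device-dependent dtype choice used in the snippet added by this commit can be sketched as a small standalone helper (the helper name and device-selection line are ours, not part of the commit):

```python
import torch

def pick_dtype(device: torch.device) -> torch.dtype:
    # Half precision on CUDA/MPS accelerators, full precision on CPU,
    # mirroring the dtype selection in the README snippet.
    return torch.float16 if device.type in ("cuda", "mps") else torch.float32

# Choose an available device, then derive the matching dtype.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = pick_dtype(device)
```

The rationale is that float16 roughly halves memory use and speeds up inference on GPUs, while CPU inference generally needs float32 since half-precision ops are slow or unsupported there.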