Difficulty training with DreamBooth

#3
by bryanbblewis11 - opened

I was previously using the HassanBlend1.4 model and training via DreamBooth using TheLastBen's DreamBooth Google Colab. This was working amazingly well! I've been trying to use the new HassanBlend1.5.1.2, taking the exact same steps with that Google Colab, but I'm noticing that all of the outputs from the trained .ckpt model, after DreamBooth completes, have a severe blurriness and out-of-focus quality to them. It definitely seems like something is "wrong" with the outputs.

I am curious whether there is any guidance for using the new HassanBlend1.5.1.2 with DreamBooth, or if this issue might be isolated to the Google Colab linked above. I tried giving that Colab the direct path to the HassanBlend1.5.1.2.ckpt file, and also tried providing hassanblend/HassanBlend1.5.1.2 as the Path_to_HuggingFace value; both exhibit this issue.

Not sure if you've tried DreamBooth with that Google Colab, but I'm curious whether you might know why I'm seeing this abnormal behavior?

There is an issue with the upload: no VAE is included with the model. You can fix the output by explicitly adding the VAE back in, as below.

import torch
import diffusers

# Load the standard Stability AI fine-tuned VAE to stand in for the missing one.
vae = diffusers.AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

# Build the pipeline, passing the external VAE in explicitly.
pipe = diffusers.StableDiffusionPipeline.from_pretrained(
    "hassanblend/HassanBlend1.5.1.2",
    torch_dtype=torch.float16,
    vae=vae,
)
pipe = pipe.to("cuda")

Thanks for the reply! Just for clarity, would I be adding this in the DreamBooth training step, or would I be adding this after DreamBooth training, when trying to generate an image from the trained model? I'm a little confused also, because I do see a VAE folder in the model's files; is that not the VAE I should expect?

Did you manage to solve this?

Using the Dreambooth space here on HuggingFace. Any insight on how to roll the VAE into the outputted .ckpt file? I'm ultimately using DiffusionBee, which doesn't let me pair a model with a VAE after the fact. When training a model on 1.5.1.2, I get the fuzzy/artifacted image issue, which I believe is due to the missing VAE.

@djn93 You can use my code above to add the VAE to the model, then use the line below to save it.

pipe.save_pretrained("model-with-vae", safe_serialization=True)

You can then use that saved model, which will have the new VAE baked-in.
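If you need a single .ckpt for DiffusionBee specifically, the diffusers GitHub repository ships a conversion script that packs a saved pipeline folder (VAE included) back into the original Stable Diffusion checkpoint format. A sketch, assuming you've cloned the diffusers repo and saved the pipeline to model-with-vae as above; the output filename is just an example:

```shell
# Convert the saved diffusers folder (with the VAE baked in) to a single .ckpt.
python scripts/convert_diffusers_to_original_stable_diffusion.py \
  --model_path model-with-vae \
  --checkpoint_path HassanBlend-trained-withVAE.ckpt
```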

There's already a version of the model here with the VAE baked in, uploaded 8 days ago: https://huggingface.co/hassanblend/HassanBlend1.5.1.2/blob/main/HassanBlend1.5.1.2-withVae.ckpt
