# 🍰 Hybrid-sd-tinyvae for Stable Diffusion
Hybrid-sd-tinyvae is a very tiny autoencoder that uses the same "latent API" as SD, finetuned from the excellent TAESD. It mainly fixes the low-saturation problem of the SD1.5 base model by strengthening the saturation and contrast of decoded images, delivering more clarity and colorfulness. The model is useful for real-time previewing of the SD1.5 generation process: its decoder is roughly 11x faster (16.38 ms, fp16, V100) than the SD1.5 decoder (186.6 ms, fp16, V100). You are very welcome to try it!
T2I comparison on a single A100 GPU. Image order, left to right: SD1.5 → TAESD → Hybrid-sd-tinyvae.
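If you want to check the decoder-latency claim on your own hardware, here is a minimal timing sketch (not the authors' benchmark script). The full-VAE checkpoint `stabilityai/sd-vae-ft-mse` is an assumed stand-in for the SD1.5 decoder, and the latent shape and iteration counts are assumptions; absolute numbers will vary by GPU.

```python
# Sketch: average decode time of a 512x512-image latent (1, 4, 64, 64) per decoder.
import torch
from diffusers.models import AutoencoderKL, AutoencoderTiny

device, dtype = "cuda", torch.float16
tiny = AutoencoderTiny.from_pretrained("cqyan/hybrid-sd-tinyvae", torch_dtype=dtype).to(device)
# Assumption: any SD1.x-compatible AutoencoderKL works as the full-decoder baseline
full = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=dtype).to(device)

latents = torch.randn(1, 4, 64, 64, device=device, dtype=dtype)

@torch.no_grad()
def decode_ms(vae, n=50):
    for _ in range(5):  # warm-up so CUDA initialization doesn't pollute the timing
        vae.decode(latents)
    start, end = (torch.cuda.Event(enable_timing=True) for _ in range(2))
    torch.cuda.synchronize()
    start.record()
    for _ in range(n):
        vae.decode(latents)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / n  # average ms per decode

print(f"tiny decoder: {decode_ms(tiny):.2f} ms, full decoder: {decode_ms(full):.2f} ms")
```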
This repo contains `.safetensors` versions of the Hybrid-sd-tinyvae weights.
For SDXL, use Hybrid-sd-tinyvae-xl instead (the SD and SDXL VAEs are incompatible).
## Using in 🧨 diffusers
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.models import AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
# Swap in the tiny VAE; SD1.x and SD2.x share the same latent space,
# so the same decoder works for both.
pipe.vae = AutoencoderTiny.from_pretrained(
    "cqyan/hybrid-sd-tinyvae", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "A warm and loving family portrait, highly detailed, hyper-realistic, 8k resolution, photorealistic, soft and natural lighting"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("family.png")
```
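Because the decoder is so fast, it also suits the real-time previewing mentioned above: decode the intermediate latents at every denoising step. Below is a minimal sketch continuing from the pipeline above, assuming a diffusers version that supports the `callback_on_step_end` hook; the preview file naming is illustrative, and the `scaling_factor` division is a no-op here since `AutoencoderTiny` uses a scaling factor of 1.0.

```python
# Decode a preview image from the intermediate latents at each denoising step.
def preview_callback(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    with torch.no_grad():
        decoded = pipeline.vae.decode(
            latents / pipeline.vae.config.scaling_factor
        ).sample
    preview = pipeline.image_processor.postprocess(decoded)[0]
    preview.save(f"preview_step_{step:03d}.png")  # illustrative output path
    return callback_kwargs

image = pipe(
    prompt, num_inference_steps=25, callback_on_step_end=preview_callback
).images[0]
```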