
1-Step UNet quality?!

#4
by froilo - opened

Is this the intended result quality?
I am using the sample code from the card.
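
For reference, the card's 1-step UNet example looks roughly like this (a sketch; the exact checkpoint filename and arguments on the current card may differ):

```python
# Sketch of 1-step UNet inference with diffusers; the checkpoint name is assumed.
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_1step_unet_x0.safetensors"  # assumed filename

# Load the distilled weights into a fresh SDXL UNet.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))

pipe = StableDiffusionXLPipeline.from_pretrained(
    base, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# The 1-step checkpoint predicts x0, so the scheduler needs "sample" prediction
# and trailing timestep spacing.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing", prediction_type="sample"
)

pipe("A girl smiling", num_inference_steps=1, guidance_scale=0).images[0].save("output.png")
```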

took12.4s_0221182938.753202_6_cfg0_iter1.png
took14.6s_0221182925.971125_5_cfg0_iter1.png

ByteDance org
edited Feb 22

As said in the doc, one step is experimental. It has a higher probability of generating bad shapes. Some seeds and prompts perform better than others, but you should stick to 2+ steps for stable results. Also, use the UNet checkpoint instead of the LoRA for the best results, unless you are applying it to a different base model. :)
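
For example, the LoRA route only makes sense on top of a different SDXL base; a rough diffusers sketch, where the base model name and the 4-step LoRA filename are assumptions:

```python
# Sketch: applying an SDXL-Lightning LoRA to a custom SDXL base model.
# The base model and LoRA filename below are placeholders; adjust to what you use.
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "some/custom-sdxl-base"  # hypothetical fine-tuned SDXL base
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_4step_lora.safetensors"  # 2+ steps for stable results

pipe = StableDiffusionXLPipeline.from_pretrained(base, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights(hf_hub_download(repo, ckpt))
pipe.fuse_lora()

# The 2+ step checkpoints use epsilon prediction, so only trailing spacing is needed.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

pipe("A girl smiling", num_inference_steps=4, guidance_scale=0).images[0].save("output.png")
```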

Prompt outputs failed validation
UNETLoader:

  • Value not in list: unet_name: 'sdxl_lightning_1step_unet_x0.pth' not in ['sdxl_lightning_1step_unet_x0.safetensors', 'sdxl_lightning_8step_unet.safetensors']

ModelSamplingDiscrete:

  • Value not in list: sampling: 'x0' not in ['eps', 'v_prediction', 'lcm']
ByteDance org
edited Feb 22

@caradepato

  1. The workflow was updated. Re-download it.
  2. You didn't download the correct checkpoint (a download sketch follows this list).
  3. You are not using the latest ComfyUI.
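
For point 2, something like this fetches the 1-step checkpoint; the filename is the one the validation error above lists as valid, and ComfyUI/models/unet is where the UNETLoader node looks by default:

```python
# Sketch: downloading the 1-step UNet checkpoint with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    "ByteDance/SDXL-Lightning",
    "sdxl_lightning_1step_unet_x0.safetensors",
    local_dir="ComfyUI/models/unet",  # adjust to your ComfyUI install path
)
print(path)
```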

@PeterL1n I use the UNet and the model for 1 step, the workflow for 1 step, and eps

image.png

There was an update to ComfyUI today that adds "x0", which is needed to avoid that terrible random-noise output.
I haven't checked whether the uploader updated the workflow, but here is mine, embedded in this image:
ComfyUI_00069_.png

And this is what mine looks like (the image below doesn't include the workflow):

SDXL-Lightning-1-step-layout.png

Also, be sure you are using an SDXL VAE.

Try
https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors

Or this FP16 version:
https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl.vae.safetensors
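
In ComfyUI you just drop that file into models/vae and select it in the VAE loader node; for diffusers users, a minimal sketch of swapping in the fp16-fix VAE:

```python
# Sketch: using the fp16-fix SDXL VAE with a diffusers SDXL pipeline.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
```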

froilo changed discussion status to closed

Yep, it works now.

image.png

Try a face.
