Diffusers not pulling the fp16 model?
#5 by jsmidt
Using the example on the model page:
from diffusers import AuraFlowPipeline
import torch

# variant="fp16" should select the *.fp16.safetensors weights
pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
the pipeline downloaded the normal safetensors files and not the fp16.safetensors ones. This also happened when I updated to the latest diffusers from git, as the model page suggests.
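For anyone trying to reproduce this, a quick way to see which components actually ship an fp16 variant in the repo is huggingface_hub's list_repo_files (just a diagnostic sketch, assuming huggingface_hub is installed):

from huggingface_hub import list_repo_files

# Print every safetensors file in the repo so the fp16 variants
# (or their absence, e.g. for the transformer shards) are visible.
for f in list_repo_files("fal/AuraFlow-v0.3"):
    if f.endswith(".safetensors"):
        print(f)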
After downloading the non-fp16 transformer files, running the pipeline generated this warning:
A mixture of fp16 and non-fp16 filenames will be loaded.
Loaded fp16 filenames:
[vae/diffusion_pytorch_model.fp16.safetensors, text_encoder/model.fp16.safetensors]
Loaded non-fp16 filenames:
[transformer/diffusion_pytorch_model-00001-of-00003.safetensors, transformer/diffusion_pytorch_model-00002-of-00003.safetensors, transformer/diffusion_pytorch_model-00003-of-00003.safetensors]
If this behavior is not expected, please check your folder structure.
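In the meantime, dropping variant="fp16" and letting torch_dtype cast the weights at least loads everything in fp16, though it pulls the larger full-precision shards first:

from diffusers import AuraFlowPipeline
import torch

# Workaround: omit variant="fp16"; torch_dtype still casts the loaded
# weights to fp16, at the cost of downloading the full-precision files.
pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3",
    torch_dtype=torch.float16,
).to("cuda")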
Any ideas what the problem might be or how to fix this? Thanks.