Issue: is the VAE model correct?
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
The following error occurs:
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
File "/opt/conda/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 953, in from_pretrained
loaded_sub_model = load_sub_model(
File "/opt/conda/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 394, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "/opt/conda/lib/python3.9/site-packages/diffusers/models/modeling_utils.py", line 426, in from_pretrained
allow_pickle = True
ValueError: Cannot load <class 'diffusers.models.autoencoder_kl.AutoencoderKL'> from cerspense/zeroscope_v2_576w/vae because the following keys are missing:
encoder.mid_block.attentions.0.proj_attn.weight, decoder.mid_block.attentions.0.value.bias, decoder.mid_block.attentions.0.key.bias, decoder.mid_block.attentions.0.query.bias, encoder.mid_block.attentions.0.value.weight, encoder.mid_block.attentions.0.query.weight, decoder.mid_block.attentions.0.proj_attn.bias, decoder.mid_block.attentions.0.proj_attn.weight, decoder.mid_block.attentions.0.query.weight, decoder.mid_block.attentions.0.key.weight, encoder.mid_block.attentions.0.key.bias, encoder.mid_block.attentions.0.value.bias, encoder.mid_block.attentions.0.key.weight, decoder.mid_block.attentions.0.value.weight, encoder.mid_block.attentions.0.proj_attn.bias, encoder.mid_block.attentions.0.query.bias.
Please make sure to pass low_cpu_mem_usage=False and device_map=None if you want to randomly initialize those weights or else make sure your checkpoint file is correct.
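Missing attention keys like these (proj_attn, query, key, value) often come from a stale or partially downloaded checkpoint in the local Hugging Face cache, or from a diffusers version mismatch. A first step worth trying is deleting the cached snapshot so that from_pretrained fetches a fresh copy. Below is a minimal sketch, assuming the default huggingface_hub cache layout (folders named models--{org}--{repo} under ~/.cache/huggingface/hub); the helper name is my own, not a library API:

```python
import shutil
from pathlib import Path
from typing import Optional

def purge_model_cache(model_id: str, cache_dir: Optional[str] = None) -> bool:
    """Delete the locally cached copy of `model_id` so the next
    `from_pretrained` call re-downloads it. Returns True if a cached
    folder was found and removed, False otherwise."""
    cache_root = Path(cache_dir or Path.home() / ".cache" / "huggingface" / "hub")
    # Hub cache folders follow the naming scheme models--{org}--{repo}
    folder = cache_root / ("models--" + model_id.replace("/", "--"))
    if folder.is_dir():
        shutil.rmtree(folder)
        return True
    return False
```

After clearing the cache, calling DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", ...) again will re-download the weights. Passing force_download=True to from_pretrained should achieve the same without deleting the folder manually.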
I hit the same issue; the size of the downloaded model does not seem to match the one posted online.
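One way to confirm a size mismatch is to list the file sizes in the local cache and compare them by hand against the "Files and versions" page on the Hub. A small sketch, assuming the default cache location (the helper is hypothetical, not part of diffusers or huggingface_hub):

```python
from pathlib import Path

def local_file_sizes(root) -> dict:
    """Map each file under `root` (as a relative path string) to its
    size in bytes, recursing into subdirectories."""
    root = Path(root)
    sizes = {}
    for path in root.rglob("*"):
        if path.is_file():
            sizes[str(path.relative_to(root))] = path.stat().st_size
    return sizes

# Example: inspect the cached zeroscope folder, if it exists locally
cache = Path.home() / ".cache/huggingface/hub/models--cerspense--zeroscope_v2_576w"
if cache.exists():
    for name, size in sorted(local_file_sizes(cache).items()):
        print(f"{size:>14,}  {name}")
```

A truncated VAE safetensors/bin file (smaller than the size shown on the Hub) would explain the missing keys and is fixed by re-downloading.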
Got the same problem, has anyone solved it?