ip-adapter-plus_sdxl_vit-h gives an error when used with any SDXL checkpoint

#6
by eniora - opened

Hello, I am using A1111 (latest, with the most recent ControlNet extension version).
I downloaded the ip-adapter-plus_sdxl_vit-h.bin file, but it doesn't appear in the ControlNet model list until I rename it to .pth, and when I do so, I get an error when I press Generate:

Error running process: C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\scripts.py", line 619, in process
script.process(p, *script_args)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 977, in process
self.controlnet_hack(p)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 966, in controlnet_hack
self.controlnet_main_entry(p)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 688, in controlnet_main_entry
model_net = Script.load_control_model(p, unet, unit.model)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 321, in load_control_model
model_net = Script.build_control_model(p, unet, model)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 350, in build_control_model
network = build_model_by_guess(state_dict, unet, model_path)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_model_guess.py", line 233, in build_model_by_guess
network = PlugableIPAdapter(state_dict, channel, plus)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlmodel_ipadapter.py", line 299, in init
self.ipadapter = IPAdapterModel(state_dict, clip_embeddings_dim=clip_embeddings_dim, is_plus=is_plus)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlmodel_ipadapter.py", line 191, in init
self.load_ip_adapter(state_dict)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlmodel_ipadapter.py", line 194, in load_ip_adapter
self.image_proj_model.load_state_dict(state_dict["image_proj"])
File "c:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Resampler:
size mismatch for latents: copying a param with shape torch.Size([1, 16, 1280]) from checkpoint, the shape in current model is torch.Size([1, 16, 2048]).
size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([2048, 1280]).
size mismatch for proj_in.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for proj_out.weight: copying a param with shape torch.Size([2048, 1280]) from checkpoint, the shape in current model is torch.Size([2048, 2048]).
size mismatch for layers.0.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.0.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.0.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.0.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.0.0.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([768, 2048]).
size mismatch for layers.0.0.to_kv.weight: copying a param with shape torch.Size([2560, 1280]) from checkpoint, the shape in current model is torch.Size([1536, 2048]).
size mismatch for layers.0.0.to_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([2048, 768]).
size mismatch for layers.0.1.0.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.0.1.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.0.1.1.weight: copying a param with shape torch.Size([5120, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 2048]).
size mismatch for layers.0.1.3.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([2048, 8192]).
size mismatch for layers.1.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.1.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.1.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.1.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.1.0.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([768, 2048]).
size mismatch for layers.1.0.to_kv.weight: copying a param with shape torch.Size([2560, 1280]) from checkpoint, the shape in current model is torch.Size([1536, 2048]).
size mismatch for layers.1.0.to_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([2048, 768]).
size mismatch for layers.1.1.0.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.1.1.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.1.1.1.weight: copying a param with shape torch.Size([5120, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 2048]).
size mismatch for layers.1.1.3.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([2048, 8192]).
size mismatch for layers.2.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.2.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.2.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.2.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.2.0.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([768, 2048]).
size mismatch for layers.2.0.to_kv.weight: copying a param with shape torch.Size([2560, 1280]) from checkpoint, the shape in current model is torch.Size([1536, 2048]).
size mismatch for layers.2.0.to_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([2048, 768]).
size mismatch for layers.2.1.0.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.2.1.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.2.1.1.weight: copying a param with shape torch.Size([5120, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 2048]).
size mismatch for layers.2.1.3.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([2048, 8192]).
size mismatch for layers.3.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.3.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.3.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.3.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.3.0.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([768, 2048]).
size mismatch for layers.3.0.to_kv.weight: copying a param with shape torch.Size([2560, 1280]) from checkpoint, the shape in current model is torch.Size([1536, 2048]).
size mismatch for layers.3.0.to_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([2048, 768]).
size mismatch for layers.3.1.0.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.3.1.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for layers.3.1.1.weight: copying a param with shape torch.Size([5120, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 2048]).
size mismatch for layers.3.1.3.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([2048, 8192]).

I tried using the SDXL VAE and also tried with the VAE set to Automatic, and I tried without the refiner and without hires fix, but I always get the same error.

Thanks in advance.

OK thank you!

are there any plans to support AUTO1111 in the future?

Hi, you can track this PR: https://github.com/Mikubill/sd-webui-controlnet/pull/2158

I have the same issue. Any solution?

Hi, you should follow https://huggingface.co/h94/IP-Adapter#ip-adapter-for-sdxl-10 and use the right image encoder model.
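
If you are unsure which image encoder a given adapter checkpoint expects, here is a minimal sketch for checking it (the file name is just the one from this thread, and the key layout is inferred from the tracebacks above):

# Minimal sketch: inspect an IP-Adapter checkpoint to see which CLIP image
# encoder its weights were trained with. The "plus" adapters store a Resampler
# state dict under "image_proj" (as the tracebacks in this thread show).
import torch

ckpt = torch.load("ip-adapter-plus_sdxl_vit-h.bin", map_location="cpu")

# proj_in.weight has shape [inner_dim, clip_hidden_dim]; the second dimension
# tells you which image encoder is expected: 1280 means OpenCLIP ViT-H/14
# (the *_vit-h variants), 1664 would mean OpenCLIP ViT-bigG/14.
clip_dim = ckpt["image_proj"]["proj_in.weight"].shape[1]
print(f"adapter expects a CLIP image encoder with hidden size {clip_dim}")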

It works fine now with the latest CN Web UI updates. Thanks h94 and everyone.

Hey eniora, how did you manage to make it work in the WebUI? The IP-Adapter model doesn't show up for me.

When I use IPAdapter Plus in ComfyUI, an error pops up. A LoRA trained at 512x512 works normally, but a LoRA trained at 1024x1024 reports this error!

Error occurred when executing IPAdapterAdvanced:

Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).

File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 679, in apply_ipadapter
return (ipadapter_execute(model.clone(), ipadapter_model, clip_vision, **ipa_args), )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 329, in ipadapter_execute
ipa = IPAdapter(
^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 69, in init
self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

I searched for the CLIP vision models in the ComfyUI Manager, downloaded all of them, and it works now.
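
For reference, a hedged sketch of how to confirm that the adapter and the CLIP vision encoder agree on dimensions before wiring them up (the h94/IP-Adapter subfolder paths below are assumptions based on that repo's layout, not something stated in this thread):

# Hedged sketch: cross-check an IP-Adapter checkpoint against a CLIP vision
# encoder config before using them together.
import torch
from transformers import CLIPVisionConfig

adapter = torch.load("ip-adapter-plus_sdxl_vit-h.bin", map_location="cpu")
adapter_dim = adapter["image_proj"]["proj_in.weight"].shape[1]  # 1280 for the vit-h variants

# Assumed layout of the h94/IP-Adapter repo: models/image_encoder is the ViT-H
# encoder (hidden_size 1280), sdxl_models/image_encoder is ViT-bigG (1664).
vision_cfg = CLIPVisionConfig.from_pretrained("h94/IP-Adapter", subfolder="models/image_encoder")

if vision_cfg.hidden_size == adapter_dim:
    print("Adapter and image encoder dimensions match.")
else:
    print(f"Mismatch: adapter expects {adapter_dim}, encoder outputs {vision_cfg.hidden_size}")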

