RuntimeError: Error(s) in loading state_dict for CustomTextCLIP: Unexpected key(s) in state_dict: "text.transformer.embeddings.position_ids".
When executing "model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224')", I ran into the following error.
File ".../open_clip/factory.py", line 104, in load_checkpoint
incompatible_keys = model.load_state_dict(state_dict, strict=strict)
RuntimeError: Error(s) in loading state_dict for CustomTextCLIP:
Unexpected key(s) in state_dict: "text.transformer.embeddings.position_ids".
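For reference, this is a minimal script that triggers the error (assuming open-clip-torch 2.20.0 and transformers 4.31.0, as listed below):

import open_clip

# Loading BiomedCLIP from the Hugging Face Hub; with transformers >= 4.31.0 this call
# fails in load_checkpoint with the unexpected "text.transformer.embeddings.position_ids" key.
model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(
    'hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224'
)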
Package versions:
open-clip-torch 2.20.0 pypi_0 pypi
pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
pytorch-cuda 11.8 h7e8668a_5 pytorch
pytorch-mutex 1.0 cuda pytorch
torchaudio 2.0.2 py311_cu118 pytorch
torchtriton 2.0.0 py311 pytorch
torchvision 0.15.2 py311_cu118 pytorch
transformers 4.31.0 pypi_0 pypi
Same here! I have the same versions as @jyx-su reported; downgrading to transformers 4.30.2 fixed the issue. I think it might be the "tied weights load" change in the changelog: https://github.com/huggingface/transformers/releases
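If your environment is pip-managed, the downgrade is a one-liner (pip install transformers==4.30.2). You can confirm which version is active with a quick check; 4.30.2 is reported to work here, while 4.31.0 triggers the error:

import transformers

# 4.31.0 produces the unexpected position_ids key; 4.30.2 does not
print(transformers.__version__)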
If you cannot downgrade to an older transformers version and need a quick fix, try this:
in site-packages/open_clip/factory.py,
patch the load_checkpoint function so it drops the unexpected key from the checkpoint before loading:
def load_checkpoint(model, checkpoint_path, strict=True):
    state_dict = load_state_dict(checkpoint_path)
    # detect old format and make compatible with new format
    if 'positional_embedding' in state_dict and not hasattr(model, 'positional_embedding'):
        state_dict = convert_to_custom_text_state_dict(state_dict)
    resize_pos_embed(state_dict, model)
    # <----- This line is new: drop the key that newer transformers versions no longer expect
    # (pop with a default avoids a KeyError for checkpoints that don't contain it)
    state_dict.pop("text.transformer.embeddings.position_ids", None)
    incompatible_keys = model.load_state_dict(state_dict, strict=strict)
    return incompatible_keys
Hope this PR fixes this issue!
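If you'd rather not edit files inside site-packages, roughly the same workaround can be applied at runtime by swapping out open_clip's load_checkpoint before creating the model. This is only a sketch against open-clip-torch 2.20.0, where load_state_dict, convert_to_custom_text_state_dict, and resize_pos_embed are all reachable via open_clip.factory; internals may differ in other versions:

import open_clip
from open_clip import factory

def patched_load_checkpoint(model, checkpoint_path, strict=True):
    state_dict = factory.load_state_dict(checkpoint_path)
    # detect old format and make compatible with new format (same as the stock function)
    if 'positional_embedding' in state_dict and not hasattr(model, 'positional_embedding'):
        state_dict = factory.convert_to_custom_text_state_dict(state_dict)
    factory.resize_pos_embed(state_dict, model)
    # drop the key that the model no longer expects; ignore it if absent
    state_dict.pop("text.transformer.embeddings.position_ids", None)
    return model.load_state_dict(state_dict, strict=strict)

# Redirect the module-level name so create_model_and_transforms picks up the patched version
factory.load_checkpoint = patched_load_checkpoint

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(
    'hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224'
)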
After modifying load_checkpoint as suggested above, I still got the same error. How can I solve it?
This was fixed. Please check out the latest ipynb: https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/blob/main/biomed_clip_example.ipynb