['esm.contact_head.regression.weight', 'esm.contact_head.regression.bias'] of EsmForProteinFolding were not initialized from the model checkpoint
Hi,
When I download the EsmFold model to my local environment and run the following code:
"
from transformers import AutoTokenizer, EsmForProteinFolding
import torch

model_path = "/home/ubuntu/Desktop/github/esm/Untitled_Folder/esmfold_v1/"
model = EsmForProteinFolding.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt", add_special_tokens=False)  # a tiny random peptide
with torch.no_grad():  # inference only
    outputs = model(**inputs)
folded_positions = outputs.positions  # predicted 3D atom coordinates
print(folded_positions)
"
I consistently encounter the warning:
"Some weights of EsmForProteinFolding were not initialized from the model checkpoint at /home/ubuntu/Desktop/github/esm/Untitled_Folder/esmfold_v1/ and are newly initialized: ['esm.contact_head.regression.weight', 'esm.contact_head.regression.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference."
However, the 'contact_head' parameters are not present in the downloaded checkpoint file itself (see the quick check below). How can I address this warning? And if I go on to further train EsmFold, will the 'contact_head' parameters participate in the training process?
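For reference, here is a minimal sketch of the check I ran, reusing the model_path and model from the snippet above (it assumes the local folder holds a single pytorch_model.bin; a safetensors or sharded checkpoint would have to be read differently):
"
import os
import torch

# Load the raw state dict from the local checkpoint file and list any keys
# that belong to the contact head -- in my case the list comes back empty.
state_dict = torch.load(os.path.join(model_path, "pytorch_model.bin"), map_location="cpu")
print([k for k in state_dict if "contact_head" in k])

# The freshly initialized contact head does exist on the loaded model;
# this prints its parameters and whether they require gradients.
for name, param in model.named_parameters():
    if "contact_head" in name:
        print(name, param.requires_grad)
"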
Looking forward to your response!