Model parameters incomplete
Is this 1.3B Erlangshen embedding checkpoint incomplete? I get a lot of missing keys when loading it.
Code:
from transformers import BertTokenizer, BertForMaskedLM

MODEL_NAME_OR_PATH = "/home/inspur/nas_data/pretrain/Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese"
# inside ErLangShenEmbedding.__init__:
self.tokenizer = BertTokenizer.from_pretrained(MODEL_NAME_OR_PATH)
self.model = BertForMaskedLM.from_pretrained(MODEL_NAME_OR_PATH)
Error:
missing_keys ['bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.17.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.19.attention.output.LayerNorm.weight', 'bert.encoder.layer.18.attention.output.LayerNorm.weight', 'bert.encoder.layer.20.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.18.attention.output.LayerNorm.bias', 'bert.encoder.layer.14.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.embeddings.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.encoder.layer.13.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.14.attention.output.LayerNorm.weight', 'bert.encoder.layer.15.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.13.output.LayerNorm.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.22.output.LayerNorm.weight', 'bert.encoder.layer.20.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.19.output.LayerNorm.weight', 'bert.encoder.layer.15.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.19.output.LayerNorm.bias', 'bert.encoder.layer.21.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.weight', 'cls.predictions.bias', 'bert.embeddings.position_ids', 'bert.encoder.layer.16.attention.output.LayerNorm.weight', 'bert.encoder.layer.20.output.LayerNorm.weight', 'bert.encoder.layer.22.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.22.attention.output.LayerNorm.weight', 'bert.encoder.layer.21.output.LayerNorm.bias', 'bert.encoder.layer.21.attention.output.LayerNorm.weight', 'bert.encoder.layer.14.attention.output.LayerNorm.bias', 'bert.encoder.layer.17.output.LayerNorm.bias', 'bert.encoder.layer.15.output.LayerNorm.bias', 'bert.encoder.layer.22.output.LayerNorm.bias', 'bert.encoder.layer.23.output.LayerNorm.weight', 'bert.encoder.layer.15.attention.output.LayerNorm.bias', 'bert.encoder.layer.16.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.16.attention.output.LayerNorm.bias', 'bert.encoder.layer.20.output.LayerNorm.bias', 'bert.encoder.layer.12.attention.output.LayerNorm.bias', 'bert.encoder.layer.18.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.17.attention.output.LayerNorm.weight', 'bert.encoder.layer.23.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.23.attention.output.LayerNorm.bias', 
'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.21.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.18.output.LayerNorm.weight', 'bert.encoder.layer.14.output.LayerNorm.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.16.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.23.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.12.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.12.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.19.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.13.output.LayerNorm.weight', 'bert.encoder.layer.17.attention.output.LayerNorm.bias', 'bert.encoder.layer.13.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.12.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.6.output.LayerNorm.weight']
Traceback (most recent call last):
File "/home/inspur/test_llms/m3e/erlangshen_embedding.py", line 83, in
sentence_trans = ErLangShenEmbedding()
File "/home/inspur/test_llms/m3e/erlangshen_embedding.py", line 12, in init
self.model = BertForMaskedLM.from_pretrained(MODEL_NAME_OR_PATH) #.cuda().eval()
File "/home/inspur/anaconda3/envs/rlhf_py39/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2777, in from_pretrained
) = cls._load_pretrained_model(
File "/home/inspur/anaconda3/envs/rlhf_py39/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2931, in _load_pretrained_model
group.remove(k)
AttributeError: 'str' object has no attribute 'remove'
Please use MegatronBertForMaskedLM or AutoModelForMaskedLM; we will fix the example as soon as possible.
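A minimal sketch of the suggested loading path (the local path is reused from the original post; MegatronBertForMaskedLM is the class that matches this checkpoint's parameter naming):

from transformers import BertTokenizer, MegatronBertForMaskedLM

MODEL_NAME_OR_PATH = "/home/inspur/nas_data/pretrain/Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese"
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME_OR_PATH)
# loads the MegatronBert-style checkpoint with its native layer naming
model = MegatronBertForMaskedLM.from_pretrained(MODEL_NAME_OR_PATH)

Note that AutoModelForMaskedLM dispatches on the model_type field in config.json, so it only helps if that field says megatron-bert; if it still says bert, it will build BertForMaskedLM and reproduce the same missing keys.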
Using MegatronBertForMaskedLM: missing_keys ['cls.predictions.bias', 'bert.embeddings.position_ids']
Using AutoModelForMaskedLM: missing_keys ['bert.encoder.layer.13.output.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.16.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.23.output.LayerNorm.bias', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.23.attention.output.LayerNorm.bias', 'bert.encoder.layer.14.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'cls.predictions.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.22.output.LayerNorm.weight', 'bert.encoder.layer.16.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.17.output.LayerNorm.weight', 'bert.encoder.layer.14.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.18.output.LayerNorm.weight', 'bert.encoder.layer.13.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.15.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.21.output.LayerNorm.weight', 'bert.encoder.layer.22.attention.output.LayerNorm.bias', 'bert.encoder.layer.22.attention.output.LayerNorm.weight', 'bert.encoder.layer.17.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.12.attention.output.LayerNorm.weight', 'bert.encoder.layer.23.attention.output.LayerNorm.weight', 'bert.encoder.layer.19.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.16.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.19.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.21.output.LayerNorm.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.21.attention.output.LayerNorm.weight', 'bert.encoder.layer.12.attention.output.LayerNorm.bias', 'bert.encoder.layer.18.attention.output.LayerNorm.bias', 'bert.encoder.layer.20.output.LayerNorm.bias', 'bert.encoder.layer.14.output.LayerNorm.bias', 'bert.encoder.layer.12.output.LayerNorm.weight', 'bert.encoder.layer.23.output.LayerNorm.weight', 'bert.encoder.layer.16.attention.output.LayerNorm.weight', 'bert.encoder.layer.12.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.20.output.LayerNorm.weight', 'bert.embeddings.position_ids', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.14.attention.output.LayerNorm.bias', 'bert.embeddings.LayerNorm.bias', 
'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.20.attention.output.LayerNorm.weight', 'bert.encoder.layer.15.attention.output.LayerNorm.bias', 'bert.encoder.layer.17.attention.output.LayerNorm.bias', 'bert.encoder.layer.21.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.17.output.LayerNorm.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.19.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.22.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.encoder.layer.20.attention.output.LayerNorm.bias', 'bert.encoder.layer.19.attention.output.LayerNorm.bias', 'bert.encoder.layer.18.attention.output.LayerNorm.weight', 'bert.encoder.layer.13.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.18.output.LayerNorm.bias', 'bert.encoder.layer.15.output.LayerNorm.bias', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.embeddings.LayerNorm.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.15.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.13.output.LayerNorm.weight']
I took a look: the loaded .bin weight file has many parameters named with .ln instead of LayerNorm, which matches transformers' MegatronBert layer naming rather than plain BERT; that is presumably why BertForMaskedLM reports every LayerNorm weight as missing.
The cls.predictions entries in the weight file are:
cls.predictions.transform.dense.weight torch.Size([2048, 2048])
cls.predictions.transform.dense.bias torch.Size([2048])
cls.predictions.transform.LayerNorm.weight torch.Size([2048])
cls.predictions.transform.LayerNorm.bias torch.Size([2048])
cls.predictions.decoder.weight torch.Size([21248, 2048])
cls.predictions.decoder.bias torch.Size([21248])
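For reference, a sketch of how the key names and shapes above can be listed directly from the checkpoint (this assumes the weights are in the standard pytorch_model.bin):

import torch

CKPT = "/home/inspur/nas_data/pretrain/Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese/pytorch_model.bin"
state_dict = torch.load(CKPT, map_location="cpu")
for name, tensor in state_dict.items():
    # show the prediction head and any MegatronBert-style ".ln" parameters
    if name.startswith("cls.predictions") or ".ln." in name:
        print(name, tuple(tensor.shape))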
The two keys in missing_keys ['cls.predictions.bias', 'bert.embeddings.position_ids'] are probably a transformers version issue and should not affect the results. Could you provide your transformers version for reference?
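In the transformers versions I have seen, both keys are recreated at load time (bert.embeddings.position_ids is a registered arange buffer, and cls.predictions.bias is tied to cls.predictions.decoder.bias, which the listing above shows is present in the file), so a quick forward pass should confirm the model is usable. A minimal smoke test, using mean pooling as one simple, not necessarily official, choice of sentence embedding:

import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM

MODEL_NAME_OR_PATH = "/home/inspur/nas_data/pretrain/Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese"
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME_OR_PATH)
model = MegatronBertForMaskedLM.from_pretrained(MODEL_NAME_OR_PATH).eval()

inputs = tokenizer("今天天气真好", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
embedding = outputs.hidden_states[-1].mean(dim=1)  # mean-pool the last layer
print(embedding.shape)  # expect (1, 2048) given the shapes listed above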
My transformers version is 4.29.2. Could you recommend an official transformers version or environment?
You can try transformers==4.18.0.
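For example, after pinning the version, a quick check:

# pip install transformers==4.18.0
import transformers
print(transformers.__version__)  # should print 4.18.0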