Some weights of the model checkpoint at microsoft/table-transformer-detection were not used when initializing TableTransformerForObjectDetection

#8
by alexneakameni - opened

```
08/24/2023 11:39:12 - INFO - timm.models._builder - Loading pretrained weights from Hugging Face hub (timm/resnet18.a1_in1k)
08/24/2023 11:39:12 - INFO - timm.models._hub - [timm/resnet18.a1_in1k] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
Some weights of the model checkpoint at microsoft/table-transformer-detection were not used when initializing TableTransformerForObjectDetection: ['model.backbone.conv_encoder.model.layer2.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer4.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer3.0.downsample.1.num_batches_tracked']
- This IS expected if you are initializing TableTransformerForObjectDetection from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TableTransformerForObjectDetection from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
The max_size parameter is deprecated and will be removed in v4.26. Please specify in size['longest_edge'] instead.
```

Hi,

I think this warning is fine: the unused keys are only the `num_batches_tracked` counters of the batch normalization layers, which are bookkeeping buffers rather than learned weights, so dropping them does not affect the model's outputs.
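This can be illustrated in plain PyTorch, independent of the Table Transformer itself: a `BatchNorm2d` layer stores `num_batches_tracked` in its `state_dict`, but it is registered as a buffer, not a trainable parameter.

```python
import torch.nn as nn

# Minimal sketch (a bare BatchNorm2d, not the actual backbone layers):
# show that 'num_batches_tracked' is a buffer, not a learnable weight.
bn = nn.BatchNorm2d(num_features=4)

# The counter appears in the state_dict alongside the real weights...
assert "num_batches_tracked" in bn.state_dict()

# ...but it is absent from the trainable parameters; only 'weight' and
# 'bias' are learned. It merely counts batches for the running statistics.
param_names = {name for name, _ in bn.named_parameters()}
assert param_names == {"weight", "bias"}
```

Because the counter is recomputed during training and unused at inference, a checkpoint that omits (or carries extra copies of) these keys still loads correctly.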

Can we still use the model without fine-tuning, please?
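Yes — the checkpoint is a fully trained detection model, so it can be used for inference as-is despite the warning. A hedged sketch (using a blank dummy image as a stand-in for a real document page):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# Load the pretrained checkpoint as-is; the 'num_batches_tracked' warning
# can be ignored since those keys are batch-norm bookkeeping buffers.
processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")
model.eval()

# Stand-in input; in practice this would be a scanned page or PDF render.
image = Image.new("RGB", (800, 600), color="white")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=[image.size[::-1]]
)[0]
print(results["boxes"])  # one (x_min, y_min, x_max, y_max) row per detected table
```

A blank page will usually produce no detections above the threshold; feeding a real document image yields one bounding box per detected table.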
