
# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP (`openai/clip-vit-base-patch32`) on the Oxford-IIIT Pets dataset, intended for pet breed classification.
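A minimal usage sketch (not part of the original card): loading the checkpoint with `transformers` and running zero-shot breed classification. The label list and image path are placeholders, and it is assumed the processor files were pushed alongside the weights; if not, load the processor from `openai/clip-vit-base-patch32` instead.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("DGurgurov/clip-vit-base-patch32-oxford-pets")
processor = CLIPProcessor.from_pretrained("DGurgurov/clip-vit-base-patch32-oxford-pets")

# Placeholder subset of the 37 Oxford-IIIT Pets breeds and a hypothetical image path.
labels = ["Abyssinian", "Bengal", "beagle", "pug"]
image = Image.open("cat.jpg")

inputs = processor(
    text=[f"a photo of a {label}" for label in labels],
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores; softmax turns them into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print({label: float(p) for label, p in zip(labels, probs[0])})
```

The prompt template `"a photo of a {breed}"` mirrors CLIP's standard zero-shot setup; other templates may work as well or better.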

## Training Information

- **Model Name:** openai/clip-vit-base-patch32
- **Dataset:** oxford-pets
- **Training Epochs:** 4
- **Batch Size:** 256
- **Learning Rate:** 3e-6
- **Test Accuracy:** 93.74%
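The card does not state the training objective, so the sketch below is a hypothetical reconstruction using CLIP's built-in contrastive loss with the hyperparameters listed above. The dataset id and the `image`/`label` column names are assumptions, not details from the card.

```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)  # learning rate from the card

# Hypothetical Hub id for the Oxford-IIIT Pets dataset.
dataset = load_dataset("pcuenq/oxford-pets", split="train")

def collate(batch):
    # Assumes string breed names in a "label" column and PIL images in "image".
    texts = [f"a photo of a {ex['label']}" for ex in batch]
    images = [ex["image"] for ex in batch]
    return processor(text=texts, images=images, return_tensors="pt", padding=True)

loader = torch.utils.data.DataLoader(
    dataset, batch_size=256, shuffle=True, collate_fn=collate  # batch size from the card
)

model.train()
for epoch in range(4):  # epochs from the card
    for batch in loader:
        loss = model(**batch, return_loss=True).loss  # CLIP's contrastive loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```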

## Parameters Information

Trainable params: 151.2773M || All params: 151.2773M || Trainable%: 100.00%
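These counts can be reproduced with a quick sketch like the following (all parameters are trainable in a full fine-tune, so the two numbers match):

```python
from transformers import CLIPModel

model = CLIPModel.from_pretrained("DGurgurov/clip-vit-base-patch32-oxford-pets")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(
    f"Trainable params: {trainable / 1e6:.4f}M || All params: {total / 1e6:.4f}M "
    f"|| Trainable%: {100 * trainable / total:.2f}%"
)
```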

## Bias, Risks, and Limitations

Refer to the original [CLIP repository](https://github.com/openai/CLIP) for a discussion of the base model's biases and limitations.

## License

MIT

