# DeepGlint-AI/mlcd-vit-base-patch32-224

Model type: `clip_vision_model`

[Paper] [GitHub]

This model was trained on the COYO-700M dataset. The results below come from linear probe evaluations, demonstrating the model's performance across a range of standard benchmarks.

| Dataset     | CLIP | MLCD |
|-------------|:----:|:----:|
| Food101     | 88.8 | 90.2 |
| CIFAR10     | 95.1 | 96.9 |
| CIFAR100    | 80.5 | 86.8 |
| Birdsnap    | 58.5 | 72.1 |
| SUN397      | 76.6 | 77.4 |
| Cars        | 81.8 | 93.5 |
| Aircraft    | 52.0 | 74.7 |
| VOC2007     | 87.7 | 90.4 |
| DTD         | 76.5 | 83.5 |
| Pets        | 90.0 | 93.6 |
| Caltech-101 | 93.0 | 97.7 |
| Flowers     | 96.9 | 98.8 |
| ImageNet    | 76.1 | 79.1 |
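
A linear probe freezes the pretrained backbone and fits only a linear classifier on the extracted features, which is the protocol behind the table above. Below is a minimal sketch of that setup. It assumes the checkpoint loads through `transformers`' `CLIPVisionModel` and `CLIPImageProcessor` (the repository's model type is `clip_vision_model`); the dataset wiring (`train_images`, `train_labels`, etc.) is a hypothetical placeholder, not part of this repository.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import CLIPImageProcessor, CLIPVisionModel

# Assumption: this checkpoint is compatible with the CLIP vision tower
# classes; the card lists its model type as clip_vision_model.
model_id = "DeepGlint-AI/mlcd-vit-base-patch32-224"
model = CLIPVisionModel.from_pretrained(model_id).eval()
processor = CLIPImageProcessor.from_pretrained(model_id)

@torch.no_grad()
def extract_features(images):
    """Pooled ViT features for a list of PIL images, shape (batch, hidden_size)."""
    inputs = processor(images=images, return_tensors="pt")
    return model(**inputs).pooler_output

# Linear probe: the backbone stays frozen; only a logistic-regression
# classifier is fit on the frozen features. train_images / test_images
# are hypothetical lists of PIL images with integer labels.
# X_train = extract_features(train_images).numpy()
# clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
# print(clf.score(extract_features(test_images).numpy(), test_labels))
```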
- Model size: 87.5M parameters
- Tensor type: F32 (stored as Safetensors)
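
As a quick sanity check, the reported size can be reproduced by counting parameters after loading (a sketch, again assuming the `CLIPVisionModel` loading path works for this checkpoint):

```python
from transformers import CLIPVisionModel

# Assumption: the checkpoint loads via CLIPVisionModel (model type clip_vision_model).
model = CLIPVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-base-patch32-224")

n_params = sum(p.numel() for p in model.parameters())
dtype = next(model.parameters()).dtype
print(f"{n_params / 1e6:.1f}M parameters, dtype={dtype}")  # expected: ~87.5M, torch.float32
```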
