---
license: apache-2.0
datasets:
- Ejafa/ye-pop
---

A ViT-B/32 CLIP model trained for 4 epochs on the [ye-pop](https://huggingface.co/datasets/Ejafa/ye-pop) dataset (491,520 images with detailed captions generated by [CogVLM](https://huggingface.co/THUDM/cogvlm-chat-hf)). A research artifact of [clip-synthetic-captions](https://github.com/nopperl/clip-synthetic-captions). It outperforms an equivalent CLIP model trained on the dataset's original alt-texts on the [DataComp benchmark suite](https://datacomp.ai) (38 image classification and retrieval tasks).

Note: this model is likely not directly useful, as it is severely undertrained.
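
For reference, a minimal zero-shot inference sketch, assuming the checkpoint is loadable via [open_clip](https://github.com/mlfoundations/open_clip); the `hf-hub:` model id below is a hypothetical placeholder for this repository's actual path:

```python
import torch
import open_clip
from PIL import Image

# Hypothetical hub path -- substitute this repository's actual model id.
MODEL_ID = "hf-hub:nopperl/clip-synthetic-captions-ye-pop"

model, _, preprocess = open_clip.create_model_and_transforms(MODEL_ID)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine-normalize embeddings before computing similarity logits.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # zero-shot label probabilities for the image
```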