- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 25
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 12
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 38
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 19
Collections
Collections including paper arxiv:2404.06773
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 64
- OmniFusion Technical Report
  Paper • 2404.06212 • Published • 74
- Adapting LLaMA Decoder to Vision Transformer
  Paper • 2404.06773 • Published • 17
- BRAVE: Broadening the visual encoding of vision-language models
  Paper • 2404.07204 • Published • 18

- Realism in Action: Anomaly-Aware Diagnosis of Brain Tumors from Medical Images Using YOLOv8 and DeiT
  Paper • 2401.03302 • Published • 1
- MLP Can Be A Good Transformer Learner
  Paper • 2404.05657 • Published • 1
- Detecting and recognizing characters in Greek papyri with YOLOv8, DeiT and SimCLR
  Paper • 2401.12513 • Published • 1
- DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets
  Paper • 2404.02900 • Published • 1

- Event Camera Demosaicing via Swin Transformer and Pixel-focus Loss
  Paper • 2404.02731 • Published • 1
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
  Paper • 2309.12284 • Published • 18
- RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis
  Paper • 2404.03204 • Published • 7
- Adapting LLaMA Decoder to Vision Transformer
  Paper • 2404.06773 • Published • 17