- Visual Instruction Tuning
  Paper • 2304.08485 • Published • 13
- LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents
  Paper • 2311.05437 • Published • 45
- Improved Baselines with Visual Instruction Tuning
  Paper • 2310.03744 • Published • 37
- Aligning Large Multimodal Models with Factually Augmented RLHF
  Paper • 2309.14525 • Published • 29
Collections including paper arxiv:2308.12966
- LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
  Paper • 2306.17107 • Published • 11
- On the Hidden Mystery of OCR in Large Multimodal Models
  Paper • 2305.07895 • Published
- Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities
  Paper • 2308.12966 • Published • 6
- MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
  Paper • 2401.15947 • Published • 48
- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 14
- HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
  Paper • 2310.14566 • Published • 25
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 6
- Conditional Diffusion Distillation
  Paper • 2310.01407 • Published • 20