- Visual Instruction Tuning
  Paper • 2304.08485 • Published • 13
- LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents
  Paper • 2311.05437 • Published • 45
- Improved Baselines with Visual Instruction Tuning
  Paper • 2310.03744 • Published • 37
- Aligning Large Multimodal Models with Factually Augmented RLHF
  Paper • 2309.14525 • Published • 29

Collections including paper arxiv:2310.03744

- DocGraphLM: Documental Graph Language Model for Information Extraction
  Paper • 2401.02823 • Published • 34
- Understanding LLMs: A Comprehensive Overview from Training to Inference
  Paper • 2401.02038 • Published • 61
- DocLLM: A layout-aware generative language model for multimodal document understanding
  Paper • 2401.00908 • Published • 180
- Attention Where It Matters: Rethinking Visual Document Understanding with Selective Region Concentration
  Paper • 2309.01131 • Published • 1

- ImageBind: One Embedding Space To Bind Them All
  Paper • 2305.05665 • Published • 3
- ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth
  Paper • 2302.12288 • Published
- HuggingFaceM4/howto100m
  Updated • 39 • 4
- BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
  Paper • 2201.12086 • Published • 3

- Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs
  Paper • 2310.13961 • Published • 4
- Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs
  Paper • 2309.09582 • Published • 4
- Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models
  Paper • 2310.13127 • Published • 11
- Evaluating the Robustness to Instructions of Large Language Models
  Paper • 2308.14306 • Published • 1

- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 14
- HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
  Paper • 2310.14566 • Published • 25
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 6
- Conditional Diffusion Distillation
  Paper • 2310.01407 • Published • 20

- The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)
  Paper • 2309.17421 • Published • 4
- Improved Baselines with Visual Instruction Tuning
  Paper • 2310.03744 • Published • 37
- Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency
  Paper • 2310.03734 • Published • 14
- Idea2Img: Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation
  Paper • 2310.08541 • Published • 17

- An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
  Paper • 2309.09958 • Published • 18
- TextBind: Multi-turn Interleaved Multimodal Instruction-following
  Paper • 2309.08637 • Published • 7
- Improved Baselines with Visual Instruction Tuning
  Paper • 2310.03744 • Published • 37
- A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions
  Paper • 2312.08578 • Published • 16