- Building and better understanding vision-language models: insights and future directions (arXiv:2408.12637)
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model (arXiv:2408.11039)
- Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming (arXiv:2408.16725)
- Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders (arXiv:2408.15998)
- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs (arXiv:2406.16860, published Jun 24)
- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training (arXiv:2403.09611, published Mar 14)
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture (arXiv:2409.02889)
- FrozenSeg: Harmonizing Frozen Foundation Models for Open-Vocabulary Segmentation (arXiv:2409.03525)
- PiTe: Pixel-Temporal Alignment for Large Video-Language Model (arXiv:2409.07239)
- One missing piece in Vision and Language: A Survey on Comics Understanding (arXiv:2409.09502)