Visual Haystacks: Answering Harder Questions About Sets of Images
Abstract
Recent advances in Large Multimodal Models (LMMs) have yielded significant progress in single-image visual question answering. However, these models face substantial challenges when queries span extensive collections of images, as in real-world scenarios such as searching through large photo albums, finding specific information across the internet, or monitoring environmental changes through satellite imagery. This paper explores the task of Multi-Image Visual Question Answering (MIQA): given a large set of images and a natural language query, generate a relevant and grounded response. We propose a new public benchmark, dubbed "Visual Haystacks (VHs)," specifically designed to evaluate LMMs' capabilities in visual retrieval and reasoning over sets of unrelated images. Our comprehensive evaluations demonstrate that even strong closed-source models struggle significantly on this benchmark. To address these shortcomings, we introduce MIRAGE (Multi-Image Retrieval Augmented Generation), a novel retrieval/QA framework tailored for LMMs that confronts the challenges of MIQA with marked efficiency and accuracy improvements over baseline methods. Our evaluation shows that MIRAGE surpasses closed-source GPT-4o models by up to 11% on the VHs benchmark and offers up to 3.4x efficiency gains over text-focused multi-stage approaches.
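To illustrate the retrieve-then-answer idea behind a retrieval-augmented MIQA pipeline, here is a minimal, purely illustrative Python sketch. All function names (`retrieve_top_k`, `answer_miqa`) and the cosine-similarity scoring are assumptions for exposition, not the paper's actual MIRAGE implementation; embeddings are plain lists of floats standing in for real image/query features.

```python
# Hypothetical sketch of a retrieve-then-answer MIQA pipeline: first rank the
# image haystack by similarity to the query, then run the expensive LMM only
# on the few retrieved candidates. Names and scoring are illustrative only.
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_top_k(query_emb, image_embs, k=3):
    """Return indices of the k images most similar to the query embedding."""
    ranked = sorted(range(len(image_embs)),
                    key=lambda i: cosine(query_emb, image_embs[i]),
                    reverse=True)
    return ranked[:k]


def answer_miqa(query_emb, image_embs, answer_fn, k=3):
    """Retrieve relevant images first, then hand only those to the QA model.

    `answer_fn` stands in for an LMM call that takes the retrieved image
    indices and produces an answer string.
    """
    keep = retrieve_top_k(query_emb, image_embs, k)
    return answer_fn(keep)
```

The efficiency gain comes from the retrieval stage: instead of feeding hundreds of images into the LMM's limited context, only the top-k candidates reach the expensive reasoning step.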
Community
🚨 Launching The Visual Haystacks (VHs) Benchmark: the first "visual-centric" Needle-In-A-Haystack (NIAH) benchmark to assess LMMs' capability in long-context visual retrieval and reasoning.
Hi @davidchan congrats on this work!
Would you be up for making your benchmark available as a HF dataset?
See here for a guide: https://huggingface.co/docs/datasets/loading.
It could then also be linked to this paper, improving discoverability. See here on how to do that: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper
Cheers,
Niels
Open-source @ HF
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Losing Visual Needles in Image Haystacks: Vision Language Models are Easily Distracted in Short and Long Contexts (2024)
- EchoSight: Advancing Visual-Language Models with Wiki Knowledge (2024)
- RAVEN: Multitask Retrieval Augmented Vision-Language Learning (2024)
- OLIVE: Object Level In-Context Visual Embeddings (2024)
- OmChat: A Recipe to Train Multimodal Language Models with Strong Long Context and Video Understanding (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend