Visual Question Decomposition on Multimodal Large Language Models
Abstract
Question decomposition has emerged as an effective strategy for prompting Large Language Models (LLMs) to answer complex questions. However, existing methods focus primarily on unimodal language models, and the question decomposition capability of Multimodal Large Language Models (MLLMs) remains unexplored. To fill this gap, this paper studies visual question decomposition on MLLMs. Specifically, we introduce a systematic evaluation framework, including a dataset and several evaluation criteria, to assess the quality of decomposed sub-questions, revealing that existing MLLMs struggle to produce high-quality sub-questions. To address this limitation, we propose a dedicated finetuning dataset, DecoVQA+, for enhancing the model's question decomposition capability, together with an efficient finetuning pipeline that combines DecoVQA+ with a training objective for selective decomposition, enabling models to decide when decomposition is appropriate. Finetuned MLLMs demonstrate significant improvements in the quality of their sub-questions and in their selective decomposition policy, and they also achieve higher accuracy with selective decomposition on VQA benchmark datasets.
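The selective decomposition idea described in the abstract can be pictured as a two-stage inference loop: the model first decides whether a question warrants decomposition at all, and only then generates sub-questions, answers them, and aggregates the results. Below is a minimal Python sketch of that loop, written under stated assumptions: the `mllm_generate` function is a hypothetical placeholder for any image-plus-text MLLM inference call, and the prompt wording and control flow are illustrative, not the paper's actual prompts or finetuning pipeline.

```python
# A minimal sketch of selective visual question decomposition at
# inference time. `mllm_generate(image, prompt)` is a hypothetical
# stand-in for an MLLM backend (image + text -> text); the prompts
# below are illustrative assumptions, not the paper's exact prompts.

from typing import List


def mllm_generate(image, prompt: str) -> str:
    """Placeholder for a multimodal LLM call (image + text -> text)."""
    raise NotImplementedError("Plug in your MLLM inference backend here.")


def answer_with_selective_decomposition(image, question: str) -> str:
    # Stage 1: let the model decide whether decomposition is needed.
    decision = mllm_generate(
        image,
        f"Question: {question}\n"
        "Should this question be broken into simpler sub-questions "
        "before answering? Reply with 'yes' or 'no'.",
    )
    if not decision.strip().lower().startswith("yes"):
        # Simple question: answer directly, no decomposition.
        return mllm_generate(image, f"Question: {question}\nAnswer:")

    # Stage 2: generate sub-questions, answer each, then aggregate.
    raw = mllm_generate(
        image,
        f"Break the question '{question}' into 2-4 simpler "
        "sub-questions, one per line.",
    )
    sub_questions: List[str] = [q.strip() for q in raw.splitlines() if q.strip()]

    qa_pairs = []
    for sq in sub_questions:
        answer = mllm_generate(image, f"Question: {sq}\nAnswer:")
        qa_pairs.append(f"Q: {sq}\nA: {answer}")

    # Answer the original question conditioned on the sub-answers.
    context = "\n".join(qa_pairs)
    return mllm_generate(
        image,
        f"{context}\n\nUsing the answers above, answer the original "
        f"question: {question}\nAnswer:",
    )
```

In this sketch the decompose-or-not decision is elicited as an explicit yes/no generation; it mirrors the selective decomposition behavior the paper trains for, where a finetuned model would make this choice itself rather than via a hand-written gating prompt.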
Community
TL;DR: This paper explores visual question decomposition in Multimodal Large Language Models (MLLMs), revealing that existing models struggle to produce high-quality sub-questions. To improve this, we introduce DecoVQA+, a finetuning dataset, and propose an efficient training pipeline for selective decomposition. The finetuned models show higher sub-question quality and better selective decomposition policies, leading to higher accuracy on VQA benchmarks.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Attention Prompting on Image for Large Vision-Language Models (2024)
- Language Models Benefit from Preparation with Elicited Knowledge (2024)
- ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue (2024)
- A Survey on Multimodal Benchmarks: In the Era of Large AI Models (2024)
- AudioBERT: Audio Knowledge Augmented Language Model (2024)