# MMRA

This is the repository for the paper 'MMRA: A Benchmark for Multi-granularity Multi-image Relational Association'.

Our benchmark dataset is released on the Hugging Face Hub (m-a-p/MMRA), Google Drive, and Baidu Netdisk.

The MMRA.zip archive on Google Drive and Baidu Netdisk contains a metadata.json file with all of the sample information; through it, the relevant questions, options, and image pairs can be fed to LVLMs.
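
As a minimal sketch of how metadata.json might be consumed (this assumes it is a JSON list of samples; the field names `question`, `options`, and `images` are illustrative, so inspect the file for the actual schema):

```python
import json

# Load all sample information from the extracted MMRA.zip.
# The field names below are assumptions for illustration; check
# metadata.json for the actual schema before relying on them.
with open('MMRA/metadata.json', 'r', encoding='utf-8') as f:
    metadata = json.load(f)

for sample in metadata:
    question = sample['question']  # hypothetical key
    options = sample['options']    # hypothetical key
    images = sample['images']      # hypothetical key: paths of the image pair
    # ... format the question, options, and images into your LVLM's prompt here
```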


## Introduction

We define a multi-image relational association task and meticulously curate the MMRA benchmark, a Multi-granularity Multi-image Relational Association benchmark consisting of 1,024 samples. To systematically and comprehensively evaluate mainstream LVLMs, we establish an associational relation system among images that contains 11 subtasks (e.g., UsageSimilarity, SubEvent) at two granularity levels (i.e., "image" and "entity"), following the relations in ConceptNet.

Our experiments reveal that on the MMRA benchmark, current multi-image LVLMs exhibit distinct advantages and disadvantages across the subtasks. Notably, fine-grained, entity-level multi-image perception tasks pose a greater challenge for LVLMs than image-level tasks, and tasks involving spatial perception are especially difficult. Additionally, our findings indicate that while LVLMs demonstrate a strong capability to perceive image details, enhancing their ability to associate information across multiple images hinges on improving the reasoning capability of their language model component. Moreover, we explore the ability of LVLMs to perceive image sequences within the context of our multi-image association task; our experiments indicate that most current LVLMs do not adequately model image sequences during pre-training.

[Figure: MMRA framework overview]

[Figure: main results]


## Evaluation Code

The evaluation code for this paper can be found in our GitHub repository.


## Using the Dataset

You can load our dataset with the following code:

```python
import datasets

# Load the MMRA benchmark from the Hugging Face Hub
MMRA_data = datasets.load_dataset('m-a-p/MMRA')['train']
print(MMRA_data[0])
```
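
For example, to evaluate a single subtask, you could filter the split by its subtask label. The sketch below assumes each sample carries a `subtask` field with values such as 'SubEvent'; these names are assumptions, so verify the actual columns first:

```python
# Inspect the available columns before relying on any field name.
print(MMRA_data.column_names)

# Keep only one subtask, e.g. 'SubEvent' (hypothetical field name and
# value; verify them against the printed column names and the data).
subevent_data = MMRA_data.filter(lambda sample: sample['subtask'] == 'SubEvent')
print(len(subevent_data))
```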

## Citation

BibTeX:

```bibtex
@article{wu2024mmra,
  title={MMRA: A Benchmark for Multi-granularity Multi-image Relational Association},
  author={Wu, Siwei and Zhu, Kang and Bai, Yu and Liang, Yiming and Li, Yizhi and Wu, Haoning and Liu, Jiaheng and Liu, Ruibo and Qu, Xingwei and Cheng, Xuxin and others},
  journal={arXiv preprint arXiv:2407.17379},
  year={2024}
}
```