|
# MMRA |
|
This is the repo for the paper: '[MMRA: A Benchmark for Multi-granularity Multi-image Relational Association](https://arxiv.org/pdf/2407.17379)'. |
|
|
|
Our benchmark dataset is released: [Huggingface Dataset: m-a-p/MMRA](https://huggingface.co/datasets/m-a-p/MMRA), [Google Drive](https://drive.google.com/file/d/1XhyCfCM6McC_umSEQJ4NvCZGMUnMdzj2/view?usp=sharing), and [Baidu Netdisk](https://pan.baidu.com/s/1deQOSzpX_-Y6-IjlSiN1OA?pwd=zb3s). |
|
|
|
The MMRA.zip archive on Google Drive and Baidu Netdisk contains a metadata.json file with all the sample information. You can use it to feed the questions, options, and image pairs for each sample to LVLMs.
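
Below is a minimal sketch of loading metadata.json after unzipping the archive. The file path and record keys shown here are assumptions; print one record to see the actual field names used in the release.

```python
import json

# Load the metadata shipped in MMRA.zip (path assumed relative to the unzipped folder).
with open("MMRA/metadata.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

# Inspect one sample record. The exact keys (question, options, image paths,
# subtask label, etc.) are not specified here -- check the printed output.
print(samples[0])
```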
|
|
|
--- |
|
|
|
# Introduction |
|
|
|
We define a multi-image relational association task and meticulously curate the **MMRA** benchmark, a **M**ulti-granularity **M**ulti-image **R**elational **A**ssociation benchmark consisting of **1,024** samples.
|
To systematically and comprehensively evaluate mainstream LVLMs, we establish an associational relation system among images that contains **11 subtasks** (e.g., UsageSimilarity, SubEvent) at two granularity levels (i.e., "**image**" and "**entity**"), according to the relations in ConceptNet.
|
Our experiments reveal that on the MMRA benchmark, current multi-image LVLMs exhibit distinct advantages and disadvantages across various subtasks. Notably, fine-grained, entity-level multi-image perception tasks pose a greater challenge for LVLMs compared to image-level tasks. Tasks that involve spatial perception are especially difficult for LVLMs to handle. |
|
Additionally, our findings indicate that while LVLMs demonstrate a strong capability to perceive image details, enhancing their ability to associate information across multiple images hinges on improving the reasoning capabilities of their language model component. |
|
Moreover, we explored the ability of LVLMs to perceive image sequences within the context of our multi-image association task. Our experiments indicate that the majority of current LVLMs do not adequately model image sequences during the pre-training process. |
|
|
|
![framework](./imgs/framework.png) |
|
|
|
![main_result](./imgs/main_result.png) |
|
|
|
|
|
--- |
|
# Evaluation Code
|
|
|
The code for this paper can be found in our [GitHub repository](https://github.com/Wusiwei0410/MMRA/tree/main).
|
|
|
|
|
|
|
--- |
|
|
|
# Using Datasets |
|
|
|
You can load our dataset with the following code:
|
|
|
```python
import datasets

# Load the MMRA benchmark from the Hugging Face Hub and inspect one sample.
MMRA_data = datasets.load_dataset('m-a-p/MMRA')['train']
print(MMRA_data[0])
```
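
As a further usage example, the sketch below counts samples per subtask. The per-sample subtask column name is an assumption, so the code lists the available columns first and only aggregates if that column exists.

```python
import collections

import datasets

MMRA_data = datasets.load_dataset('m-a-p/MMRA')['train']

# List the available columns, since the field name used below is an assumption.
print(MMRA_data.column_names)

# Hypothetical example: count samples per subtask if such a column exists.
if 'subtask' in MMRA_data.column_names:
    counts = collections.Counter(MMRA_data['subtask'])
    for name, n in counts.most_common():
        print(f"{name}: {n}")
```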
|
|
|
--- |
|
# Citation |
|
|
|
BibTeX: |
|
```
@article{wu2024mmra,
  title={MMRA: A Benchmark for Multi-granularity Multi-image Relational Association},
  author={Wu, Siwei and Zhu, Kang and Bai, Yu and Liang, Yiming and Li, Yizhi and Wu, Haoning and Liu, Jiaheng and Liu, Ruibo and Qu, Xingwei and Cheng, Xuxin and others},
  journal={arXiv preprint arXiv:2407.17379},
  year={2024}
}
```
|
|