Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
SiweiWu committed 464c221 (1 parent: 6d889c9)

Upload 3 files

Files changed (3):
  1. README.md +55 -30
  2. imgs/framework.png +3 -0
  3. imgs/main_result.png +3 -0
README.md CHANGED
@@ -1,30 +1,55 @@
- ---
- license: unknown
- dataset_info:
-   features:
-   - name: Task
-     dtype: string
-   - name: QA_type
-     dtype: string
-   - name: question
-     dtype: string
-   - name: image1
-     dtype: image
-   - name: image2
-     dtype: image
-   - name: options
-     dtype: string
-   - name: answer
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 587125417.48
-     num_examples: 1024
-   download_size: 570636511
-   dataset_size: 587125417.48
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
+ # MMRA
+ This is the repo for the paper '[MMRA: A Benchmark for Multi-granularity Multi-image Relational Association](https://arxiv.org/pdf/2407.17379)'.
+
+ Our benchmark dataset is released at [Huggingface Dataset: m-a-p/MMRA](https://huggingface.co/datasets/m-a-p/MMRA), [Google Drive](https://drive.google.com/file/d/1XhyCfCM6McC_umSEQJ4NvCZGMUnMdzj2/view?usp=sharing), and [Baidu Netdisk](https://pan.baidu.com/s/1deQOSzpX_-Y6-IjlSiN1OA?pwd=zb3s).
+
+ The MMRA.zip in Google Drive and Baidu Netdisk contains a metadata.json file with all the sample information; through it, we can feed the relevant questions, options, and image pairs to LVLMs.
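A minimal sketch of reading metadata.json and walking over its samples. The keys below are assumptions mirroring the dataset card's features (Task, question, options, answer, plus two image paths); the real file's keys may differ. A one-sample stand-in file is written locally to keep the sketch self-contained:

```python
import json
import os
import tempfile

# Stand-in for the released metadata.json. The keys are assumed to
# mirror the dataset card's features; check the real file's schema.
stand_in = [{
    "Task": "SubEvent",
    "question": "Is the event in image2 a sub-event of the one in image1?",
    "options": "A. Yes\nB. No",
    "answer": "A",
    "image1": "imgs/0001_a.jpg",
    "image2": "imgs/0001_b.jpg",
}]

meta_path = os.path.join(tempfile.mkdtemp(), "metadata.json")
with open(meta_path, "w") as f:
    json.dump(stand_in, f)

# Load the metadata and iterate over the samples.
with open(meta_path) as f:
    samples = json.load(f)

for s in samples:
    # Here you would open s["image1"] / s["image2"] and send them,
    # together with s["question"] and s["options"], to your LVLM.
    print(s["Task"], s["image1"], s["image2"])
```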
+
+ ---
+
+ # Introduction
+
+ We define a multi-image relational association task and meticulously curate the **MMRA** benchmark, a **M**ulti-granularity **M**ulti-image **R**elational **A**ssociation benchmark consisting of **1,024** samples.
+ To systematically and comprehensively evaluate mainstream LVLMs, we establish an associational relation system among images that contains **11 subtasks** (e.g., UsageSimilarity, SubEvent) at two granularity levels (i.e., "**image**" and "**entity**"), according to the relations in ConceptNet.
+ Our experiments reveal that on the MMRA benchmark, current multi-image LVLMs exhibit distinct advantages and disadvantages across the subtasks. Notably, fine-grained, entity-level multi-image perception tasks pose a greater challenge for LVLMs than image-level tasks, and tasks that involve spatial perception are especially difficult for LVLMs to handle.
+ Additionally, our findings indicate that while LVLMs demonstrate a strong capability to perceive image details, enhancing their ability to associate information across multiple images hinges on improving the reasoning capabilities of their language model component.
+ Moreover, we explore the ability of LVLMs to perceive image sequences within the context of our multi-image association task. Our experiments indicate that the majority of current LVLMs do not adequately model image sequences during pre-training.
+
+ <div align="center">
+   <img src="./imgs/framework.png" width="80%" />
+ </div>
+
+ <div align="center">
+   <img src="./imgs/main_result.png" width="80%" />
+ </div>
+
+ ---
+ # Evaluation Code
+
+ The code for this paper can be found in our [GitHub repository](https://github.com/Wusiwei0410/MMRA/tree/main).
+
+ ---
+
+ # Using the Dataset
+
+ You can load our dataset with the following code:
+
+ ```python
+ import datasets
+
+ MMRA_data = datasets.load_dataset('m-a-p/MMRA')['train']
+ print(MMRA_data[0])
+ ```
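Each record carries the fields listed in the dataset card (Task, QA_type, question, image1, image2, options, answer). As a minimal sketch of turning one record into an LVLM text prompt — the values below are hypothetical stand-ins for a real `MMRA_data[0]`, whose image fields load as PIL images and are omitted here:

```python
# Sketch: building a text prompt from one MMRA record. The field
# names come from the dataset card; the values are hypothetical
# stand-ins for a real sample.
sample = {
    "Task": "UsageSimilarity",
    "QA_type": "multiple choice",
    "question": "Do the main objects in the two images have a similar usage?",
    "options": "A. Yes\nB. No",
    "answer": "A",
}

def build_prompt(s: dict) -> str:
    """Join question and options into one instruction for an LVLM."""
    return (
        f"{s['question']}\n"
        f"{s['options']}\n"
        "Answer with the option letter only."
    )

print(build_prompt(sample))
```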
+
+ ---
+ # Citation
+
+ BibTeX:
+ ```
+ @inproceedings{Wu2024MMRAAB,
+     title={MMRA: A Benchmark for Multi-granularity Multi-image Relational Association},
+     author={Siwei Wu and Kang Zhu and Yu Bai and Yiming Liang and Yizhi Li and Haoning Wu and Jiaheng Liu and Ruibo Liu and Xingwei Qu and Xuxin Cheng and Ge Zhang and Wenhao Huang and Chenghua Lin},
+     year={2024},
+     url={https://api.semanticscholar.org/CorpusID:271404179}
+ }
+ ```
imgs/framework.png ADDED

Git LFS Details

  • SHA256: 0e8d410af3d07be0d713c7de2d9a7fae2376be2990ae6794b04786318b806520
  • Pointer size: 131 Bytes
  • Size of remote file: 313 kB
imgs/main_result.png ADDED

Git LFS Details

  • SHA256: b91dbe2cc16c1b53d8d3a922d37d59e8dcd6d2228d01a1acf75d2bbb36e912ba
  • Pointer size: 131 Bytes
  • Size of remote file: 365 kB