init benchmark
This view is limited to 50 files because it contains too many changes.
- README.md +100 -0
- cvr/0/answer/image/sub_image_4.png +3 -0
- cvr/0/choice/image/sub_image_1.png +3 -0
- cvr/0/choice/image/sub_image_2.png +3 -0
- cvr/0/choice/image/sub_image_3.png +3 -0
- cvr/0/choice/image/sub_image_4.png +3 -0
- cvr/1/answer/image/sub_image_4.png +3 -0
- cvr/1/choice/image/sub_image_1.png +3 -0
- cvr/1/choice/image/sub_image_2.png +3 -0
- cvr/1/choice/image/sub_image_3.png +3 -0
- cvr/1/choice/image/sub_image_4.png +3 -0
- cvr/10/answer/image/sub_image_4.png +3 -0
- cvr/10/choice/image/sub_image_1.png +3 -0
- cvr/10/choice/image/sub_image_2.png +3 -0
- cvr/10/choice/image/sub_image_3.png +3 -0
- cvr/10/choice/image/sub_image_4.png +3 -0
- cvr/100/answer/image/sub_image_4.png +3 -0
- cvr/100/choice/image/sub_image_1.png +3 -0
- cvr/100/choice/image/sub_image_2.png +3 -0
- cvr/100/choice/image/sub_image_3.png +3 -0
- cvr/100/choice/image/sub_image_4.png +3 -0
- cvr/101/answer/image/sub_image_4.png +3 -0
- cvr/101/choice/image/sub_image_1.png +3 -0
- cvr/101/choice/image/sub_image_2.png +3 -0
- cvr/101/choice/image/sub_image_3.png +3 -0
- cvr/101/choice/image/sub_image_4.png +3 -0
- cvr/102/answer/image/sub_image_4.png +3 -0
- cvr/102/choice/image/sub_image_1.png +3 -0
- cvr/102/choice/image/sub_image_2.png +3 -0
- cvr/102/choice/image/sub_image_3.png +3 -0
- cvr/102/choice/image/sub_image_4.png +3 -0
- cvr/103/answer/image/sub_image_4.png +3 -0
- cvr/103/choice/image/sub_image_1.png +3 -0
- cvr/103/choice/image/sub_image_2.png +3 -0
- cvr/103/choice/image/sub_image_3.png +3 -0
- cvr/103/choice/image/sub_image_4.png +3 -0
- cvr/104/answer/image/sub_image_4.png +3 -0
- cvr/104/choice/image/sub_image_1.png +3 -0
- cvr/104/choice/image/sub_image_2.png +3 -0
- cvr/104/choice/image/sub_image_3.png +3 -0
- cvr/104/choice/image/sub_image_4.png +3 -0
- cvr/105/answer/image/sub_image_4.png +3 -0
- cvr/105/choice/image/sub_image_1.png +3 -0
- cvr/105/choice/image/sub_image_2.png +3 -0
- cvr/105/choice/image/sub_image_3.png +3 -0
- cvr/105/choice/image/sub_image_4.png +3 -0
- cvr/106/answer/image/sub_image_4.png +3 -0
- cvr/106/choice/image/sub_image_1.png +3 -0
- cvr/106/choice/image/sub_image_2.png +3 -0
- cvr/106/choice/image/sub_image_3.png +3 -0
README.md
CHANGED
@@ -1,3 +1,103 @@

---
license: cc-by-nc-3.0
---

# What is the Visual Cognition Gap between Humans and Multimodal LLMs?

## Description:

VCog-Bench is a publicly available zero-shot abstract visual reasoning (AVR) benchmark designed to evaluate Multimodal Large Language Models (MLLMs). The benchmark integrates two well-known AVR datasets from the AI community and adds the newly proposed MaRs-VQA dataset. The findings in VCog-Bench show that current state-of-the-art MLLMs and Vision-Language Models (VLMs), such as GPT-4o, LLaVA-1.6, and InternVL, demonstrate some basic understanding of AVR tasks, yet still struggle with complex matrix reasoning tasks. This highlights the need for further exploration and development in this area. By providing a robust benchmark, we aim to encourage further innovation and progress in zero-shot abstract visual reasoning.

## Benchmark Dataset Structure:

```
----vcog-bench
|----cvr
| |----case_name1
| | |----answer
| | | |----image
| | | | |----x.png
| | |----choice
| | | |----image
| | | | |----sub_image_0.png
| | | | |----sub_image_1.png
| | | | |----sub_image_2.png
| | | | |----sub_image_3.png
| |----case_name2
| |----case_name3
| |----case_name4
| |----......
|----raven
| |----case_name1
| | |----answer
| | | |----image
| | | | |----x.jpeg
| | |----choice
| | | |----image
| | | | |----0.jpeg
| | | | |----1.jpeg
| | | | |----2.jpeg
| | | | |----3.jpeg
| | | | |----4.jpeg
| | | | |----5.jpeg
| | | | |----6.jpeg
| | | | |----7.jpeg
| | | |----text
| | | | |----annotation.json
| | |----question
| | | |----image
| | | | |----question.jpeg
| |----case_name2
| |----case_name3
| |----case_name4
| |----......
|----marsvqa
| |----case_name1
| | |----answer
| | | |----image
| | | | |----xxx.jpeg
| | |----choice
| | | |----image
| | | | |----xxx.jpeg
| | | | |----xxx.jpeg
| | | | |----xxx.jpeg
| | | | |----xxx.jpeg
| | | |----text
| | | | |----annotation.json
| | |----choiceX
| | | |----image
| | | | |----xxx.jpeg
| | | | |----xxx.jpeg
| | | | |----xxx.jpeg
| | | | |----xxx.jpeg
| | |----question
| | | |----image
| | | | |----xxx.jpeg
| |----case_name2
| |----case_name3
| |----case_name4
| |----......
```
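
To make the layout concrete, below is a minimal loading sketch. It assumes a local clone of the repository at `./vcog-bench`; `load_case` is a hypothetical helper rather than part of the dataset, and the extra `choiceX` directories in `marsvqa` are ignored for brevity:

```python
from pathlib import Path

# Hypothetical helper: gather the image paths for a single benchmark case.
# Assumes a local clone of the dataset at `root`, laid out as shown above.
def load_case(root: str, subset: str, case_name: str) -> dict:
    case_dir = Path(root) / subset / case_name
    case = {
        # Only raven and marsvqa cases ship a separate question image;
        # cvr cases have no question/ directory.
        "question": sorted((case_dir / "question" / "image").glob("*"))
        if (case_dir / "question").is_dir()
        else [],
        "choices": sorted((case_dir / "choice" / "image").glob("*")),
        "answer": sorted((case_dir / "answer" / "image").glob("*")),
    }
    annotation = case_dir / "choice" / "text" / "annotation.json"
    if annotation.is_file():  # present for raven and marsvqa only
        case["annotation"] = annotation
    return case

# Example: the cvr subset names its cases 0, 1, 2, ...
print(load_case("./vcog-bench", "cvr", "0"))
```

Keying on directory existence keeps one loader working across all three subsets, since only `raven` and `marsvqa` carry question images and text annotations.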

## Dataset Details

- Content Types: VQA pairs with multiple-image input
- Volume: 560 VQA pairs (RAVEN), 480 VQA pairs (MaRs-VQA), 309 VQA pairs (CVR)
- Source of Data: RAVEN dataset, MaRs-IB, CVR dataset
- Data Collection Method: See the paper.
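
As a quick sanity check on these volumes, the per-subset counts can be tallied from the directory layout (again assuming a local clone at `./vcog-bench`, with one top-level directory per case):

```python
from pathlib import Path

root = Path("./vcog-bench")  # assumed location of a local clone
for subset in ("raven", "marsvqa", "cvr"):
    # Each case is a directory directly under the subset folder.
    n_cases = sum(1 for p in (root / subset).iterdir() if p.is_dir())
    print(f"{subset}: {n_cases} cases")  # expected: 560 / 480 / 309
```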

## Reference

```
@misc{cao2024visualcognitiongaphumans,
      title={What is the Visual Cognition Gap between Humans and Multimodal LLMs?},
      author={Xu Cao and Bolin Lai and Wenqian Ye and Yunsheng Ma and Joerg Heintz and Jintai Chen and Jianguo Cao and James M. Rehg},
      year={2024},
      eprint={2406.10424},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.10424},
}
```