Update README.md
README.md CHANGED
@@ -116,4 +116,76 @@ configs:
  data_files:
  - split: val
    path: spatial_map/spatial_map_text_only_val.parquet
---

A key question for understanding the multimodal vs. language capabilities of models is the relative strength of spatial reasoning and understanding in each modality, since spatial understanding is expected to be a strength of multimodality. To test this, we created procedurally generated, synthetic datasets for spatial reasoning, navigation, and counting. The datasets are challenging, and because they are procedurally generated, new versions can easily be created to counter the effects of models having been trained on this data and of results being due to memorization. For each task, every question has an image and a text representation, each of which is sufficient to answer the question.

This dataset has three tasks that test Spatial Understanding (Spatial-Map), Navigation (Maze), and Counting (Spatial-Grid). Each task has three conditions with respect to the input modality: 1) text-only, a text input and a question; 2) vision-only, the standard visual question answering setup of an image input and a question; and 3) vision-text, which includes both the text and image representations along with the question. Each condition includes 1500 images and text pairs for a total of 4500 per task.

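The splits are stored as parquet files (see the `configs` section above). As a minimal sketch, assuming the repository files have been downloaded locally, one condition can be inspected with pandas; the column names are whatever the parquet file defines and are not specified here:

```python
import pandas as pd

# Reads the text-only validation split of the Spatial-Map task, using the path
# listed in the config above. Column names are taken from the file itself.
df = pd.read_parquet("spatial_map/spatial_map_text_only_val.parquet")
print(df.shape)
print(df.columns.tolist())
print(df.iloc[0])
```
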
__Spatial Map__

The dataset consists of spatial relationships for random layouts of symbolic objects with text names on a white background. Each object is associated with a unique location name, such as Unicorn Umbrellas or Gale Gifts. To study the impact of modality, the textual representation of each input consists of pairwise relations such as "Brews Brothers Pub is to the Southeast of Whale’s Watches." The questions ask about the spatial relationship between two locations and about the number of objects that meet specific spatial criteria.

The dataset includes 3 conditions: text only, image only, and text+image. Each condition includes 1500 images and text pairs for a total of 4500.

There are 3 question types:
1) In which direction is one object relative to another? (answer is a direction)
2) Which object lies in a given direction from another? (answer is an object name)
3) How many objects are in a given direction from another? (answer is a number)

Each question is multiple choice.

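To make the text-only condition concrete, here is a small Python sketch of how a random layout, the pairwise relations, and the answers can be produced procedurally. The object names, grid size, and relation wording are illustrative only and are not the dataset's actual generation code:

```python
import random

# Illustrative only: place named objects at random grid coordinates, then derive
# pairwise relations like those used in the text-only condition.
NAMES = ["Unicorn Umbrellas", "Gale Gifts", "Brews Brothers Pub", "Whale's Watches"]

def direction(a, b):
    """Compass direction of point a relative to point b (x grows East, y grows North)."""
    ns = "North" if a[1] > b[1] else "South" if a[1] < b[1] else ""
    ew = "East" if a[0] > b[0] else "West" if a[0] < b[0] else ""
    return (ns + ew.lower() if ns else ew) or "same location"

layout = {name: (random.randint(0, 9), random.randint(0, 9)) for name in NAMES}

# Question type 1: in which direction is one object relative to another?
a, b = "Gale Gifts", "Unicorn Umbrellas"
print(f"{a} is to the {direction(layout[a], layout[b])} of {b}")

# Question type 3: how many objects are in a given direction of another?
count = sum(direction(p, layout[b]) == "Southeast" for n, p in layout.items() if n != b)
print(f"{count} object(s) are to the Southeast of {b}")
```
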
__Maze__

The dataset consists of small mazes with questions about each maze. Each sample can be represented as colored blocks where different colors signify distinct elements: a green block marks the starting point (S), a red block indicates the exit (E), black blocks represent impassable walls, white blocks denote navigable paths, and blue blocks trace the path from S to E. The objective is to navigate from S to E following the blue path, with movement permitted in the four cardinal directions (up, down, left, right). Alternatively, each input can be depicted in textual format using ASCII characters. The questions include counting the number of turns from S to E and determining the spatial relationship between S and E.

The dataset includes 3 conditions: text only, image only, and text+image. Each condition includes 1500 images and text pairs for a total of 4500.

There are 3 question types:
1) How many right turns are on the path from start to end? (answer is a number)
2) How many total turns are on the path from start to end? (answer is a number)
3) Where is the exit relative to the start? (answer is a direction or yes/no)

Each question is multiple choice.

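For the turn-counting questions, here is a minimal sketch of the underlying logic, assuming the S-to-E path is given as a sequence of (row, column) cells (the blue blocks); it is illustrative and not the dataset's actual scoring code:

```python
# Illustrative: count total turns and right turns along a path of (row, col) cells,
# where rows increase downward and columns increase to the right.
def count_turns(path):
    total = right = 0
    for (r0, c0), (r1, c1), (r2, c2) in zip(path, path[1:], path[2:]):
        d_in = (r1 - r0, c1 - c0)    # incoming move direction
        d_out = (r2 - r1, c2 - c1)   # outgoing move direction
        if d_in != d_out:
            total += 1
            # With rows growing downward, a clockwise (right) turn has a positive
            # cross product in (col, row) screen coordinates.
            if d_in[1] * d_out[0] - d_in[0] * d_out[1] > 0:
                right += 1
    return total, right

# Example: move East, then South, then East again -> 2 turns, the first a right turn.
print(count_turns([(0, 0), (0, 1), (1, 1), (1, 2)]))  # (2, 1)
```
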
__Spatial Grid__

Each input consists of a grid of cells, each containing an image (e.g., a rabbit). Alternatively, this grid can also be represented in a purely textual format; for instance, the first row might be described as: elephant | cat | giraffe | elephant | cat. The evaluations focus on tasks such as counting specific objects (e.g., rabbits) and identifying the object located at a specific coordinate in the grid (e.g., first row, second column).

The dataset includes 3 conditions: text only, image only, and text+image. Each condition includes 1500 images and text pairs for a total of 4500 questions.

There are 3 question types:
1) How many blocks contain a specific animal? (answer is a number)
2) What animal is in a specific block, addressed by top-left, top, right, etc.? (answer is an animal name)
3) What animal is in a specific block, addressed by row and column? (answer is an animal name)

Each question is multiple choice.

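A small sketch of the question logic over the textual grid representation; the first row below reuses the example string from the description, and the second row is invented here purely for illustration:

```python
# Illustrative: parse a textual grid of animals and answer the counting and
# coordinate-lookup question types.
rows = [
    "elephant | cat | giraffe | elephant | cat",   # example row from the description
    "rabbit | elephant | rabbit | cat | giraffe",  # invented row, for illustration only
]
grid = [[cell.strip() for cell in row.split("|")] for row in rows]

# Question type 1: how many blocks contain a specific animal?
print(sum(cell == "rabbit" for row in grid for cell in row))  # 2

# Question type 3: what animal is in the block at row 1, column 2 (1-indexed)?
print(grid[0][1])  # "cat"
```
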
---

More details here: https://arxiv.org/pdf/2406.14852