---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: winogavil
pretty_name: WinoGAViL
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- commonsense-reasoning
- visual-reasoning
task_ids: []
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."
---
# Dataset Card for WinoGAViL
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. The dataset was collected via the WinoGAViL online game, which gathers vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models and find that the collected associations are intuitive for humans (>90% Jaccard index) but challenging for AI models: the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collected from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
- **Homepage:**
https://winogavil.github.io/
- **Repository:**
https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:**
https://arxiv.org/abs/2207.12576
- **Leaderboard:**
https://winogavil.github.io/leaderboard
- **Point of Contact:**
[email protected]; [email protected]
### Supported Tasks and Leaderboards
- Leaderboard: https://winogavil.github.io/leaderboard
- Papers with Code: https://paperswithcode.com/dataset/winogavil
### Languages
English.
## Dataset Structure
### Data Fields
- `candidates` (string): the candidate images.
- `cue` (string): the textual association cue.
- `associations` (string): the images selected as associated with the cue.
- `score_fool_the_ai` (int64)
- `num_associations` (int64): the number of images associated with the cue.
- `annotation_index` (int64)
- `num_candidates` (int64): the number of candidate images (5, 6, 10, or 12).
- `solvers_jaccard_mean` (float64): mean Jaccard index of the human solvers.
- `solvers_jaccard_std` (float64): standard deviation of the human solvers' Jaccard index.
- `ID` (int64)
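The human and model scores quoted above (and stored in `solvers_jaccard_mean` / `solvers_jaccard_std`) are Jaccard indices between a solver's selected images and the gold `associations`. A minimal sketch of the metric, assuming both sides can be treated as sets of image identifiers (the helper name is ours, not part of any official evaluation code):

```python
def jaccard_index(predicted, gold):
    """Jaccard index between a predicted image set and the gold associations."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

# Two of three predictions overlap with three gold images: 2 / 4 = 0.5
print(jaccard_index(["dog", "moon", "tree"], ["dog", "moon", "wolf"]))
```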
### Data Splits
-- 5 & 6 candidates. With 5 candidates, random chance for success is 38%. With 6 candidates, random chance for success is 34%.
-- 10 & 12 candidates. With 10 candidates, random chance for success is 24%. With 12 candidates, random chance for success is 19%.
## Dataset Creation
Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating
associations that are challenging for a rival AI model but still solvable by other
human players.
### Annotations
#### Annotation process
We paid Amazon Mechanical Turk Workers to play our game.
## Considerations for Using the Data
All associations were obtained with human annotators.
### Licensing Information
CC-By 4.0
### Citation Information
```bibtex
@article{bitton2022winogavil,
  title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
  author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
  journal={arXiv preprint arXiv:2207.12576},
  year={2022}
}
```