ykl7 committed
Commit 531866d
1 Parent(s): bb07a97

add tellmewhy

Files changed (5)
  1. .gitattributes +1 -0
  2. README.md +210 -0
  3. data/test.jsonl +3 -0
  4. data/train.jsonl +3 -0
  5. data/validation.jsonl +3 -0
.gitattributes CHANGED
@@ -49,3 +49,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,210 @@
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: null
pretty_name: TellMeWhy
---

# Dataset Card for TellMeWhy

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://stonybrooknlp.github.io/tellmewhy/
- **Repository:** https://github.com/StonyBrookNLP/tellmewhy
- **Paper:** https://aclanthology.org/2021.findings-acl.53/
- **Leaderboard:** None
- **Point of Contact:** [Yash Kumar Lal](mailto:[email protected])

### Dataset Summary

TellMeWhy is a large-scale crowdsourced dataset of more than 30k questions and free-form answers about why characters in short narratives perform the actions described.

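To get started, the splits can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is published under the `StonyBrookNLP/tellmewhy` repository id (adjust the id if this repo lives under a different namespace):

```python
from datasets import load_dataset

# Assumed repository id; change it if the dataset lives elsewhere on the Hub.
ds = load_dataset("StonyBrookNLP/tellmewhy")

print(ds)  # DatasetDict with train, validation, and test splits
print(ds["train"][0]["question"])  # one why-question from the training split
```
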
### Supported Tasks and Leaderboards

The dataset is designed to test models' ability to answer why-questions when bound by the local context of a narrative.

### Languages

English

## Dataset Structure

### Data Instances

A typical data point consists of a story, a question, and a crowdsourced answer to that question. Each instance also indicates whether the question's answer is implicit or explicitly stated in the text. Where applicable, it contains Likert scores (-2 to 2) for the answer's grammaticality and validity in the given context.

```json
{
  "narrative": "Cam ordered a pizza and took it home. He opened the box to take out a slice. Cam discovered that the store did not cut the pizza for him. He looked for his pizza cutter but did not find it. He had to use his chef knife to cut a slice.",
  "question": "Why did Cam order a pizza?",
  "original_sentence_for_question": "Cam ordered a pizza and took it home.",
  "narrative_lexical_overlap": 0.3333333333,
  "is_ques_answerable": "Not Answerable",
  "answer": "Cam was hungry.",
  "is_ques_answerable_annotator": "Not Answerable",
  "original_narrative_form": [
    "Cam ordered a pizza and took it home.",
    "He opened the box to take out a slice.",
    "Cam discovered that the store did not cut the pizza for him.",
    "He looked for his pizza cutter but did not find it.",
    "He had to use his chef knife to cut a slice."
  ],
  "question_meta": "rocstories_narrative_41270_sentence_0_question_0",
  "helpful_sentences": [],
  "human_eval": false,
  "val_ann": [],
  "gram_ann": []
}
```

### Data Fields

- `question_meta` - Unique identifier for each question in the corpus
- `narrative` - Full narrative from ROCStories, used as the context for the question and answer
- `question` - Why-question about an action or event in the narrative
- `answer` - Crowdsourced answer to the question
- `original_sentence_for_question` - Sentence in the narrative from which the question was generated
- `narrative_lexical_overlap` - Unigram overlap of the answer with the narrative
- `is_ques_answerable` - Majority annotator judgment on whether an answer to this question is explicitly stated in the narrative. If "Not Answerable", the question belongs to the Implicit-Answer subset, which is harder for models (see the filtering sketch below this list).
- `is_ques_answerable_annotator` - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative
- `original_narrative_form` - ROCStories narrative as an array of its sentences
- `human_eval` - Indicates whether a question belongs to the designated human-evaluation portion of the test set. Model answers to these questions should be judged with the human evaluation suite released by the authors, who advocate for human evaluation as the correct way to track progress on this dataset.
- `val_ann` - Likert scores (-2 to 2) from three annotators on whether an answer is valid given the question and context; the array is empty (length 0 rather than 3) when the `human_eval` flag is false
- `gram_ann` - Likert scores (-2 to 2) from three annotators on whether an answer is grammatical; the array is empty (length 0 rather than 3) when the `human_eval` flag is false

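As an illustration of how these fields are used, the harder Implicit-Answer subset can be selected by filtering on `is_ques_answerable`. A minimal sketch, reusing the `ds` object from the loading example above:

```python
# Keep only questions whose answers are not explicitly stated in the
# narrative, per the majority annotator judgment.
implicit = ds["test"].filter(
    lambda ex: ex["is_ques_answerable"] == "Not Answerable"
)
print(f"{len(implicit)} implicit-answer rows in the test split")
```
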
### Data Splits

The data is split into training, validation, and test sets.

| Train | Valid | Test |
| ----- | ----- | ---- |
| 23964 | 2992  | 3563 |

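As a quick sanity check, the split sizes can be compared against the table above (again assuming `ds` from the loading sketch):

```python
# Count examples per split; the numbers should match the table above.
print({split: ds[split].num_rows for split in ds})
# expected: {'train': 23964, 'validation': 2992, 'test': 3563}
```
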
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

ROCStories corpus (Mostafazadeh et al., 2016)

#### Initial Data Collection and Normalization

ROCStories was used to create why-questions related to actions and events in the stories.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

Amazon Mechanical Turk workers were shown a story and an associated why-question and asked to answer it. Three answers were collected for each question. For a small subset of questions, the quality of the answers was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset.

#### Who are the annotators?

Amazon Mechanical Turk workers

### Personal and Sensitive Information

None

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Evaluation

To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings [here](https://github.com/StonyBrookNLP/tellmewhy). Once inference on the test set is complete, select the answers that need human evaluation by filtering for the test-set questions where the `human_eval` flag is set to `True` (one answer per question, so deduplication might be needed). This subset can then be used to complete the requisite evaluation on TellMeWhy.

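A sketch of that selection step, assuming `ds` from the loading example above and a hypothetical `predict` function that maps a narrative and question to a model answer:

```python
# Restrict the test split to the designated human-evaluation questions.
human_eval_set = ds["test"].filter(lambda ex: ex["human_eval"])

# Each question has multiple crowdsourced answers, so keep one row per
# unique question_meta before running the model.
seen = set()
rows_for_human_eval = []
for ex in human_eval_set:
    if ex["question_meta"] in seen:
        continue
    seen.add(ex["question_meta"])
    rows_for_human_eval.append({
        "question_meta": ex["question_meta"],
        "narrative": ex["narrative"],
        "question": ex["question"],
        "model_answer": predict(ex["narrative"], ex["question"]),  # hypothetical
    })
```
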
### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{lal-etal-2021-tellmewhy,
    title = "{T}ell{M}e{W}hy: A Dataset for Answering Why-Questions in Narratives",
    author = "Lal, Yash Kumar and
      Chambers, Nathanael and
      Mooney, Raymond and
      Balasubramanian, Niranjan",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.53",
    doi = "10.18653/v1/2021.findings-acl.53",
    pages = "596--610",
}
```

### Contributions

Thanks to [@ykl7](https://github.com/ykl7) for adding this dataset.
data/test.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cec421adb98484ca3d4961dcd30cf6823b929bbe5c72b2ddd55f19efa094ab25
size 10404202
data/train.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa36e630bccb9d682d1c35441460d61015144ea84453e497e1afe259b3a08f21
size 70103837
data/validation.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69761056a0d0e12135f7c9a25a1f74dd920f267fc18503fb9a9009d3284c1b75
size 8710050