MicPie committed
Commit 40fc8a4
Parent: 5263ea9

Update README.md

Files changed (1): README.md (+21 −24)
@@ -66,15 +66,14 @@ task_ids:
 
 ## Dataset Description
 
-- **Homepage:** [Needs More Information]
-- **Repository:** https://github.com/JunShern/few-shot-pretraining
+- **Homepage:** https://ethanperez.net/adaptable
+- **Repository:** https://github.com/JunShern/few-shot-adaptation
 - **Paper:** Exploring Few-Shot Adaptation of Language Models with Tables
-- **Leaderboard:** [Needs More Information]
 - **Point of Contact:** [email protected], [email protected]
 
 ### Dataset Summary
 
-The AdapTable dataset consists of tables that naturally occur on the web and that are formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
+The AdapTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
 
 There are several dataset versions available:
 
@@ -82,9 +81,9 @@ There are several dataset versions available:
 
 * [AdapTable-unique](https://huggingface.co/datasets/MicPie/adaptable_unique): This is the same as [AdapTable-full](https://huggingface.co/datasets/MicPie/adaptable_full) but filtered to have a maximum of one task per website. [AdapTable-unique](https://huggingface.co/datasets/MicPie/adaptable_unique) contains exactly 23,744 tasks from 23,744 websites.
 
-* [AdapTable-5k](https://huggingface.co/datasets/MicPie/adaptable_5k): This dataset uses 5k random tables from the full dataset.
+* [AdapTable-5k](https://huggingface.co/datasets/MicPie/adaptable_5k): This dataset contains 5k random tables from the full dataset.
 
-* AdapTable data subsets based on a manual human quality rating:
+* AdapTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
   * [AdapTable-rated-low](https://huggingface.co/datasets/MicPie/adaptable_rated-low)
   * [AdapTable-rated-medium](https://huggingface.co/datasets/MicPie/adaptable_rated-medium)
   * [AdapTable-rated-high](https://huggingface.co/datasets/MicPie/adaptable_rated-high)
@@ -145,8 +144,6 @@ There are several dataset versions available:
   * [AdapTable-cluster29](https://huggingface.co/datasets/MicPie/adaptable_cluster29)
   * [AdapTable-cluster-noise](https://huggingface.co/datasets/MicPie/adaptable_cluster-noise)
 
-
-
 ### Supported Tasks and Leaderboards
 
 Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
@@ -161,7 +158,7 @@ English
 
 ### Data Instances
 
-Each table, i.e., task is represented as a json-lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
+Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.
 
 There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
 
@@ -191,16 +188,13 @@ The AdapTable datasets do not come with additional data splits.
 
 ### Curation Rationale
 
-How do we convert tables to few-shot tasks?
-Unlike unstructured text, structured data in the form of tables lends itself easily to the few-shot task format. Given a table where each row is an instance of a similar class and the columns describe the attributes of each instance, we can turn each row into a task example to predict one attribute given the others. When the table has more than one row, we instantly have multiple examples of this task by using each row as a single example, and thus each table becomes a few-shot dataset for a particular task.
-
-The few-shot setting in this setup is significant: Tables often do not come with clear instructions for each field, so tasks may be underspecified if prompted in a zero-shot manner, but the intended task becomes clearer when examples are provided. This makes a good two-way match: The few-shot format is a perfect setup for table learning, and tables provide a natural dataset for few-shot training.
+Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,350 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-The data processing pipeline is explained in detail in our publication.
+We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
 
 #### Who are the source language producers?
 
@@ -210,29 +204,25 @@ The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/
 
 #### Annotation process
 
-No manual annotation process used.
-Only for the [AdapTable-rated-low](https://huggingface.co/datasets/MicPie/adaptable_rated-low), [AdapTable-rated-medium](https://huggingface.co/datasets/MicPie/adaptable_rated-medium), and [AdapTable-rated-high](https://huggingface.co/datasets/MicPie/adaptable_rated-high) manual annotations were carried out.
+Manual annotation was only carried out for the [AdapTable-rated-low](https://huggingface.co/datasets/MicPie/adaptable_rated-low), [AdapTable-rated-medium](https://huggingface.co/datasets/MicPie/adaptable_rated-medium), and [AdapTable-rated-high](https://huggingface.co/datasets/MicPie/adaptable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
 
 #### Who are the annotators?
 
-People involved in the publication.
+Annotations were carried out by a lab assistant.
 
 ### Personal and Sensitive Information
 
-The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
+The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
-The purpose of this dataset is to help develop models that are better at few-shot learning and have higher few-shot performance by fine-tuning few-shot tasks extracted from tables.
-
-While tables have a similar structure to few-shot tasks and we do see an improved performance on few-shot tasks in our paper, we want to make clear that fine-tuning on tables also has its risks. First of all, since the tables are extracted from the web, they may contain user identities or otherwise sensitive information which a model might reveal at inference, or which could influence the learning process of a model in a negative way. Second, since tables are very diverse in nature, the model also trains on low-quality data or data with an unusual structure. While it is interesting that training on such data improves few-shot performance on downstream tasks, this could also imply that the model learns concepts that are very dissimilar to human concepts that would be useful for a certain downstream task. In other words, it is possible that the model learns weird things that are helpful on the evaluated downstream tasks, but might lead to bad out-of-distribution behavior.
+This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
 
 ### Discussion of Biases
 
-Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content.
-This implies that a model trained on our dataset will potentially reinforce harmful biases and toxic text that exist in our dataset.
+Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
 
 ### Other Known Limitations
 
@@ -248,4 +238,11 @@ Apache 2.0
 
 ### Citation Information
 
-[Needs More Information]
+```
+@misc{https://ethanperez.net/adaptable,
+author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
+title = {Exploring Few-Shot Adaptation of Language Models with Tables},
+publisher = {arXiv},
+year = {2022},
+}
+```
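
The row-to-task conversion this card describes (each table row becomes one example in the 'task'/'input'/'options'/'output' format from the Data Instances section) can be sketched as follows. This is an illustrative reconstruction with made-up table data and a hypothetical helper name, not the authors' actual pipeline code:

```python
# Illustrative sketch (not the authors' pipeline): turn one web table into
# few-shot task examples in the 'task'/'input'/'options'/'output' format
# described under Data Instances. Table contents are made up.

def table_to_examples(task_name, header, rows, output_col):
    """Each row becomes one example: the chosen output column is the target,
    and the remaining columns are serialized into the input string."""
    out_idx = header.index(output_col)
    # Candidate classes for multiple-choice-style tasks.
    options = sorted({row[out_idx] for row in rows})
    examples = []
    for row in rows:
        input_str = " ".join(
            f"[{col}] {val}" for col, val in zip(header, row) if col != output_col
        )
        examples.append({
            "task": task_name,
            "input": input_str,
            "options": options,
            "output": row[out_idx],
        })
    return examples

header = ["Player", "Team", "Position"]
rows = [
    ["Alice", "Reds", "Goalkeeper"],
    ["Bob", "Blues", "Striker"],
]
examples = table_to_examples("players_demo", header, rows, output_col="Position")
print(examples[0]["output"])  # prints "Goalkeeper"
```

The published subsets themselves should be loadable with the Hugging Face `datasets` library, e.g. `datasets.load_dataset("MicPie/adaptable_5k")`.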