Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
# Argument Mining in Scientific Reviews (AMSR)

We release a new dataset of peer-reviews from different computer science conferences with annotated arguments, called AMSR (**A**rgument **M**ining in **S**cientific **R**eviews).
## 1. Raw Data

`conferences_raw/` contains directories for each conference we scraped (e.g., [iclr20](./data/iclr20)).
The respective directory of each conference comprises multiple `*.json` files, where every file contains the information belonging to a single paper, such as the title, the abstract, the submission date, and the reviews.
The reviews are stored in a list called `"review_content"`.
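A minimal loading sketch: the list key `"review_content"` is documented above, while the exact directory layout and the remaining field names (e.g. a `title` key) are assumptions for illustration.

```python
import json
from pathlib import Path

# Illustrative path; the actual repository layout may differ.
# Any *.json file inside a conference directory such as iclr20
# holds a single submission.
paper_file = next(Path("data/conferences_raw/iclr20").glob("*.json"))
paper = json.loads(paper_file.read_text(encoding="utf-8"))

# "review_content" is documented above; "title" is an assumed field name.
print(paper.get("title", "<no title field>"))
print(len(paper.get("review_content", [])), "reviews for this paper")
```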
## 2. Cleaned Data

`conferences_cleaned/` contains reviews and papers where we removed all unwanted character sequences from the reviews.
For details on the preprocessing steps, please refer to our paper "Argument Mining Driven Analysis of Peer-Reviews".
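Purely to illustrate the kind of cleanup involved, here is a minimal sketch; the patterns below are assumptions and not the preprocessing described in the paper.

```python
import re

def clean_review(text: str) -> str:
    """Illustrative cleanup only; these patterns are assumptions,
    not the rules described in the paper."""
    text = re.sub(r"<[^>]+>", " ", text)          # drop HTML-like tags
    text = re.sub(r"[^\x20-\x7E\n]", " ", text)   # drop non-printable characters
    return re.sub(r"\s+", " ", text).strip()      # collapse whitespace
```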
## 3. Annotated Data

`conferences_annotated/` contains `sentence_level` and `token_level` data for 77 reviews, each annotated by 3 annotators.
We use three labels:

- PRO: Arguments supporting the acceptance of the paper.
- CON: Arguments opposing the acceptance of the paper.
- NON: Non-argumentative sentences/tokens which have no influence on the acceptance of the paper.
Based on these labels, we define three tasks (a label-mapping sketch follows this list):

- Argumentation Detection:
  A binary classification of whether a text span is an argument.
  The classes are denoted by ARG and NON, where ARG is the union of the PRO and CON classes.
- Stance Detection:
  A binary classification of whether an argumentative text span supports or opposes the acceptance of the paper.
  The model is trained and evaluated only on argumentative PRO and CON text spans.
- Joint Detection:
  A multi-class classification between the classes PRO, CON, and NON, i.e. the combination of argumentation and stance detection.
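The three tasks only differ in how the annotated labels are mapped to target classes. A minimal sketch of that mapping (the label strings come from the annotation scheme above; the function names are illustrative):

```python
from typing import Optional

def argumentation_target(label: str) -> str:
    # Binary task: argumentative (ARG, the union of PRO and CON) vs. NON.
    return "ARG" if label in ("PRO", "CON") else "NON"

def stance_target(label: str) -> Optional[str]:
    # Binary task defined only on argumentative spans; NON spans are dropped.
    return label if label in ("PRO", "CON") else None

def joint_target(label: str) -> str:
    # Multi-class task: the three annotated labels are used directly.
    return label
```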
## 4. Generalization across Conferences

`conferences_annotated_generalization/` contains `token_level` data of the 77 reviews, split differently than in 3.
We studied the model’s generalization to peer-reviews for papers from other (sub)domains.
To this end, we reduce the test set to only contain reviews from the GI’20 conference.
The focus of the GI’20 conference is Computer Graphics and Human-Computer Interaction, while the other conferences are focused on Representation Learning, AI, and Medical Imaging.
We consider GI’20 a subdomain since all conferences are from the domain of computer science.
The two training variants are (a split sketch follows this list):

- NO-GI: The original training dataset with all sentences from reviews of GI’20 removed.
- ALL: A resampling of the original training dataset of the same size as NO-GI, with sentences from all conferences.
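A sketch of how the two training variants could be built, assuming each training sentence carries a conference identifier (the field name `conference` and the identifier string `gi20` are assumptions):

```python
import random

def build_training_variants(examples, gi_id="gi20", seed=0):
    """examples: list of dicts with an (assumed) 'conference' field."""
    # NO-GI: drop every sentence that originates from a GI'20 review.
    no_gi = [ex for ex in examples if ex["conference"] != gi_id]
    # ALL: resample sentences from all conferences to the size of NO-GI.
    rng = random.Random(seed)
    all_variant = rng.sample(examples, k=len(no_gi))
    return no_gi, all_variant
```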
## 5. Jupyter Notebook

ReviewStat is a Jupyter notebook that shows statistics of the raw dataset.
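Similar statistics can also be computed outside the notebook; a small sketch over the raw JSON files (the root path is an assumption, `"review_content"` is documented above):

```python
import json
from pathlib import Path
from statistics import mean

raw_root = Path("data/conferences_raw")  # assumed location of the raw data

for conference_dir in sorted(p for p in raw_root.iterdir() if p.is_dir()):
    review_counts = [
        len(json.loads(f.read_text(encoding="utf-8")).get("review_content", []))
        for f in conference_dir.glob("*.json")
    ]
    if review_counts:
        print(f"{conference_dir.name}: {len(review_counts)} papers, "
              f"{mean(review_counts):.1f} reviews per paper on average")
```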