---
language:
  - en
paperswithcode_id: scitail
pretty_name: SciTail
dataset_info:
  - config_name: dgem_format
    features:
      - name: premise
        dtype: string
      - name: hypothesis
        dtype: string
      - name: label
        dtype: string
      - name: hypothesis_graph_structure
        dtype: string
    splits:
      - name: train
        num_bytes: 6817626
        num_examples: 23088
      - name: test
        num_bytes: 606867
        num_examples: 2126
      - name: validation
        num_bytes: 393209
        num_examples: 1304
    download_size: 2007018
    dataset_size: 7817702
  - config_name: predictor_format
    features:
      - name: answer
        dtype: string
      - name: sentence2_structure
        dtype: string
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: gold_label
        dtype: string
      - name: question
        dtype: string
    splits:
      - name: train
        num_bytes: 8864108
        num_examples: 23587
      - name: test
        num_bytes: 795275
        num_examples: 2126
      - name: validation
        num_bytes: 510140
        num_examples: 1304
    download_size: 2169238
    dataset_size: 10169523
  - config_name: snli_format
    features:
      - name: sentence1_binary_parse
        dtype: string
      - name: sentence1_parse
        dtype: string
      - name: sentence1
        dtype: string
      - name: sentence2_parse
        dtype: string
      - name: sentence2
        dtype: string
      - name: annotator_labels
        sequence: string
      - name: gold_label
        dtype: string
    splits:
      - name: train
        num_bytes: 22457379
        num_examples: 23596
      - name: test
        num_bytes: 2005142
        num_examples: 2126
      - name: validation
        num_bytes: 1264378
        num_examples: 1304
    download_size: 7476483
    dataset_size: 25726899
  - config_name: tsv_format
    features:
      - name: premise
        dtype: string
      - name: hypothesis
        dtype: string
      - name: label
        dtype: string
    splits:
      - name: train
        num_bytes: 4606527
        num_examples: 23097
      - name: test
        num_bytes: 410267
        num_examples: 2126
      - name: validation
        num_bytes: 260422
        num_examples: 1304
    download_size: 1836546
    dataset_size: 5277216
configs:
  - config_name: dgem_format
    data_files:
      - split: train
        path: dgem_format/train-*
      - split: test
        path: dgem_format/test-*
      - split: validation
        path: dgem_format/validation-*
  - config_name: predictor_format
    data_files:
      - split: train
        path: predictor_format/train-*
      - split: test
        path: predictor_format/test-*
      - split: validation
        path: predictor_format/validation-*
  - config_name: snli_format
    data_files:
      - split: train
        path: snli_format/train-*
      - split: test
        path: snli_format/test-*
      - split: validation
        path: snli_format/validation-*
  - config_name: tsv_format
    data_files:
      - split: train
        path: tsv_format/train-*
      - split: test
        path: tsv_format/test-*
      - split: validation
        path: tsv_format/validation-*
---

# Dataset Card for "scitail"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and its correct answer choice are converted into an assertive statement to form the hypothesis. We use information retrieval to obtain relevant text from a large corpus of web sentences, and use these sentences as the premise P. We crowdsource the annotation of each premise-hypothesis pair as supporting (entails) or not (neutral) to create the SciTail dataset. The dataset contains 27,026 examples: 10,101 with the entails label and 16,925 with the neutral label.
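
A minimal sketch of loading the dataset and tallying the two labels with the Hugging Face `datasets` library (assuming the dataset is available under the `scitail` identifier; the label strings follow the summary above):

```python
from collections import Counter

from datasets import load_dataset

# Load the plain premise/hypothesis/label view of SciTail.
scitail = load_dataset("scitail", "tsv_format")

# Tally the "entails" / "neutral" labels in the training split.
print(Counter(scitail["train"]["label"]))
```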

### Supported Tasks and Leaderboards

More Information Needed

### Languages

The text in the dataset is in English (`en`).

## Dataset Structure

### Data Instances

#### dgem_format

- **Size of downloaded dataset files:** 14.18 MB
- **Size of the generated dataset:** 7.83 MB
- **Total amount of disk used:** 22.01 MB

An example of 'train' looks as follows.


#### predictor_format

- **Size of downloaded dataset files:** 14.18 MB
- **Size of the generated dataset:** 10.19 MB
- **Total amount of disk used:** 24.37 MB

An example of 'validation' looks as follows.


#### snli_format

- **Size of downloaded dataset files:** 14.18 MB
- **Size of the generated dataset:** 25.77 MB
- **Total amount of disk used:** 39.95 MB

An example of 'validation' looks as follows.


#### tsv_format

- **Size of downloaded dataset files:** 14.18 MB
- **Size of the generated dataset:** 5.30 MB
- **Total amount of disk used:** 19.46 MB

An example of 'validation' looks as follows.
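
A short sketch for inspecting one training record from each of the four configurations (assuming the `datasets` library; the field names per configuration are listed under Data Fields below):

```python
from datasets import load_dataset

# Print the first training example of every configuration.
for config in ("dgem_format", "predictor_format", "snli_format", "tsv_format"):
    ds = load_dataset("scitail", config)
    print(config, ds["train"][0])
```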


### Data Fields

The data fields are the same among all splits.

#### dgem_format

- `premise`: a string feature.
- `hypothesis`: a string feature.
- `label`: a string feature.
- `hypothesis_graph_structure`: a string feature.

#### predictor_format

- `answer`: a string feature.
- `sentence2_structure`: a string feature.
- `sentence1`: a string feature.
- `sentence2`: a string feature.
- `gold_label`: a string feature.
- `question`: a string feature.

#### snli_format

- `sentence1_binary_parse`: a string feature.
- `sentence1_parse`: a string feature.
- `sentence1`: a string feature.
- `sentence2_parse`: a string feature.
- `sentence2`: a string feature.
- `annotator_labels`: a list of string features.
- `gold_label`: a string feature.

#### tsv_format

- `premise`: a string feature.
- `hypothesis`: a string feature.
- `label`: a string feature.
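
The same field listings can be read programmatically from the dataset schema, as in this sketch (assuming the `datasets` library):

```python
from datasets import load_dataset

ds = load_dataset("scitail", "snli_format")
# All splits of a configuration share the same features.
print(ds["train"].features)
```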

### Data Splits

| name             | train | validation | test |
|------------------|------:|-----------:|-----:|
| dgem_format      | 23088 |       1304 | 2126 |
| predictor_format | 23587 |       1304 | 2126 |
| snli_format      | 23596 |       1304 | 2126 |
| tsv_format       | 23097 |       1304 | 2126 |
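
The split sizes above can be checked from the loaded `DatasetDict`, as in this sketch (assuming the `datasets` library):

```python
from datasets import load_dataset

ds = load_dataset("scitail", "dgem_format")
# Expected for dgem_format: {'train': 23088, 'test': 2126, 'validation': 1304}
print({split: ds[split].num_rows for split in ds})
```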

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information

```bibtex
@inproceedings{scitail,
     Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},
     Booktitle = {AAAI},
     Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},
     Year = {2018}
}
```

### Contributions

Thanks to @patrickvonplaten, @mariamabarham, @lewtun, @thomwolf for adding this dataset.