---
dataset_info:
- config_name: da
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 146694
num_examples: 1024
- name: val
num_bytes: 33804
num_examples: 256
- name: test
num_bytes: 276605
num_examples: 2048
- name: full_train
num_bytes: 738488
num_examples: 5352
download_size: 703702
dataset_size: 1195591
- config_name: de
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 145852
num_examples: 1024
- name: val
num_bytes: 38035
num_examples: 256
- name: test
num_bytes: 295391
num_examples: 2048
- name: full_train
num_bytes: 3784665
num_examples: 26098
download_size: 2609370
dataset_size: 4263943
- config_name: en
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 142432
num_examples: 1024
- name: val
num_bytes: 33811
num_examples: 256
- name: test
num_bytes: 276421
num_examples: 2048
- name: full_train
num_bytes: 2065205
num_examples: 15348
download_size: 1490532
dataset_size: 2517869
- config_name: fo
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 146596
num_examples: 1024
- name: val
num_bytes: 39001
num_examples: 256
- name: test
num_bytes: 156312
num_examples: 1024
- name: full_train
num_bytes: 164465
num_examples: 1148
download_size: 240821
dataset_size: 506374
- config_name: is
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 171843
num_examples: 1024
- name: val
num_bytes: 42652
num_examples: 256
- name: test
num_bytes: 347828
num_examples: 2048
- name: full_train
num_bytes: 680496
num_examples: 4008
download_size: 665385
dataset_size: 1242819
- config_name: nb
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 129282
num_examples: 1024
- name: val
num_bytes: 30779
num_examples: 256
- name: test
num_bytes: 252311
num_examples: 2048
- name: full_train
num_bytes: 3223287
num_examples: 25908
download_size: 2159785
dataset_size: 3635659
- config_name: nl
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 123042
num_examples: 1024
- name: val
num_bytes: 30717
num_examples: 256
- name: test
num_bytes: 249767
num_examples: 2048
- name: full_train
num_bytes: 2230281
num_examples: 18110
download_size: 1551953
dataset_size: 2633807
- config_name: nn
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 137050
num_examples: 1024
- name: val
num_bytes: 35422
num_examples: 256
- name: test
num_bytes: 274470
num_examples: 2048
- name: full_train
num_bytes: 3048722
num_examples: 22768
download_size: 2086411
dataset_size: 3495664
- config_name: sv
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 142715
num_examples: 1024
- name: val
num_bytes: 36601
num_examples: 256
- name: test
num_bytes: 278198
num_examples: 2048
- name: full_train
num_bytes: 998332
num_examples: 7434
download_size: 810940
dataset_size: 1455846
configs:
- config_name: da
data_files:
- split: train
path: da/train-*
- split: val
path: da/val-*
- split: test
path: da/test-*
- split: full_train
path: da/full_train-*
- config_name: de
data_files:
- split: train
path: de/train-*
- split: val
path: de/val-*
- split: test
path: de/test-*
- split: full_train
path: de/full_train-*
- config_name: en
data_files:
- split: train
path: en/train-*
- split: val
path: en/val-*
- split: test
path: en/test-*
- split: full_train
path: en/full_train-*
- config_name: fo
data_files:
- split: train
path: fo/train-*
- split: val
path: fo/val-*
- split: test
path: fo/test-*
- split: full_train
path: fo/full_train-*
- config_name: is
data_files:
- split: train
path: is/train-*
- split: val
path: is/val-*
- split: test
path: is/test-*
- split: full_train
path: is/full_train-*
- config_name: nb
data_files:
- split: train
path: nb/train-*
- split: val
path: nb/val-*
- split: test
path: nb/test-*
- split: full_train
path: nb/full_train-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- split: val
path: nl/val-*
- split: test
path: nl/test-*
- split: full_train
path: nl/full_train-*
- config_name: nn
data_files:
- split: train
path: nn/train-*
- split: val
path: nn/val-*
- split: test
path: nn/test-*
- split: full_train
path: nn/full_train-*
- config_name: sv
data_files:
- split: train
path: sv/train-*
- split: val
path: sv/val-*
- split: test
path: sv/test-*
- split: full_train
path: sv/full_train-*
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- da
- sv
- nb
- nn
- 'no'
- is
- fo
- en
- de
- nl
pretty_name: ScaLA
size_categories:
- 100K<n<1M
---

# Dataset Card for ScaLA

## Dataset Description

- Point of Contact: Dan Saattrup Nielsen

### Dataset Summary
This dataset consists of documents labelled according to whether they are grammatically correct or not. It has been automatically generated using this script, which corrupts documents from a Universal Dependencies treebank.
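The corruption script itself is linked above; as a rough, hedged illustration of the two corruption strategies that appear in the `corruption_type` field, the sketch below (with hypothetical helper names, not taken from the actual script) shows how a grammatical sentence could be turned into an ungrammatical one:

```python
import random


def flip_neighbours(tokens: list[str]) -> list[str]:
    """Swap two adjacent tokens, e.g. 'the red car' -> 'red the car'."""
    if len(tokens) < 2:
        return tokens
    i = random.randrange(len(tokens) - 1)
    corrupted = tokens.copy()
    corrupted[i], corrupted[i + 1] = corrupted[i + 1], corrupted[i]
    return corrupted


def delete(tokens: list[str]) -> list[str]:
    """Drop a random token, e.g. 'she has left' -> 'she left'."""
    if len(tokens) < 2:
        return tokens
    i = random.randrange(len(tokens))
    return tokens[:i] + tokens[i + 1:]


# Example usage on a whitespace-tokenised sentence
print(" ".join(flip_neighbours("the quick brown fox jumps".split())))
print(" ".join(delete("the quick brown fox jumps".split())))
```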
### Supported Tasks and Leaderboards
The intended task for this dataset is evaluation of linguistic acceptability, framed as binary classification of each document as correct or incorrect. Leaderboards are live here.
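As a minimal sketch of how such predictions could be scored (the metric choice here is illustrative, not a claim about the official leaderboard setup), standard binary classification metrics apply directly to the `label` values:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Hypothetical gold labels and model predictions
gold = ["correct", "incorrect", "incorrect", "correct", "incorrect"]
pred = ["correct", "incorrect", "correct", "correct", "incorrect"]

print("Accuracy:", accuracy_score(gold, pred))
print("Matthews correlation:", matthews_corrcoef(gold, pred))
```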
### Languages

The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`), Norwegian Nynorsk (`nn`), Icelandic (`is`), Faroese (`fo`), German (`de`), Dutch (`nl`) and English (`en`).
## Dataset Structure

An example from the dataset looks as follows:

```
{
  "text": "some text",
  "corruption_type": <null, "flip_neighbours" or "delete">,
  "label": <"incorrect" or "correct">
}
```
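A sketch of loading one language configuration with the 🤗 Datasets library; the repository identifier `alexandrainst/scala` is assumed here and should be replaced with the actual one if it differs:

```python
from datasets import load_dataset

# Load the Danish configuration; each config has train/val/test/full_train splits.
dataset = load_dataset("alexandrainst/scala", "da")

print(dataset["train"][0])        # {'text': ..., 'corruption_type': ..., 'label': ...}
print(dataset["train"].features)  # field names and dtypes as listed below
```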
### Data Fields

- `text`: a `string` feature.
- `corruption_type`: a `string` or `null` feature.
- `label`: a `string` feature.
## Dataset Creation

### Curation Rationale
There are very few linguistic acceptability datasets in the given languages.
### Source Data
The dataset has been collected from the Universal Dependencies treebanks for the given languages.
## Additional Information

### Dataset Curators
Dan Saattrup Nielsen from the Alexandra Institute
### Licensing Information
The dataset is licensed under the CC BY-SA 4.0 license.