Dataset Card for WikiQAar
Dataset Summary
Arabic version of WikiQA, produced by automatic machine translation; crowd workers then selected the best candidate translation for inclusion in the corpus.
Supported Tasks and Leaderboards
[More Information Needed]
Languages
The dataset is based on Arabic.
Dataset Structure
Data Instances
Each data point contains a question, a candidate answer, and a label indicating whether the answer is valid.
Data Fields
- `question_id`: the question id.
- `question`: the question text.
- `document_id`: the Wikipedia document id.
- `answer_id`: the answer id.
- `answer`: a candidate answer to the question.
- `label`: 1 if the `answer` is correct, 0 otherwise.
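The fields above can be illustrated with a small sketch. The record below is invented for illustration only; field names and types follow the schema listed above, but the values are not taken from the dataset:

```python
# A hypothetical record following the WikiQAar schema described above.
# Values are invented; only the field names and types match the card.
record = {
    "question_id": "Q1",
    "question": "من هو مخترع الهاتف؟",  # "Who invented the telephone?"
    "document_id": "D1",
    "answer_id": "D1-0",
    "answer": "اخترع ألكسندر غراهام بيل الهاتف.",
    "label": 1,  # 1 = correct answer, 0 = incorrect
}

# The binary label supports answer-sentence selection: keep only
# the candidates annotated as correct for a given question.
def correct_answers(records):
    return [r["answer"] for r in records if r["label"] == 1]

print(correct_answers([record]))
```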
Data Splits
The dataset is split into train, validation, and test sets.

| | train | validation | test |
|---|---|---|---|
| Data split | 70,264 | 20,632 | 10,387 |
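The split sizes in the table can be sanity-checked programmatically; the counts below are copied directly from the table above:

```python
# Split sizes as reported in the dataset card's table.
splits = {"train": 70_264, "validation": 20_632, "test": 10_387}

total = sum(splits.values())
for name, n in splits.items():
    print(f"{name}: {n:,} examples ({n / total:.1%} of {total:,})")
```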
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
[More Information Needed]
Initial Data Collection and Normalization
The dataset was created by translating the English WikiQA corpus into Arabic.
Who are the source language producers?
The source text comes from the English WikiQA corpus.
Annotations
The dataset does not contain any additional annotations.
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
@InProceedings{YangYihMeek:EMNLP2015:WikiQA,
  author    = {Yang, Yi and Yih, Wen-tau and Meek, Christopher},
  title     = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}",
  booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  year      = {2015},
  doi       = {10.18653/v1/D15-1237},
  pages     = {2013--2018},
}
Contributions
Thanks to @zaidalyafeai for adding this dataset.