license: cc-by-4.0
pretty_name: KorQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: korquad
task_categories: question-generation
task_ids: question-generation
Dataset Card for "lmqg/qg_korquad"
Dataset Description
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992
- Point of Contact: Asahi Ushio
Dataset Summary
This is a subset of QG-Bench, a unified question generation benchmark proposed in "Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation" (EMNLP 2022 main conference). It is a modified version of KorQuAD for the question generation (QG) task. Since the original dataset contains only training and validation sets, we manually sampled a test set from the training set such that its paragraphs do not overlap with those remaining in the training set.
Supported Tasks and Leaderboards
question-generation: The dataset is assumed to be used to train a model for question generation. Success on this task is typically measured by achieving high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
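As a minimal sketch (assuming the Hugging Face datasets library is installed), the splits can be loaded directly from the Hub under the repository id lmqg/qg_korquad shown above:

```python
from datasets import load_dataset

# Load all three splits (train / validation / test) from the Hugging Face Hub.
dataset = load_dataset("lmqg/qg_korquad")
print(dataset)

# Peek at a single training instance.
example = dataset["train"][0]
print(example["question"])          # the reference question
print(example["paragraph_answer"])  # paragraph with the answer marked by <hl>
```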
Languages
Korean (ko)
Dataset Structure
An example of 'train' looks as follows.
{
"question": "ν¨μν΄μνμ΄ μ£Όλͺ©νλ νꡬλ?",
"paragraph": "λ³νμ λν μ΄ν΄μ λ¬μ¬λ μμ°κ³Όνμ μμ΄μ μΌλ°μ μΈ μ£Όμ μ΄λ©°, λ―Έμ λΆνμ λ³νλ₯Ό νꡬνλ κ°λ ₯ν λꡬλ‘μ λ°μ λμλ€. ν¨μλ λ³ννλ μμ λ¬μ¬ν¨μ μμ΄μ μ€μΆμ μΈ κ°λ
μΌλ‘μ¨ λ μ€λ₯΄κ² λλ€. μ€μμ μ€λ³μλ‘ κ΅¬μ±λ ν¨μμ μλ°ν νκ΅¬κ° μ€ν΄μνμ΄λΌλ λΆμΌλ‘ μλ €μ§κ² λμκ³ , 볡μμμ λν μ΄μ κ°μ νꡬλΆμΌλ 볡μν΄μνμ΄λΌκ³ νλ€. ν¨μν΄μνμ ν¨μμ 곡κ°(νΉν 무νμ°¨μ)μ νꡬμ μ£Όλͺ©νλ€. ν¨μν΄μνμ λ§μ μμ©λΆμΌ μ€ νλκ° μμμνμ΄λ€. λ§μ λ¬Έμ λ€μ΄ μμ°μ€λ½κ² μκ³Ό κ·Έ μμ λ³νμ¨μ κ΄κ³λ‘ κ·μ°©λκ³ , μ΄λ¬ν λ¬Έμ λ€μ΄ λ―ΈλΆλ°©μ μμΌλ‘ λ€λ£¨μ΄μ§λ€. μμ°μ λ§μ νμλ€μ΄ λμνκ³λ‘ κΈ°μ λ μ μλ€. νΌλ μ΄λ‘ μ μ΄λ¬ν μμΈ‘ λΆκ°λ₯ν νμμ νꡬνλ λ° μλΉν κΈ°μ¬λ₯Ό νλ€.",
"answer": "ν¨μμ 곡κ°(νΉν 무νμ°¨μ)μ νꡬ",
"sentence": "ν¨μν΄μνμ ν¨μμ 곡κ°(νΉν 무νμ°¨μ)μ νꡬ μ μ£Όλͺ©νλ€.",
"paragraph_sentence": 'λ³νμ λν μ΄ν΄μ λ¬μ¬λ μμ°κ³Όνμ μμ΄μ μΌλ°μ μΈ μ£Όμ μ΄λ©°, λ―Έμ λΆνμ λ³νλ₯Ό νꡬνλ κ°λ ₯ν λꡬλ‘μ λ°μ λμλ€. ν¨μλ λ³ννλ μμ λ¬μ¬ν¨μ μμ΄μ μ€μΆμ μΈ κ°λ
μΌλ‘μ¨ λ μ€λ₯΄κ² λλ€. μ€μμ μ€λ³μλ‘ κ΅¬μ±λ ν¨μμ μλ°ν νκ΅¬κ° μ€ν΄μνμ΄λΌλ λΆμΌλ‘ μλ €μ§κ² λμκ³ , 볡μμμ λν μ΄μ κ°μ νꡬ λΆμΌλ 볡μν΄μνμ΄λΌκ³ νλ€. <hl> ν¨μν΄μνμ ν¨μμ 곡κ°(νΉν 무νμ°¨μ)μ νꡬ μ μ£Όλͺ©νλ€. <hl> ν¨μν΄μνμ λ§μ μμ©λΆμΌ μ€ νλκ° μμμνμ΄λ€. λ§μ λ¬Έμ λ€μ΄ μμ°μ€λ½κ² μκ³Ό κ·Έ μμ λ³νμ¨μ κ΄κ³λ‘ κ·μ°©λκ³ , μ΄λ¬ν λ¬Έμ λ€μ΄ λ―ΈλΆλ°©μ μμΌλ‘ λ€λ£¨μ΄μ§λ€. μμ°μ λ§μ νμλ€μ΄ λμνκ³λ‘ κΈ°μ λ μ μλ€. νΌλ μ΄λ‘ μ μ΄λ¬ν μμΈ‘ λΆκ°λ₯ν νμμ νꡬνλ λ° μλΉν κΈ°μ¬λ₯Ό νλ€.',
"paragraph_answer": 'λ³νμ λν μ΄ν΄μ λ¬μ¬λ μμ°κ³Όνμ μμ΄μ μΌλ°μ μΈ μ£Όμ μ΄λ©°, λ―Έμ λΆνμ λ³νλ₯Ό νꡬνλ κ°λ ₯ν λꡬλ‘μ λ°μ λμλ€. ν¨μλ λ³ννλ μμ λ¬μ¬ν¨μ μμ΄μ μ€μΆμ μΈ κ°λ
μΌλ‘μ¨ λ μ€λ₯΄κ² λλ€. μ€μμ μ€λ³μλ‘ κ΅¬μ±λ ν¨μμ μλ°ν νκ΅¬κ° μ€ν΄μνμ΄λΌλ λΆμΌλ‘ μλ €μ§κ² λμκ³ , 볡μμμ λν μ΄μ κ°μ νꡬ λΆμΌλ 볡μν΄μνμ΄λΌκ³ νλ€. ν¨μν΄μνμ <hl> ν¨μμ 곡κ°(νΉν 무νμ°¨μ)μ νꡬ <hl>μ μ£Όλͺ©νλ€. ν¨μν΄μνμ λ§μ μμ©λΆμΌ μ€ νλκ° μμμνμ΄λ€. λ§μ λ¬Έμ λ€μ΄ μμ°μ€λ½κ² μκ³Ό κ·Έ μμ λ³νμ¨μ κ΄κ³λ‘ κ·μ°©λκ³ , μ΄λ¬ν λ¬Έμ λ€μ΄ λ―ΈλΆλ°©μ μμΌλ‘ λ€λ£¨μ΄μ§λ€. μμ°μ λ§μ νμλ€μ΄ λμνκ³λ‘ κΈ°μ λ μ μλ€. νΌλ μ΄λ‘ μ μ΄λ¬ν μμΈ‘ λΆκ°λ₯ν νμμ νꡬνλ λ° μλΉν κΈ°μ¬λ₯Ό νλ€.',
"sentence_answer": "ν¨μν΄μνμ <hl> ν¨μμ 곡κ°(νΉν 무νμ°¨μ)μ νꡬ <hl> μ μ£Όλͺ©νλ€."
}
The data fields are the same among all splits.
- question: a string feature.
- paragraph: a string feature.
- answer: a string feature.
- sentence: a string feature.
- paragraph_answer: a string feature, which is the same as the paragraph but with the answer highlighted by the special token <hl>.
- paragraph_sentence: a string feature, which is the same as the paragraph but with the sentence containing the answer highlighted by the special token <hl>.
- sentence_answer: a string feature, which is the same as the sentence but with the answer highlighted by the special token <hl>.
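To make the highlighting convention concrete, the sketch below shows how a paragraph_answer-style input could be reproduced from the raw paragraph and answer fields. This illustrates the format only; it is not the preprocessing script actually used to build the dataset, and the helper name highlight_answer is ours.

```python
def highlight_answer(paragraph: str, answer: str, hl_token: str = "<hl>") -> str:
    """Wrap the first occurrence of `answer` inside `paragraph` with <hl> tokens,
    mimicking the layout of the `paragraph_answer` field (illustrative sketch)."""
    start = paragraph.find(answer)
    if start == -1:
        raise ValueError("answer span not found in paragraph")
    end = start + len(answer)
    return f"{paragraph[:start]}{hl_token} {answer} {hl_token}{paragraph[end:]}"

# Hypothetical usage on one record of this dataset:
# highlighted = highlight_answer(example["paragraph"], example["answer"])
```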
Each of the paragraph_answer, paragraph_sentence, and sentence_answer features can be used to train a question generation model, but each provides different information: the paragraph_answer and sentence_answer features are for answer-aware question generation, while the paragraph_sentence feature is for sentence-aware question generation.
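For example, an answer-aware question generation model can be trained on (paragraph_answer, question) pairs. The mapping below is a minimal, model-agnostic sketch of how such source/target pairs could be derived with the datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_korquad")

def to_answer_aware_pair(example):
    # Source: paragraph with the answer highlighted by <hl>; target: the question.
    return {"source": example["paragraph_answer"], "target": example["question"]}

train_pairs = dataset["train"].map(
    to_answer_aware_pair,
    remove_columns=dataset["train"].column_names,
)
print(train_pairs[0])
```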
Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 54556 | 5766 | 5766 |
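The split sizes above, as well as the paragraph-level separation between the training and test sets described in the summary, can be checked with a few lines; a minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_korquad")

# Split sizes reported in the table above.
print({split: len(dataset[split]) for split in ("train", "validation", "test")})

# Paragraph-level overlap between train and test (expected to be empty,
# since the test set was sampled to be paragraph-disjoint from training).
train_paragraphs = set(dataset["train"]["paragraph"])
test_paragraphs = set(dataset["test"]["paragraph"])
print(len(train_paragraphs & test_paragraphs))
```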
Citation Information
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}