--- |
|
license: cc-by-4.0 |
|
metrics: |
|
- bleu4 |
|
- meteor |
|
- rouge-l |
|
- bertscore |
|
- moverscore |
|
language: fr |
|
datasets: |
|
- lmqg/qg_frquad |
|
pipeline_tag: text2text-generation |
|
tags: |
|
- question generation |
|
widget: |
|
- text: "Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc." |
|
example_title: "Question Generation Example 1" |
|
- text: "Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la <hl> Grande Guerre <hl> de 14-18, ou son rejet par l'électorat en juillet 1945." |
|
example_title: "Question Generation Example 2" |
|
- text: "contre <hl> Normie Smith <hl> et 15 000 dollars le 28 novembre 1938." |
|
example_title: "Question Generation Example 3" |
|
model-index: |
|
- name: lmqg/mbart-large-cc25-frquad-qg |
|
results: |
|
- task: |
|
name: Text2text Generation |
|
type: text2text-generation |
|
dataset: |
|
name: lmqg/qg_frquad |
|
type: default |
|
args: default |
|
metrics: |
|
- name: BLEU4 (Question Generation) |
|
type: bleu4_question_generation |
|
value: 0.72 |
|
- name: ROUGE-L (Question Generation) |
|
type: rouge_l_question_generation |
|
value: 16.4 |
|
- name: METEOR (Question Generation) |
|
type: meteor_question_generation |
|
value: 7.78 |
|
- name: BERTScore (Question Generation) |
|
type: bertscore_question_generation |
|
value: 71.48 |
|
- name: MoverScore (Question Generation) |
|
type: moverscore_question_generation |
|
value: 50.35 |
|
- name: BLEU4 (Question & Answer Generation (with Gold Answer)) |
|
type: bleu4_question_answer_generation_with_gold_answer |
|
value: 9.7 |
|
- name: ROUGE-L (Question & Answer Generation (with Gold Answer)) |
|
type: rouge_l_question_answer_generation_with_gold_answer |
|
value: 33.61 |
|
- name: METEOR (Question & Answer Generation (with Gold Answer)) |
|
type: meteor_question_answer_generation_with_gold_answer |
|
value: 26.31 |
|
- name: BERTScore (Question & Answer Generation (with Gold Answer)) |
|
type: bertscore_question_answer_generation_with_gold_answer |
|
value: 80.27 |
|
- name: MoverScore (Question & Answer Generation (with Gold Answer)) |
|
type: moverscore_question_answer_generation_with_gold_answer |
|
value: 55.65 |
|
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] |
|
type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer |
|
value: 81.27 |
|
- name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] |
|
type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer |
|
value: 81.25 |
|
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] |
|
type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer |
|
value: 81.29 |
|
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] |
|
type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer |
|
value: 55.61 |
|
- name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] |
|
type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer |
|
value: 55.6 |
|
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] |
|
type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer |
|
value: 55.61 |
|
--- |
|
|
|
# Model Card of `lmqg/mbart-large-cc25-frquad-qg` |
|
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
|
|
|
|
|
### Overview |
|
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) |
|
- **Language:** fr |
|
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) |
|
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/) |
|
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) |
|
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) |
|
|
|
### Usage |
|
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) |
|
```python |
|
from lmqg import TransformersQG |
|
|
|
# initialize model |
|
model = TransformersQG(language="fr", model="lmqg/mbart-large-cc25-frquad-qg") |
|
|
|
# model prediction |
|
# generate one question per (context, answer) pair; returns a list of questions
questions = model.generate_q(
    list_context=["Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc."],
    list_answer=["le Suprême Berger"]
)
|
|
|
``` |
|
|
|
- With `transformers` |
|
```python |
|
from transformers import pipeline |
|
|
|
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qg") |
|
output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") |
|
|
|
``` |
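The pipeline input must contain the answer span wrapped in `<hl>` tokens, as in the widget examples above. A minimal sketch of a helper that builds such an input, reusing the `pipe` object defined above (the `highlight_answer` function is illustrative, not part of `transformers` or `lmqg`):

```python
def highlight_answer(context: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in `context` with <hl> tokens."""
    start = context.find(answer)
    if start == -1:
        raise ValueError("answer not found in context")
    end = start + len(answer)
    return f"{context[:start]}<hl> {answer} <hl>{context[end:]}"

context = ("Ce black dog peut être lié à des évènements traumatisants issus du monde "
           "extérieur, tels que son renvoi de l'Amirauté après la catastrophe des "
           "Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par "
           "l'électorat en juillet 1945.")
output = pipe(highlight_answer(context, "Grande Guerre"))
```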
|
|
|
## Evaluation |
|
|
|
|
|
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) |
|
|
|
| | Score | Type | Dataset | |
|
|:-----------|--------:|:--------|:-----------------------------------------------------------------| |
|
| BERTScore | 71.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_1 | 14.36 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_2 | 3.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_3 | 1.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_4 | 0.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| METEOR | 7.78 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| MoverScore | 50.35 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| ROUGE_L | 16.4 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
|
|
|
|
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json) |
|
|
|
| | Score | Type | Dataset | |
|
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| |
|
| BERTScore | 80.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_1 | 29.47 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_2 | 19.07 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_3 | 13.39 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| Bleu_4 | 9.7 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| METEOR | 26.31 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| MoverScore | 55.65 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| QAAlignedF1Score (BERTScore) | 81.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| QAAlignedF1Score (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| QAAlignedPrecision (BERTScore) | 81.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| QAAlignedPrecision (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| QAAlignedRecall (BERTScore) | 81.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| QAAlignedRecall (MoverScore) | 55.6 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
|
| ROUGE_L | 33.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | |
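The raw metric files linked above are plain JSON, so the scores in these tables can be inspected without installing `lmqg`. A minimal sketch using only the standard library (the URL is the question-generation metric file linked above):

```python
import json
import urllib.request

url = ("https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/"
       "metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json")
with urllib.request.urlopen(url) as f:
    scores = json.load(f)
print(json.dumps(scores, indent=2))  # the reported QG metrics
```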
|
|
|
|
|
|
|
## Training hyperparameters |
|
|
|
The following hyperparameters were used during fine-tuning: |
|
- dataset_path: lmqg/qg_frquad |
|
- dataset_name: default |
|
- input_types: ['paragraph_answer'] |
|
- output_types: ['question'] |
|
- prefix_types: None |
|
- model: facebook/mbart-large-cc25 |
|
- max_length: 512 |
|
- max_length_output: 32 |
|
- epoch: 8 |
|
- batch: 4 |
|
- lr: 0.001 |
|
- fp16: False |
|
- random_seed: 1 |
|
- gradient_accumulation_steps: 16 |
|
- label_smoothing: 0.15 |
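Note that with batch 4 and gradient_accumulation_steps 16, each optimizer update sees an effective batch of 4 × 16 = 64 examples.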
|
|
|
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/trainer_config.json). |
|
|
|
## Citation |
|
``` |
|
@inproceedings{ushio-etal-2022-generative, |
|
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", |
|
author = "Ushio, Asahi and |
|
Alva-Manchego, Fernando and |
|
Camacho-Collados, Jose", |
|
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", |
|
month = dec, |
|
year = "2022", |
|
address = "Abu Dhabi, U.A.E.", |
|
publisher = "Association for Computational Linguistics", |
|
} |
|
|
|
``` |
|
|