commit files to HF hub
- README.md +19 -79
- eval/metric.first.answer.paragraph_answer.question.lmqg_qg_frquad.default.json +1 -1
- eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json +1 -1
- eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt +0 -0
- eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt +0 -0

README.md CHANGED
@@ -33,55 +33,19 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value:
+    value: 9.47
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value:
+    value: 30.62
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value:
+    value: 19.8
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value:
+    value: 81.75
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value:
+    value: 57.96
-  - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 81.27
-  - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 81.25
-  - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 81.29
-  - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 55.61
-  - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 55.6
-  - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 55.61
-  - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
-    value: 75.55
-  - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
-    value: 77.16
-  - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer
-    value: 74.04
-  - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
-    value: 51.75
-  - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer
-    value: 52.52
-  - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer
-    value: 51.03
 ---
 
 # Model Card of `lmqg/mbart-large-cc25-frquad-qg`
@@ -125,38 +89,14 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 
 | | Score | Type | Dataset |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore |
-| Bleu_1 |
-| Bleu_2 |
-| Bleu_3 |
-| Bleu_4 |
-| METEOR |
-| MoverScore |
-| ROUGE_L |
+| BERTScore | 81.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1 | 30.64 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2 | 19.09 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3 | 13.26 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4 | 9.47 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR | 19.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore | 57.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L | 30.62 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-
-
-- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json)
-
-| | Score | Type | Dataset |
-|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| QAAlignedF1Score (BERTScore) | 81.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedF1Score (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (BERTScore) | 81.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (BERTScore) | 81.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (MoverScore) | 55.6 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-
-
-- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-frquad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-frquad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.lmqg_mbart-large-cc25-frquad-ae.json)
-
-| | Score | Type | Dataset |
-|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| QAAlignedF1Score (BERTScore) | 75.55 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedF1Score (MoverScore) | 51.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (BERTScore) | 74.04 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (MoverScore) | 51.03 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (BERTScore) | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (MoverScore) | 52.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
@@ -165,18 +105,18 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 The following hyperparameters were used during fine-tuning:
 - dataset_path: lmqg/qg_frquad
 - dataset_name: default
-- input_types:
-- output_types:
+- input_types: paragraph_answer
+- output_types: question
 - prefix_types: None
 - model: facebook/mbart-large-cc25
 - max_length: 512
 - max_length_output: 32
-- epoch:
-- batch:
-- lr: 0.
+- epoch: 7
+- batch: 16
+- lr: 0.0002
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 4
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/trainer_config.json).
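The hunk headers above carry the model card's usage context (`output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême ...`), where the answer span is wrapped in `<hl>` tokens and the model generates a question about it. A minimal usage sketch, assuming the generic `transformers` text2text-generation pipeline rather than the card's exact (truncated) snippet:

```python
from transformers import pipeline

# Hedged sketch: load the QG model through the generic text2text-generation
# pipeline (assumption; the model card's own snippet may differ in detail).
pipe = pipeline("text2text-generation", model="lmqg/mbart-large-cc25-frquad-qg")

# The answer span is highlighted with <hl> tokens, as in the hunk context above
# (the full French paragraph is truncated there; this is a shortened stand-in).
paragraph = "Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême <hl> »."
output = pipe(paragraph)
print(output[0]["generated_text"])  # the generated French question
```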
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.
+{"validation": {"Bleu_1": 0.3100079744816499, "Bleu_2": 0.1864790045549178, "Bleu_3": 0.127381730709651, "Bleu_4": 0.09041409589904609}, "test": {"Bleu_1": 0.30524955201080156, "Bleu_2": 0.19005737843582, "Bleu_3": 0.1321122729640997, "Bleu_4": 0.09440059249803767}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.
+{"validation": {"Bleu_1": 0.31183109270653203, "Bleu_2": 0.18792193517614694, "Bleu_3": 0.1285923759543279, "Bleu_4": 0.09139932689525869, "METEOR": 0.2003735134108025, "ROUGE_L": 0.32654333131001667, "BERTScore": 0.8112381537155198, "MoverScore": 0.5767622342106911}, "test": {"Bleu_1": 0.306424741768385, "Bleu_2": 0.1908808322930319, "Bleu_3": 0.1326252420714428, "Bleu_4": 0.09468344912075398, "METEOR": 0.19795720123707838, "ROUGE_L": 0.3062119155590123, "BERTScore": 0.8174732348101039, "MoverScore": 0.5795739024172827}}
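These metric files store each score as a fraction in [0, 1], while the README table and model-index metadata report the same numbers scaled by 100 (for example, the test Bleu_4 of 0.0947 above appears as 9.47, and BERTScore 0.8175 as 81.75). A small sketch of that mapping, assuming the file is read from the repository root as committed above:

```python
import json

# Path of the raw metric file committed in this change.
path = "eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json"

with open(path) as f:
    metrics = json.load(f)

# The JSON holds "validation" and "test" splits; the model card reports the
# test scores multiplied by 100 and rounded to two decimals.
for name, score in sorted(metrics["test"].items()):
    print(f"{name}: {round(score * 100, 2)}")
```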
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff.

eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff.