asahi417 committed
Commit
9801bf1
1 Parent(s): 04801be

commit files to HF hub

README.md CHANGED
@@ -33,55 +33,19 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value: 0.72
+    value: 9.47
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value: 16.4
+    value: 30.62
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value: 7.78
+    value: 19.8
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value: 71.48
+    value: 81.75
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value: 50.35
+    value: 57.96
-  - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 81.27
-  - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 81.25
-  - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 81.29
-  - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 55.61
-  - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 55.6
-  - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
-    type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
-    value: 55.61
-  - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
-    value: 75.55
-  - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
-    value: 77.16
-  - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer
-    value: 74.04
-  - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
-    value: 51.75
-  - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer
-    value: 52.52
-  - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]
-    type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer
-    value: 51.03
 ---
 
 # Model Card of `lmqg/mbart-large-cc25-frquad-qg`
@@ -125,38 +89,14 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 
 | | Score | Type | Dataset |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore | 71.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1 | 14.36 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2 | 3.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3 | 1.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4 | 0.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR | 7.78 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore | 50.35 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L | 16.4 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore | 81.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1 | 30.64 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2 | 19.09 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3 | 13.26 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4 | 9.47 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR | 19.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore | 57.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L | 30.62 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-
-
-- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json)
-
-| | Score | Type | Dataset |
-|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| QAAlignedF1Score (BERTScore) | 81.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedF1Score (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (BERTScore) | 81.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (BERTScore) | 81.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (MoverScore) | 55.6 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-
-
-- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-frquad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-frquad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.lmqg_mbart-large-cc25-frquad-ae.json)
-
-| | Score | Type | Dataset |
-|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| QAAlignedF1Score (BERTScore) | 75.55 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedF1Score (MoverScore) | 51.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (BERTScore) | 74.04 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedPrecision (MoverScore) | 51.03 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (BERTScore) | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| QAAlignedRecall (MoverScore) | 52.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
@@ -165,18 +105,18 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 The following hyperparameters were used during fine-tuning:
 - dataset_path: lmqg/qg_frquad
 - dataset_name: default
-- input_types: ['paragraph_answer']
-- output_types: ['question']
+- input_types: paragraph_answer
+- output_types: question
 - prefix_types: None
 - model: facebook/mbart-large-cc25
 - max_length: 512
 - max_length_output: 32
-- epoch: 8
-- batch: 4
-- lr: 0.001
+- epoch: 7
+- batch: 16
+- lr: 0.0002
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps: 16
+- gradient_accumulation_steps: 4
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/trainer_config.json).
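A quick sanity check on the hyperparameter hunk above: `batch` and `gradient_accumulation_steps` both change, but the effective batch size they imply is identical in the two configurations, so the substantive training changes are the learning rate and the epoch count. A minimal sketch of the arithmetic:

```python
# Effective batch size = batch * gradient_accumulation_steps.
old_effective = 4 * 16   # previous config: batch=4, accumulation=16 -> 64
new_effective = 16 * 4   # new config: batch=16, accumulation=4 -> 64
assert old_effective == new_effective == 64
# The remaining differences are lr (0.001 -> 0.0002) and epoch (8 -> 7).
```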
 
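For context on the truncated `output = pipe("Créateur » (Maker), ...` fragment in the hunk headers: it comes from the card's unchanged usage section. Below is a minimal sketch of that usage, assuming the standard `transformers` text2text pipeline; the French paragraph is abbreviated from the card's example, with the answer span wrapped in `<hl>` tokens to match the `input_types: paragraph_answer` setting above:

```python
from transformers import pipeline

# Question-generation model updated by this commit.
pipe = pipeline("text2text-generation", model="lmqg/mbart-large-cc25-frquad-qg")

# Input is a paragraph with the target answer highlighted by <hl> tokens;
# the model generates a question answered by the highlighted span.
# (Paragraph abbreviated from the card's example.)
paragraph = "Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême <hl> »"
output = pipe(paragraph)
print(output)  # e.g. [{'generated_text': '<generated French question>'}]
```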
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.15266974606897177, "Bleu_2": 0.032315652726206935, "Bleu_3": 0.01242272000629668, "Bleu_4": 0.005639579702145279}, "test": {"Bleu_1": 0.14319611140546562, "Bleu_2": 0.03550101836852145, "Bleu_3": 0.014304205020402904, "Bleu_4": 0.007136815650060918}}
+ {"validation": {"Bleu_1": 0.3100079744816499, "Bleu_2": 0.1864790045549178, "Bleu_3": 0.127381730709651, "Bleu_4": 0.09041409589904609}, "test": {"Bleu_1": 0.30524955201080156, "Bleu_2": 0.19005737843582, "Bleu_3": 0.1321122729640997, "Bleu_4": 0.09440059249803767}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.15390469643777396, "Bleu_2": 0.032738698795741726, "Bleu_3": 0.012638666968838693, "Bleu_4": 0.005749977493630156, "METEOR": 0.07530493765743741, "ROUGE_L": 0.1803680529492586, "BERTScore": 0.7121313798282017, "MoverScore": 0.5039723302559359}, "test": {"Bleu_1": 0.1435759891766124, "Bleu_2": 0.03577421097785861, "Bleu_3": 0.014463809488654859, "Bleu_4": 0.007249735123112426, "METEOR": 0.07782490144091612, "ROUGE_L": 0.16404130481401358, "BERTScore": 0.7148478843440516, "MoverScore": 0.5034779936072653}}
+ {"validation": {"Bleu_1": 0.31183109270653203, "Bleu_2": 0.18792193517614694, "Bleu_3": 0.1285923759543279, "Bleu_4": 0.09139932689525869, "METEOR": 0.2003735134108025, "ROUGE_L": 0.32654333131001667, "BERTScore": 0.8112381537155198, "MoverScore": 0.5767622342106911}, "test": {"Bleu_1": 0.306424741768385, "Bleu_2": 0.1908808322930319, "Bleu_3": 0.1326252420714428, "Bleu_4": 0.09468344912075398, "METEOR": 0.19795720123707838, "ROUGE_L": 0.3062119155590123, "BERTScore": 0.8174732348101039, "MoverScore": 0.5795739024172827}}
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render.
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render.