Datasets:
Tasks: Table to Text
Modalities: Text
Languages: English
Size: 10K - 100K
Tags: data-to-text
License:
Sebastian Gehrmann committed
Commit 5b3af5b
1 Parent(s): 734e8fc
Data Card.
Files changed:
- README.md +1 -1
- web_nlg.json +1 -1
README.md
CHANGED
@@ -565,7 +565,7 @@ We evaluated a wide range of models as part of the GEM benchmark.
 
 <!-- info: What are the most relevant previous results for this task/dataset? -->
 <!-- scope: microscope -->
-Results can be found
+Results can be found on the [GEM website](https://gem-benchmark.com/results).
 
 
 
web_nlg.json
CHANGED
@@ -4,7 +4,7 @@
 "other-metrics-definitions": "N/A",
 "has-previous-results": "yes",
 "current-evaluation": "We evaluated a wide range of models as part of the GEM benchmark.",
-"previous-results": "Results can be found
+"previous-results": "Results can be found on the [GEM website](https://gem-benchmark.com/results).",
 "original-evaluation": "For both languages, the participating systems are automatically evaluated in a multi-reference scenario. Each English hypothesis is compared to a maximum of 5 references, and each Russian one to a maximum of 7 references. On average, English data has 2.89 references per test instance, and Russian data has 2.52 references per instance. \n\nIn a human evaluation, example are uniformly sampled across size of triple sets and the following dimensions are assessed (on MTurk and Yandex.Toloka):\n\n1. Data Coverage: Does the text include descriptions of all predicates presented in the data?\n2. Relevance: Does the text describe only such predicates (with related subjects and objects), which are found in the data?\n3. Correctness: When describing predicates which are found in the data, does the text mention correct the objects and adequately introduces the subject for this specific predicate?\n4. Text Structure: Is the text grammatical, well-structured, written in acceptable English language?\n5. Fluency: Is it possible to say that the text progresses naturally, forms a coherent whole and it is easy to understand the text?\n\nFor additional information like the instructions, we refer to the original paper.\n"
 }
 },
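The "original-evaluation" field quoted in the hunk above describes a multi-reference automatic evaluation: each English hypothesis is compared against up to 5 references, and each Russian one against up to 7. As a minimal sketch of what such scoring looks like, assuming sacrebleu as the metric implementation (the challenge's exact tooling isn't named here) and with invented example sentences:

```python
# Multi-reference BLEU sketch; sacrebleu is an assumption, and the
# hypotheses/references below are invented for illustration only.
from sacrebleu.metrics import BLEU

hypotheses = [
    "Alan Bean was born in Wheeler, Texas.",
    "The runway length of Aarhus Airport is 2702 metres.",
]

# One list per reference "stream", each aligned with the hypotheses.
# Instances with fewer references are padded with None, which
# sacrebleu skips, so an instance can carry anywhere from 1 up to
# the maximum number of references.
references = [
    ["Alan Bean was born in Wheeler, Texas.",
     "Aarhus Airport has a runway length of 2702 metres."],
    ["Wheeler, Texas is the birthplace of Alan Bean.",
     None],
]

print(BLEU().corpus_score(hypotheses, references))
```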