lvwerra (HF staff) committed
Commit 86af275
1 Parent(s): 6e8a8a3

Update Space (evaluate main: 828c6327)

Files changed (5):
  1. README.md +145 -5
  2. app.py +6 -0
  3. bleu.py +121 -0
  4. requirements.txt +3 -0
  5. tokenizer_13a.py +100 -0
README.md CHANGED
@@ -1,12 +1,152 @@
  ---
- title: Bleu
- emoji: 🏢
- colorFrom: yellow
- colorTo: purple
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
  ---
+ title: BLEU
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
  ---

+ # Metric Card for BLEU
+
+
+ ## Metric Description
+ BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. BLEU was one of the first metrics to claim a high correlation with human judgements of quality, and it remains one of the most popular automated and inexpensive metrics.
+
+ Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Neither intelligibility nor grammatical correctness is taken into account.
+
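To make the description above concrete, the following is a minimal, self-contained sketch of the two quantities BLEU combines: clipped (modified) n-gram precision and a brevity penalty. It is illustrative only and is not the implementation this module uses (the module delegates to the TensorFlow NMT `compute_bleu` script listed under Further References); it handles a single sentence with a single reference and no smoothing.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    # multiset of n-grams of length n
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_sketch(prediction, reference, max_order=4):
    """Single-sentence, single-reference BLEU sketch (no smoothing)."""
    pred, ref = prediction.split(), reference.split()
    precisions = []
    for n in range(1, max_order + 1):
        pred_ngrams, ref_ngrams = ngram_counts(pred, n), ngram_counts(ref, n)
        # clip each predicted n-gram's count by its count in the reference
        overlap = sum(min(count, ref_ngrams[gram]) for gram, count in pred_ngrams.items())
        precisions.append(overlap / max(sum(pred_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # geometric mean of the n-gram precisions
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_order)
    # brevity penalty: penalize candidates shorter than the reference
    bp = 1.0 if len(pred) >= len(ref) else math.exp(1 - len(ref) / len(pred))
    return bp * geo_mean

print(bleu_sketch("hello there general kenobi", "hello there general kenobi"))  # 1.0
```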
+ ## Intended Uses
+ BLEU and BLEU-derived metrics are most often used for machine translation.
+
+ ## How to Use
+
+ This metric takes as input a list of predicted sentences and a list of lists of reference sentences (since each predicted sentence can have multiple references):
+
+ ```python
+ >>> import evaluate
+ >>> predictions = ["hello there general kenobi", "foo bar foobar"]
+ >>> references = [
+ ...     ["hello there general kenobi", "hello there !"],
+ ...     ["foo bar foobar"]
+ ... ]
+ >>> bleu = evaluate.load("bleu")
+ >>> results = bleu.compute(predictions=predictions, references=references)
+ >>> print(results)
+ {'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.1666666666666667, 'translation_length': 7, 'reference_length': 6}
+ ```
+
+ ### Inputs
+ - **predictions** (`list` of `str`s): Translations to score.
+ - **references** (`list` of `list`s of `str`s): References for each translation.
+ - **tokenizer**: approach used for standardizing `predictions` and `references`.
+ The default tokenizer is `tokenizer_13a`, a relatively minimal tokenization approach that is nonetheless equivalent to `mteval-v13a`, used by WMT.
+ It can be replaced by another tokenizer from a source such as [SacreBLEU](https://github.com/mjpost/sacrebleu/tree/master/sacrebleu/tokenizers).
+
+ The default tokenizer is based on whitespace and regexes. It can be replaced by any function that takes a string as input and returns a list of tokens as output, e.g. `word_tokenize()` from [NLTK](https://www.nltk.org/api/nltk.tokenize.html) or pretrained tokenizers from the [Tokenizers library](https://huggingface.co/docs/tokenizers/index) (see the sketch after this list).
+ - **max_order** (`int`): Maximum n-gram order to use when computing the BLEU score. Defaults to `4`.
+ - **smooth** (`boolean`): Whether or not to apply Lin et al. 2004 smoothing. Defaults to `False`.
+
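As a concrete illustration of the `tokenizer` argument, any callable mapping a string to a list of tokens can be passed. The function below is a hypothetical toy tokenizer (lowercase, then split on whitespace), used only to show the calling convention:

```python
import evaluate

def lowercase_whitespace_tokenizer(text):
    # hypothetical toy tokenizer: lowercase, then split on whitespace
    return text.lower().split()

bleu = evaluate.load("bleu")
results = bleu.compute(
    predictions=["Hello there general Kenobi"],
    references=[["hello there general kenobi"]],
    tokenizer=lowercase_whitespace_tokenizer,
)
print(results["bleu"])  # 1.0, since the lowercased token sequences match exactly
```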
+ ### Output Values
+ - **bleu** (`float`): BLEU score
+ - **precisions** (`list` of `float`s): n-gram precisions, one per order up to `max_order`,
+ - **brevity_penalty** (`float`): brevity penalty,
+ - **length_ratio** (`float`): ratio of translation length to reference length,
+ - **translation_length** (`int`): total length of the translations, in tokens,
+ - **reference_length** (`int`): total length of the references, in tokens
+
+ Output Example:
+ ```python
+ {'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.1666666666666667, 'translation_length': 7, 'reference_length': 6}
+ ```
+
+ BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations; for this reason, it is not necessary to attain a score of 1. Adding additional reference translations will increase the BLEU score, because there are more opportunities to match.
+
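For intuition on scores below 1, consider a candidate whose n-grams all appear in the reference but which is shorter than it; a rough sketch of the expected behaviour (exact values depend on the tokenizer and parameters used):

```python
import evaluate

bleu = evaluate.load("bleu")
results = bleu.compute(
    predictions=["hello there general kenobi"],
    references=[["hello there general kenobi and friends"]],
)
# All n-gram precisions are 1.0 here, but the candidate (4 tokens) is shorter than
# the reference (6 tokens), so the brevity penalty exp(1 - 6/4) ≈ 0.61 drags the
# score below 1; expect results["bleu"] to be roughly 0.61.
print(results)
```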
+ #### Values from Popular Papers
+ The [original BLEU paper](https://aclanthology.org/P02-1040/) (Papineni et al. 2002) compares BLEU scores of five different models on the same 500-sentence corpus. These scores ranged from 0.0527 to 0.2571.
+
+ The [Attention is All you Need paper](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) (Vaswani et al. 2017) reports a BLEU score of 0.284 on the WMT 2014 English-to-German translation task, and 0.41 on the WMT 2014 English-to-French translation task.
+
+ ### Examples
+
+ Example where each prediction has 1 reference:
+ ```python
+ >>> import evaluate
+ >>> predictions = ["hello there general kenobi", "foo bar foobar"]
+ >>> references = [
+ ...     ["hello there general kenobi"],
+ ...     ["foo bar foobar"]
+ ... ]
+ >>> bleu = evaluate.load("bleu")
+ >>> results = bleu.compute(predictions=predictions, references=references)
+ >>> print(results)
+ {'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.0, 'translation_length': 7, 'reference_length': 7}
+ ```
+
+ Example where the first prediction has 2 references:
+ ```python
+ >>> import evaluate
+ >>> predictions = ["hello there general kenobi", "foo bar foobar"]
+ >>> references = [
+ ...     ["hello there general kenobi", "hello there!"],
+ ...     ["foo bar foobar"]
+ ... ]
+ >>> bleu = evaluate.load("bleu")
+ >>> results = bleu.compute(predictions=predictions, references=references)
+ >>> print(results)
+ {'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.1666666666666667, 'translation_length': 7, 'reference_length': 6}
+ ```
+
+ Example with the word tokenizer from NLTK:
+ ```python
+ >>> import evaluate
+ >>> from nltk.tokenize import word_tokenize
+ >>> bleu = evaluate.load("bleu")
+ >>> predictions = ["hello there general kenobi", "foo bar foobar"]
+ >>> references = [
+ ...     ["hello there general kenobi", "hello there!"],
+ ...     ["foo bar foobar"]
+ ... ]
+ >>> results = bleu.compute(predictions=predictions, references=references, tokenizer=word_tokenize)
+ >>> print(results)
+ {'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.1666666666666667, 'translation_length': 7, 'reference_length': 6}
+ ```
+
+ ## Limitations and Bias
+ This metric has multiple known limitations:
+ - BLEU compares overlap in tokens from the predictions and references, instead of comparing meaning. This can lead to discrepancies between BLEU scores and human ratings.
+ - Shorter predicted translations achieve higher scores than longer ones, simply due to how the score is calculated. A brevity penalty is introduced to attempt to counteract this.
+ - BLEU scores are not comparable across different datasets, nor are they comparable across different languages.
+ - BLEU scores can vary greatly depending on which parameters are used to generate the scores, especially when different tokenization and normalization techniques are used. It is therefore not possible to compare BLEU scores generated using different parameters, or when these parameters are unknown. For more discussion around this topic, see the following [issue](https://github.com/huggingface/datasets/issues/137).
+
+ ## Citation
+ ```bibtex
+ @INPROCEEDINGS{Papineni02bleu:a,
+     author = {Kishore Papineni and Salim Roukos and Todd Ward and Wei-jing Zhu},
+     title = {BLEU: a Method for Automatic Evaluation of Machine Translation},
+     booktitle = {},
+     year = {2002},
+     pages = {311--318}
+ }
+ @inproceedings{lin-och-2004-orange,
+     title = "{ORANGE}: a Method for Evaluating Automatic Evaluation Metrics for Machine Translation",
+     author = "Lin, Chin-Yew and
+       Och, Franz Josef",
+     booktitle = "{COLING} 2004: Proceedings of the 20th International Conference on Computational Linguistics",
+     month = "aug 23{--}aug 27",
+     year = "2004",
+     address = "Geneva, Switzerland",
+     publisher = "COLING",
+     url = "https://www.aclweb.org/anthology/C04-1072",
+     pages = "501--507",
+ }
+ ```
+
+ ## Further References
+ - This Hugging Face implementation uses [this TensorFlow implementation](https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py).
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("bleu")
+ launch_gradio_widget(module)
bleu.py ADDED
@@ -0,0 +1,121 @@
+ # Copyright 2020 The HuggingFace Evaluate Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """BLEU metric."""
+
+ import datasets
+
+ import evaluate
+
+ from .nmt_bleu import compute_bleu  # From: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py
+ from .tokenizer_13a import Tokenizer13a
+
+
+ _CITATION = """\
+ @INPROCEEDINGS{Papineni02bleu:a,
+     author = {Kishore Papineni and Salim Roukos and Todd Ward and Wei-jing Zhu},
+     title = {BLEU: a Method for Automatic Evaluation of Machine Translation},
+     booktitle = {},
+     year = {2002},
+     pages = {311--318}
+ }
+ @inproceedings{lin-och-2004-orange,
+     title = "{ORANGE}: a Method for Evaluating Automatic Evaluation Metrics for Machine Translation",
+     author = "Lin, Chin-Yew and
+       Och, Franz Josef",
+     booktitle = "{COLING} 2004: Proceedings of the 20th International Conference on Computational Linguistics",
+     month = "aug 23{--}aug 27",
+     year = "2004",
+     address = "Geneva, Switzerland",
+     publisher = "COLING",
+     url = "https://www.aclweb.org/anthology/C04-1072",
+     pages = "501--507",
+ }
+ """
+
+ _DESCRIPTION = """\
+ BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another.
+ Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is"
+ – this is the central idea behind BLEU. BLEU was one of the first metrics to claim a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics.
+
+ Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations.
+ Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality.
+ Neither intelligibility nor grammatical correctness is taken into account.
+ """
+
+ _KWARGS_DESCRIPTION = """
+ Computes BLEU score of translated segments against one or more references.
+ Args:
+     predictions: list of translations to score.
+     references: list of lists of references for each translation.
+     tokenizer: approach used for tokenizing `predictions` and `references`.
+         The default tokenizer is `tokenizer_13a`, a minimal tokenization approach that is equivalent to `mteval-v13a`, used by WMT.
+         This can be replaced by any function that takes a string as input and returns a list of tokens as output.
+     max_order: Maximum n-gram order to use when computing BLEU score.
+     smooth: Whether or not to apply Lin et al. 2004 smoothing.
+ Returns:
+     'bleu': bleu score,
+     'precisions': n-gram precisions,
+     'brevity_penalty': brevity penalty,
+     'length_ratio': ratio of lengths,
+     'translation_length': translation_length,
+     'reference_length': reference_length
+ Examples:
+
+     >>> predictions = ["hello there general kenobi", "foo bar foobar"]
+     >>> references = [
+     ...     ["hello there general kenobi", "hello there!"],
+     ...     ["foo bar foobar"]
+     ... ]
+     >>> bleu = evaluate.load("bleu")
+     >>> results = bleu.compute(predictions=predictions, references=references)
+     >>> print(results["bleu"])
+     1.0
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class Bleu(evaluate.EvaluationModule):
+     def _info(self):
+         return evaluate.EvaluationModuleInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Value("string", id="sequence"),
+                     "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
+                 }
+             ),
+             codebase_urls=["https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py"],
+             reference_urls=[
+                 "https://en.wikipedia.org/wiki/BLEU",
+                 "https://towardsdatascience.com/evaluating-text-output-in-nlp-bleu-at-your-own-risk-e8609665a213",
+             ],
+         )
+
+     def _compute(self, predictions, references, tokenizer=Tokenizer13a(), max_order=4, smooth=False):
+         # tokenize references and predictions before computing n-gram overlap
+         references = [[tokenizer(r) for r in ref] for ref in references]
+         predictions = [tokenizer(p) for p in predictions]
+         score = compute_bleu(
+             reference_corpus=references, translation_corpus=predictions, max_order=max_order, smooth=smooth
+         )
+         (bleu, precisions, bp, ratio, translation_length, reference_length) = score
+         return {
+             "bleu": bleu,
+             "precisions": precisions,
+             "brevity_penalty": bp,
+             "length_ratio": ratio,
+             "translation_length": translation_length,
+             "reference_length": reference_length,
+         }
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ # TODO: fix github to release
+ git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+ datasets~=2.0
tokenizer_13a.py ADDED
@@ -0,0 +1,100 @@
+ # Source: https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/tokenizers/tokenizer_13a.py
+ # Copyright 2020 SacreBLEU Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import re
+ from functools import lru_cache
+
+
+ class BaseTokenizer:
+     """A base dummy tokenizer to derive from."""
+
+     def signature(self):
+         """
+         Returns a signature for the tokenizer.
+         :return: signature string
+         """
+         return "none"
+
+     def __call__(self, line):
+         """
+         Tokenizes an input line with the tokenizer.
+         :param line: a segment to tokenize
+         :return: the tokenized line
+         """
+         return line
+
+
+ class TokenizerRegexp(BaseTokenizer):
+     def signature(self):
+         return "re"
+
+     def __init__(self):
+         self._re = [
+             # language-dependent part (assuming Western languages)
+             (re.compile(r"([\{-\~\[-\` -\&\(-\+\:-\@\/])"), r" \1 "),
+             # tokenize period and comma unless preceded by a digit
+             (re.compile(r"([^0-9])([\.,])"), r"\1 \2 "),
+             # tokenize period and comma unless followed by a digit
+             (re.compile(r"([\.,])([^0-9])"), r" \1 \2"),
+             # tokenize dash when preceded by a digit
+             (re.compile(r"([0-9])(-)"), r"\1 \2 "),
+             # one space only between words
+             # NOTE: Doing this in Python (below) is faster
+             # (re.compile(r'\s+'), r' '),
+         ]
+
+     @lru_cache(maxsize=2**16)
+     def __call__(self, line):
+         """Common post-processing tokenizer for `13a` and `zh` tokenizers.
+         :param line: a segment to tokenize
+         :return: the tokenized line
+         """
+         for (_re, repl) in self._re:
+             line = _re.sub(repl, line)
+
+         # no leading or trailing spaces, single space within words
+         # return ' '.join(line.split())
+         # This line is changed with regards to the original tokenizer (seen above) to return individual words
+         return line.split()
+
+
+ class Tokenizer13a(BaseTokenizer):
+     def signature(self):
+         return "13a"
+
+     def __init__(self):
+         self._post_tokenizer = TokenizerRegexp()
+
+     @lru_cache(maxsize=2**16)
+     def __call__(self, line):
+         """Tokenizes an input line using a relatively minimal tokenization
+         that is however equivalent to mteval-v13a, used by WMT.
+
+         :param line: a segment to tokenize
+         :return: the tokenized line
+         """
+
+         # language-independent part:
+         line = line.replace("<skipped>", "")
+         line = line.replace("-\n", "")
+         line = line.replace("\n", " ")
+
+         if "&" in line:
+             line = line.replace("&quot;", '"')
+             line = line.replace("&amp;", "&")
+             line = line.replace("&lt;", "<")
+             line = line.replace("&gt;", ">")
+
+         return self._post_tokenizer(f" {line} ")
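For reference, a quick usage sketch of the `13a` tokenizer defined above; it assumes `tokenizer_13a.py` is importable from the working directory, and the sample sentence is purely illustrative:

```python
from tokenizer_13a import Tokenizer13a  # assumes this file is on the Python path

tokenizer = Tokenizer13a()
# Punctuation is split off into separate tokens, while the decimal number stays intact:
print(tokenizer("Hello, world! It costs 3.50 dollars."))
# ['Hello', ',', 'world', '!', 'It', 'costs', '3.50', 'dollars', '.']
```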