lvwerra (HF staff) committed
Commit 4462578
1 parent: 4375581

Update Space (evaluate main: 828c6327)

Files changed (4):
  1. README.md +98 -5
  2. app.py +6 -0
  3. matthews_correlation.py +103 -0
  4. requirements.txt +4 -0
README.md CHANGED
@@ -1,12 +1,105 @@
 ---
-title: Matthews_correlation
-emoji: 🏃
-colorFrom: green
-colorTo: yellow
+title: Matthews Correlation Coefficient
+emoji: 🤗
+colorFrom: blue
+colorTo: red
 sdk: gradio
 sdk_version: 3.0.2
 app_file: app.py
 pinned: false
+tags:
+- evaluate
+- metric
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
+# Metric Card for Matthews Correlation Coefficient
+
+## Metric Description
+The Matthews correlation coefficient is used in machine learning as a
+measure of the quality of binary and multiclass classifications. It takes
+into account true and false positives and negatives and is generally
+regarded as a balanced measure which can be used even if the classes are of
+very different sizes. The MCC is in essence a correlation coefficient value
+between -1 and +1. A coefficient of +1 represents a perfect prediction, 0
+an average random prediction and -1 an inverse prediction. The statistic
+is also known as the phi coefficient. [source: Wikipedia]
+
+## How to Use
+At minimum, this metric requires a list of predictions and a list of references:
+```python
+>>> matthews_metric = evaluate.load("matthews_correlation")
+>>> results = matthews_metric.compute(references=[0, 1], predictions=[0, 1])
+>>> print(results)
+{'matthews_correlation': 1.0}
+```
+
+### Inputs
+- **`predictions`** (`list` of `int`s): Predicted class labels.
+- **`references`** (`list` of `int`s): Ground truth labels.
+- **`sample_weight`** (`list` of `int`s, `float`s, or `bool`s): Sample weights. Defaults to `None`.
+
+### Output Values
+- **`matthews_correlation`** (`float`): Matthews correlation coefficient.
+
+The metric output takes the following form:
+```python
+{'matthews_correlation': 0.54}
+```
+
+This metric can be any value from -1 to +1, inclusive.
+
+#### Values from Popular Papers
+
+
+### Examples
+A basic example with only predictions and references as inputs:
+```python
+>>> matthews_metric = evaluate.load("matthews_correlation")
+>>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2],
+...                                   predictions=[1, 2, 2, 0, 3, 3])
+>>> print(results)
+{'matthews_correlation': 0.5384615384615384}
+```
+
+The same example as above, but also including sample weights:
+```python
+>>> matthews_metric = evaluate.load("matthews_correlation")
+>>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2],
+...                                   predictions=[1, 2, 2, 0, 3, 3],
+...                                   sample_weight=[0.5, 3, 1, 1, 1, 2])
+>>> print(results)
+{'matthews_correlation': 0.09782608695652174}
+```
+
+The same example as above, with sample weights that cause a negative correlation:
+```python
+>>> matthews_metric = evaluate.load("matthews_correlation")
+>>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2],
+...                                   predictions=[1, 2, 2, 0, 3, 3],
+...                                   sample_weight=[0.5, 1, 0, 0, 0, 1])
+>>> print(results)
+{'matthews_correlation': -0.25}
+```
+
+## Limitations and Bias
+*Note any limitations or biases that the metric has.*
+
+
+## Citation
+```bibtex
+@article{scikit-learn,
+  title={Scikit-learn: Machine Learning in {P}ython},
+  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
+          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
+          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
+          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
+  journal={Journal of Machine Learning Research},
+  volume={12},
+  pages={2825--2830},
+  year={2011}
+}
+```
+
+## Further References
+
+- This Hugging Face implementation uses [this scikit-learn implementation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)
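The example values in the metric card above can be cross-checked without scikit-learn: the multiclass MCC depends only on per-class totals of sample weights (counts, when unweighted). A minimal pure-Python sketch — the helper name `mcc` is ours for illustration, not part of this module — that reproduces the documented outputs:

```python
from math import sqrt

def mcc(references, predictions, sample_weight=None):
    """Multiclass Matthews correlation coefficient from weighted count statistics."""
    if sample_weight is None:
        sample_weight = [1.0] * len(references)
    s = sum(sample_weight)  # total weight
    # weight of correctly classified samples
    c = sum(w for r, p, w in zip(references, predictions, sample_weight) if r == p)
    t, p = {}, {}  # per-class weight totals for references and predictions
    for r, pr, w in zip(references, predictions, sample_weight):
        t[r] = t.get(r, 0.0) + w
        p[pr] = p.get(pr, 0.0) + w
    classes = set(t) | set(p)
    cov = c * s - sum(t.get(k, 0.0) * p.get(k, 0.0) for k in classes)
    var_t = s * s - sum(v * v for v in t.values())
    var_p = s * s - sum(v * v for v in p.values())
    if var_t == 0 or var_p == 0:
        return 0.0  # degenerate case: a single class on one side
    return cov / sqrt(var_t * var_p)

print(mcc([1, 3, 2, 0, 3, 2], [1, 2, 2, 0, 3, 3]))  # 0.5384615384615384
print(mcc([1, 3, 2, 0, 3, 2], [1, 2, 2, 0, 3, 3],
          sample_weight=[0.5, 3, 1, 1, 1, 2]))      # 0.09782608695652174
print(mcc([1, 3, 2, 0, 3, 2], [1, 2, 2, 0, 3, 3],
          sample_weight=[0.5, 1, 0, 0, 0, 1]))      # -0.25
```

Returning 0.0 when a variance term vanishes mirrors what scikit-learn does in degenerate cases; for production use, prefer the `evaluate.load("matthews_correlation")` module shown above, which delegates to `sklearn.metrics.matthews_corrcoef`.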
app.py ADDED
@@ -0,0 +1,6 @@
+import evaluate
+from evaluate.utils import launch_gradio_widget
+
+
+module = evaluate.load("matthews_correlation")
+launch_gradio_widget(module)
matthews_correlation.py ADDED
@@ -0,0 +1,103 @@
+# Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Matthews Correlation metric."""
+
+import datasets
+from sklearn.metrics import matthews_corrcoef
+
+import evaluate
+
+
+_DESCRIPTION = """
+Compute the Matthews correlation coefficient (MCC)
+
+The Matthews correlation coefficient is used in machine learning as a
+measure of the quality of binary and multiclass classifications. It takes
+into account true and false positives and negatives and is generally
+regarded as a balanced measure which can be used even if the classes are of
+very different sizes. The MCC is in essence a correlation coefficient value
+between -1 and +1. A coefficient of +1 represents a perfect prediction, 0
+an average random prediction and -1 an inverse prediction. The statistic
+is also known as the phi coefficient. [source: Wikipedia]
+"""
+
+_KWARGS_DESCRIPTION = """
+Args:
+    predictions (list of int): Predicted labels, as returned by a model.
+    references (list of int): Ground truth labels.
+    sample_weight (list of int, float, or bool): Sample weights. Defaults to `None`.
+Returns:
+    matthews_correlation (dict containing float): Matthews correlation.
+Examples:
+    Example 1, a basic example with only predictions and references as inputs:
+        >>> matthews_metric = evaluate.load("matthews_correlation")
+        >>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2],
+        ...                                   predictions=[1, 2, 2, 0, 3, 3])
+        >>> print(round(results['matthews_correlation'], 2))
+        0.54
+
+    Example 2, the same example as above, but also including sample weights:
+        >>> matthews_metric = evaluate.load("matthews_correlation")
+        >>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2],
+        ...                                   predictions=[1, 2, 2, 0, 3, 3],
+        ...                                   sample_weight=[0.5, 3, 1, 1, 1, 2])
+        >>> print(round(results['matthews_correlation'], 2))
+        0.1
+
+    Example 3, the same example as above, but with sample weights that cause a negative correlation:
+        >>> matthews_metric = evaluate.load("matthews_correlation")
+        >>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2],
+        ...                                   predictions=[1, 2, 2, 0, 3, 3],
+        ...                                   sample_weight=[0.5, 1, 0, 0, 0, 1])
+        >>> print(round(results['matthews_correlation'], 2))
+        -0.25
+"""
+
+_CITATION = """\
+@article{scikit-learn,
+  title={Scikit-learn: Machine Learning in {P}ython},
+  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
+          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
+          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
+          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
+  journal={Journal of Machine Learning Research},
+  volume={12},
+  pages={2825--2830},
+  year={2011}
+}
+"""
+
+
+@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+class MatthewsCorrelation(evaluate.EvaluationModule):
+    def _info(self):
+        return evaluate.EvaluationModuleInfo(
+            description=_DESCRIPTION,
+            citation=_CITATION,
+            inputs_description=_KWARGS_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "predictions": datasets.Value("int32"),
+                    "references": datasets.Value("int32"),
+                }
+            ),
+            reference_urls=[
+                "https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html"
+            ],
+        )
+
+    def _compute(self, predictions, references, sample_weight=None):
+        return {
+            "matthews_correlation": float(matthews_corrcoef(references, predictions, sample_weight=sample_weight)),
+        }
requirements.txt ADDED
@@ -0,0 +1,4 @@
+# TODO: fix github to release
+git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+datasets~=2.0
+sklearn