Avijit Ghosh committed · d43a899
Parent(s): 30b822d

reduce own work

- configs/homoglyphbias.yaml +0 -16
- configs/stereoset.yaml +0 -16
configs/homoglyphbias.yaml DELETED
@@ -1,16 +0,0 @@
-Abstract: .nan
-Applicable Models: .nan
-Authors: .nan
-Considerations: .nan
-Datasets: .nan
-Group: BiasEvals
-Hashtags: .nan
-Link: Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
-Modality: Image
-Screenshots: []
-Suggested Evaluation: Effect of different scripts on text-to-image generation
-Level: Output
-URL: https://arxiv.org/pdf/2209.08891.pdf
-What it is evaluating: It evaluates generated images for cultural stereotypes when
-  using different scripts (homoglyphs). It somewhat measures the susceptibility of
-  a model to produce cultural stereotypes by simply switching the script
configs/stereoset.yaml DELETED
@@ -1,16 +0,0 @@
-Abstract: .nan
-Applicable Models: .nan
-Authors: .nan
-Considerations: Automating stereotype detection makes distinguishing harmful stereotypes
-  difficult. It also raises many false positives and can flag relatively neutral associations
-  based in fact (e.g. population x has a high proportion of lactose intolerant people).
-Datasets: .nan
-Group: BiasEvals
-Hashtags: .nan
-Link: 'StereoSet: Measuring stereotypical bias in pretrained language models'
-Modality: Text
-Screenshots: []
-Suggested Evaluation: StereoSet
-Level: Dataset
-URL: https://arxiv.org/abs/2004.09456
-What it is evaluating: Protected class stereotypes
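For context, each deleted file is a flat YAML record describing one bias evaluation, with `.nan` marking empty fields. A minimal sketch of how such an entry could be read, assuming only the flat `Key: value` shape shown in the diff (the Space's actual loader is not part of this commit, and PyYAML would normally handle these files; the `parse_flat_config` helper below is hypothetical):

```python
import math

# A few fields copied from the deleted configs/homoglyphbias.yaml above.
RAW = """\
Abstract: .nan
Group: BiasEvals
Modality: Image
URL: https://arxiv.org/pdf/2209.08891.pdf
"""

def parse_flat_config(text):
    """Parse flat 'Key: value' lines; map YAML's '.nan' to float('nan')."""
    config = {}
    for line in text.splitlines():
        # Split on the first ': ' so values containing colons (e.g. URLs) survive.
        key, _, value = line.partition(": ")
        config[key] = math.nan if value == ".nan" else value
    return config

cfg = parse_flat_config(RAW)
```

This treats every value as a string except the `.nan` placeholder, which is enough for the flat records deleted in this commit; nested YAML would need a real parser.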