Commit ef66fb7
Avijit Ghosh committed
1 Parent(s): 15af994
typo

app.py CHANGED
@@ -130,12 +130,12 @@ The following categories are high-level, non-exhaustive, and present a synthesis
 
         """)
         with gr.Tabs(elem_classes="tab-buttons") as tabs1:
-            with gr.TabItem("Bias/
+            with gr.TabItem("Bias/Stereotypes"):
                 fulltable = globaldf[globaldf['Group'] == 'BiasEvals']
                 fulltable = fulltable[['Modality','Level', 'Suggested Evaluation', 'What it is evaluating', 'Link']]
 
                 gr.Markdown("""
-                Generative AI systems can perpetuate harmful biases from various sources, including systemic, human, and statistical biases. These biases, also known as "fairness" considerations, can manifest in the final system due to choices made throughout the development process. They include harmful associations and
+                Generative AI systems can perpetuate harmful biases from various sources, including systemic, human, and statistical biases. These biases, also known as "fairness" considerations, can manifest in the final system due to choices made throughout the development process. They include harmful associations and stereotypes related to protected classes, such as race, gender, and sexuality. Evaluating biases involves assessing correlations, co-occurrences, sentiment, and toxicity across different modalities, both within the model itself and in the outputs of downstream tasks.
                 """)
                 with gr.Row():
                     modality_filter = gr.CheckboxGroup(["Text", "Image", "Audio", "Video"],
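The hunk ends mid-statement at the modality_filter checkbox, so the diff alone does not show how the filter drives the table. Below is a minimal sketch of that wiring, assuming a pandas globaldf and a gr.DataFrame output; the sample rows and the filter_by_modality helper are invented for illustration, not code from this repo.

    import gradio as gr
    import pandas as pd

    # Hypothetical stand-in for the globaldf that app.py loads elsewhere.
    globaldf = pd.DataFrame({
        "Group": ["BiasEvals", "BiasEvals"],
        "Modality": ["Text", "Image"],
        "Level": ["Model", "Output"],
        "Suggested Evaluation": ["Eval A", "Eval B"],
        "What it is evaluating": ["Example row", "Example row"],
        "Link": ["https://example.com/a", "https://example.com/b"],
    })

    fulltable = globaldf[globaldf["Group"] == "BiasEvals"]
    fulltable = fulltable[["Modality", "Level", "Suggested Evaluation", "What it is evaluating", "Link"]]

    def filter_by_modality(selected):
        # Show only rows whose Modality is among the checked values;
        # an empty selection yields an empty table.
        return fulltable[fulltable["Modality"].isin(selected)]

    with gr.Blocks() as demo:
        with gr.Tabs(elem_classes="tab-buttons") as tabs1:
            with gr.TabItem("Bias/Stereotypes"):
                with gr.Row():
                    modality_filter = gr.CheckboxGroup(
                        ["Text", "Image", "Audio", "Video"],
                        value=["Text", "Image", "Audio", "Video"],
                        label="Modality",
                    )
                table = gr.DataFrame(value=fulltable)
                # Re-render the table whenever the checkbox selection changes.
                modality_filter.change(filter_by_modality, inputs=modality_filter, outputs=table)

    demo.launch()

Here CheckboxGroup.change re-runs the filter on every toggle; the real app may name its components or wire its events differently.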
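The rewritten markdown names correlations and co-occurrences as one way to evaluate bias. As a toy illustration of that idea (the generations and term lists are invented, and real evaluations use large samples and curated lexicons), a co-occurrence count over model outputs might look like:

    from collections import Counter
    from itertools import product

    # Hypothetical model outputs; in practice these would be sampled generations.
    generations = [
        "The doctor said he was busy.",
        "The nurse said she was busy.",
        "The engineer said he was late.",
    ]

    identity_terms = ["he", "she"]
    occupations = ["doctor", "nurse", "engineer"]

    # Count how often each (occupation, pronoun) pair co-occurs in one output.
    counts = Counter()
    for text in generations:
        tokens = text.lower().replace(".", "").split()
        for occ, pron in product(occupations, identity_terms):
            if occ in tokens and pron in tokens:
                counts[(occ, pron)] += 1

    for (occ, pron), n in sorted(counts.items()):
        print(f"{occ!r} co-occurs with {pron!r}: {n}")

Skewed counts across pronouns for the same occupation are the kind of harmful association the paragraph describes; sentiment and toxicity checks follow the same pattern with a scoring model in place of the counter.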