NimaBoscarino committed • Commit e2e9680 • 1 Parent(s): fb019b5

Add Q&A event for making intelligence
app.py
CHANGED
@@ -81,6 +81,10 @@ rigorous = Category(
     - Techniques for detoxifying language models.
     """,
     news=[
+        News(
+            title="WIRED: Inside the Suspicion Machine",
+            link="https://www.wired.com/story/welfare-state-algorithms/"
+        ),
         News(
             title="🎙️ AI chatbots are coming to search engines — can you trust the results?",
             link="https://www.nature.com/articles/d41586-023-00423-4"
@@ -315,7 +319,7 @@ def category_tab(category):
         gr.Markdown(elem_id="margin-top", value="#### Check back soon for featured datasets 🤗")


-with gr.Blocks(css="#margin-top {margin-top: 15px} #center {text-align: center;} #news-tab {padding: 15px;} #news-tab h3 {margin: 0px; text-align: center;} #news-tab p {margin: 0px;} #article-button {flex-grow: initial;} #news-row {align-items: center;} #spaces-flex {flex-wrap: wrap;} #space-card { display: flex; min-width: calc(90% / 3); max-width:calc(100% / 3); box-sizing: border-box;}") as demo:
+with gr.Blocks(css="#margin-top {margin-top: 15px} #center {text-align: center;} #news-tab {padding: 15px;} #news-tab h3 {margin: 0px; text-align: center;} #news-tab p {margin: 0px;} #article-button {flex-grow: initial;} #news-row {align-items: center;} #spaces-flex {flex-wrap: wrap;} #space-card { display: flex; min-width: calc(90% / 3); max-width:calc(100% / 3); box-sizing: border-box;} #event-tabs {margin-top: 0px;}") as demo:
     with gr.Row(elem_id="center"):
         gr.Markdown("# Ethics & Society at Hugging Face")

@@ -323,7 +327,58 @@ with gr.Blocks(css="#margin-top {margin-top: 15px} #center {text-align: center;}
     At Hugging Face, we are committed to operationalizing ethics at the cutting-edge of machine learning. This page is dedicated to highlighting projects — inside and outside Hugging Face — in order to encourage and support more ethical development and use of AI. We wish to foster ongoing conversations of ethics and values; this means that this page will evolve over time, and your feedback is invaluable. Please open up an issue in the [Community tab](https://huggingface.co/spaces/society-ethics/about/discussions) to share your thoughts!
     """)

-    with gr.Accordion(label="
+    with gr.Accordion(label="Upcoming Events", open=True):
+        with gr.Row(elem_id="margin-top"):
+            with gr.Column(scale=1):
+                gr.Image(value="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/making-intelligence-banner.png", show_label=False)
+            with gr.Column(scale=2):
+                with gr.Tabs(elem_id="event-tabs"):
+                    with gr.Tab("About the Event"):
+                        gr.Markdown("""
+                        For our inaugural Ethics & Society Q&A, we're welcoming [Borhane Blili-Hamelin, PhD](https://borhane.xyz) and [Leif Hancox-Li, PhD](https://boltzmann-brain.github.io)!
+
+                        Come discuss their recent paper, ["Making Intelligence: Ethical Values in IQ and ML Benchmarks"](https://arxiv.org/abs/2209.00692), learn about the value-laden aspects of ML benchmarks, and share your ideas on how we can apply these lessons to our work 🤗
+
+                        Join the Discord and RSVP to the event: [hf.co/join/discord](https://hf.co/join/discord) 🎉
+
+                        **Date:** March 13th 2023, 9:00 AM Pacific Time. **Location:** The Hugging Face Discord, in #society-ethics
+                        """)
+                    with gr.Tab("Speaker Bios"):
+                        gr.Markdown("""
+                        ### About Borhane Blili-Hamelin, PhD (he/him)
+
+                        I'm a consultant, researcher, and organizer focused on AI ethics. As a consultant with BABL AI, I help organizations mitigate harm through AI risk management and auditing. I build cross-disciplinary research projects on the risks and values embedded in AI systems. I also love participatory problem-solving and community-driven projects. I'm Ethics and Performance Lead at AVID, founded Accountability Case Labs, and am co-director of Open Post Academics. I'm a Mozilla Festival alum: former TAIWG Project Lead and Wrangler. I strive to make AI governance more cross-disciplinary, reflective, and empowering for impacted communities.
+
+                        I have a PhD in philosophy from Columbia University.
+
+                        I'm a Québec expat living in Brooklyn, NY!
+
+                        #### Links
+
+                        - Personal website: [borhane.xyz](https://borhane.xyz)
+                        - LinkedIn: [linkedin.com/in/borhane](https://www.linkedin.com/in/borhane/)
+                        - Twitter: [@Borhane_B_H](https://twitter.com/Borhane_B_H)
+
+                        ### About Leif Hancox-Li, PhD (he/they)
+
+                        I'm a data scientist who does interdisciplinary [research](https://boltzmann-brain.github.io/papers) on responsible AI and helps data science teams make their models more explainable. I excel at bringing a humanistic perspective to technical issues while also having the skills to implement technical solutions that take social values into account. My research has won best paper awards at both [FAccT](https://twitter.com/FAccTConference/status/1369315183143903237?s=20) and the [Philosophy of Science Association](https://philsci.org/ernest_nagel_early-career_scho.php).
+
+                        Prior to this, I was a technical writer for a variety of software products, ranging from MLOps to REST APIs to complicated enterprise GUIs. An even longer time ago, I got a PhD in philosophy after stints in computer vision and physics.
+
+                        #### Links
+
+                        - Personal website: [boltzmann-brain.github.io](https://boltzmann-brain.github.io)
+                        - LinkedIn: [linkedin.com/in/leif-hancox-li-1a6a7a132](https://www.linkedin.com/in/leif-hancox-li-1a6a7a132/)
+                        - Twitter: [@struthious](https://twitter.com/struthious)
+                        """)
+                    with gr.Tab("Paper Abstract"):
+                        gr.Markdown("""
+                        Read the full paper at: [https://arxiv.org/abs/2209.00692](https://arxiv.org/abs/2209.00692)
+
+                        > In recent years, ML researchers have wrestled with defining and improving machine learning (ML) benchmarks and datasets. In parallel, some have trained a critical lens on the ethics of dataset creation and ML research. In this position paper, we highlight the entanglement of ethics with seemingly "technical" or "scientific" decisions about the design of ML benchmarks. Our starting point is the existence of multiple overlooked structural similarities between human intelligence benchmarks and ML benchmarks. Both types of benchmarks set standards for describing, evaluating, and comparing performance on tasks relevant to intelligence -- standards that many scholars of human intelligence have long recognized as value-laden. We use perspectives from feminist philosophy of science on IQ benchmarks and thick concepts in social science to argue that values need to be considered and documented when creating ML benchmarks. It is neither possible nor desirable to avoid this choice by creating value-neutral benchmarks. Finally, we outline practical recommendations for ML benchmark research ethics and ethics review.
+                        """)
+
+    with gr.Accordion(label="Visit us over on the Hugging Face Discord!", open=False):
         gr.Markdown("""
         Follow these steps to join the discussion:

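The first hunk adds `News(title=..., link=...)` entries to a `Category`'s news list. As a rough illustration of the record shape those keyword arguments imply — the app's actual `News` class is not shown in this diff, so this is a hypothetical sketch, not its real definition:

```python
from dataclasses import dataclass


@dataclass
class News:
    """Hypothetical sketch of a news item: a headline plus the article URL."""
    title: str
    link: str


# The entry added in the first hunk, expressed against this sketch:
item = News(
    title="WIRED: Inside the Suspicion Machine",
    link="https://www.wired.com/story/welfare-state-algorithms/",
)
```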