Ethics and Society Newsletter #1
Hello, world!
Originating as an open-source company, Hugging Face was founded on some key ethical values in tech: collaboration, responsibility, and transparency. To code in an open environment means having your code – and the choices within – viewable to the world, associated with your account and available for others to critique and add to. As the research community began using the Hugging Face Hub to host models and data, the community directly integrated reproducibility as another fundamental value of the company. And as the number of datasets and models on Hugging Face grew, those working at Hugging Face implemented documentation requirements and free instructive courses, complementing the newly emerging values defined by the research community with values around auditability and understanding of the math, code, processes, and people behind current technology.
How to operationalize ethics in AI is an open research area. Although theory and scholarship on applied ethics and artificial intelligence have existed for decades, applied and tested practices for ethics within AI development have only begun to emerge within the past 10 years. This is partially a response to machine learning models – the building blocks of AI systems – outgrowing the benchmarks used to measure their progress, leading to the widespread adoption of machine learning systems in a range of practical applications that affect everyday life. For those of us interested in advancing ethics-informed AI, joining a machine learning company founded in part on ethical principles, just as it begins to grow, and just as people across the world are beginning to grapple with ethical AI issues, is an opportunity to fundamentally shape what the AI of the future looks like. It’s a new kind of modern-day AI experiment: What does a technology company with ethics in mind from the start look like? Focusing an ethics lens on machine learning, what does it mean to democratize good ML?
To this end, we share some of our recent thinking and work in the new Hugging Face Ethics and Society newsletter, to be published every season, at the equinox and solstice. Here it is! It is put together by us, the “Ethics and Society regulars”, an open group of people across the company who come together as equals to work through the broader context of machine learning in society and the role that Hugging Face plays. We believe it to be critical that we are not a dedicated team: in order for a company to make value-informed decisions throughout its work and processes, there needs to be a shared responsibility and commitment from all parties involved to acknowledge and learn about the ethical stakes of our work.
We are continuously researching practices and studies on what “good” ML might mean, trying to provide some criteria that could define it. Because this is an ongoing process, we approach it by looking ahead to the different possible futures of AI and building what we can in the present day to reach a point that harmonizes the values held by us as individuals as well as by the broader ML community. We ground this approach in the founding principles of Hugging Face:
We seek to collaborate with the open-source community. This includes providing modernized tools for documentation and evaluation, alongside community discussion, Discord, and individual support for contributors aiming to share their work in a way that’s informed by different values.
We seek to be transparent about our thinking and processes as we develop them. This includes sharing writing on specific project values at the start of a project and our thinking on AI policy. We also gain from community feedback on this work, which serves as a resource for learning more about what to do.
We ground the creation of these tools and artifacts in responsibility for the impacts of what we do now and in the future. Prioritizing this has led to project designs that make machine learning systems more auditable and understandable – including for people with expertise outside of ML – such as the education project and our experimental tools for ML data analysis that don't require coding.
Building from these basics, we are taking an approach to operationalizing values that centers the context-specific nature of our projects and the foreseeable effects they may have. As such, we offer no global list of values or principles here; instead, we continue to share project-specific thinking, such as this newsletter, and will share more as we understand more. Since we believe that community discussion is key to identifying the different values at play and who is impacted, we have recently opened up the opportunity for anyone who can connect to the Hugging Face Hub online to provide direct feedback on models, data, and Spaces. Alongside tools for open discussion, we have created a Code of Conduct and content guidelines to help guide discussions along dimensions we believe to be important for an inclusive community space.

We have developed a Private Hub for secure ML development, a library for evaluation to make it easier for developers to evaluate their models rigorously, code for analyzing data for skews and biases, and tools for tracking carbon emissions when training a model. We are also developing new open and responsible AI licensing, a modern form of licensing that directly addresses the harms that AI systems can create. And this week, we made it possible to “flag” model and Spaces repositories in order to report ethical and legal issues.
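To make the tooling above a little more concrete, here is a minimal sketch of how rigorous evaluation and carbon tracking might fit together in practice. The newsletter does not name specific packages, so this example assumes Hugging Face's `evaluate` library and the third-party `codecarbon` package; the metric, project name, and toy predictions are purely illustrative.

```python
# A minimal sketch of standardized evaluation plus carbon tracking.
# Assumes the Hugging Face `evaluate` library and the `codecarbon` package
# (`pip install evaluate codecarbon`); the metric and toy predictions below
# are illustrative only, not a prescribed workflow.
import evaluate
from codecarbon import EmissionsTracker

# Track the emissions of whatever compute we run (e.g., training or inference).
tracker = EmissionsTracker(project_name="toy-evaluation")  # hypothetical project name
tracker.start()

# Load a standard metric so results are computed the same way across projects.
accuracy = evaluate.load("accuracy")

# Toy model outputs stand in for real predictions on a held-out test set.
predictions = [0, 1, 1, 0, 1]
references = [0, 1, 0, 0, 1]
results = accuracy.compute(predictions=predictions, references=references)

emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for this run

print(f"accuracy: {results['accuracy']:.2f}")
print(f"estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Wrapping a run with an emissions tracker in this way lets the environmental cost of a model be reported alongside its accuracy, in the spirit of the auditability and transparency values described above.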
In the coming months, we will be putting together several other pieces on values, tensions, and ethics operationalization. We welcome (and want!) feedback on any and all of our work, and hope to continue engaging with the AI community through technical and values-informed lenses.
Thanks for reading! 🤗
~ Meg, on behalf of the Ethics and Society regulars