To what extent are we responsible for our content and how to create safer Spaces?
This is a brief blog post outlining some thoughts on the question: to what extent are we responsible for our content, and how do we create safer Spaces? Certainly relevant for Telegram CEO Pavel Durov, but no less important for people like you and me.
My own "oops" moment: I created a Space with a Flux model, and it ended up generating some inappropriate content. So I had a small discussion about creating safe AI with some colleagues over at Hugging Face. Here's what you can do!
Ethics resources: the Hugging Face ethics team maintains a nice collection of tools and ideas to help owners secure their code and prevent misuse. Several ways to create safer Spaces can be found here: https://huggingface.co/collections/society-ethics/provenance-watermarking-and-deepfake-detection-65c6792b0831983147bb7578
Content filtering: use AI classifiers to filter out harmful or inappropriate content. It's a simple but effective way to stop misuse in its tracks. For Stable Diffusion, a baseline safety checker that blocks known unsafe keywords and concepts is implemented here: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py
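To make the filtering idea concrete, here is a minimal sketch of a keyword-based prompt blocklist. This is an illustration only, not the diffusers implementation (the linked `safety_checker.py` operates on generated images, not prompts), and the `BLOCKED_TERMS` set is a hypothetical example:

```python
# Hypothetical blocklist for illustration; a real deployment would combine
# this with an ML classifier rather than rely on keywords alone.
BLOCKED_TERMS = {"violence", "gore"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked term."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)
```

Keyword matching is easy to bypass (misspellings, paraphrases), which is why classifier-based checks like the diffusers safety checker are the stronger baseline.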
Track usage: consider monitoring user activity in some way, such as logging IP addresses. While there are privacy concerns and GDPR-related caveats, it helps to detect and prevent abuse.
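One way to soften the GDPR caveat is to pseudonymise before storing. Below is a minimal sketch (the function name, salt, and event shape are my own assumptions, not part of any Hugging Face API) that hashes the IP with a salt so raw addresses are never written to the log:

```python
import hashlib
import time

def log_request(ip: str, prompt: str, salt: str = "rotate-me-regularly") -> dict:
    """Record a pseudonymised usage event.

    The raw IP is never stored: only a salted SHA-256 hash, which still
    lets you correlate repeated abuse from the same address while the
    salt is in rotation (data-minimisation in the spirit of GDPR).
    """
    ip_hash = hashlib.sha256((salt + ip).encode("utf-8")).hexdigest()
    return {"ip_hash": ip_hash, "prompt": prompt, "ts": time.time()}
```

Rotating the salt periodically limits how long any pseudonym stays linkable; consult a lawyer for actual compliance, since salted hashing alone may still count as personal data.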
Safe harbour: most content platforms fall under international safe-harbour principles, which do not hold them accountable for illegal content as long as they don't know it is there (for privacy reasons, they often simply can't) and they act promptly once they do. https://en.wikipedia.org/wiki/International_Safe_Harbor_Privacy_Principles
Clear guidelines: set transparent usage policies. Make sure users understand what's acceptable and what the consequences are for breaking the rules. We have some at Hugging Face too: https://huggingface.co/content-guidelines
Open-source legal clauses for products using LLMs: this morning I saw a post from Gideon Mendels of Comet ML sharing public legal clauses that should cover common risky scenarios around the use of LLMs in production. https://gist.github.com/gidim/18e1685f6a47b235e393e57bad89d454
Thanks for the discussion, Noemie Chirokoff, Margaret Mitchell, Omar Sanseviero, and Bruna Sellin Trevelin!