{"questions": {"a188ff31-68db-46bc-b419-ac0fdecc6b1b": "What is data poisoning in the context of GAI and how does it affect model outputs?", "c9f79a82-8775-4461-ac2f-b7b4e8734e96": "How do GAI systems pose intellectual property risks related to copyrighted material in their training data?", "80b370f2-6d21-4ff6-bc9b-763317af5321": "How can governance structures help mitigate bias in human-based systems?", "d4e20011-f4f9-4de6-8453-8144d77ba3a7": "What are some common concerns associated with human-based systems in terms of bias and effectiveness?", "12534c5f-eb68-41cb-a879-b908e95a65d0": "What are the best practices for establishing policies for data collection and retention to mitigate risks such as data privacy breaches and harmful biases?", "11951856-b35d-4123-ab15-80c38cd82195": "How can organizations effectively implement policies to protect third-party intellectual property and training data?", "869c7e52-2b77-4144-af5f-d5beebc138f9": "What are the key safeguards that should be included in automated systems to protect the public from harm?", "a310a614-4bd4-4302-ab42-37b6faa7d838": "How can early-stage public consultation improve the safety and effectiveness of automated systems?", "468bff4b-cdbb-489a-9cbd-95d9062f93dc": "What are some of the key investigative projects that Surya Mattu has worked on at The Markup?", "3e5b4545-4ceb-424f-b519-16c3870dd543": "How has Mariah Montgomery's role as National Campaign Director at the Partnership for Working Families impacted labor rights and policies?", "0f0eea1d-f05e-458e-b31a-e955629c7e44": "How can organizations effectively integrate pre- and post-deployment feedback into their monitoring processes for GAI models?", "6cc0bcc5-9c82-49a6-b118-3e333e70ee9e": "What are the benefits of using AI red-teaming in the pre-deployment testing phase for capturing external feedback?", "4441faa1-8f27-4fc7-bea0-847caa1c1505": "What are the potential negative impacts of automated systems on individuals and communities?", "0db8fdee-99cf-47c6-9d8d-a85f3b294826": "How can confirmation bias affect the effectiveness of safety mechanisms in technology?", "57db460e-0123-4edf-b7df-87a967a60c26": "What are the key safety metrics used to evaluate AI system reliability and robustness?", "48589831-4f3c-4bf6-9cb4-bc4277c489dd": "How can AI systems be designed to fail safely when operating beyond their knowledge limits?", "1df11168-7aa5-4b43-91df-c14c32f01440": "What are the risks associated with data brokers collecting consumer data without permission?", "2127b35f-68cd-4e5f-a669-a6a4bb532fa8": "How does the use of surveillance technologies in schools and workplaces impact mental health?", "afefb290-48ec-450c-b530-5fe1b6c5340b": "What is ballot curing and how does it impact the election process?", "eecbf085-2f16-45c4-ba65-35813ca84568": "How do different states handle signature discrepancies in mail-in ballots?", "43b6add5-244e-4c11-be3b-0944fecfa6b9": "What are the best practices for detecting and mitigating algorithmic bias according to the Brookings Report?", "37bbd6b6-d24a-4b73-a4f4-f532d8c1793a": "How can public agencies implement Algorithmic Impact Assessments to ensure accountability, as suggested by the AI Now Institute Report?", "318fe73a-0591-41e8-b65e-925c71b2caab": "How is the federal government addressing discrimination in mortgage lending through the Department of Justice's nationwide initiative?", "56664bc2-0933-4e58-8d03-5c06b9d06c04": "What role do federal agencies like the Consumer Financial Protection Bureau play in the Action Plan to Advance Property 
Appraisal and Valuation Equity?", "7f8b418c-6e85-4ab0-83db-b7ed7dc49a45": "What are the best practices for updating due diligence processes to include intellectual property and data privacy for GAI acquisitions?", "e81617a3-9609-4012-ba46-caa374c306de": "How can organizations effectively monitor and assess third-party GAI risks in real-time?", "054e5797-d024-41bd-8de9-983d038a8797": "What are the best practices for performing disparity testing and making the results public?", "fdb81ad2-acf2-4aa4-b551-fe766d22f273": "How can organizations effectively mitigate disparities identified through testing?", "09a4ef32-a01e-4ca9-9bf6-4704e328ccef": "How can people protect themselves from being tracked by devices originally meant for finding lost items?", "e314f460-f6e2-4d11-b612-d51529a9dee6": "What are the potential issues with using algorithms to deploy police in neighborhoods?", "741f5989-422f-4bc5-9f72-0f3b22bb4f25": "What are the mental health impacts of NCII on women and sexual minorities?", "19592c9a-0621-4629-bdfc-8a08f0d396b4": "How can GAI training datasets be protected from including CSAM and NCII?", "f95100da-f55f-4402-909d-fdde5cf17d25": "What are the key challenges in designing non-discriminatory AI technology discussed in the panel?", "bb3e7970-5b1e-4e98-87ad-b30d33ff6a89": "How can community participation enhance human-computer interaction in AI systems?", "796ffa10-1532-4fa1-b832-d8ee058d410d": "What are the potential sociotechnical harms of algorithmic systems as discussed by Shelby et al. (2023)?", "2c38117e-4b2d-4553-b319-f4ba3997996e": "How does training on generated data affect AI models according to Shumailov et al. (2023)?", "3b9c9379-cc75-4b9d-a68a-dc6b0a48fd9c": "What are the key suggested actions for managing GAI risks according to the AI RMF 1.0 and Playbook?", "1a02235f-7bf0-4e7e-8149-ab610eacb769": "How do the suggested actions for managing GAI risks vary depending on the stage of the GAI lifecycle?", "08cbf993-d60b-4982-bf84-140c29d30450": "How can organizations ensure that consent practices do not allow for abusive surveillance practices?", "e253b5ac-feb7-4116-9e68-d2c817da36a5": "What are the best practices for re-acquiring consent if the use case of data changes or if data is transferred to another entity?", "0260750e-4f7d-4c1b-b0b7-ae4c36cc8fc3": "What are the key principles outlined in the AI Bill of Rights?", "39863570-2d41-4d21-bde1-1afc78c157b0": "How does the AI Bill of Rights address algorithmic discrimination?", "8b4fd9d7-e1d4-472e-bd34-35fa98299c07": "How can we effectively track and document instances of anthropomorphization in GAI system interfaces?", "1f34befe-4432-419f-8465-066a0d82ff77": "What are the best practices for verifying the provenance of GAI system training data and TEVV data?", "3e35db8c-c1b3-4b9f-b6c0-a3fd6e52d2b0": "What is the importance of having demographically and interdisciplinarily diverse AI red teams in pre-deployment contexts?", "0c3b35ca-f421-41e4-b016-8f367561acbe": "How can general public involvement in AI red-teaming contribute to identifying flaws in AI models?", "d73655e4-93f0-41c5-b69e-814ff8189db8": "How are major universities using race as a predictor of student success?", "bb8b9729-d1b6-407d-9f0c-aa1bd62a8d78": "What concerns do students and professors have about using race as a predictor in education?", "3f1dec42-4087-4e06-9e7e-491c96cdee67": "How can AI-enabled systems contribute to building better and more innovative infrastructure?", "9a94cfb2-25b9-4aa8-94c5-987c53fa42bf": "What lessons from 
urban planning can be applied to the integration of AI technologies in communities?", "f445449e-b75f-44ee-a819-018ad630bd35": "What are the benefits of having a human alternative to automated systems?", "58fd202f-0791-411a-9124-09381dbbad11": "How can one ensure timely human consideration and remedy when an automated system fails?", "26ee0f55-a947-440f-b4bc-4b7def4e3545": "What are the main findings of the Department of Justice's report on the risk assessment tool for predicting recidivism?", "dcb01564-a34f-42a7-ac6c-13764525a7d2": "How is the Department of Justice addressing the disparities in the risk assessment tool for predicting recidivism among different groups of color?", "8e29d29a-fc98-4a6f-b42b-580fc084dd71": "What is the Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government?", "4bbc6d4b-6b67-4831-8bca-853eb46aec3a": "What were President Biden's remarks on the Supreme Court decision to overturn Roe v. Wade?", "919fdd1d-2abb-472e-ac8d-bde9df2bb391": "What are the best practices for re-assessing model risks after implementing fine-tuning or retrieval-augmented generation?", "a795e873-419b-454b-8598-fb0c49a7e5cc": "How can organizations effectively review and manage training data to prevent the reproduction of intellectual property or CBRN information in AI outputs?", "242b750e-1236-41f7-a1cc-eedef8f0427d": "What are some common examples of AI incidents that organizations should be aware of?", "c506e557-776f-42ed-99f9-c752ac2bb94b": "How can organizations effectively track and document the provenance of datasets to identify AI-generated data issues?", "426616e2-6297-47c3-89c7-71ec1186cdba": "What is the role of the American Civil Liberties Union in protecting digital privacy?", "db43af55-434d-441f-8dc7-acc8ff3f8432": "How does the Center for Democracy & Technology advocate for internet freedom and security?", "f2913868-28a6-4558-904a-0486fbfc1f6e": "How can organizations ensure the accuracy of predictions or recommendations generated by automated systems?", "5a4faa70-0364-4fd0-9c98-b26fb63f7786": "What are the best practices for implementing ongoing monitoring procedures for automated systems?", "3ad57490-e4f7-4fd2-bff4-93211043ec13": "What are the key considerations for implementing automated systems in sensitive domains like criminal justice and health?", "3a05d7ba-2e46-406b-aeb9-51b33efff15f": "How can organizations ensure meaningful oversight and human consideration in high-risk automated decision-making systems?", "6ae09ea8-3090-401b-9f1e-4ce5270152cd": "What are the privacy expectations for automated systems handling sensitive data?", "207207ff-faab-4342-b76f-ef0c6fac88c9": "How should consent be managed for automated systems collecting sensitive data?", "d15a10aa-36cb-4f3a-9f9e-2c0416ce1084": "What is the contact information for inquiries related to NIST AI publications?", "c9f4fb11-9365-4354-aa94-7cc93efcafb5": "Where can I find additional information about NIST AI publications?", "4aebac20-11d4-42c8-be6a-f7ac4e43cbbc": "How can organizations effectively combat automation bias in automated systems?", "71537f88-7e77-4720-9cc0-bca516b4721f": "What are the best practices for training individuals to properly interpret outputs from automated systems?", "e004e796-65d5-4109-89bd-472cae5b6c75": "What were the FTC's findings in the case against Everalbum, Inc.?", "290cd0b2-456b-41bc-bf0e-3ea3e32f480d": "How did the FTC address privacy concerns in the case against Weight Watchers and Kurbo?", 
"1c416614-5e28-45f4-9e8e-937971dcff9a": "What are the potential harms of GAI related to misinformation, disinformation, and deepfakes?", "d83ab93d-9be0-488d-94fd-8e58074a3388": "How should organizations disclose the use of GAI to end users to mitigate risks?", "bfc45e93-d073-4348-8fb1-03dfaf4e73f3": "What measures can designers and developers take to prevent algorithmic discrimination?", "4819bdb4-1724-4318-855c-9c4f680c0655": "How does algorithmic discrimination impact different protected classes such as race, gender, and disability?", "a8a96840-d387-42d9-9b56-f05b73027f5c": "What are some innovative solutions provided by the industry to mitigate risks to the safety and efficacy of AI systems?", "7fdb6c15-c3f8-4327-b2fe-0169c08ce375": "How does the Office of Management and Budget (OMB) suggest expanding opportunities for stakeholder engagement in program design?", "3509c40f-7af0-49a5-bd16-c7da584b3980": "What are the nine principles outlined in Executive Order 13960 for the use of AI in the federal government?", "a86eba64-72a8-4afa-a7f5-8c50c3b0c660": "How can laws and policies ensure that AI systems are accurate, reliable, and effective in real-life applications?", "9eee9d68-6e0f-4430-989f-cb569677d74c": "How can we distinguish between fact and opinion in the content generated by AI systems?", "6fba0797-2aaa-4686-9325-999b5396f47b": "What are the risks associated with the anthropomorphization of General AI (GAI) systems?", "449ab90b-3762-4d3e-99ea-899bd340c42b": "What are confabulations in the context of text-based outputs?", "1c57be24-8e1d-4a3a-a29e-1d153c019510": "How do legal confabulations manifest in state-of-the-art language models?", "6b30e12e-cecf-4cd7-936e-84468c950a36": "What is the purpose of the Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government?", "5547bf9b-ceae-4386-a486-7708637ab6a1": "What role do Navigators play according to HealthCaregov?", "a520c4cc-f2f6-4dd8-bd3a-a1a750440209": "What are the key principles outlined in the ISO/IEC Guide 71:2014 for addressing accessibility in standards?", "d72c0d17-abee-470b-8725-abf4aad59b3f": "How do the Web Content Accessibility Guidelines (WCAG) 20 impact web development practices?", "8e31e286-3ac3-488f-a211-4575fd663a17": "What are the key expectations for automated systems to ensure data privacy and protection from unchecked surveillance?", "76f71eb0-f3b8-425d-8772-65a5d214634f": "How can heightened oversight of surveillance systems prevent algorithmic discrimination based on community membership?", "eb21dff3-4dd0-47af-a449-b9b525386911": "What are the key considerations for ensuring equitable outcomes in fallback and escalation systems for automated systems?", "645a1801-9128-4977-947d-5437b8933966": "How can organizations ensure that human consideration and fallback mechanisms are conducted in a timely manner for automated systems?", "50321a04-5130-43ab-9305-cc1d548da8e0": "What are the extra protections for data related to sensitive domains like health and personal finance?", "32f6e506-6e82-41f7-b80c-f0702a537ca2": "How do technological developments impact the sensitivity of data domains and the need for enhanced data protections?", "7a80ac97-319d-452b-a900-e739da72ab44": "What are some benchmarks used to quantify systemic bias in GAI system outputs?", "bedb600e-b951-4e89-9442-24b971ff1b21": "How can fairness assessments help measure systemic bias in GAI systems?", "5fa67d29-3be8-4c81-a7c9-1a4d5dfa0ba7": "What are the potential biases in hiring tools that learn from 
a company's predominantly male employee base?", "868637a7-88fa-4891-bcfd-da1d37772744": "How do predictive models that use race as a factor affect Black students' academic guidance and major selection?", "6410524d-24f8-4aaf-8b70-5dcfc8272cd0": "What measures are being taken to prevent the misuse of Apple AirTags for stalking and harassment?", "d0990582-29f1-41c2-90e1-89c4efc58153": "How does crime prediction software perpetuate biases despite promises of being free from them?", "cbc06d96-6605-45f4-8067-0342ab04aac4": "What are the key elements to consider when incorporating GAI systems into an AI system inventory?", "b16b4f6b-0ec2-49bc-9453-9bbf1a8feea5": "How should organizations handle inventory exemptions for GAI systems embedded into application software?", "b1da9e4e-62f7-4d08-ac87-2b196fa9114e": "What measures can be taken to ensure automated systems protect against algorithmic discrimination?", "aa279423-ea2b-4fa2-beb1-7a6e1400c36f": "How can independent evaluations of automated systems be conducted without compromising individual privacy?", "8a9ae766-2f74-4272-bd84-e95787e5e943": "What are the best practices for determining data origin and content lineage in AI systems?", "2567081e-89ba-4d98-a746-eaf8503e5c5d": "How can test and evaluation processes be instituted for data and content flows within an AI system?", "fa585f44-6fb6-443b-983e-6304d9c2f5e1": "What are the expectations for automated systems in high-risk settings like criminal justice?", "f51b4b1e-689f-4a47-82a2-d9a9a0d30ab7": "How should the level of risk influence the design of explanatory mechanisms in automated systems?", "ecb05eb6-335e-4451-bf2e-4c8ad8e800bf": "What are the current methods for reporting AI incidents?", "51d512f0-8849-48de-a188-5aab8ddee724": "How do publicly available databases decide which AI incidents to track?", "80d2e492-0668-4e82-b83e-d1cef2355444": "What is the NIST AI 600-1 framework about?", "b7e8353e-ffb6-41a5-a321-d6b5521a03d5": "How does the NIST Trustworthy and Responsible AI framework address generative artificial intelligence risks?", "acbfd37b-65e1-440b-b4b1-9b3ee9a15fac": "What are the best practices for establishing acceptable use policies for GAI in human-AI teaming settings?", "c86085f1-bc71-4d66-8869-d5335b328ec7": "How can organizations effectively implement synthetic content detection and labeling tools?"}, "relevant_contexts": {"a188ff31-68db-46bc-b419-ac0fdecc6b1b": ["1eebe549-0cfa-4adf-84b0-ed9a06656695"], "c9f79a82-8775-4461-ac2f-b7b4e8734e96": ["1eebe549-0cfa-4adf-84b0-ed9a06656695"], "80b370f2-6d21-4ff6-bc9b-763317af5321": ["bd7c4ee6-636c-4e73-8669-68ae8df8a0e8"], "d4e20011-f4f9-4de6-8453-8144d77ba3a7": ["bd7c4ee6-636c-4e73-8669-68ae8df8a0e8"], "12534c5f-eb68-41cb-a879-b908e95a65d0": ["96206509-2450-4808-b3db-0ad36b187bf3"], "11951856-b35d-4123-ab15-80c38cd82195": ["96206509-2450-4808-b3db-0ad36b187bf3"], "869c7e52-2b77-4144-af5f-d5beebc138f9": ["5b799f01-f51b-4867-8554-833805f3ab80"], "a310a614-4bd4-4302-ab42-37b6faa7d838": ["5b799f01-f51b-4867-8554-833805f3ab80"], "468bff4b-cdbb-489a-9cbd-95d9062f93dc": ["43309aea-4c65-4a8b-9dbb-ad2c5402ed13"], "3e5b4545-4ceb-424f-b519-16c3870dd543": ["43309aea-4c65-4a8b-9dbb-ad2c5402ed13"], "0f0eea1d-f05e-458e-b31a-e955629c7e44": ["4c75b2c9-d74b-46ad-b25f-e5b2bbba9a2f"], "6cc0bcc5-9c82-49a6-b118-3e333e70ee9e": ["4c75b2c9-d74b-46ad-b25f-e5b2bbba9a2f"], "4441faa1-8f27-4fc7-bea0-847caa1c1505": ["ca9ae4fc-a936-4dda-acea-192bc0206464"], "0db8fdee-99cf-47c6-9d8d-a85f3b294826": ["ca9ae4fc-a936-4dda-acea-192bc0206464"], 
"57db460e-0123-4edf-b7df-87a967a60c26": ["4b00025b-f3dc-41ec-b5e2-b4f77272ad81"], "48589831-4f3c-4bf6-9cb4-bc4277c489dd": ["4b00025b-f3dc-41ec-b5e2-b4f77272ad81"], "1df11168-7aa5-4b43-91df-c14c32f01440": ["054c9a30-d999-4ec7-a07e-200e0ac42d1f"], "2127b35f-68cd-4e5f-a669-a6a4bb532fa8": ["054c9a30-d999-4ec7-a07e-200e0ac42d1f"], "afefb290-48ec-450c-b530-5fe1b6c5340b": ["1b4221a5-1a5d-4193-b4c3-d0927768a090"], "eecbf085-2f16-45c4-ba65-35813ca84568": ["1b4221a5-1a5d-4193-b4c3-d0927768a090"], "43b6add5-244e-4c11-be3b-0944fecfa6b9": ["e2a458cd-3f14-4aad-ad1d-0efcae5d686c"], "37bbd6b6-d24a-4b73-a4f4-f532d8c1793a": ["e2a458cd-3f14-4aad-ad1d-0efcae5d686c"], "318fe73a-0591-41e8-b65e-925c71b2caab": ["380e7d12-ea58-4f2f-bc0c-4e04c176047d"], "56664bc2-0933-4e58-8d03-5c06b9d06c04": ["380e7d12-ea58-4f2f-bc0c-4e04c176047d"], "7f8b418c-6e85-4ab0-83db-b7ed7dc49a45": ["b73c4e8f-15b1-48df-b5d3-0dc244b5e44d"], "e81617a3-9609-4012-ba46-caa374c306de": ["b73c4e8f-15b1-48df-b5d3-0dc244b5e44d"], "054e5797-d024-41bd-8de9-983d038a8797": ["fac21c98-5e09-4073-8499-737a13a0eb2d"], "fdb81ad2-acf2-4aa4-b551-fe766d22f273": ["fac21c98-5e09-4073-8499-737a13a0eb2d"], "09a4ef32-a01e-4ca9-9bf6-4704e328ccef": ["0026669e-4953-4d6a-b1d9-ecfa12faec64"], "e314f460-f6e2-4d11-b612-d51529a9dee6": ["0026669e-4953-4d6a-b1d9-ecfa12faec64"], "741f5989-422f-4bc5-9f72-0f3b22bb4f25": ["1171bb5d-18a9-429e-8122-da09f3a0d9f2"], "19592c9a-0621-4629-bdfc-8a08f0d396b4": ["1171bb5d-18a9-429e-8122-da09f3a0d9f2"], "f95100da-f55f-4402-909d-fdde5cf17d25": ["d96f7e82-cc68-47f6-86d2-85aa141a8c9e"], "bb3e7970-5b1e-4e98-87ad-b30d33ff6a89": ["d96f7e82-cc68-47f6-86d2-85aa141a8c9e"], "796ffa10-1532-4fa1-b832-d8ee058d410d": ["d44e9dcd-c607-44be-8995-10b21aae83a5"], "2c38117e-4b2d-4553-b319-f4ba3997996e": ["d44e9dcd-c607-44be-8995-10b21aae83a5"], "3b9c9379-cc75-4b9d-a68a-dc6b0a48fd9c": ["a9851a96-2f0d-44d3-bc00-c23aaa41be72"], "1a02235f-7bf0-4e7e-8149-ab610eacb769": ["a9851a96-2f0d-44d3-bc00-c23aaa41be72"], "08cbf993-d60b-4982-bf84-140c29d30450": ["394ba34f-5572-41aa-9636-d1f9f550d321"], "e253b5ac-feb7-4116-9e68-d2c817da36a5": ["394ba34f-5572-41aa-9636-d1f9f550d321"], "0260750e-4f7d-4c1b-b0b7-ae4c36cc8fc3": ["1c921767-4d8e-42c2-b1b7-f1eef6154d6f"], "39863570-2d41-4d21-bde1-1afc78c157b0": ["1c921767-4d8e-42c2-b1b7-f1eef6154d6f"], "8b4fd9d7-e1d4-472e-bd34-35fa98299c07": ["e4a13b31-217a-46da-a63d-97fb166719a8"], "1f34befe-4432-419f-8465-066a0d82ff77": ["e4a13b31-217a-46da-a63d-97fb166719a8"], "3e35db8c-c1b3-4b9f-b6c0-a3fd6e52d2b0": ["64bede83-602b-4ecc-9aa8-b7e66674fcbf"], "0c3b35ca-f421-41e4-b016-8f367561acbe": ["64bede83-602b-4ecc-9aa8-b7e66674fcbf"], "d73655e4-93f0-41c5-b69e-814ff8189db8": ["9d624f3e-302d-4fcf-9a0e-5e84ce69a0e6"], "bb8b9729-d1b6-407d-9f0c-aa1bd62a8d78": ["9d624f3e-302d-4fcf-9a0e-5e84ce69a0e6"], "3f1dec42-4087-4e06-9e7e-491c96cdee67": ["24ba513e-4acb-465b-be49-00cb67405123"], "9a94cfb2-25b9-4aa8-94c5-987c53fa42bf": ["24ba513e-4acb-465b-be49-00cb67405123"], "f445449e-b75f-44ee-a819-018ad630bd35": ["fb71dcec-b23f-4f60-a695-56ecd3f315ac"], "58fd202f-0791-411a-9124-09381dbbad11": ["fb71dcec-b23f-4f60-a695-56ecd3f315ac"], "26ee0f55-a947-440f-b4bc-4b7def4e3545": ["ee208f32-1e0d-4e1e-a351-3417bbd87afb"], "dcb01564-a34f-42a7-ac6c-13764525a7d2": ["ee208f32-1e0d-4e1e-a351-3417bbd87afb"], "8e29d29a-fc98-4a6f-b42b-580fc084dd71": ["193dbafa-5c73-4b7a-9b65-0df439acb9d8"], "4bbc6d4b-6b67-4831-8bca-853eb46aec3a": ["193dbafa-5c73-4b7a-9b65-0df439acb9d8"], "919fdd1d-2abb-472e-ac8d-bde9df2bb391": ["b115198f-f69a-4ce2-aebb-b3842c8f5271"], 
"a795e873-419b-454b-8598-fb0c49a7e5cc": ["b115198f-f69a-4ce2-aebb-b3842c8f5271"], "242b750e-1236-41f7-a1cc-eedef8f0427d": ["ad125822-a8be-416c-904e-df009ec77b21"], "c506e557-776f-42ed-99f9-c752ac2bb94b": ["ad125822-a8be-416c-904e-df009ec77b21"], "426616e2-6297-47c3-89c7-71ec1186cdba": ["e44738ee-74b6-4246-bc14-d817afb94e83"], "db43af55-434d-441f-8dc7-acc8ff3f8432": ["e44738ee-74b6-4246-bc14-d817afb94e83"], "f2913868-28a6-4558-904a-0486fbfc1f6e": ["68ce524c-132f-488c-adcf-6d6b0fd3ee28"], "5a4faa70-0364-4fd0-9c98-b26fb63f7786": ["68ce524c-132f-488c-adcf-6d6b0fd3ee28"], "3ad57490-e4f7-4fd2-bff4-93211043ec13": ["ed722cdb-468f-4721-a373-d1ca5a35c1f9"], "3a05d7ba-2e46-406b-aeb9-51b33efff15f": ["ed722cdb-468f-4721-a373-d1ca5a35c1f9"], "6ae09ea8-3090-401b-9f1e-4ce5270152cd": ["4097f22e-c5bf-4c18-8078-c3a2899b5bfb"], "207207ff-faab-4342-b76f-ef0c6fac88c9": ["4097f22e-c5bf-4c18-8078-c3a2899b5bfb"], "d15a10aa-36cb-4f3a-9f9e-2c0416ce1084": ["72d14b3e-b07e-43bd-9020-1a2c23f4ef52"], "c9f4fb11-9365-4354-aa94-7cc93efcafb5": ["72d14b3e-b07e-43bd-9020-1a2c23f4ef52"], "4aebac20-11d4-42c8-be6a-f7ac4e43cbbc": ["db18094e-cd82-4e21-8d23-3a29d290999b"], "71537f88-7e77-4720-9cc0-bca516b4721f": ["db18094e-cd82-4e21-8d23-3a29d290999b"], "e004e796-65d5-4109-89bd-472cae5b6c75": ["094c20fa-14b1-497b-b40e-5b99c32cf2fc"], "290cd0b2-456b-41bc-bf0e-3ea3e32f480d": ["094c20fa-14b1-497b-b40e-5b99c32cf2fc"], "1c416614-5e28-45f4-9e8e-937971dcff9a": ["f33bc6b2-858a-46bd-ba56-b6410ce7b11b"], "d83ab93d-9be0-488d-94fd-8e58074a3388": ["f33bc6b2-858a-46bd-ba56-b6410ce7b11b"], "bfc45e93-d073-4348-8fb1-03dfaf4e73f3": ["ea01c2f2-4936-4233-8845-855c033c5a09"], "4819bdb4-1724-4318-855c-9c4f680c0655": ["ea01c2f2-4936-4233-8845-855c033c5a09"], "a8a96840-d387-42d9-9b56-f05b73027f5c": ["641dd569-3b6d-49b4-ab74-5b743949ed5d"], "7fdb6c15-c3f8-4327-b2fe-0169c08ce375": ["641dd569-3b6d-49b4-ab74-5b743949ed5d"], "3509c40f-7af0-49a5-bd16-c7da584b3980": ["ea99d79c-dacc-4993-a145-2146a1469e05"], "a86eba64-72a8-4afa-a7f5-8c50c3b0c660": ["ea99d79c-dacc-4993-a145-2146a1469e05"], "9eee9d68-6e0f-4430-989f-cb569677d74c": ["e8a4ecfe-f6e5-4984-8f0c-694996adfb03"], "6fba0797-2aaa-4686-9325-999b5396f47b": ["e8a4ecfe-f6e5-4984-8f0c-694996adfb03"], "449ab90b-3762-4d3e-99ea-899bd340c42b": ["a7b25bc5-d04c-4ce5-b11d-18080ed7322b"], "1c57be24-8e1d-4a3a-a29e-1d153c019510": ["a7b25bc5-d04c-4ce5-b11d-18080ed7322b"], "6b30e12e-cecf-4cd7-936e-84468c950a36": ["0422346b-f47b-48ad-890e-93045e292363"], "5547bf9b-ceae-4386-a486-7708637ab6a1": ["0422346b-f47b-48ad-890e-93045e292363"], "a520c4cc-f2f6-4dd8-bd3a-a1a750440209": ["d444272b-84db-47b2-8e39-d070bef54d11"], "d72c0d17-abee-470b-8725-abf4aad59b3f": ["d444272b-84db-47b2-8e39-d070bef54d11"], "8e31e286-3ac3-488f-a211-4575fd663a17": ["84e5065a-6f26-49c3-aeb8-31a8102a856b"], "76f71eb0-f3b8-425d-8772-65a5d214634f": ["84e5065a-6f26-49c3-aeb8-31a8102a856b"], "eb21dff3-4dd0-47af-a449-b9b525386911": ["3976a13c-4484-47bc-8b1d-0fcb75a19b95"], "645a1801-9128-4977-947d-5437b8933966": ["3976a13c-4484-47bc-8b1d-0fcb75a19b95"], "50321a04-5130-43ab-9305-cc1d548da8e0": ["88018024-6cf6-4719-ad61-61f79483bb74"], "32f6e506-6e82-41f7-b80c-f0702a537ca2": ["88018024-6cf6-4719-ad61-61f79483bb74"], "7a80ac97-319d-452b-a900-e739da72ab44": ["641be3b7-f879-4cc0-bc16-d9cb27069618"], "bedb600e-b951-4e89-9442-24b971ff1b21": ["641be3b7-f879-4cc0-bc16-d9cb27069618"], "5fa67d29-3be8-4c81-a7c9-1a4d5dfa0ba7": ["f12b5467-1c94-4938-98a8-5e0e4e6fff77"], "868637a7-88fa-4891-bcfd-da1d37772744": ["f12b5467-1c94-4938-98a8-5e0e4e6fff77"], 
"6410524d-24f8-4aaf-8b70-5dcfc8272cd0": ["380caf5a-f592-4a9d-8e55-905836b69ded"], "d0990582-29f1-41c2-90e1-89c4efc58153": ["380caf5a-f592-4a9d-8e55-905836b69ded"], "cbc06d96-6605-45f4-8067-0342ab04aac4": ["5b9ba636-3418-4270-a189-27f4e5b95ae0"], "b16b4f6b-0ec2-49bc-9453-9bbf1a8feea5": ["5b9ba636-3418-4270-a189-27f4e5b95ae0"], "b1da9e4e-62f7-4d08-ac87-2b196fa9114e": ["c3f7bcbe-0afe-4e8b-a6c2-8266ee6bec0a"], "aa279423-ea2b-4fa2-beb1-7a6e1400c36f": ["c3f7bcbe-0afe-4e8b-a6c2-8266ee6bec0a"], "8a9ae766-2f74-4272-bd84-e95787e5e943": ["f78abfc0-dc1b-4904-b10f-45b2d75bdffa"], "2567081e-89ba-4d98-a746-eaf8503e5c5d": ["f78abfc0-dc1b-4904-b10f-45b2d75bdffa"], "fa585f44-6fb6-443b-983e-6304d9c2f5e1": ["e88db2aa-0248-4c41-9ff5-f64b062d93ad"], "f51b4b1e-689f-4a47-82a2-d9a9a0d30ab7": ["e88db2aa-0248-4c41-9ff5-f64b062d93ad"], "ecb05eb6-335e-4451-bf2e-4c8ad8e800bf": ["481dbfa9-e17c-4a32-bfda-547eb5403563"], "51d512f0-8849-48de-a188-5aab8ddee724": ["481dbfa9-e17c-4a32-bfda-547eb5403563"], "80d2e492-0668-4e82-b83e-d1cef2355444": ["60edd255-562c-403c-b6b1-20d1d828e53f"], "b7e8353e-ffb6-41a5-a321-d6b5521a03d5": ["60edd255-562c-403c-b6b1-20d1d828e53f"], "acbfd37b-65e1-440b-b4b1-9b3ee9a15fac": ["810d4e10-aa6e-4399-aee2-0740c4dc03c4"], "c86085f1-bc71-4d66-8869-d5335b328ec7": ["810d4e10-aa6e-4399-aee2-0740c4dc03c4"]}, "corpus": {"1eebe549-0cfa-4adf-84b0-ed9a06656695": "Another cybersecurity risk to GAI is data poisoning, in which an adversary compromises a training \ndataset used by a model to manipulate its outputs or operation. Malicious tampering with data or parts \nof the model could exacerbate risks associated with GAI system outputs. \nTrustworthy AI Characteristics: Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n2.10. \nIntellectual Property \nIntellectual property risks from GAI systems may arise where the use of copyrighted works is not a fair \nuse under the fair use doctrine. If a GAI system\u2019s training data included copyrighted material, GAI \noutputs displaying instances of training data memorization (see Data Privacy above) could infringe on \ncopyright. \nHow GAI relates to copyright, including the status of generated content that is similar to but does not \nstrictly copy work protected by copyright, is currently being debated in legal fora. Similar discussions are", "bd7c4ee6-636c-4e73-8669-68ae8df8a0e8": "or lead to algorithmic discrimination. \nOversight. Human-based systems have the potential for bias, including automation bias, as well as other \nconcerns that may limit their effectiveness. The results of assessments of the efficacy and potential bias of \nsuch human-based systems should be overseen by governance structures that have the potential to update the \noperation of the human-based system in order to mitigate these effects. \n50", "96206509-2450-4808-b3db-0ad36b187bf3": "Intellectual Property; Data Privacy; \nObscene, Degrading, and/or \nAbusive Content \nMP-4.1-005 \nEstablish policies for collection, retention, and minimum quality of data, in \nconsideration of the following risks: Disclosure of inappropriate CBRN information; \nUse of Illegal or dangerous content; O\ufb00ensive cyber capabilities; Training data \nimbalances that could give rise to harmful biases; Leak of personally identi\ufb01able \ninformation, including facial likenesses of individuals. 
\nCBRN Information or Capabilities; \nIntellectual Property; Information \nSecurity; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-006 Implement policies and practices de\ufb01ning how third-party intellectual property and \ntraining data will be used, stored, and protected. \nIntellectual Property; Value Chain \nand Component Integration \nMP-4.1-007 Re-evaluate models that were \ufb01ne-tuned or enhanced on top of third-party \nmodels. \nValue Chain and Component \nIntegration \nMP-4.1-008", "5b799f01-f51b-4867-8554-833805f3ab80": "SAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a", "43309aea-4c65-4a8b-9dbb-ad2c5402ed13": "\u2022\nSurya Mattu, Senior Data Engineer and Investigative Data Journalist, The Markup\n\u2022\nMariah Montgomery, National Campaign Director, Partnership for Working Families\n55", "4c75b2c9-d74b-46ad-b25f-e5b2bbba9a2f": "While indirect feedback methods such as automated error collection systems are useful, they often lack \nthe context and depth that direct input from end users can provide. Organizations can leverage feedback \napproaches described in the Pre-Deployment Testing section to capture input from external sources such \nas through AI red-teaming. \nIntegrating pre- and post-deployment external feedback into the monitoring process for GAI models and \ncorresponding applications can help enhance awareness of performance changes and mitigate potential \nrisks and harms from outputs. There are many ways to capture and make use of user feedback \u2013 before \nand after GAI systems and digital content transparency approaches are deployed \u2013 to gain insights about \nauthentication e\ufb03cacy and vulnerabilities, impacts of adversarial threats on techniques, and unintended \nconsequences resulting from the utilization of content provenance approaches on users and", "ca9ae4fc-a936-4dda-acea-192bc0206464": "technology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. 
Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of", "4b00025b-f3dc-41ec-b5e2-b4f77272ad81": "32 \nMEASURE 2.6: The AI system is evaluated regularly for safety risks \u2013 as identi\ufb01ed in the MAP function. The AI system to be \ndeployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if \nmade to operate beyond its knowledge limits. Safety metrics re\ufb02ect system reliability and robustness, real-time monitoring, and \nresponse times for AI system failures. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.6-001 \nAssess adverse impacts, including health and wellbeing impacts for value chain \nor other AI Actors that are exposed to sexually explicit, o\ufb00ensive, or violent \ninformation during GAI training and maintenance. \nHuman-AI Con\ufb01guration; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; \nDangerous, Violent, or Hateful \nContent \nMS-2.6-002 \nAssess existence or levels of harmful bias, intellectual property infringement,", "054c9a30-d999-4ec7-a07e-200e0ac42d1f": "into other automated systems that directly impact people\u2019s lives. Federal law has not grown to address the expanding \nscale of private data collection, or of the ability of governments at all levels to access that data and leverage the means \nof private collection. \nMeanwhile, members of the American public are often unable to access their personal data or make critical decisions \nabout its collection and use. Data brokers frequently collect consumer data from numerous sources without \nconsumers\u2019 permission or knowledge.60 Moreover, there is a risk that inaccurate and faulty data can be used to \nmake decisions about their lives, such as whether they will qualify for a loan or get a job. Use of surveillance \ntechnologies has increased in schools and workplaces, and, when coupled with consequential management and \nevaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and", "1b4221a5-1a5d-4193-b4c3-d0927768a090": "110. Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020. https://\nbipartisanpolicy.org/blog/the-low-down-on-ballot-curing/; Zahavah Levine and Thea Raymond-\nSeidel. Mail Voting Litigation in 2020, Part IV: Verifying Mail Ballots. Oct. 29, 2020.\nhttps://www.lawfareblog.com/mail-voting-litigation-2020-part-iv-verifying-mail-ballots\n111. National Conference of State Legislatures. Table 15: States With Signature Cure Processes. Jan. 18,\n2022.\nhttps://www.ncsl.org/research/elections-and-campaigns/vopp-table-15-states-that-permit-voters-to\u00ad\ncorrect-signature-discrepancies.aspx\n112. White House Office of Science and Technology Policy. Join the Effort to Create A Bill of Rights for\nan Automated Society. Nov. 10, 2021.\nhttps://www.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of\u00ad\nrights-for-an-automated-society/\n113. 
White House Office of Science and Technology Policy. Notice of Request for Information (RFI) on", "e2a458cd-3f14-4aad-ad1d-0efcae5d686c": "Research Institute Report. June 29, 2021. https://datasociety.net/library/assembling-accountability\u00ad\nalgorithmic-impact-assessment-for-the-public-interest/; Nicol Turner Lee, Paul Resnick, and Genie\nBarton. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms.\nBrookings Report. May 22, 2019.\nhttps://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and\u00ad\npolicies-to-reduce-consumer-harms/; Andrew D. Selbst. An Institutional View Of Algorithmic Impact\nAssessments. Harvard Journal of Law & Technology. June 15, 2021. https://ssrn.com/abstract=3867634;\nDillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker. Algorithmic Impact\nAssessments: A Practical Framework for Public Agency Accountability. AI Now Institute Report. April\n2018. https://ainowinstitute.org/aiareport2018.pdf\n51. Department of Justice. Justice Department Announces New Initiative to Combat Redlining. Oct. 22,", "380e7d12-ea58-4f2f-bc0c-4e04c176047d": "HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe federal government is working to combat discrimination in mortgage lending. The Depart\u00ad\nment of Justice has launched a nationwide initiative to combat redlining, which includes reviewing how \nlenders who may be avoiding serving communities of color are conducting targeted marketing and advertising.51 \nThis initiative will draw upon strong partnerships across federal agencies, including the Consumer Financial \nProtection Bureau and prudential regulators. The Action Plan to Advance Property Appraisal and Valuation \nEquity includes a commitment from the agencies that oversee mortgage lending to include a \nnondiscrimination standard in the proposed rules for Automated Valuation Models.52", "b73c4e8f-15b1-48df-b5d3-0dc244b5e44d": "Intellectual Property \nGV-6.1-009 \nUpdate and integrate due diligence processes for GAI acquisition and \nprocurement vendor assessments to include intellectual property, data privacy, \nsecurity, and other risks. For example, update processes to: Address solutions that \nmay rely on embedded GAI technologies; Address ongoing monitoring, \nassessments, and alerting, dynamic risk assessments, and real-time reporting \ntools for monitoring third-party GAI risks; Consider policy adjustments across GAI \nmodeling libraries, tools and APIs, \ufb01ne-tuned models, and embedded tools; \nAssess GAI vendors, open-source or proprietary GAI tools, or GAI service \nproviders against incident or vulnerability databases. \nData Privacy; Human-AI \nCon\ufb01guration; Information \nSecurity; Intellectual Property; \nValue Chain and Component \nIntegration; Harmful Bias and \nHomogenization \nGV-6.1-010 \nUpdate GAI acceptable use policies to address proprietary and open-source GAI", "fac21c98-5e09-4073-8499-737a13a0eb2d": "disparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. 
\n5", "0026669e-4953-4d6a-b1d9-ecfa12faec64": "\u2022\nA device originally developed to help people track and find lost items has been used as a tool by stalkers to track\nvictims\u2019 locations in violation of their privacy and safety. The device manufacturer took steps after release to\nprotect people from unwanted tracking by alerting people on their phones when a device is found to be moving\nwith them over time and also by having the device make an occasional noise, but not all phones are able\nto receive the notification and the devices remain a safety concern due to their misuse.8 \n\u2022\nAn algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit,\neven if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions\nwere the result of a feedback loop generated from the reuse of data from previous arrests and algorithm\npredictions.9\n16", "1171bb5d-18a9-429e-8122-da09f3a0d9f2": "the creation and spread of NCII disproportionately impacts women and sexual minorities, and can have \nsubsequent negative consequences including decline in overall mental health, substance abuse, and \neven suicidal thoughts. \nData used for training GAI models may unintentionally include CSAM and NCII. A recent report noted \nthat several commonly used GAI training datasets were found to contain hundreds of known images of", "d96f7e82-cc68-47f6-86d2-85aa141a8c9e": "APPENDIX\nPanel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in \nthe design of technology that can help support a democratic vision for AI. It included discussion of the \ntechnical aspects \nof \ndesigning \nnon-discriminatory \ntechnology, \nexplainable \nAI, \nhuman-computer \ninteraction with an emphasis on community participation, and privacy-aware design. \nWelcome:\n\u2022\nSorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and\nTechnology Policy\n\u2022\nJ. Bob Alotta, Vice President for Global Programs, Mozilla Foundation\n\u2022\nNavrina Singh, Board Member, Mozilla Foundation\nModerator: Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S \nFederal Trade Commission. \nPanelists: \n\u2022\nLiz O\u2019Sullivan, CEO, Parity AI\n\u2022\nTimnit Gebru, Independent Scholar\n\u2022\nJennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City\n\u2022", "d44e9dcd-c607-44be-8995-10b21aae83a5": "58 \nSatariano, A. et al. (2023) The People Onscreen Are Fake. The Disinformation Is Real. New York Times. \nhttps://www.nytimes.com/2023/02/07/technology/arti\ufb01cial-intelligence-training-deepfake.html \nSchaul, K. et al. (2024) Inside the secret list of websites that make AI like ChatGPT sound smart. \nWashington Post. https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/ \nScheurer, J. et al. (2023) Technical report: Large language models can strategically deceive their users \nwhen put under pressure. arXiv. https://arxiv.org/abs/2311.07590 \nShelby, R. et al. (2023) Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm \nReduction. arXiv. https://arxiv.org/pdf/2210.05791 \nShevlane, T. et al. (2023) Model evaluation for extreme risks. arXiv. https://arxiv.org/pdf/2305.15324 \nShumailov, I. et al. (2023) The curse of recursion: training on generated data makes models forget. arXiv. 
\nhttps://arxiv.org/pdf/2305.17493v2", "a9851a96-2f0d-44d3-bc00-c23aaa41be72": "In addition to the suggested actions below, AI risk management activities and actions set forth in the AI \nRMF 1.0 and Playbook are already applicable for managing GAI risks. Organizations are encouraged to \napply the activities suggested in the AI RMF and its Playbook when managing the risk of GAI systems. \nImplementation of the suggested actions will vary depending on the type of risk, characteristics of GAI \nsystems, stage of the GAI lifecycle, and relevant AI actors involved. \nSuggested actions to manage GAI risks can be found in the tables below: \n\u2022 \nThe suggested actions are organized by relevant AI RMF subcategories to streamline these \nactivities alongside implementation of the AI RMF. \n\u2022 \nNot every subcategory of the AI RMF is included in this document.13 Suggested actions are \nlisted for only some subcategories. \n \n \n13 As this document was focused on the GAI PWG e\ufb00orts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later.", "394ba34f-5572-41aa-9636-d1f9f550d321": "Provide the public with mechanisms for appropriate and meaningful consent, access, and \ncontrol over their data \nUse-specific consent. Consent practices should not allow for abusive surveillance practices. Where data \ncollectors or automated systems seek consent, they should seek it for specific, narrow use contexts, for specif\u00ad\nic time durations, and for use by specific entities. Consent should not extend if any of these conditions change; \nconsent should be re-acquired before using data if the use case changes, a time limit elapses, or data is trans\u00ad\nferred to another entity (including being shared or sold). Consent requested should be limited in scope and \nshould not request consent beyond what is required. Refusal to provide consent should be allowed, without \nadverse effects, to the greatest extent possible based on the needs of the use case. \nBrief and direct consent requests. When seeking consent from users short, plain language consent", "1c921767-4d8e-42c2-b1b7-f1eef6154d6f": "TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \nSAFE AND EFFECTIVE SYSTEMS\n \nALGORITHMIC DISCRIMINATION PROTECTIONS\n \nDATA PRIVACY\n \nNOTICE AND EXPLANATION\n \nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nAPPENDIX\n \nEXAMPLES OF AUTOMATED SYSTEMS\n \nLISTENING TO THE AMERICAN PEOPLE\nENDNOTES \n12\n14\n15\n23\n30\n40\n46\n53\n53\n55\n63\n13", "e4a13b31-217a-46da-a63d-97fb166719a8": "Human-AI Con\ufb01guration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. \nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Con\ufb01guration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that \ufb01ne-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. 
\nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV", "64bede83-602b-4ecc-9aa8-b7e66674fcbf": "public; this section focuses on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming outputs is related to the background and expertise of the AI red team \nitself. Demographically and interdisciplinarily diverse AI red teams can be used to identify \ufb02aws in the \nvarying contexts where GAI will be used. For best results, AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural aspects within the deployment context. AI red-teaming results \nshould be given additional analysis before they are incorporated into organizational governance and \ndecision making, policy and procedural updates, and AI risk management e\ufb00orts. \nVarious types of AI red-teaming may be appropriate, depending on the use case: \n\u2022 \nGeneral Public: Performed by general users (not necessarily AI or technical experts) who are \nexpected to use the model or interact with its outputs, and who bring their own lived \nexperiences and perspectives to the task of AI red-teaming. These individuals may have been", "9d624f3e-302d-4fcf-9a0e-5e84ce69a0e6": "34. Todd Feathers. Major Universities Are Using Race as a \u201cHigh Impact Predictor\u201d of Student Success:\nStudents, professors, and education experts worry that that\u2019s pushing Black students in particular out of math\nand science. The Markup. Mar. 2, 2021. https://themarkup.org/machine-learning/2021/03/02/major\u00ad\nuniversities-are-using-race-as-a-high-impact-predictor-of-student-success\n65", "24ba513e-4acb-465b-be49-00cb67405123": "APPENDIX\nPanelists discussed the benefits of AI-enabled systems and their potential to build better and more \ninnovative infrastructure. They individually noted that while AI technologies may be new, the process of \ntechnological diffusion is not, and that it was critical to have thoughtful and responsible development and \nintegration of technology within communities. Some panelists suggested that the integration of technology \ncould benefit from examining how technological diffusion has worked in the realm of urban planning: \nlessons learned from successes and failures there include the importance of balancing ownership rights, use \nrights, and community health, safety and welfare, as well ensuring better representation of all voices, \nespecially those traditionally marginalized by technological advances. Some panelists also raised the issue of \npower structures \u2013 providing examples of how strong transparency requirements in smart city projects", "fb71dcec-b23f-4f60-a695-56ecd3f315ac": "SECTION TITLE\nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nYou should be able to opt out, where appropriate, and have access to a person who can quickly \nconsider and remedy problems you encounter. You should be able to opt out from automated systems in \nfavor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable \nexpectations in a given context and with a focus on ensuring broad accessibility and protecting the public from \nespecially harmful impacts. In some cases, a human or other alternative may be required by law. You should have \naccess to timely human consideration and remedy by a fallback and escalation process if an automated system \nfails, it produces an error, or you would like to appeal or contest its impacts on you. 
Human consideration and \nfallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and", "ee208f32-1e0d-4e1e-a351-3417bbd87afb": "\u2022\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24", "193dbafa-5c73-4b7a-9b65-0df439acb9d8": "ENDNOTES\n1.The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the\nFederal\u00a0Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive\norder-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/\n2. The White House. Remarks by President Biden on the Supreme Court Decision to Overturn Roe v. Wade. Jun.\n24, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/06/24/remarks-by-president\u00ad\nbiden-on-the-supreme-court-decision-to-overturn-roe-v-wade/\n3. The White House. Join the Effort to Create A Bill of Rights for an Automated Society. Nov. 10, 2021. https://\nwww.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an\u00ad\nautomated-society/\n4. U.S. Dept. of Health, Educ. & Welfare, Report of the Sec\u2019y\u2019s Advisory Comm. on Automated Pers. Data Sys.,", "b115198f-f69a-4ce2-aebb-b3842c8f5271": "Value Chain and Component \nIntegration; Harmful Bias and \nHomogenization \nMG-3.1-003 \nRe-assess model risks after \ufb01ne-tuning or retrieval-augmented generation \nimplementation and for any third-party GAI models deployed for applications \nand/or use cases that were not evaluated in initial testing. \nValue Chain and Component \nIntegration \nMG-3.1-004 \nTake reasonable measures to review training data for CBRN information, and \nintellectual property, and where appropriate, remove it. Implement reasonable \nmeasures to prevent, \ufb02ag, or take other action in response to outputs that \nreproduce particular training data (e.g., plagiarized, trademarked, patented, \nlicensed content or trade secret material). \nIntellectual Property; CBRN \nInformation or Capabilities", "ad125822-a8be-416c-904e-df009ec77b21": "communities. Furthermore, organizations can track and document the provenance of datasets to identify \ninstances in which AI-generated data is a potential root cause of performance issues with the GAI \nsystem. \nA.1.8. 
Incident Disclosure \nOverview \nAI incidents can be de\ufb01ned as an \u201cevent, circumstance, or series of events where the development, use, \nor malfunction of one or more AI systems directly or indirectly contributes to one of the following harms: \ninjury or harm to the health of a person or groups of people (including psychological harms and harms to \nmental health); disruption of the management and operation of critical infrastructure; violations of \nhuman rights or a breach of obligations under applicable law intended to protect fundamental, labor, \nand intellectual property rights; or harm to property, communities, or the environment.\u201d AI incidents can \noccur in the aggregate (i.e., for systemic discrimination) or acutely (i.e., for one individual). \nState of AI Incident Tracking and Disclosure", "e44738ee-74b6-4246-bc14-d817afb94e83": "American Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium \nat Computing Research Association \nConnected Health Initiative", "68ce524c-132f-488c-adcf-6d6b0fd3ee28": "ing should take into account the performance of both technical system components (the algorithm as well as \nany hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing \nthe actual accuracy of any predictions or recommendations generated by a system, not just a human operator\u2019s \ndetermination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitor\u00ad\ning as a check in the event there are shortcomings in automated monitoring systems. These monitoring proce\u00ad\ndures should be in place for the lifespan of the deployed automated system. \nClear organizational oversight. Entities responsible for the development or use of automated systems \nshould lay out clear governance structures and procedures. This includes clearly-stated governance proce\u00ad\ndures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing", "ed722cdb-468f-4721-a373-d1ca5a35c1f9": "should not impose an unreasonable burden on the public. Automated systems with an intended use within sensi\u00ad\ntive domains, including, but not limited to, criminal justice, employment, education, and health, should additional\u00ad\nly be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting \nwith the system, and incorporate human consideration for adverse or high-risk decisions. 
Reporting that includes \na description of these human governance processes and assessment of their timeliness, accessibility, outcomes, \nand effectiveness should be made public whenever possible. \nDefinitions for key terms in The Blueprint for an AI Bill of Rights can be found in Applying the Blueprint for an AI Bill of Rights. \nAccompanying analysis and tools for actualizing each principle can be found in the Technical Companion. \n7", "4097f22e-c5bf-4c18-8078-c3a2899b5bfb": "DATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \u00ad\u00ad\u00ad\u00ad\u00ad\u00ad\nIn addition to the privacy expectations above for general non-sensitive data, any system collecting, using, shar-\ning, or storing sensitive data should meet the expectations below. Depending on the technological use case and \nbased on an ethical assessment, consent for sensitive data may need to be acquired from a guardian and/or child. \nProvide enhanced protections for data related to sensitive domains \nNecessary functions only. Sensitive data should only be used for functions strictly necessary for that \ndomain or for functions that are required for administrative reasons (e.g., school attendance records), unless \nconsent is acquired, if appropriate, and the additional expectations in this section are met. Consent for non-", "72d14b3e-b07e-43bd-9020-1a2c23f4ef52": "researchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identi\ufb01ed in this document in \norder to adequately describe an experimental procedure or concept. Such identi\ufb01cation is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it", "db18094e-cd82-4e21-8d23-3a29d290999b": "should be maintained and supported as long as the relevant automated system continues to be in use. \nInstitute training, assessment, and oversight to combat automation bias and ensure any \nhuman-based components of a system are effective. \nTraining and assessment. Anyone administering, interacting with, or interpreting the outputs of an auto\u00ad\nmated system should receive training in that system, including how to properly interpret outputs of a system \nin light of its intended purpose and in how to mitigate the effects of automation bias. The training should reoc\u00ad\ncur regularly to ensure it is up to date with the system and to ensure the system is used appropriately. 
Assessment should be ongoing to ensure that the use of the system with human involvement provides for appropriate results, i.e., that the involvement of people does not invalidate the system's assessment as safe and effective \nor lead to algorithmic discrimination.", "094c20fa-14b1-497b-b40e-5b99c32cf2fc": "(https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and\nagainst Weight Watchers and their subsidiary Kurbo\n(https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)\n69. See, e.g., HIPAA, Pub. L. 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109\n(1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. \u00a7 1232g), Children's Online\nPrivacy Protection Act of 1998, 15 U.S.C. 6501\u20136505, and Confidential Information Protection and\nStatistical Efficiency Act (CIPSEA) (116 Stat. 2899)\n70. Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True. ProPublica. Nov.\n21, 2018.\nhttps://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true\n71. Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012.\nhttps://www.nytimes.com/2012/02/19/magazine/shopping-habits.html", "f33bc6b2-858a-46bd-ba56-b6410ce7b11b": "Security \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization", "ea01c2f2-4936-4233-8845-855c033c5a09": "ALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable way. Algorithmic discrimination occurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), \nreligion, age, national origin, disability, veteran status, \ngenetic information, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. 
This protection should include proactive equity", "641dd569-3b6d-49b4-ab74-5b743949ed5d": "requirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies\u2019 reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-", "ea99d79c-dacc-4993-a145-2146a1469e05": "SAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles\u2014while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI\u2014require that AI is: (a) lawful and \nrespectful of our Nation\u2019s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d)", "e8a4ecfe-f6e5-4984-8f0c-694996adfb03": "systems. \n8. Information Integrity: Lowered barrier to entry to generate and support the exchange and \nconsumption of content which may not distinguish fact from opinion or fiction or acknowledge \nuncertainties, or could be leveraged for large-scale dis- and mis-information campaigns. \n9. Information Security: Lowered barriers for offensive cyber capabilities, including via automated \ndiscovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber \n \n \n6 Some commenters have noted that the terms \u201challucination\u201d and \u201cfabrication\u201d anthropomorphize GAI, which \nitself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human \nentities. \n7 What is categorized as sensitive data or sensitive PII can be highly contextual based on the nature of the \ninformation, but examples of sensitive information include information that relates to an information subject\u2019s", "a7b25bc5-d04c-4ce5-b11d-18080ed7322b": "9 Confabulations of falsehoods are most commonly a problem for text-based outputs; for audio, image, or video \ncontent, creative generation of non-factual content can be a desired behavior. \n10 For example, legal confabulations have been shown to be pervasive in current state-of-the-art LLMs. 
See also, \ne.g.,", "0422346b-f47b-48ad-890e-93045e292363": "this document as well as in Executive Order on Advancing Racial Equity and Support for Underserved\nCommunities Through the Federal Government:\nhttps://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/\n106. HealthCare.gov. Navigator - HealthCare.gov Glossary. Accessed May 2, 2022.\nhttps://www.healthcare.gov/glossary/navigator/", "d444272b-84db-47b2-8e39-d070bef54d11": "ENDNOTES\n57. ISO Technical Management Board. ISO/IEC Guide 71:2014. Guide for addressing accessibility in\nstandards. International Standards Organization. 2021. https://www.iso.org/standard/57385.html\n58. World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.0. Dec. 11, 2008.\nhttps://www.w3.org/TR/WCAG20/\n59. Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, and Andrew Bert. NIST Special\nPublication 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. The\nNational Institute of Standards and Technology. March, 2022. https://nvlpubs.nist.gov/nistpubs/\nSpecialPublications/NIST.SP.1270.pdf\n60. See, e.g., the 2014 Federal Trade Commission report \u201cData Brokers: A Call for Transparency and\nAccountability\u201d. https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf", "84e5065a-6f26-49c3-aeb8-31a8102a856b": "DATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nProtect the public from unchecked surveillance \nHeightened oversight of surveillance. Surveillance or monitoring systems should be subject to \nheightened oversight that includes at a minimum assessment of potential harms during design (before deployment) and in an ongoing manner, to ensure that the American public\u2019s rights, opportunities, and access are \nprotected. This assessment should be done before deployment and should give special attention to ensure \nthere is not algorithmic discrimination, especially based on community membership, when deployed in a \nspecific real-world context. Such assessment should then be reaffirmed in an ongoing manner as long as the \nsystem is in use.", "3976a13c-4484-47bc-8b1d-0fcb75a19b95": "HUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nEquitable. Consideration should be given to ensuring outcomes of the fallback and escalation system are \nequitable when compared to those of the automated system and such that the fallback and escalation \nsystem provides equitable access to underserved communities.105 \nTimely. Human consideration and fallback are only useful if they are conducted and concluded in a \ntimely manner. The determination of what is timely should be made relative to the specific automated \nsystem, and the review system should be staffed and regularly assessed to ensure it is providing timely \nconsideration and fallback. 
In time-critical systems, this mechanism should be immediately available or,", "88018024-6cf6-4719-ad61-61f79483bb74": "DATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\nSome domains, including health, employment, education, criminal justice, and personal finance, have long been \nsingled out as sensitive domains deserving of enhanced data protections. This is due to the intimate nature of these \ndomains as well as the inability of individuals to opt out of these domains in any meaningful way, and the \nhistorical discrimination that has often accompanied data knowledge.69 Domains understood by the public to be \nsensitive also change over time, including because of technological developments. Tracking and monitoring \ntechnologies, personal tracking devices, and our extensive data footprints are used and misused more than ever \nbefore; as such, the protections afforded by current legal guidelines may be inadequate. The American public \ndeserves assurances that data related to such sensitive domains is protected and used appropriately and only in", "641be3b7-f879-4cc0-bc16-d9cb27069618": "MEASURE 2.11: Fairness and bias \u2013 as identified in the MAP function \u2013 are evaluated and results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.11-001 \nApply use-case appropriate benchmarks (e.g., Bias Benchmark Questions, Real \nHateful or Harmful Prompts, Winogender Schemas15) to quantify systemic bias, \nstereotyping, denigration, and hateful content in GAI system outputs; \nDocument assumptions and limitations of benchmarks, including any actual or \npossible training/test data cross contamination, relative to in-context \ndeployment environment. \nHarmful Bias and Homogenization \nMS-2.11-002 \nConduct fairness assessments to measure systemic bias. Measure GAI system \nperformance across demographic groups and subgroups, addressing both \nquality of service and any allocation of services and resources. Quantify harms \nusing: field testing with sub-group populations to determine likelihood of \nexposure to generated content exhibiting harmful bias, AI red-teaming with", "f12b5467-1c94-4938-98a8-5e0e4e6fff77": "than an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n\u2022\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women applicants for spurious and discriminatory reasons; resumes with the word \u201cwomen\u2019s,\u201d such as \u201cwomen\u2019s\nchess club captain,\u201d were penalized in the candidate ranking.33\n\u2022\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n\u2022", "380caf5a-f592-4a9d-8e55-905836b69ded": "zucked-users-say-they-get-blocked-racism-discussion/2859593002/\n8. See, e.g., Michael Levitt. AirTags are being used to track people and cars. Here's what is being done about it.\nNPR. Feb. 18, 2022. 
https://www.npr.org/2022/02/18/1080944193/apple-airtags-theft-stalking-privacy-tech;\nSamantha Cole. Police Records Show Women Are Being Stalked With Apple AirTags Across the Country.\nMotherboard. Apr. 6, 2022. https://www.vice.com/en/article/y3vj3y/apple-airtags-police-reports-stalking-harassment\n9. Kristian Lum and William Isaac. To Predict and Serve? Significance. Vol. 13, No. 5, p. 14-19. Oct. 7, 2016.\nhttps://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x; Aaron Sankin, Dhruv Mehrotra,\nSurya Mattu, and Annie Gilbertson. Crime Prediction Software Promised to Be Free of Biases. New Data Shows\nIt Perpetuates Them. The Markup and Gizmodo. Dec. 2, 2021. https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-", "5b9ba636-3418-4270-a189-27f4e5b95ae0": "GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory \nand adjust AI system inventory requirements to account for GAI risks. \nInformation Security \nGV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems \nembedded into application software. \nValue Chain and Component \nIntegration \nGV-1.6-003 \nIn addition to general model, governance, and risk information, consider the \nfollowing items in GAI system inventory entries: Data provenance information \n(e.g., source, signatures, versioning, watermarks); Known issues reported from \ninternal bug tracking or external information sharing resources (e.g., AI incident \ndatabase, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles \nand responsibilities; Special rights and considerations for intellectual property,", "c3f7bcbe-0afe-4e8b-a6c2-8266ee6bec0a": "WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy.", "f78abfc0-dc1b-4904-b10f-45b2d75bdffa": "MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, \ngenerative models, recommenders). \nAction ID \nSuggested Action \nGAI Risks \nMP-2.1-001 \nEstablish known assumptions and practices for determining data origin and \ncontent lineage, for documentation and evaluation purposes. \nInformation Integrity \nMP-2.1-002 \nInstitute test and evaluation for data and content flows within the GAI system, \nincluding but not limited to, original data sources, data transformations, and \ndecision-making criteria. 
\nIntellectual Property; Data Privacy \nAI Actor Tasks: TEVV \n \nMAP 2.2: Information about the AI system\u2019s knowledge limits and how system output may be utilized and overseen by humans is \ndocumented. Documentation provides su\ufb03cient information to assist relevant AI Actors when making decisions and taking \nsubsequent actions. \nAction ID \nSuggested Action \nGAI Risks \nMP-2.2-001", "e88db2aa-0248-4c41-9ff5-f64b062d93ad": "NOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTailored to the level of risk. An assessment should be done to determine the level of risk of the auto\u00ad\nmated system. In settings where the consequences are high as determined by a risk assessment, or extensive \noversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should \nbe built into the system design so that the system\u2019s full behavior can be explained in advance (i.e., only fully \ntransparent models should be used), rather than as an after-the-decision interpretation. In other settings, the \nextent of explanation provided should be tailored to the risk level. \nValid. The explanation provided by a system should accurately reflect the factors and the influences that led", "481dbfa9-e17c-4a32-bfda-547eb5403563": "State of AI Incident Tracking and Disclosure \nFormal channels do not currently exist to report and document AI incidents. However, a number of \npublicly available databases have been created to document their occurrence. These reporting channels \nmake decisions on an ad hoc basis about what kinds of incidents to track. Some, for example, track by \namount of media coverage.", "60edd255-562c-403c-b6b1-20d1d828e53f": "NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1 \n \nJuly 2024 \n \n \n \n \nU.S. Department of Commerce \nGina M. Raimondo, Secretary \nNational Institute of Standards and Technology \nLaurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology", "810d4e10-aa6e-4399-aee2-0740c4dc03c4": "48 \n\u2022 Data protection \n\u2022 Data retention \n\u2022 Consistency in use of de\ufb01ning key terms \n\u2022 Decommissioning \n\u2022 Discouraging anonymous use \n\u2022 Education \n\u2022 Impact assessments \n\u2022 Incident response \n\u2022 Monitoring \n\u2022 Opt-outs \n\u2022 Risk-based controls \n\u2022 Risk mapping and measurement \n\u2022 Science-backed TEVV practices \n\u2022 Secure software development practices \n\u2022 Stakeholder engagement \n\u2022 Synthetic content detection and \nlabeling tools and techniques \n\u2022 Whistleblower protections \n\u2022 Workforce diversity and \ninterdisciplinary teams\nEstablishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \nas well as di\ufb00erent levels of human-AI con\ufb01gurations can help to decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment between systems and users. These practices are just \none example of adapting existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations"}}