question,contexts,ground_truth,evolution_type,metadata,episode_done
What actions did the OSTP take to engage with stakeholders regarding the use of artificial intelligence and biometric technologies?,"['APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation \nStephanie Dinkins and the Future \nHistories Studio at Stony Brook \nUniversity \nTechNet \nThe Alliance for Media Arts and \nCulture, MIT Open Documentary \nLab and Co-Creation Studio, and \nImmerse \nThe International Brotherhood of \nTeamsters \nThe Leadership Conference on \nCivil and Human Rights \nThorn \nU.S. Chamber of Commerce’s \nTechnology Engagement Center \nUber Technologies \nUniversity of Pittsburgh \nUndergraduate Student \nCollaborative \nUpturn \nUS Technology Policy Committee \nof the Association of Computing \nMachinery \nVirginia Puccio \nVisar Berisha and Julie Liss \nXR Association \nXR Safety Initiative \n• As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions\nfor members of the public. The listening sessions together drew upwards of 300 participants. The Science and\nTechnology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61\n', 'APPENDIX\nSummaries of Additional Engagements: \n• OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of\nartificial intelligence and other data-driven technologies in their lives.\n• OSTP issued a Request For Information (RFI) on the use and governance of biometric technologies.113 The\npurpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or\nplanned use; the domains in which these technologies are being used; the entities making use of them; current\nprinciples, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their\nuse or regulation.
The 130 responses to this RFI are available in full online114 and were submitted by the below\nlisted organizations and individuals:\nAccenture \nAccess Now \nACT | The App Association \nAHIP \nAIethicist.org \nAirlines for America \nAlliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium \nat Computing Research Association \nConnected Health Initiative \nConsumer Technology Association \nCourtney Radsch \nCoworker \nCyber Farm Labs \nData & Society Research Institute \nData for Black Lives \nData to Actionable Knowledge Lab \nat Harvard University \nDeloitte \nDev Technology Group \nDigital Therapeutics Alliance \nDigital Welfare State & Human \nRights Project and Center for \nHuman Rights and Global Justice at \nNew York University School of \nLaw, and Temple University \nInstitute for Law, Innovation & \nTechnology \nDignari \nDouglas Goddard \nEdgar Dworsky \nElectronic Frontier Foundation \nElectronic Privacy Information \nCenter, Center for Digital \nDemocracy, and Consumer \nFederation of America \nFaceTec \nFight for the Future \nGanesh Mani \nGeorgia Tech Research Institute \nGoogle \nHealth Information Technology \nResearch and Development \nInteragency Working Group \nHireVue \nHR Policy Association \nID.me \nIdentity and Data Sciences \nLaboratory at Science Applications \nInternational Corporation \nInformation Technology and \nInnovation Foundation \nInformation Technology Industry \nCouncil \nInnocence Project \nInstitute for Human-Centered \nArtificial Intelligence at Stanford \nUniversity \nIntegrated Justice Information \nSystems Institute \nInternational Association of Chiefs \nof Police \nInternational Biometrics + Identity \nAssociation \nInternational Business Machines \nCorporation \nInternational Committee of the Red \nCross \nInventionphysics \niProov \nJacob Boudreau \nJennifer K. Wagner, Dan Berger, \nMargaret Hu, and Sara Katsanis \nJonathan Barry-Blocker \nJoseph Turow \nJoy Buolamwini \nJoy Mack \nKaren Bureau \nLamont Gholston \nLawyers’ Committee for Civil \nRights Under Law \n60\n']","OSTP engaged with stakeholders regarding the use of artificial intelligence and biometric technologies by conducting two listening sessions for members of the public, which drew upwards of 300 participants. 
Additionally, OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of artificial intelligence and issued a Request For Information (RFI) on the use and governance of biometric technologies to understand their extent, variety, and the stakeholders impacted by their use or regulation.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 60, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 59, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the potential issues associated with automated performance evaluation in the workplace?,"[' \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \n•\nAn unemployment benefits system in Colorado required, as a condition of accessing benefits, that applicants\nhave a smartphone in order to verify their identity. No alternative human option was readily available,\nwhich denied many people access to benefits.101\n•\nA fraud detection system for unemployment insurance distribution incorrectly flagged entries as fraudulent,\nleading to people with slight discrepancies or complexities in their files having their wages withheld and tax\nreturns seized without any chance to explain themselves or receive a review by a person.102\n• A patient was wrongly denied access to pain medication when the hospital’s software confused her medica\xad\ntion history with that of her dog’s. 
Even after she tracked down an explanation for the problem, doctors\nwere afraid to override the system, and she was forced to go without pain relief due to the system’s error.103\n• A large corporation automated performance evaluation and other HR functions, leading to workers being\nfired by an automated system without the possibility of human review, appeal or other form of recourse.104 \n48\n']","The potential issues associated with automated performance evaluation in the workplace include workers being fired by an automated system without the possibility of human review, appeal, or other forms of recourse.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 47, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role does synthetic content detection play in managing risks associated with AI-generated outputs?,"[' \n51 \ngeneral public participants. For example, expert AI red-teamers could modify or verify the \nprompts written by general public AI red-teamers. These approaches may also expand coverage \nof the AI risk attack surface. \n• \nHuman / AI: Performed by GAI in combination with specialist or non-specialist human teams. \nGAI-led red-teaming can be more cost effective than human red-teamers alone. Human or GAI-\nled AI red-teaming may be better suited for eliciting different types of harms. \n \nA.1.6. Content Provenance \nOverview \nGAI technologies can be leveraged for many applications such as content generation and synthetic data. \nSome aspects of GAI outputs, such as the production of deepfake content, can challenge our ability to \ndistinguish human-generated content from AI-generated synthetic content. To help manage and mitigate \nthese risks, digital transparency mechanisms like provenance data tracking can trace the origin and \nhistory of content. Provenance data tracking and synthetic content detection can help facilitate greater \ninformation access about both authentic and synthetic content to users, enabling better knowledge of \ntrustworthiness in AI systems. When combined with other organizational accountability mechanisms, \ndigital content transparency approaches can enable processes to trace negative outcomes back to their \nsource, improve information integrity, and uphold public trust. Provenance data tracking and synthetic \ncontent detection mechanisms provide information about the origin and history of content to assist in \nGAI risk management efforts. \nProvenance metadata can include information about GAI model developers or creators of GAI content, \ndate/time of creation, location, modifications, and sources. Metadata can be tracked for text, images, \nvideos, audio, and underlying datasets. The implementation of provenance data tracking techniques can \nhelp assess the authenticity, integrity, intellectual property rights, and potential manipulations in digital \ncontent. Some well-known techniques for provenance data tracking include digital watermarking, \nmetadata recording, digital fingerprinting, and human authentication, among others. 
\nProvenance Data Tracking Approaches \nProvenance data tracking techniques for GAI systems can be used to track the history and origin of data \ninputs, metadata, and synthetic content. Provenance data tracking records the origin and history for \ndigital content, allowing its authenticity to be determined. It consists of techniques to record metadata \nas well as overt and covert digital watermarks on content. Data provenance refers to tracking the origin \nand history of input data through metadata and digital watermarking techniques. Provenance data \ntracking processes can include and assist AI Actors across the lifecycle who may not have full visibility or \ncontrol over the various trade-offs and cascading impacts of early-stage model decisions on downstream \nperformance and synthetic outputs. For example, by selecting a watermarking model to prioritize \nrobustness (the durability of a watermark), an AI actor may inadvertently diminish computational \ncomplexity (the resources required to implement watermarking). Organizational risk management \nefforts for enhancing content provenance include: \n• \nTracking provenance of training data and metadata for GAI systems; \n• \nDocumenting provenance data limitations within GAI systems; \n']","Synthetic content detection plays a crucial role in managing risks associated with AI-generated outputs by helping to distinguish human-generated content from AI-generated synthetic content. It facilitates greater information access about both authentic and synthetic content, enabling users to better understand the trustworthiness of AI systems. Additionally, it can assist in tracing negative outcomes back to their source, improving information integrity, and upholding public trust.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 54, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role does risk management play in the implementation of feedback activities for AI systems?,"[' \n50 \nParticipatory Engagement Methods \nOn an ad hoc or more structured basis, organizations can design and use a variety of channels to engage \nexternal stakeholders in product development or review. Focus groups with select experts can provide \nfeedback on a range of issues. Small user studies can provide feedback from representative groups or \npopulations. Anonymous surveys can be used to poll or gauge reactions to specific features. Participatory \nengagement methods are often less structured than field testing or red teaming, and are more \ncommonly used in early stages of AI or product development. \nField Testing \nField testing involves structured settings to evaluate risks and impacts and to simulate the conditions \nunder which the GAI system will be deployed. Field style tests can be adapted from a focus on user \npreferences and experiences towards AI risks and impacts – both negative and positive. When carried \nout with large groups of users, these tests can provide estimations of the likelihood of risks and impacts \nin real world interactions. 
\nOrganizations may also collect feedback on outcomes, harms, and user experience directly from users in \nthe production environment after a model has been released, in accordance with human subject \nstandards such as informed consent and compensation. Organizations should follow applicable human \nsubjects research requirements, and best practices such as informed consent and subject compensation, \nwhen implementing feedback activities. \nAI Red-teaming \nAI red-teaming is an evolving practice that references exercises often conducted in a controlled \nenvironment and in collaboration with AI developers building AI models to identify potential adverse \nbehavior or outcomes of a GAI model or system, how they could occur, and stress test safeguards”. AI \nred-teaming can be performed before or after AI models or systems are made available to the broader \npublic; this section focuses on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming outputs is related to the background and expertise of the AI red team \nitself. Demographically and interdisciplinarily diverse AI red teams can be used to identify flaws in the \nvarying contexts where GAI will be used. For best results, AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural aspects within the deployment context. AI red-teaming results \nshould be given additional analysis before they are incorporated into organizational governance and \ndecision making, policy and procedural updates, and AI risk management efforts. \nVarious types of AI red-teaming may be appropriate, depending on the use case: \n• \nGeneral Public: Performed by general users (not necessarily AI or technical experts) who are \nexpected to use the model or interact with its outputs, and who bring their own lived \nexperiences and perspectives to the task of AI red-teaming. These individuals may have been \nprovided instructions and material to complete tasks which may elicit harmful model behaviors. \nThis type of exercise can be more effective with large groups of AI red-teamers. \n• \nExpert: Performed by specialists with expertise in the domain or specific AI red-teaming context \nof use (e.g., medicine, biotech, cybersecurity). \n• \nCombination: In scenarios when it is difficult to identify and recruit specialists with sufficient \ndomain and contextual expertise, AI red-teaming exercises may leverage both expert and \n']",The answer to given question is not present in context,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 53, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What concerns arise from companies using surveillance software to track employee discussions about union activity?,"["" \n \n \n \nDATA PRIVACY \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. 
\n•\nAn insurer might collect data from a person's social media presence as part of deciding what life\ninsurance rates they should be offered.64\n•\nA data broker harvested large amounts of personal data and then suffered a breach, exposing hundreds of\nthousands of people to potential identity theft. 65\n•\nA local public housing authority installed a facial recognition system at the entrance to housing complexes to\nassist law enforcement with identifying individuals viewed via camera when police reports are filed, leading\nthe community, both those living in the housing complex and not, to have videos of them sent to the local\npolice department and made available for scanning by its facial recognition software.66\n•\nCompanies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32\n""]","Concerns arise from companies using surveillance software to track employee discussions about union activity, as it leads to the surveillance of individual employees and allows companies to surreptitiously intervene in discussions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 31, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of the cross-sectoral profile in the context of the AI Risk Management Framework for Generative AI?,"[' \n1 \n1. \nIntroduction \nThis document is a cross-sectoral profile of and companion resource for the AI Risk Management \nFramework (AI RMF 1.0) for Generative AI,1 pursuant to President Biden’s Executive Order (EO) 14110 on \nSafe, Secure, and Trustworthy Artificial Intelligence.2 The AI RMF was released in January 2023, and is \nintended for voluntary use and to improve the ability of organizations to incorporate trustworthiness \nconsiderations into the design, development, use, and evaluation of AI products, services, and systems. \nA profile is an implementation of the AI RMF functions, categories, and subcategories for a specific \nsetting, application, or technology – in this case, Generative AI (GAI) – based on the requirements, risk \ntolerance, and resources of the Framework user. AI RMF profiles assist organizations in deciding how to \nbest manage AI risks in a manner that is well-aligned with their goals, considers legal/regulatory \nrequirements and best practices, and reflects risk management priorities. Consistent with other AI RMF \nprofiles, this profile offers insights into how risk can be managed across various stages of the AI lifecycle \nand for GAI as a technology. \nAs GAI covers risks of models or applications that can be used across use cases or sectors, this document \nis an AI RMF cross-sectoral profile. Cross-sectoral profiles can be used to govern, map, measure, and \nmanage risks associated with activities or business processes common across sectors, such as the use of \nlarge language models (LLMs), cloud-based services, or acquisition. \nThis document defines risks that are novel to or exacerbated by the use of GAI. 
After introducing and \ndescribing these risks, the document provides a set of suggested actions to help organizations govern, \nmap, measure, and manage these risks. \n \n \n1 EO 14110 defines Generative AI as “the class of AI models that emulate the structure and characteristics of input \ndata in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital \ncontent.” While not all GAI is derived from foundation models, for purposes of this document, GAI generally refers \nto generative foundation models. The foundation model subcategory of “dual-use foundation models” is defined by \nEO 14110 as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of \nbillions of parameters; is applicable across a wide range of contexts.” \n2 This profile was developed per Section 4.1(a)(i)(A) of EO 14110, which directs the Secretary of Commerce, acting \nthrough the Director of the National Institute of Standards and Technology (NIST), to develop a companion \nresource to the AI RMF, NIST AI 100–1, for generative AI. \n']","The purpose of the cross-sectoral profile in the context of the AI Risk Management Framework for Generative AI is to assist organizations in deciding how to best manage AI risks in a manner that aligns with their goals, considers legal/regulatory requirements and best practices, and reflects risk management priorities. It offers insights into how risk can be managed across various stages of the AI lifecycle and for Generative AI as a technology.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 4, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What measures are proposed in the Blueprint for an AI Bill of Rights to protect the rights of the American public?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n- \nUSING THIS TECHNICAL COMPANION\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, \nuse, and deployment of automated systems to protect the rights of the American public in the age of artificial \nintelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and \nprovides examples and concrete steps for communities, industry, governments, and others to take in order to \nbuild these protections into policy, practice, or the technological design process. \nTaken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help \nguard the American public against many of the potential and actual harms identified by researchers, technolo\xad\ngists, advocates, journalists, policymakers, and communities in the United States and around the world. This \ntechnical companion is intended to be used as a reference by people across many circumstances – anyone \nimpacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to \ngovern the use of an automated system. 
\nEach principle is accompanied by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \nThis section provides a brief summary of the problems that the principle seeks to address and protect against, including \nillustrative examples. \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems are meant to serve as a blueprint for the development of additional technical\nstandards and practices that should be tailored for particular sectors and contexts.\n• This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The \nexpectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing \nmonitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer \nconcrete directions for how those changes can be made. \n• Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can \nbe provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should \nbe made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law \nenforcement, or national security considerations may prevent public release. Where public reports are not possible, the \ninformation should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguard \ning individuals’ rights. These reporting expectations are important for transparency, so the American people can have\nconfidence that their rights, opportunities, and access as well as their expectations about technologies are respected. \n3\nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: \nThis section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. \nIt describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help \nprovide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these \nprocesses require the cooperation of and collaboration among industry, civil society, researchers, policymakers, \ntechnologists, and the public. \n14\n']","The Blueprint for an AI Bill of Rights proposes a set of five principles and associated practices to guide the design, use, and deployment of automated systems to protect the rights of the American public. 
It includes expectations for automated systems, practical steps for implementation, and emphasizes transparency through reporting to ensure that rights, opportunities, and access are respected.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 13, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the significance of the NSF Program on Fairness in Artificial Intelligence in collaboration with Amazon?,"[' \nENDNOTES\n96. National Science Foundation. NSF Program on Fairness in Artificial Intelligence in Collaboration\nwith Amazon (FAI). Accessed July 20, 2022.\nhttps://www.nsf.gov/pubs/2021/nsf21585/nsf21585.htm\n97. Kyle Wiggers. Automatic signature verification software threatens to disenfranchise U.S. voters.\nVentureBeat. Oct. 25, 2020.\nhttps://venturebeat.com/2020/10/25/automatic-signature-verification-software-threatens-to\xad\ndisenfranchise-u-s-voters/\n98. Ballotpedia. Cure period for absentee and mail-in ballots. Article retrieved Apr 18, 2022.\nhttps://ballotpedia.org/Cure_period_for_absentee_and_mail-in_ballots\n99. Larry Buchanan and Alicia Parlapiano. Two of these Mail Ballot Signatures are by the Same Person.\nWhich Ones? New York Times. Oct. 7, 2020.\nhttps://www.nytimes.com/interactive/2020/10/07/upshot/mail-voting-ballots-signature\xad\nmatching.html\n100. Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020.\nhttps://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/\n101. Andrew Kenney. \'I\'m shocked that they need to have a smartphone\': System for unemployment\nbenefits exposes digital divide. USA Today. May 2, 2021.\nhttps://www.usatoday.com/story/tech/news/2021/05/02/unemployment-benefits-system-leaving\xad\npeople-behind/4915248001/\n102. Allie Gross. UIA lawsuit shows how the state criminalizes the unemployed. Detroit Metro-Times.\nSep. 18, 2015.\nhttps://www.metrotimes.com/news/uia-lawsuit-shows-how-the-state-criminalizes-the\xad\nunemployed-2369412\n103. Maia Szalavitz. The Pain Was Unbearable. So Why Did Doctors Turn Her Away? Wired. Aug. 11,\n2021. https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/\n104. Spencer Soper. Fired by Bot at Amazon: ""It\'s You Against the Machine"". Bloomberg, Jun. 28, 2021.\nhttps://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine\xad\nmanagers-and-workers-are-losing-out\n105. Definitions of ‘equity’ and ‘underserved communities’ can be found in the Definitions section of\nthis document as well as in Executive Order on Advancing Racial Equity and Support for Underserved\nCommunities Through the Federal Government:\nhttps://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order\xad\nadvancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/\n106. HealthCare.gov. Navigator - HealthCare.gov Glossary. 
Accessed May 2, 2022.\nhttps://www.healthcare.gov/glossary/navigator/\n72\n']",The answer to given question is not present in context,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 71, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures should be taken to demonstrate the safety and effectiveness of automated systems?,"[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. 
\nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20\n']","To demonstrate the safety and effectiveness of automated systems, the following measures should be taken: 1. Independent evaluation should be allowed, enabling access for independent evaluators such as researchers and auditors to the system and associated data. 2. Reporting should be regularly updated, including an overview of the system, data used for training, risk assessments, performance testing results, and ongoing monitoring procedures. Reports should be provided in plain language and machine-readable formats.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of the impact documentation process in the context of GAI systems?,"[' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. 
\nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. \nInformation Security \n']","The purpose of the impact documentation process in the context of GAI systems is to document the risks and potential impacts of the AI technology designed, developed, deployed, evaluated, and used, and to communicate about these impacts more broadly.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What should be assessed to understand data privacy risks in the use of training data?,"["" \n27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. 
\nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. \nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\nUsers, Operation and Monitoring \n \n"", ' \n25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. \nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. 
\nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring \n \n']","To understand data privacy risks in the use of training data, it is important to conduct appropriate diligence on training data use to assess intellectual property and privacy risks, including examining whether the use of proprietary or sensitive training data is consistent with applicable laws.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 28, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What actions were taken by the New York state legislature regarding biometric identifying technology in schools?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nDATA PRIVACY \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe Privacy Act of 1974 requires privacy protections for personal information in federal \nrecords systems, including limits on data retention, and also provides individuals a general \nright to access and correct their data. Among other things, the Privacy Act limits the storage of individual \ninformation in federal systems of records, illustrating the principle of limiting the scope of data retention. Under \nthe Privacy Act, federal agencies may only retain data about an individual that is “relevant and necessary” to \naccomplish an agency’s statutory purpose or to comply with an Executive Order of the President. The law allows \nfor individuals to be able to access any of their individual information stored in a federal system of records, if not \nincluded under one of the systems of records exempted pursuant to the Privacy Act. In these cases, federal agen\xad\ncies must provide a method for an individual to determine if their personal information is stored in a particular \nsystem of records, and must provide procedures for an individual to contest the contents of a record about them. \nFurther, the Privacy Act allows for a cause of action for an individual to seek legal relief if a federal agency does not \ncomply with the Privacy Act’s requirements. Among other things, a court may order a federal agency to amend or \ncorrect an individual’s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, \nor incomplete record results in an adverse determination about an individual’s “qualifications, character, rights, … \nopportunities…, or benefits.” \nNIST’s Privacy Framework provides a comprehensive, detailed and actionable approach for \norganizations to manage privacy risks. The NIST Framework gives organizations ways to identify and \ncommunicate their privacy risks and goals to support ethical decision-making in system, product, and service \ndesign or deployment, as well as the measures they are taking to demonstrate compliance with applicable laws \nor regulations.
It has been voluntarily adopted by organizations across many different sectors around the world.78\nA school board’s attempt to surveil public school students—undertaken without \nadequate community input—sparked a state-wide biometrics moratorium.79 Reacting to a plan in \nthe city of Lockport, New York, the state’s legislature banned the use of facial recognition systems and other \n“biometric identifying technology” in schools until July 1, 2022.80 The law additionally requires that a report on \nthe privacy, civil rights, and civil liberties implications of the use of such technologies be issued before \nbiometric identification technologies can be used in New York schools. \nFederal law requires employers, and any consultants they may retain, to report the costs \nof surveilling employees in the context of a labor dispute, providing a transparency \nmechanism to help protect worker organizing. Employers engaging in workplace surveillance ""where \nan object there-of, directly or indirectly, is […] to obtain information concerning the activities of employees or a \nlabor organization in connection with a labor dispute"" must report expenditures relating to this surveillance to \nthe Department of Labor Office of Labor-Management Standards, and consultants who employers retain for \nthese purposes must also file reports regarding their activities.81\nPrivacy choices on smartphones show that when technologies are well designed, privacy \nand data agency can be meaningful and not overwhelming. These choices—such as contextual, timely \nalerts about location tracking—are brief, direct, and use-specific. Many of the expectations listed here for \nprivacy by design and use-specific consent mirror those distributed to developers as best practices when \ndeveloping for smart phone devices,82 such as being transparent about how user data will be used, asking for app \npermissions during their use so that the use-context will be clear to users, and ensuring that the app will still \nwork if users deny (or later revoke) some permissions. \n39\n']","The New York state legislature banned the use of facial recognition systems and other biometric identifying technology in schools until July 1, 2022. Additionally, the law requires that a report on the privacy, civil rights, and civil liberties implications of the use of such technologies be issued before biometric identification technologies can be used in New York schools.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 38, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the mental health impacts associated with increased use of surveillance technologies in schools and workplaces?,"[' \n \n \n \n \n \nDATA PRIVACY \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nData privacy is a foundational and cross-cutting principle required for achieving all others in this framework. 
Surveil\xad\nlance and data collection, sharing, use, and reuse now sit at the foundation of business models across many industries, \nwith more and more companies tracking the behavior of the American public, building individual profiles based on \nthis data, and using this granular-level information as input into automated systems that further track, profile, and \nimpact the American public. Government agencies, particularly law enforcement agencies, also use and help develop \na variety of technologies that enhance and expand surveillance capabilities, which similarly collect data used as input \ninto other automated systems that directly impact people’s lives. Federal law has not grown to address the expanding \nscale of private data collection, or of the ability of governments at all levels to access that data and leverage the means \nof private collection. \nMeanwhile, members of the American public are often unable to access their personal data or make critical decisions \nabout its collection and use. Data brokers frequently collect consumer data from numerous sources without \nconsumers’ permission or knowledge.60 Moreover, there is a risk that inaccurate and faulty data can be used to \nmake decisions about their lives, such as whether they will qualify for a loan or get a job. Use of surveillance \ntechnologies has increased in schools and workplaces, and, when coupled with consequential management and \nevaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and \na reduced ability to use analytical reasoning.61 Documented patterns show that personal data is being aggregated by \ndata brokers to profile communities in harmful ways.62 The impact of all this data harvesting is corrosive, \nbreeding distrust, anxiety, and other mental health problems; chilling speech, protest, and worker organizing; and \nthreatening our democratic process.63 The American public should be protected from these growing risks. \nIncreasingly, some companies are taking these concerns seriously and integrating mechanisms to protect consumer \nprivacy into their products by design and by default, including by minimizing the data they collect, communicating \ncollection and use clearly, and improving security practices. Federal government surveillance and other collection and \nuse of data is governed by legal protections that help to protect civil liberties and provide for limits on data retention \nin some cases. Many states have also enacted consumer data privacy protection regimes to address some of these \nharms. \nHowever, these are not yet standard practices, and the United States lacks a comprehensive statutory or regulatory \nframework governing the rights of the public when it comes to personal data. While a patchwork of laws exists to \nguide the collection and use of personal data in specific contexts, including health, employment, education, and credit, \nit can be unclear how these laws apply in other contexts and in an increasingly automated society. Additional protec\xad\ntions would assure the American public that the automated systems they use are not monitoring their activities, \ncollecting information on their lives, or otherwise surveilling them without context-specific consent or legal authori\xad\nty. 
\n31\n']","The mental health impacts associated with increased use of surveillance technologies in schools and workplaces include lowered self-confidence, anxiety, depression, and a reduced ability to use analytical reasoning.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 30, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the role of AI actors in the AI system lifecycle?,"[' \n13 \n• \nNot every suggested action applies to every AI Actor14 or is relevant to every AI Actor Task. For \nexample, suggested actions relevant to GAI developers may not be relevant to GAI deployers. \nThe applicability of suggested actions to relevant AI actors should be determined based on \norganizational considerations and their unique uses of GAI systems. \nEach table of suggested actions includes: \n• \nAction ID: Each Action ID corresponds to the relevant AI RMF function and subcategory (e.g., GV-\n1.1-001 corresponds to the first suggested action for Govern 1.1, GV-1.1-002 corresponds to the \nsecond suggested action for Govern 1.1). AI RMF functions are tagged as follows: GV = Govern; \nMP = Map; MS = Measure; MG = Manage. \n• \nSuggested Action: Steps an organization or AI actor can take to manage GAI risks. \n• \nGAI Risks: Tags linking suggested actions with relevant GAI risks. \n• \nAI Actor Tasks: Pertinent AI Actor Tasks for each subcategory. Not every AI Actor Task listed will \napply to every suggested action in the subcategory (i.e., some apply to AI development and \nothers apply to AI deployment). \nThe tables below begin with the AI RMF subcategory, shaded in blue, followed by suggested actions. \n \nGOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.1-001 Align GAI development and use with applicable laws and regulations, including \nthose related to data privacy, copyright and intellectual property law. \nData Privacy; Harmful Bias and \nHomogenization; Intellectual \nProperty \nAI Actor Tasks: Governance and Oversight \n \n \n \n14 AI Actors are defined by the OECD as “those who play an active role in the AI system lifecycle, including \norganizations and individuals that deploy or operate AI.” See Appendix A of the AI RMF for additional descriptions \nof AI Actors and AI Actor Tasks. 
\n \n \n']","AI actors play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 16, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the significance of human-AI configuration in ensuring the adequacy of GAI system user instructions?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']",The significance of human-AI configuration in ensuring the adequacy of GAI system user instructions is highlighted in the context where it mentions verifying the adequacy of GAI system user instructions through user testing. 
This suggests that human-AI configuration plays a crucial role in assessing and improving the effectiveness of user instructions.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of the AI Safety Institute established by NIST?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. 
\n \n']","The purpose of the AI Safety Institute established by NIST is to continue efforts to build the science necessary for safe, secure, and trustworthy development and use of artificial intelligence (AI), in alignment with the 2023 Executive Order on Safe, Secure, and Trustworthy AI.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What criteria does the framework use to determine which automated systems are in scope for the AI Bill of Rights?,"["" \n \n \nSECTION TITLE\nApplying The Blueprint for an AI Bill of Rights \nWhile many of the concerns addressed in this framework derive from the use of AI, the technical \ncapabilities and specific definitions of such systems change with the speed of innovation, and the potential \nharms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-\npart test to determine what systems are in scope. This framework applies to (1) automated systems that (2) \nhave the potential to meaningfully impact the American public’s rights, opportunities, or access to \ncritical resources or services. These rights, opportunities, and access to critical resources of services should \nbe enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in \nour lives. \nThis framework describes protections that should be applied with respect to all automated systems that \nhave the potential to meaningfully impact individuals' or communities' exercise of: \nRIGHTS, OPPORTUNITIES, OR ACCESS\nCivil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimi\xad\nnation, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both \npublic and private sector contexts; \nEqual opportunities, including equitable access to education, housing, credit, employment, and other \nprograms; or, \nAccess to critical resources or services, such as healthcare, financial services, safety, social services, \nnon-deceptive information about goods and services, and government benefits. \nA list of examples of automated systems for which these principles should be considered is provided in the \nAppendix. The Technical Companion, which follows, offers supportive guidance for any person or entity that \ncreates, deploys, or oversees automated systems. \nConsidered together, the five principles and associated practices of the Blueprint for an AI Bill of \nRights form an overlapping set of backstops against potential harms. This purposefully overlapping \nframework, when taken as a whole, forms a blueprint to help protect the public from harm. \nThe measures taken to realize the vision set forward in this framework should be proportionate \nwith the extent and nature of the harm, or risk of harm, to people's rights, opportunities, and \naccess. 
\nRELATIONSHIP TO EXISTING LAW AND POLICY\nThe Blueprint for an AI Bill of Rights is an exercise in envisioning a future where the American public is \nprotected from the potential harms, and can fully enjoy the benefits, of automated systems. It describes princi\xad\nples that can help ensure these protections. Some of these protections are already required by the U.S. Constitu\xad\ntion or implemented under existing U.S. laws. For example, government surveillance, and data search and \nseizure are subject to legal requirements and judicial oversight. There are Constitutional requirements for \nhuman review of criminal investigative matters and statutory requirements for judicial review. Civil rights laws \nprotect the American people against discrimination. \n8\n""]","The framework uses a two-part test to determine which automated systems are in scope for the AI Bill of Rights: (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 7, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What procedures should be developed and updated in incident response and recovery plans for GAI systems when a previously unknown risk is identified?,"[' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. 
\nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration \n']","Develop and update GAI system incident response and recovery plans and procedures to address the following: Review and maintenance of policies and procedures to account for newly encountered uses; Review and maintenance of policies and procedures for detection of unanticipated uses; Verify response and recovery plans account for the GAI system value chain; Verify response and recovery plans are updated for and include necessary details to communicate with downstream GAI system Actors: Points-of-Contact (POC), Contact information, notification format.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of structured human feedback exercises in the context of GAI risk measurement and management?,"[' \n29 \nMS-1.1-006 \nImplement continuous monitoring of GAI system impacts to identify whether GAI \noutputs are equitable across various sub-populations. Seek active and direct \nfeedback from affected communities via structured feedback mechanisms or red-\nteaming to monitor and improve outputs. \nHarmful Bias and Homogenization \nMS-1.1-007 \nEvaluate the quality and integrity of data used in training and the provenance of \nAI-generated content, for example by employing techniques like chaos \nengineering and seeking stakeholder feedback. \nInformation Integrity \nMS-1.1-008 \nDefine use cases, contexts of use, capabilities, and negative impacts where \nstructured human feedback exercises, e.g., GAI red-teaming, would be most \nbeneficial for GAI risk measurement and management based on the context of \nuse. 
\nHarmful Bias and \nHomogenization; CBRN \nInformation or Capabilities \nMS-1.1-009 \nTrack and document risks or opportunities related to all GAI risks that cannot be \nmeasured quantitatively, including explanations as to why some risks cannot be \nmeasured (e.g., due to technological limitations, resource constraints, or \ntrustworthy considerations). Include unmeasured risks in marginal risks. \nInformation Integrity \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are \ninvolved in regular assessments and updates. Domain experts, users, AI Actors external to the team that developed or deployed the \nAI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.3-001 \nDefine relevant groups of interest (e.g., demographic groups, subject matter \nexperts, experience with GAI technology) within the context of use as part of \nplans for gathering structured public feedback. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.3-002 \nEngage in internal and external evaluations, GAI red-teaming, impact \nassessments, or other structured human feedback exercises in consultation \nwith representative AI Actors with expertise and familiarity in the context of \nuse, and/or who are representative of the populations associated with the \ncontext of use. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.3-003 \nVerify those conducting structured human feedback exercises are not directly \ninvolved in system development tasks for the same GAI model. \nHuman-AI Configuration; Data \nPrivacy \nAI Actor Tasks: AI Deployment, AI Development, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, \nEnd-Users, Operation and Monitoring, TEVV \n \n']","The purpose of structured human feedback exercises in the context of GAI risk measurement and management is to define use cases, contexts of use, capabilities, and negative impacts where these exercises would be most beneficial. They are aimed at monitoring and improving outputs, evaluating the quality and integrity of data used in training, and tracking risks or opportunities related to GAI that cannot be measured quantitatively.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 32, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the significance of human-AI configuration in managing GAI risks and ensuring information integrity?,"[' \n25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. 
\nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring \n \n']","The significance of human-AI configuration in managing GAI risks and ensuring information integrity lies in its role in evaluating content lineage and origin, adapting training programs for digital content transparency, and delineating human proficiency tests from GAI capabilities. 
It also involves continual monitoring of human-GAI configurations and engaging end-users in prototyping and testing activities to address various scenarios, including crisis situations and ethically sensitive contexts.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 28, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What criteria are used to measure AI system performance or assurance in deployment settings?,"[' \n30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are \nrepresentative of the relevant population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and manage statistical biases related to GAI content provenance through \ntechniques such as re-sampling, re-weighting, or adversarial training. \nInformation Integrity; Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument how content provenance data is tracked and how that data interacts \nwith privacy and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects; Leveraging privacy output filters; Removing any personally \nidentifiable information (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration; Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation or revoke their \nconsent for present or future use of their data in GAI applications. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004 \nUse techniques such as anonymization, differential privacy or other privacy-\nenhancing technologies to minimize the risks associated with linking AI-generated \ncontent back to individual human subjects. \nData Privacy; Human-AI \nConfiguration \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for \nconditions similar to deployment setting(s). Measures are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a \nmodel for fine tuning or enhancement with retrieval-augmented generation. \nInformation Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities using empirically validated methods. \nConfabulation; Information \nSecurity \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those \nwith system release approval authority. \nHuman-AI Configuration \n']",AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). 
Measures are documented.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 33, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are some suggested actions to address GAI risks in AI systems?,"[' \n35 \nMEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as \nidentified in the MAP function – to inform responsible use and governance. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.9-001 \nApply and document ML explanation results such as: Analysis of embeddings, \nCounterfactual prompts, Gradient-based attributions, Model \ncompression/surrogate models, Occlusion/term reduction. \nConfabulation \nMS-2.9-002 \nDocument GAI model details including: Proposed use and organizational value; \nAssumptions and limitations, Data collection methodologies; Data provenance; \nData quality; Model architecture (e.g., convolutional neural network, \ntransformers, etc.); Optimization objectives; Training algorithms; RLHF \napproaches; Fine-tuning or retrieval-augmented generation approaches; \nEvaluation data; Ethical considerations; Legal and regulatory requirements. \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.10-001 \nConduct AI red-teaming to assess issues such as: Outputting of training data \nsamples, and subsequent reverse engineering, model extraction, and \nmembership inference risks; Revealing biometric, confidential, copyrighted, \nlicensed, patented, personal, proprietary, sensitive, or trade-marked information; \nTracking or revealing location information of users or members of training \ndatasets. \nHuman-AI Configuration; \nInformation Integrity; Intellectual \nProperty \nMS-2.10-002 \nEngage directly with end-users and other stakeholders to understand their \nexpectations and concerns regarding content provenance. Use this feedback to \nguide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n']","Some suggested actions to address GAI risks in AI systems include: applying and documenting ML explanation results such as analysis of embeddings, counterfactual prompts, gradient-based attributions, model compression/surrogate models, and occlusion/term reduction. 
Additionally, documenting GAI model details including proposed use and organizational value, assumptions and limitations, data collection methodologies, data provenance, data quality, model architecture, optimization objectives, training algorithms, RLHF approaches, fine-tuning or retrieval-augmented generation approaches, evaluation data, ethical considerations, and legal and regulatory requirements.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 38, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do GAI systems play in augmenting cybersecurity attacks?,"[' \n10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations. \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. \nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. 
Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published. \n']","GAI systems may augment cybersecurity attacks by advancing offensive cyber capabilities such as hacking, malware, and phishing. Reports indicate that large language models (LLMs) can discover vulnerabilities in systems and write code to exploit them. Sophisticated threat actors might develop GAI-powered security co-pilots to inform attackers on how to evade threat detection and escalate privileges after gaining system access.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 13, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What precautions should be taken when using derived data sources in automated systems?,"[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. 
Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20\n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. 
In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy. \nReporting. Entities responsible for the development or use of automated systems should provide \nreporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who \nperforms the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in \nresponse to the assessment. This algorithmic impact assessment should include at least: the results of any \nconsultation, design stage equity assessments (potentially including qualitative analysis), accessibility \ndesigns and testing, disparity testing, document any remaining disparities, and detail any mitigation \nimplementation and assessments. This algorithmic impact assessment should be made public whenever \npossible. Reporting should be provided in a clear and machine-readable manner using plain language to \nallow for more straightforward public accountability. \n28\nAlgorithmic \nDiscrimination \nProtections \n']","Precautions that should be taken when using derived data sources in automated systems include careful tracking and validation of derived data, as it is viewed as potentially high-risk and may lead to feedback loops, compounded harm, or inaccurate results. Such data should be validated against the risk of collateral consequences.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 27, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the implications of the lack of explanation for decisions made by automated systems?,"[' \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. 
\n•\nA predictive policing system claimed to identify individuals at greatest risk to commit or become the victim of\ngun violence (based on automated analysis of social ties to gang members, criminal histories, previous experi\xad\nences of gun violence, and other factors) and led to individuals being placed on a watch list with no\nexplanation or public transparency regarding how the system came to its conclusions.85 Both police and\nthe public deserve to understand why and how such a system is making these determinations.\n•\nA system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry\nerrors and other system flaws. These flaws were only revealed when an explanation of the system\nwas demanded and produced.86 The lack of an explanation made it harder for errors to be corrected in a\ntimely manner.\n42\n', "" \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nAutomated systems now determine opportunities, from employment to credit, and directly shape the American \npublic’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this \nexpansive impact is not always visible. An applicant might not know whether a person rejected their resume or a \nhiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge deny\xad\ning their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting \ndecisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. \nNotice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonable\xad\nness of a recommendation before enacting it. \nIn order to guard against potential harms, the American public needs to know if an automated system is being used. \nClear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Like\xad\nwise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a \nparticular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, \nunaccountable, whether by design or by omission. These factors can make explanations both more challenging and \nmore important, and should not be used as a pretext to avoid explaining important decisions to the people impacted \nby those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline \nrequirement. \nProviding notice has long been a standard practice, and in many cases is a legal requirement, when, for example, \nmaking a video recording of someone (outside of a law enforcement or national security context). In some cases, such \nas credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the \nprocess of explaining such systems are under active research and improvement and such explanations can take many \nforms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory \nsystems that can help the public better understand decisions that impact them. 
\nWhile notice and explanation requirements are already in place in some sectors or situations, the American public \ndeserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights, \nopportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the \nvalidity and reasonable use of automated systems. \n•\nA lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home\nhealth-care assistance couldn't determine why, especially since the decision went against historical access\npractices. In a court hearing, the lawyer learned from a witness that the state in which the older client\nlived had recently adopted a new algorithm to determine eligibility.83 The lack of a timely explanation made it\nharder to understand and contest the decision.\n•\nA formal child welfare investigation is opened against a parent based on an algorithm and without the parent\never being notified that data was being collected and used as part of an algorithmic child maltreatment\nrisk assessment.84 The lack of notice or an explanation makes it harder for those performing child\nmaltreatment assessments to validate the risk assessment and denies parents knowledge that could help them\ncontest a decision.\n41\n""]","The lack of explanation for decisions made by automated systems can lead to several implications, including the inability for individuals to understand or contest decisions that affect their lives. For instance, applicants may not know why their resumes were rejected, defendants may be unaware if their bail decisions are influenced by an automated system labeling them as 'high risk', and individuals may face difficulties in correcting errors or contesting decisions due to a lack of transparency. This opacity can result in unaccountable decision-making processes and can hinder the public's ability to trust the validity and reasonable use of automated systems.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 41, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 40, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What should users be notified about regarding automated systems that impact them?,"[' \nYou should know that an automated system is being used, \nand understand how and why it contributes to outcomes \nthat impact you. 
Designers, developers, and deployers of automat\xad\ned systems should provide generally accessible plain language docu\xad\nmentation including clear descriptions of the overall system func\xad\ntioning and the role automation plays, notice that such systems are in \nuse, the individual or organization responsible for the system, and ex\xad\nplanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality chang\xad\nes. You should know how and why an outcome impacting you was de\xad\ntermined by an automated system, including when the automated \nsystem is not the sole input determining the outcome. Automated \nsystems should provide explanations that are technically valid, \nmeaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible. \nNOTICE AND EXPLANATION\n40\n']","Users should be notified about the use of automated systems, the individual or organization responsible for the system, significant use case or key functionality changes, and how and why an outcome impacting them was determined by the automated system.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 39, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the key considerations regarding data privacy in the context of the AI Bill of Rights?,"['TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \nSAFE AND EFFECTIVE SYSTEMS\n \nALGORITHMIC DISCRIMINATION PROTECTIONS\n \nDATA PRIVACY\n \nNOTICE AND EXPLANATION\n \nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nAPPENDIX\n \nEXAMPLES OF AUTOMATED SYSTEMS\n \nLISTENING TO THE AMERICAN PEOPLE\nENDNOTES \n12\n14\n15\n23\n30\n40\n46\n53\n53\n55\n63\n13\n']",The answer to given question is not present in context,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 12, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures should be taken during disparity assessment of automated systems to ensure inclusivity and fairness?,"["" \n \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for 
particular sectors and contexts. \nEnsuring accessibility during design, development, and deployment. Systems should be \ndesigned, developed, and deployed by organizations in ways that ensure accessibility to people with disabilities. This should include consideration of a wide variety of disabilities, adherence to relevant accessibility \nstandards, and user experience research both before and after deployment to identify and address any accessibility barriers to the use or effectiveness of the automated system. \nDisparity assessment. Automated systems should be tested using a broad set of measures to assess whether the system components, both in pre-deployment testing and in-context deployment, produce disparities. \nThe demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex \n(including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. The broad set of measures assessed should include demographic performance measures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity \nassessment should be separated from data used for the automated system and privacy protections should be \ninstituted; in some cases it may make sense to perform such assessment using a data sample. For every \ninstance where the deployed automated system leads to different treatment or impacts disfavoring the identified groups, the entity governing, implementing, or using the system should document the disparity and a \njustification for any continued use of the system. \nDisparity mitigation. When a disparity assessment identifies a disparity against an assessed group, it may \nbe appropriate to take steps to mitigate or eliminate the disparity. In some cases, mitigation or elimination of \nthe disparity may be required by law. \nDisparities that have the potential to lead to algorithmic \ndiscrimination, cause meaningful harm, or violate equity49 goals should be mitigated. When designing and \nevaluating an automated system, steps should be taken to evaluate multiple models and select the one that \nhas the least adverse impact, modify data input choices, or otherwise identify a system with fewer \ndisparities. If adequate mitigation of the disparity is not possible, then the use of the automated system \nshould be reconsidered. One of the considerations in whether to use the system should be the validity of any \ntarget measure; unobservable targets may result in the inappropriate use of proxies. Meeting these \nstandards may require instituting mitigation procedures and other protective measures to address \nalgorithmic discrimination, avoid meaningful harm, and achieve equity goals. \nOngoing monitoring and mitigation. Automated systems should be regularly monitored to assess algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not \naccounted for during the pre-deployment testing, changes to the system after deployment, or changes to the \ncontext of use or associated data. Monitoring and disparity assessment should be performed by the entity \ndeploying or using the automated system to examine whether the system has led to algorithmic discrimination when deployed. 
This assessment should be performed regularly and whenever a pattern of unusual \nresults is occurring. It can be performed using a variety of approaches, taking into account whether and how \ndemographic information of impacted people is available, for example via testing with a sample of users or via \nqualitative user experience research. Riskier and higher-impact systems should be monitored and assessed \nmore frequently. Outcomes of this assessment should include additional disparity mitigation, if needed, or \nfallback to earlier procedures in the case that equity standards are no longer met and can't be mitigated, and \nprior mechanisms provide better adherence to equity standards. \n27\nAlgorithmic \nDiscrimination \nProtections \n""]","During disparity assessment of automated systems, measures should include testing using a broad set of measures to assess whether the system components produce disparities. The demographics of the assessed groups should be as inclusive as possible, covering aspects such as race, color, ethnicity, sex, religion, age, national origin, disability, and other classifications protected by law. The assessment should include demographic performance measures, overall and subgroup parity assessment, and calibration. Additionally, demographic data collected for disparity assessment should be separated from data used for the automated system, and privacy protections should be instituted.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 26, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the potential risks associated with generative AI models in the context of disinformation and cybersecurity?,"[' \n10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations. \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. \nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. 
Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published. \n']","The potential risks associated with generative AI models in the context of disinformation include the ease of producing or disseminating false, inaccurate, or misleading content at scale, both unintentionally (misinformation) and deliberately (disinformation). GAI systems can enable malicious actors to create targeted disinformation campaigns, generate realistic deepfakes, and produce compelling imagery and propaganda. In terms of cybersecurity, GAI models may lower barriers for offensive capabilities, expand the attack surface, and assist in discovering vulnerabilities and writing exploit code, thereby augmenting cybersecurity attacks such as hacking, malware, and phishing.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 13, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What topics were discussed regarding potential harms and oversight in the development of the AI Bill of Rights?,"[""APPENDIX\n• OSTP conducted meetings with a variety of stakeholders in the private sector and civil society. 
Some of these\nmeetings were specifically focused on providing ideas related to the development of the Blueprint for an AI\nBill of Rights while others provided useful general context on the positive use cases, potential harms, and/or\noversight possibilities for these technologies. Participants in these conversations from the private sector and\ncivil society included:\nAdobe \nAmerican Civil Liberties Union \n(ACLU) \nThe Aspen Commission on \nInformation Disorder \nThe Awood Center \nThe Australian Human Rights \nCommission \nBiometrics Institute \nThe Brookings Institute \nBSA | The Software Alliance \nCantellus Group \nCenter for American Progress \nCenter for Democracy and \nTechnology \nCenter on Privacy and Technology \nat Georgetown Law \nChristiana Care \nColor of Change \nCoworker \nData Robot \nData Trust Alliance \nData and Society Research Institute \nDeepmind \nEdSAFE AI Alliance \nElectronic Privacy Information \nCenter (EPIC) \nEncode Justice \nEqual AI \nGoogle \nHitachi's AI Policy Committee \nThe Innocence Project \nInstitute of Electrical and \nElectronics Engineers (IEEE) \nIntuit \nLawyers Committee for Civil Rights \nUnder Law \nLegal Aid Society \nThe Leadership Conference on \nCivil and Human Rights \nMeta \nMicrosoft \nThe MIT AI Policy Forum \nMovement Alliance Project \nThe National Association of \nCriminal Defense Lawyers \nO’Neil Risk Consulting & \nAlgorithmic Auditing \nThe Partnership on AI \nPinterest \nThe Plaintext Group \npymetrics \nSAP \nThe Security Industry Association \nSoftware and Information Industry \nAssociation (SIIA) \nSpecial Competitive Studies Project \nThorn \nUnited for Respect \nUniversity of California at Berkeley \nCitris Policy Lab \nUniversity of California at Berkeley \nLabor Center \nUnfinished/Project Liberty \nUpturn \nUS Chamber of Commerce \nUS Chamber of Commerce \nTechnology Engagement Center \nA.I. Working Group\nVibrent Health\nWarehouse Worker Resource\nCenter\nWaymap\n62\n""]","The context mentions that some meetings focused on providing ideas related to the development of the Blueprint for an AI Bill of Rights, and others provided useful general context on the positive use cases, potential harms, and/or oversight possibilities for these technologies. However, specific topics discussed regarding potential harms and oversight are not detailed in the provided context.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 61, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures should be in place to ensure human alternatives and consideration in the use of automated systems?,"['You should be able to opt out, where appropriate, and \nhave access to a person who can quickly consider and \nremedy problems you encounter. You should be able to opt \nout from automated systems in favor of a human alternative, where \nappropriate. Appropriateness should be determined based on rea\xad\nsonable expectations in a given context and with a focus on ensuring \nbroad accessibility and protecting the public from especially harm\xad\nful impacts. 
In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or \ncontest its impacts on you. Human consideration and fallback \nshould be accessible, equitable, effective, maintained, accompanied \nby appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended \nuse within sensitive domains, including, but not limited to, criminal \njustice, employment, education, and health, should additionally be \ntailored to the purpose, provide meaningful access for oversight, \ninclude training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. \nReporting that includes a description of these human governance \nprocesses and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible. \nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\n46\n']","Measures to ensure human alternatives and consideration in the use of automated systems include the ability to opt out from automated systems in favor of a human alternative where appropriate, access to timely human consideration and remedy through a fallback and escalation process if an automated system fails, and ensuring that human consideration and fallback are accessible, equitable, effective, and maintained. Additionally, automated systems in sensitive domains should be tailored to their purpose, provide meaningful access for oversight, include training for people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 45, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures should be taken to ensure that automated systems are safe and effective?,"[' \n \n \nSAFE AND EFFECTIVE SYSTEMS \nYou should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation \nfrom diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems \nshould undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and \neffective based on their intended use, mitigation of unsafe outcomes \nincluding those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures \nshould include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed \nwith an intent or reasonably foreseeable possibility of endangering \nyour safety or the safety of your community. They should be designed \nto proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. 
You should be \nprotected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible. \n15\n']","To ensure that automated systems are safe and effective, measures should include consultation with diverse communities, stakeholders, and domain experts to identify concerns and risks. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring. These measures should demonstrate safety and effectiveness based on intended use, mitigate unsafe outcomes, and adhere to domain-specific standards. Additionally, independent evaluation and reporting should confirm safety and effectiveness, with results made public whenever possible.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 14, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What considerations should be taken into account when using automated systems in sensitive domains?,"[' \nSECTION TITLE\nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nYou should be able to opt out, where appropriate, and have access to a person who can quickly \nconsider and remedy problems you encounter. You should be able to opt out from automated systems in \nfavor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable \nexpectations in a given context and with a focus on ensuring broad accessibility and protecting the public from \nespecially harmful impacts. In some cases, a human or other alternative may be required by law. You should have \naccess to timely human consideration and remedy by a fallback and escalation process if an automated system \nfails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and \nfallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and \nshould not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting \nwith the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes \na description of these human governance processes and assessment of their timeliness, accessibility, outcomes, \nand effectiveness should be made public whenever possible. \nDefinitions for key terms in The Blueprint for an AI Bill of Rights can be found in Applying the Blueprint for an AI Bill of Rights. \nAccompanying analysis and tools for actualizing each principle can be found in the Technical Companion. 
\n7\n']","When using automated systems in sensitive domains, considerations should include tailoring the systems to their intended purpose, providing meaningful access for oversight, ensuring training for individuals interacting with the system, and incorporating human consideration for adverse or high-risk decisions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 6, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are some examples of harms caused by algorithmic bias in automated systems?,"[' \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nWhile technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can \nalso lead to its use in situations where it has not yet been proven to work—either at all or within an acceptable range \nof error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. \nAutomated systems sometimes rely on data from other systems, including historical data, allowing irrelevant information from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposefully designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended \nor unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consultation processes that may be applied when considering the use of new automated systems, and existing product development and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Innovators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections \nfrom unsafe outcomes. All can benefit from assurances that automated systems will be designed, tested, and consistently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harmful outcomes. \n•\nA proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. 
An independent study showed that the model predictions\nunderperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting\nlikelihood of sepsis.6\n•\nOn social media, Black people who quote and criticize racist messages have had their own speech silenced when\na platform’s automated moderation system failed to distinguish this “counter speech” (or other critique\nand journalism) from the original hateful messages to which such speech responded.7\n•\nA device originally developed to help people track and find lost items has been used as a tool by stalkers to track\nvictims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to\nprotect people from unwanted tracking by alerting people on their phones when a device is found to be moving\nwith them over time and also by having the device make an occasional noise, but not all phones are able\nto receive the notification and the devices remain a safety concern due to their misuse.8 \n•\nAn algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit,\neven if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions\nwere the result of a feedback loop generated from the reuse of data from previous arrests and algorithm\npredictions.9\n16\n']","Examples of harms caused by algorithmic bias in automated systems include: 1) A proprietary model predicting sepsis in hospitalized patients that underperformed and caused alert fatigue by falsely alerting likelihood of sepsis. 2) An automated moderation system on social media that silenced Black people who quoted and criticized racist messages, failing to distinguish their counter speech from the original hateful messages. 3) A device meant to help track lost items being misused by stalkers to track victims' locations, despite manufacturer attempts to implement safety measures. 4) An algorithm used for police deployment that sent officers to neighborhoods they regularly visited, rather than those with the highest crime rates, due to a feedback loop from previous data and predictions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 15, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the challenges associated with value chain and component integration in GAI systems?,"[' \n12 \nCSAM. Even when trained on “clean” data, increasingly capable GAI models can synthesize or produce \nsynthetic NCII and CSAM. Websites, mobile apps, and custom-built models that generate synthetic NCII \nhave moved from niche internet forums to mainstream, automated, and scaled online businesses. \nTrustworthy AI Characteristics: Fair with Harmful Bias Managed, Safe, Privacy Enhanced \n2.12. \nValue Chain and Component Integration \nGAI value chains involve many third-party components such as procured datasets, pre-trained models, \nand software libraries. These components might be improperly obtained or not properly vetted, leading \nto diminished transparency or accountability for downstream users. 
While this is a risk for traditional AI \nsystems and some other digital technologies, the risk is exacerbated for GAI due to the scale of the \ntraining data, which may be too large for humans to vet; the difficulty of training foundation models, \nwhich leads to extensive reuse of limited numbers of models; and the extent to which GAI may be \nintegrated into other devices and services. As GAI systems often involve many distinct third-party \ncomponents and data sources, it may be difficult to attribute issues in a system’s behavior to any one of \nthese sources. \nErrors in third-party GAI components can also have downstream impacts on accuracy and robustness. \nFor example, test datasets commonly used to benchmark or validate models can contain label errors. \nInaccuracies in these labels can impact the “stability” or robustness of these benchmarks, which many \nGAI practitioners consider during the model selection process. \nTrustworthy AI Characteristics: Accountable and Transparent, Explainable and Interpretable, Fair with \nHarmful Bias Managed, Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n3. \nSuggested Actions to Manage GAI Risks \nThe following suggested actions target risks unique to or exacerbated by GAI. \nIn addition to the suggested actions below, AI risk management activities and actions set forth in the AI \nRMF 1.0 and Playbook are already applicable for managing GAI risks. Organizations are encouraged to \napply the activities suggested in the AI RMF and its Playbook when managing the risk of GAI systems. \nImplementation of the suggested actions will vary depending on the type of risk, characteristics of GAI \nsystems, stage of the GAI lifecycle, and relevant AI actors involved. \nSuggested actions to manage GAI risks can be found in the tables below: \n• \nThe suggested actions are organized by relevant AI RMF subcategories to streamline these \nactivities alongside implementation of the AI RMF. \n• \nNot every subcategory of the AI RMF is included in this document.13 Suggested actions are \nlisted for only some subcategories. \n \n \n13 As this document was focused on the GAI PWG efforts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later. \n']","Challenges associated with value chain and component integration in GAI systems include the improper acquisition or vetting of third-party components such as datasets, pre-trained models, and software libraries, which can lead to diminished transparency and accountability. The scale of training data may be too large for humans to vet, and the difficulty of training foundation models can result in extensive reuse of a limited number of models. 
Additionally, it may be difficult to attribute issues in a system's behavior to any one of these sources, and errors in third-party GAI components can have downstream impacts on accuracy and robustness.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 15, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What considerations should be taken into account when determining model release approaches?,"[' \n40 \nMANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and \ndocumented. Risk response options can include mitigating, transferring, avoiding, or accepting. \nAction ID \nSuggested Action \nGAI Risks \nMG-1.3-001 \nDocument trade-offs, decision processes, and relevant measurement and \nfeedback results for risks that do not surpass organizational risk tolerance, for \nexample, in the context of model release: Consider different approaches for \nmodel release, for example, leveraging a staged release approach. Consider \nrelease approaches in the context of the model and its projected use cases. \nMitigate, transfer, or avoid risks that surpass organizational risk tolerances. \nInformation Security \nMG-1.3-002 \nMonitor the robustness and effectiveness of risk controls and mitigation plans \n(e.g., via red-teaming, field testing, participatory engagements, performance \nassessments, user feedback mechanisms). \nHuman-AI Configuration \nAI Actor Tasks: AI Development, AI Deployment, AI Impact Assessment, Operation and Monitoring \n \nMANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.2-001 \nCompare GAI system outputs against pre-defined organization risk tolerance, \nguidelines, and principles, and review and test AI-generated content against \nthese guidelines. \nCBRN Information or Capabilities; \nObscene, Degrading, and/or \nAbusive Content; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content \nMG-2.2-002 \nDocument training data sources to trace the origin and provenance of AI-\ngenerated content. \nInformation Integrity \nMG-2.2-003 \nEvaluate feedback loops between GAI system content provenance and human \nreviewers, and update where needed. Implement real-time monitoring systems \nto affirm that content provenance protocols remain effective. \nInformation Integrity \nMG-2.2-004 \nEvaluate GAI content and data for representational biases and employ \ntechniques such as re-sampling, re-ranking, or adversarial training to mitigate \nbiases in the generated content. \nInformation Security; Harmful Bias \nand Homogenization \nMG-2.2-005 \nEngage in due diligence to analyze GAI output for harmful content, potential \nmisinformation, and CBRN-related or NCII content. 
\nCBRN Information or Capabilities; \nObscene, Degrading, and/or \nAbusive Content; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content \n']","When determining model release approaches, considerations should include documenting trade-offs, decision processes, and relevant measurement and feedback results for risks that do not surpass organizational risk tolerance. Additionally, different approaches for model release should be considered, such as leveraging a staged release approach and evaluating release approaches in the context of the model and its projected use cases.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 43, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What considerations should be taken into account regarding intellectual property when conducting diligence on training data use?,"["" \n27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. 
\nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\nUsers, Operation and Monitoring \n \n""]","Considerations regarding intellectual property when conducting diligence on training data use include assessing risks related to intellectual property and privacy, and examining whether the use of proprietary or sensitive training data is consistent with applicable laws.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are some examples of automated systems that should be covered by the Blueprint for an AI Bill of Rights?,"[' \n \n \n \n \n \n \n \n \nAPPENDIX\nExamples of Automated Systems \nThe below examples are meant to illustrate the breadth of automated systems that, insofar as they have the \npotential to meaningfully impact rights, opportunities, or access to critical resources or services, should \nbe covered by the Blueprint for an AI Bill of Rights. These examples should not be construed to limit that \nscope, which includes automated systems that may not yet exist, but which fall under these criteria. \nExamples of automated systems for which the Blueprint for an AI Bill of Rights should be considered include \nthose that have the potential to meaningfully impact: \n• Civil rights, civil liberties, or privacy, including but not limited to:\nSpeech-related systems such as automated content moderation tools; \nSurveillance and criminal justice system algorithms such as risk assessments, predictive \n policing, automated license plate readers, real-time facial recognition systems (especially \n those used in public places or during protected activities like peaceful protests), social media \n monitoring, and ankle monitoring devices; \nVoting-related systems such as signature matching tools; \nSystems with a potential privacy impact such as smart home systems and associated data, \n systems that use or collect health-related data, systems that use or collect education-related \n data, criminal justice system data, ad-targeting systems, and systems that perform big data \n analytics in order to build profiles or infer personal information about individuals; and \nAny system that has the meaningful potential to lead to algorithmic discrimination. 
\n• Equal opportunities, including but not limited to:\nEducation-related systems such as algorithms that purport to detect student cheating or \n plagiarism, admissions algorithms, online or virtual reality student monitoring systems, \nprojections of student progress or outcomes, algorithms that determine access to resources or \nprograms, and surveillance of classes (whether online or in-person); \nHousing-related systems such as tenant screening algorithms, automated valuation systems that \n estimate the value of homes used in mortgage underwriting or home insurance, and automated \n valuations from online aggregator websites; and \nEmployment-related systems such as workplace algorithms that inform all aspects of the terms \n and conditions of employment including, but not limited to, pay or promotion, hiring or termination algorithms, virtual or augmented reality workplace training programs, and electronic workplace surveillance and management systems. \n• Access to critical resources and services, including but not limited to:\nHealth and health insurance technologies such as medical AI systems and devices, AI-assisted \n diagnostic tools, algorithms or predictive models used to support clinical decision making, medical \n or insurance health risk assessments, drug addiction risk assessments and associated access algorithms, wearable technologies, wellness apps, insurance care allocation algorithms, and health\ninsurance cost and underwriting algorithms;\nFinancial system algorithms such as loan allocation algorithms, financial system access determination algorithms, credit scoring systems, insurance algorithms including risk assessments, automated interest rate determinations, and financial algorithms that apply penalties (e.g., that can\ngarnish wages or withhold tax returns);\n53\n']",Examples of automated systems that should be covered by the Blueprint for an AI Bill of Rights include: speech-related systems such as automated content moderation tools; surveillance and criminal justice system algorithms like risk assessments and predictive policing; voting-related systems such as signature matching tools; privacy-impacting systems like smart home systems and health-related data systems; education-related systems such as algorithms for detecting student cheating; housing-related systems like tenant screening algorithms; and employment-related systems that inform terms of employment.,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 52, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are some concerns related to data privacy in the context of sensitive domains?,"[' \n \n \n \n \n \n \nDATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\n•\nContinuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep\napnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the\ndevice based on usage data. 
Patients were not aware that the data would be used in this way or monitored\nby anyone other than their doctor.70 \n•\nA department store company used predictive analytics applied to collected consumer data to determine that a\nteenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her\nhouse, revealing to her father that she was pregnant.71\n•\nSchool audio surveillance systems monitor student conversations to detect potential ""stress indicators"" as\na warning of potential violence.72 Online proctoring systems claim to detect if a student is cheating on an\nexam using biometric markers.73 These systems have the potential to limit student freedom to express a range\nof emotions at school and may inappropriately flag students with disabilities who need accommodations or\nuse screen readers or dictation software as cheating.74\n•\nLocation data, acquired from a data broker, can be used to identify people who visit abortion clinics.75\n•\nCompanies collect student data such as demographic information, free or reduced lunch status, whether\nthey\'ve used drugs, or whether they\'ve expressed interest in LGBTQI+ groups, and then use that data to \nforecast student success.76 Parents and education experts have expressed concern about collection of such\nsensitive data without express parental consent, the lack of transparency in how such data is being used, and\nthe potential for resulting discriminatory impacts.\n• Many employers transfer employee data to third party job verification services. This information is then used\nby potential future employers, banks, or landlords. In one case, a former employee alleged that a\ncompany supplied false data about her job title which resulted in a job offer being revoked.77\n37\n']","Concerns related to data privacy in sensitive domains include the lack of awareness among patients regarding the use of their medical data by insurance companies, the revelation of personal information (such as pregnancy) through targeted advertising, the monitoring of student conversations which may limit emotional expression and unfairly flag students with disabilities, the use of location data to identify individuals visiting abortion clinics, the collection of sensitive student data without parental consent, and the potential for discriminatory impacts from such data usage. 
Additionally, there are concerns about the accuracy of employee data transferred to third parties, which can affect job opportunities.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 36, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What considerations should be taken into account when reviewing vendor contracts for third-party GAI technologies?,"[' \n22 \nGV-6.2-003 \nEstablish incident response plans for third-party GAI technologies: Align incident \nresponse plans with impacts enumerated in MAP 5.1; Communicate third-party \nGAI incident response plans to all relevant AI Actors; Define ownership of GAI \nincident response functions; Rehearse third-party GAI incident response plans at \na regular cadence; Improve incident response plans based on retrospective \nlearning; Review incident response plans for alignment with relevant breach \nreporting, data protection, data privacy, or other laws. \nData Privacy; Human-AI \nConfiguration; Information \nSecurity; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization \nGV-6.2-004 \nEstablish policies and procedures for continuous monitoring of third-party GAI \nsystems in deployment. \nValue Chain and Component \nIntegration \nGV-6.2-005 \nEstablish policies and procedures that address GAI data redundancy, including \nmodel weights and other system artifacts. \nHarmful Bias and Homogenization \nGV-6.2-006 \nEstablish policies and procedures to test and manage risks related to rollover and \nfallback technologies for GAI systems, acknowledging that rollover and fallback \nmay include manual processing. \nInformation Integrity \nGV-6.2-007 \nReview vendor contracts and avoid arbitrary or capricious termination of critical \nGAI technologies or vendor services and non-standard terms that may amplify or \ndefer liability in unexpected ways and/or contribute to unauthorized data \ncollection by vendors or third-parties (e.g., secondary data use). Consider: Clear \nassignment of liability and responsibility for incidents, GAI system changes over \ntime (e.g., fine-tuning, drift, decay); Request: Notification and disclosure for \nserious incidents arising from third-party data and systems; Service Level \nAgreements (SLAs) in vendor contracts that address incident response, response \ntimes, and availability of critical support. \nHuman-AI Configuration; \nInformation Security; Value Chain \nand Component Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV, Third-party entities \n \nMAP 1.1: Intended purposes, potentially beneficial uses, context specific laws, norms and expectations, and prospective settings in \nwhich the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users \nalong with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, \nsociety, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or \nproduct AI lifecycle; and related TEVV and system metrics. 
\nAction ID \nSuggested Action \nGAI Risks \nMP-1.1-001 \nWhen identifying intended purposes, consider factors such as internal vs. \nexternal use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty \n']","When reviewing vendor contracts for third-party GAI technologies, considerations should include avoiding arbitrary or capricious termination of critical GAI technologies or vendor services, avoiding non-standard terms that may amplify or defer liability in unexpected ways, and preventing unauthorized data collection by vendors or third-parties. Additionally, there should be a clear assignment of liability and responsibility for incidents, acknowledgment of GAI system changes over time, and requirements for notification and disclosure for serious incidents arising from third-party data and systems. Service Level Agreements (SLAs) in vendor contracts should also address incident response, response times, and availability of critical support.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 25, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the expectations for ensuring that automated systems are safe and effective?,"[' \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a \nsystem is introduced or a large change implemented. This consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakeholders may differ depending on the specific automated system and development phase, but should include \nsubject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as \ncivil rights, civil liberties, and privacy experts. For private sector applications, consultations before product \nlaunch may need to be confidential. 
Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback. \nTesting. Systems should undergo extensive testing before deployment. This testing should follow \ndomain-specific best practices, when available, for ensuring the technology will work in its real-world \ncontext. Such testing should take into account both the specific technology used and the roles of any human \noperators or reviewers who impact system outcomes or effectiveness; testing should include both automated \nsystems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the \nconditions in which the system will be deployed, and new testing may be required for each deployment to \naccount for material differences in conditions from one deployment to another. Following testing, system \nperformance should be compared with the in-place, potentially human-driven, status quo procedures, with \nexisting human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, measurement of the impact of risks should be included and balanced such that high impact risks receive attention \nand mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unintended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessitate rollback or significant modification to a launched automated system. \n18\n']","The expectations for ensuring that automated systems are safe and effective include: 1) Safeguards to protect the public from harm in a proactive and ongoing manner; 2) Avoiding the use of data that is inappropriate or irrelevant to the task at hand; 3) Demonstrating the safety and effectiveness of the system. 
Additionally, there should be consultation with the public during the design and implementation phases, extensive testing before deployment, and identification and mitigation of potential risks.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 17, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the key components of risk identification and mitigation in the development of automated systems?,"[' \n \n \nSAFE AND EFFECTIVE SYSTEMS \nYou should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation \nfrom diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems \nshould undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and \neffective based on their intended use, mitigation of unsafe outcomes \nincluding those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures \nshould include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed \nwith an intent or reasonably foreseeable possibility of endangering \nyour safety or the safety of your community. They should be designed \nto proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible. \n15\n']","The key components of risk identification and mitigation in the development of automated systems include pre-deployment testing, risk identification and mitigation processes, ongoing monitoring, and adherence to domain-specific standards. 
These components aim to ensure that systems are safe and effective based on their intended use and to mitigate unsafe outcomes, including those beyond the intended use.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 14, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the implications of bias and discrimination in automated systems on the rights of the American public?,"[' \nSECTION TITLE\xad\nFOREWORD\nAmong the great challenges posed to democracy today is the use of technology, data, and automated systems in \nways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and \nprevent our access to critical resources or services. These problems are well documented. In America and around \nthe world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used \nin hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s \nopportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or \nconsent. \nThese outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraor-\ndinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm \npaths, to algorithms that can identify diseases in patients. These tools now drive important decisions across \nsectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, \nthese tools hold the potential to redefine every part of our society and make life better for everyone. \nThis important progress must not come at the price of civil rights or democratic values, foundational American \nprinciples that President Biden has affirmed as a cornerstone of his Administration. On his first day in office, the \nPresident ordered the full Federal government to work to root out inequity, embed fairness in decision-\nmaking processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The \nPresident has spoken forcefully about the urgent challenges posed to democracy today and has regularly called \non people of conscience to act to preserve civil rights—including the right to privacy, which he has called “the \nbasis for so many more rights that we have come to take for granted that are ingrained in the fabric of this \ncountry.”2\nTo advance President Biden’s vision, the White House Office of Science and Technology Policy has identified \nfive principles that should guide the design, use, and deployment of automated systems to protect the American \npublic in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that \nprotects all people from these threats—and uses technologies in ways that reinforce our highest values. 
\nResponding to the experiences of the American public, and informed by insights from researchers, \ntechnologists, advocates, journalists, and policymakers, this framework is accompanied by a technical \ncompanion—a handbook for anyone seeking to incorporate these protections into policy and practice, including \ndetailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3\n']","The implications of bias and discrimination in automated systems on the rights of the American public include limiting opportunities, preventing access to critical resources or services, and reflecting or reproducing existing unwanted inequities. These outcomes can threaten people's opportunities, undermine their privacy, and lead to pervasive tracking of their activities, often without their knowledge or consent.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 2, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures are suggested to protect data privacy in evaluations involving human subjects?,"[' \n30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are \nrepresentative of the relevant population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and manage statistical biases related to GAI content provenance through \ntechniques such as re-sampling, re-weighting, or adversarial training. \nInformation Integrity; Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument how content provenance data is tracked and how that data interacts \nwith privacy and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects; Leveraging privacy output filters; Removing any personally \nidentifiable information (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration; Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation or revoke their \nconsent for present or future use of their data in GAI applications. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004 \nUse techniques such as anonymization, differential privacy or other privacy-\nenhancing technologies to minimize the risks associated with linking AI-generated \ncontent back to individual human subjects. \nData Privacy; Human-AI \nConfiguration \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for \nconditions similar to deployment setting(s). Measures are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a \nmodel for fine tuning or enhancement with retrieval-augmented generation. 
\nInformation Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities using empirically validated methods. \nConfabulation; Information \nSecurity \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those \nwith system release approval authority. \nHuman-AI Configuration \n']","Suggested measures to protect data privacy in evaluations involving human subjects include: anonymizing data to protect the privacy of human subjects, leveraging privacy output filters, removing any personally identifiable information (PII) to prevent potential harm or misuse, and providing human subjects with options to withdraw participation or revoke their consent for present or future use of their data in GAI applications.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 33, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of AI impact assessment in relation to feedback from individuals and communities?,"[' \n20 \nGV-4.3-003 \nVerify information sharing and feedback mechanisms among individuals and \norganizations regarding any negative impact from GAI systems. \nInformation Integrity; Data \nPrivacy \nAI Actor Tasks: AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those \nexternal to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI \nrisks. \nAction ID \nSuggested Action \nGAI Risks \nGV-5.1-001 \nAllocate time and resources for outreach, feedback, and recourse processes in GAI \nsystem development. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nGV-5.1-002 \nDocument interactions with GAI systems to users prior to interactive activities, \nparticularly in contexts involving more significant risks. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Design, AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of \ninfringement of a third-party’s intellectual property or other rights. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.1-001 Categorize different types of GAI content with associated third-party rights (e.g., \ncopyright, intellectual property, data privacy). \nData Privacy; Intellectual \nProperty; Value Chain and \nComponent Integration \nGV-6.1-002 Conduct joint educational activities and events in collaboration with third parties \nto promote best practices for managing GAI risks. \nValue Chain and Component \nIntegration \nGV-6.1-003 \nDevelop and validate approaches for measuring the success of content \nprovenance management efforts with third parties (e.g., incidents detected and \nresponse times). 
\nInformation Integrity; Value Chain \nand Component Integration \nGV-6.1-004 \nDraft and maintain well-defined contracts and service level agreements (SLAs) \nthat specify content ownership, usage rights, quality standards, security \nrequirements, and content provenance expectations for GAI systems. \nInformation Integrity; Information \nSecurity; Intellectual Property \n']","The purpose of AI impact assessment in relation to feedback from individuals and communities is to collect, consider, prioritize, and integrate feedback regarding the potential individual and societal impacts related to AI risks. This process ensures that organizational policies and practices are in place to address these impacts effectively.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 23, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What principles are required for the design and use of trustworthy artificial intelligence in the federal government?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles—while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and \nrespectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) \nsafe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. \nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. 
\nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate \nrequirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-\nBased Participatory Research Program developed by the National Institutes of Health and the participatory \ntechnology assessments developed by the National Oceanic and Atmospheric Administration.18\nThe National Institute of Standards and Technology (NIST) is developing a risk \nmanagement framework to better manage risks posed to individuals, organizations, and \nsociety by AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended for \nvoluntary use to help incorporate trustworthiness considerations into the design, development, use, and \nevaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-\ndriven, open, transparent, and collaborative process that includes workshops and other opportunities to provide \ninput. The NIST framework aims to foster the development of innovative approaches to address \ncharacteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, \nrobustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of \nharmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. 
\n21\n']","The principles required for the design and use of trustworthy artificial intelligence in the federal government include: (a) lawful and respectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) safe, secure, and resilient; (e) understandable; (f) responsible and traceable; (g) regularly monitored; (h) transparent; and (i) accountable.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 20, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What organizational risk tolerances should be applied to the utilization of third-party GAI resources?,"[' \n42 \nMG-2.4-002 \nEstablish and maintain procedures for escalating GAI system incidents to the \norganizational risk management authority when specific criteria for deactivation \nor disengagement is met for a particular context of use or for the GAI system as a \nwhole. \nInformation Security \nMG-2.4-003 \nEstablish and maintain procedures for the remediation of issues which trigger \nincident response processes for the use of a GAI system, and provide stakeholders \ntimelines associated with the remediation plan. \nInformation Security \n \nMG-2.4-004 Establish and regularly review specific criteria that warrants the deactivation of \nGAI systems in accordance with set risk tolerances and appetites. \nInformation Security \n \nAI Actor Tasks: AI Deployment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMG-3.1-001 \nApply organizational risk tolerances and controls (e.g., acquisition and \nprocurement processes; assessing personnel credentials and qualifications, \nperforming background checks; filtering GAI input and outputs, grounding, fine \ntuning, retrieval-augmented generation) to third-party GAI resources: Apply \norganizational risk tolerance to the utilization of third-party datasets and other \nGAI resources; Apply organizational risk tolerances to fine-tuned third-party \nmodels; Apply organizational risk tolerance to existing third-party models \nadapted to a new domain; Reassess risk measurements after fine-tuning third-\nparty GAI models. \nValue Chain and Component \nIntegration; Intellectual Property \nMG-3.1-002 \nTest GAI system value chain risks (e.g., data poisoning, malware, other software \nand hardware vulnerabilities; labor practices; data privacy and localization \ncompliance; geopolitical alignment). \nData Privacy; Information Security; \nValue Chain and Component \nIntegration; Harmful Bias and \nHomogenization \nMG-3.1-003 \nRe-assess model risks after fine-tuning or retrieval-augmented generation \nimplementation and for any third-party GAI models deployed for applications \nand/or use cases that were not evaluated in initial testing. \nValue Chain and Component \nIntegration \nMG-3.1-004 \nTake reasonable measures to review training data for CBRN information, and \nintellectual property, and where appropriate, remove it. 
Implement reasonable \nmeasures to prevent, flag, or take other action in response to outputs that \nreproduce particular training data (e.g., plagiarized, trademarked, patented, \nlicensed content or trade secret material). \nIntellectual Property; CBRN \nInformation or Capabilities \n']","Organizational risk tolerances that should be applied to the utilization of third-party GAI resources include applying risk tolerances to the utilization of third-party datasets and other GAI resources, fine-tuned third-party models, and existing third-party models adapted to a new domain. Additionally, it involves reassessing risk measurements after fine-tuning third-party GAI models.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 45, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do legal protections play in addressing algorithmic discrimination?,"[' \xad\xad\xad\xad\xad\xad\xad\nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable \nway. \nAlgorithmic \ndiscrimination \noccurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex \n(including \npregnancy, \nchildbirth, \nand \nrelated \nmedical \nconditions, \ngender \nidentity, \nintersex \nstatus, \nand \nsexual \norientation), religion, age, national origin, disability, veteran status, \ngenetic infor-mation, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. 
Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n']","The context mentions that algorithmic discrimination may violate legal protections depending on specific circumstances, indicating that legal protections play a role in addressing algorithmic discrimination.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What protections should be in place for data and inferences related to sensitive domains?,"[' \n \n \n \n \nSECTION TITLE\nDATA PRIVACY\nYou should be protected from abusive data practices via built-in protections and you \nshould have agency over how data about you is used. You should be protected from violations of \nprivacy through design choices that ensure such protections are included by default, including ensuring that \ndata collection conforms to reasonable expectations and that only data strictly necessary for the specific \ncontext is collected. Designers, developers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \nways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \nused. Systems should not employ user experience and design decisions that obfuscate user choice or burden \nusers with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases \nwhere it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable \nin plain language, and give you agency over data collection and the specific context of use; current hard-to\xad\nunderstand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \nrestrictions for data and inferences related to sensitive domains, including health, work, education, criminal \njustice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \nrelated inferences should only be used for necessary functions, and you should be protected by ethical review \nand use prohibitions. You and your communities should be free from unchecked surveillance; surveillance \ntechnologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \npotential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the use of such surveillance \ntechnologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to \nreporting that confirms your data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or access. 
\nNOTICE AND EXPLANATION\nYou should know that an automated system is being used and understand how and why it \ncontributes to outcomes that impact you. Designers, developers, and deployers of automated systems \nshould provide generally accessible plain language documentation including clear descriptions of the overall \nsystem functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice \nshould be kept up-to-date and people impacted by the system should be notified of significant use case or key \nfunctionality changes. You should know how and why an outcome impacting you was determined by an \nautomated system, including when the automated system is not the sole input determining the outcome. \nAutomated systems should provide explanations that are technically valid, meaningful and useful to you and to \nany operators or others who need to understand the system, and calibrated to the level of risk based on the \ncontext. Reporting that includes summary information about these automated systems in plain language and \nassessments of the clarity and quality of the notice and explanations should be made public whenever possible. \n6\n', 'You should be protected from abusive data practices via built-in \nprotections and you should have agency over how data about \nyou is used. You should be protected from violations of privacy through \ndesign choices that ensure such protections are included by default, including \nensuring that data collection conforms to reasonable expectations and that \nonly data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; \nwhere not possible, alternative privacy by design safeguards should be used. \nSystems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be \nappropriately and meaningfully given. Any consent requests should be brief, \nbe understandable in plain language, and give you agency over data collection \nand the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced \nprotections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and \nfor data pertaining to youth should put you first. In sensitive domains, your \ndata and related inferences should only be used for necessary functions, and \nyou should be protected by ethical review and use prohibitions. You and your \ncommunities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least \npre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the \nuse of such surveillance technologies is likely to limit rights, opportunities, or \naccess. 
Whenever possible, you should have access to reporting that confirms \nyour data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or \naccess. \nDATA PRIVACY\n30\n']","Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 5, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 29, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the potential consequences of using automated systems without protections against algorithmic discrimination?,"["" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. 
Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n""]","The potential consequences of using automated systems without protections against algorithmic discrimination include inequitable outcomes, wrongful and discriminatory arrests due to facial recognition technology, discriminatory hiring decisions informed by biased algorithms, and healthcare algorithms that may discount the severity of diseases in certain racial groups. 
These issues can lead to systemic biases being amplified and harm to underserved communities.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures should be taken to address confabulation in GAI system outputs?,"[' \n31 \nMS-2.3-004 \nUtilize a purpose-built testing environment such as NIST Dioptra to empirically \nevaluate GAI trustworthy characteristics. \nCBRN Information or Capabilities; \nData Privacy; Confabulation; \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, TEVV \n \nMEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the \nconditions under which the technology was developed are documented. \nAction ID \nSuggested Action \nRisks \nMS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-\nsystematic, and anecdotal assessments. \nHuman-AI Configuration; \nConfabulation \nMS-2.5-002 \nDocument the extent to which human domain knowledge is employed to \nimprove GAI system performance, via, e.g., RLHF, fine-tuning, retrieval-\naugmented generation, content moderation, business rules. \nHuman-AI Configuration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. \nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Configuration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. 
\nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV \n \n']","To address confabulation in GAI system outputs, the following measures should be taken: review and verify sources and citations in GAI system outputs during pre-deployment risk measurement and ongoing monitoring activities (MS-2.5-003), and avoid extrapolating GAI system performance or capabilities from narrow, non-systematic, and anecdotal assessments (MS-2.5-001).",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 34, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are some concerns related to data privacy in the context of sensitive domains?,"[' \n \n \n \n \n \n \nDATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\n•\nContinuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep\napnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the\ndevice based on usage data. Patients were not aware that the data would be used in this way or monitored\nby anyone other than their doctor.70 \n•\nA department store company used predictive analytics applied to collected consumer data to determine that a\nteenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her\nhouse, revealing to her father that she was pregnant.71\n•\nSchool audio surveillance systems monitor student conversations to detect potential ""stress indicators"" as\na warning of potential violence.72 Online proctoring systems claim to detect if a student is cheating on an\nexam using biometric markers.73 These systems have the potential to limit student freedom to express a range\nof emotions at school and may inappropriately flag students with disabilities who need accommodations or\nuse screen readers or dictation software as cheating.74\n•\nLocation data, acquired from a data broker, can be used to identify people who visit abortion clinics.75\n•\nCompanies collect student data such as demographic information, free or reduced lunch status, whether\nthey\'ve used drugs, or whether they\'ve expressed interest in LGBTQI+ groups, and then use that data to \nforecast student success.76 Parents and education experts have expressed concern about collection of such\nsensitive data without express parental consent, the lack of transparency in how such data is being used, and\nthe potential for resulting discriminatory impacts.\n• Many employers transfer employee data to third party job verification services. This information is then used\nby potential future employers, banks, or landlords. 
In one case, a former employee alleged that a\ncompany supplied false data about her job title which resulted in a job offer being revoked.77\n37\n']","Concerns related to data privacy in sensitive domains include the lack of awareness among patients regarding the use of their medical data by insurance companies, the revelation of personal information (such as pregnancy) through targeted advertising, the monitoring of student conversations which may limit emotional expression and unfairly flag students with disabilities, the use of location data to identify individuals visiting abortion clinics, the collection of sensitive student data without parental consent, and the potential for discriminatory impacts from such data usage. Additionally, there are concerns about the accuracy of employee data transferred to third parties, which can affect job opportunities.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 36, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What factors should be considered when evaluating the risk-relevant capabilities of GAI?,"[' \n14 \nGOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.2-001 \nEstablish transparency policies and processes for documenting the origin and \nhistory of training data and generated data for GAI applications to advance digital \ncontent transparency, while balancing the proprietary nature of training \napproaches. \nData Privacy; Information \nIntegrity; Intellectual Property \nGV-1.2-002 \nEstablish policies to evaluate risk-relevant capabilities of GAI and robustness of \nsafety measures, both prior to deployment and on an ongoing basis, through \ninternal and external evaluations. \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based \non the organization’s risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. 
\nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002 \nEstablish minimum thresholds for performance or assurance criteria and review as \npart of deployment approval (“go/”no-go”) policies, procedures, and processes, \nwith reviewed processes and approval thresholds reflecting measurement of GAI \ncapabilities and risks. \nCBRN Information or Capabilities; \nConfabulation; Dangerous, \nViolent, or Hateful Content \nGV-1.3-003 \nEstablish a test plan and response policy, before developing highly capable models, \nto periodically evaluate whether the model may misuse CBRN information or \ncapabilities and/or offensive cyber capabilities. \nCBRN Information or Capabilities; \nInformation Security \n']","Factors to consider when evaluating the risk-relevant capabilities of GAI include abuses and impacts to information integrity, dependencies between GAI and other IT or data systems, harm to fundamental rights or public safety, presentation of obscene, objectionable, offensive, discriminatory, invalid or untruthful output, psychological impacts to humans (e.g., anthropomorphization, algorithmic aversion, emotional entanglement), possibility for malicious use, whether the system introduces significant new security vulnerabilities, anticipated system impact on some groups compared to others, and unreliable decision-making capabilities, validity, adaptability, and variability of GAI system performance over time.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 17, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What considerations should be taken into account when using automated systems in sensitive domains?,"[' \nSECTION TITLE\nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nYou should be able to opt out, where appropriate, and have access to a person who can quickly \nconsider and remedy problems you encounter. You should be able to opt out from automated systems in \nfavor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable \nexpectations in a given context and with a focus on ensuring broad accessibility and protecting the public from \nespecially harmful impacts. In some cases, a human or other alternative may be required by law. You should have \naccess to timely human consideration and remedy by a fallback and escalation process if an automated system \nfails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and \nfallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and \nshould not impose an unreasonable burden on the public. 
Automated systems with an intended use within sensi\xad\ntive domains, including, but not limited to, criminal justice, employment, education, and health, should additional\xad\nly be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting \nwith the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes \na description of these human governance processes and assessment of their timeliness, accessibility, outcomes, \nand effectiveness should be made public whenever possible. \nDefinitions for key terms in The Blueprint for an AI Bill of Rights can be found in Applying the Blueprint for an AI Bill of Rights. \nAccompanying analysis and tools for actualizing each principle can be found in the Technical Companion. \n7\n']","When using automated systems in sensitive domains, considerations should include tailoring the systems to their intended purpose, providing meaningful access for oversight, ensuring training for individuals interacting with the system, and incorporating human consideration for adverse or high-risk decisions. Additionally, there should be a focus on accessibility, equity, effectiveness, and the maintenance of these systems, along with public reporting on human governance processes and their outcomes.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 6, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What should be included in the summary reporting for automated systems?,"["" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTailored to the level of risk. An assessment should be done to determine the level of risk of the auto\xad\nmated system. In settings where the consequences are high as determined by a risk assessment, or extensive \noversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should \nbe built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully \ntransparent models should be used), rather than as an after-the-decision interpretation. In other settings, the \nextent of explanation provided should be tailored to the risk level. \nValid. The explanation provided by a system should accurately reflect the factors and the influences that led \nto a particular decision, and should be meaningful for the particular customization based on purpose, target, \nand level of risk. While approximation and simplification may be necessary for the system to succeed based on \nthe explanatory purpose and target of the explanation, or to account for the risk of fraud or other concerns \nrelated to revealing decision-making information, such simplifications should be done in a scientifically \nsupportable way. 
Where appropriate based on the explanatory system, error ranges for the explanation should \nbe calculated and included in the explanation, with the choice of presentation of such information balanced \nwith usability and overall interface complexity concerns. \nDemonstrate protections for notice and explanation \nReporting. Summary reporting should document the determinations made based on the above consider\xad\nations, including: the responsible entities for accountability purposes; the goal and use cases for the system, \nidentified users, and impacted populations; the assessment of notice clarity and timeliness; the assessment of \nthe explanation's validity and accessibility; the assessment of the level of risk; and the account and assessment \nof how explanations are tailored, including to the purpose, the recipient of the explanation, and the level of \nrisk. Individualized profile information should be made readily available to the greatest extent possible that \nincludes explanations for any system impacts or inferences. Reporting should be provided in a clear plain \nlanguage and machine-readable manner. \n44\n""]","The summary reporting for automated systems should include: the responsible entities for accountability purposes; the goal and use cases for the system; identified users and impacted populations; the assessment of notice clarity and timeliness; the assessment of the explanation's validity and accessibility; the assessment of the level of risk; and the account and assessment of how explanations are tailored, including to the purpose, the recipient of the explanation, and the level of risk.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 43, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the key considerations for testing and deployment of automated systems to ensure their safety and effectiveness?,"[' \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a \nsystem is introduced or a large change implemented. This consultation should directly engage diverse impact\xad\ned communities to consider concerns and risks that may be unique to those communities, or disproportionate\xad\nly prevalent or severe for them. 
The extent of this engagement and the form of outreach to relevant stakeholders may differ depending on the specific automated system and development phase, but should include \nsubject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as \ncivil rights, civil liberties, and privacy experts. For private sector applications, consultations before product \nlaunch may need to be confidential. Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback. \nTesting. Systems should undergo extensive testing before deployment. This testing should follow \ndomain-specific best practices, when available, for ensuring the technology will work in its real-world \ncontext. Such testing should take into account both the specific technology used and the roles of any human \noperators or reviewers who impact system outcomes or effectiveness; testing should include both automated \nsystems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the \nconditions in which the system will be deployed, and new testing may be required for each deployment to \naccount for material differences in conditions from one deployment to another. Following testing, system \nperformance should be compared with the in-place, potentially human-driven, status quo procedures, with \nexisting human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, measurement of the impact of risks should be included and balanced such that high impact risks receive attention \nand mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unintended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessitate rollback or significant modification to a launched automated system. \n18\n']","Key considerations for testing and deployment of automated systems to ensure their safety and effectiveness include extensive testing before deployment, following domain-specific best practices, considering the roles of human operators, mirroring real-world conditions during testing, comparing system performance with existing human-driven procedures, and identifying and mitigating potential risks proactively. 
Testing should include both automated and human-led testing, and decision possibilities should include the option of not deploying the system if performance does not meet standards.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 17, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of pre-deployment testing in the development of automated systems?,"[' \n \n \nSAFE AND EFFECTIVE SYSTEMS \nYou should be protected from unsafe or ineffective sys\xad\ntems. Automated systems should be developed with consultation \nfrom diverse communities, stakeholders, and domain experts to iden\xad\ntify concerns, risks, and potential impacts of the system. Systems \nshould undergo pre-deployment testing, risk identification and miti\xad\ngation, and ongoing monitoring that demonstrate they are safe and \neffective based on their intended use, mitigation of unsafe outcomes \nincluding those beyond the intended use, and adherence to do\xad\nmain-specific standards. Outcomes of these protective measures \nshould include the possibility of not deploying the system or remov\xad\ning a system from use. Automated systems should not be designed \nwith an intent or reasonably foreseeable possibility of endangering \nyour safety or the safety of your community. They should be designed \nto proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, de\xad\nvelopment, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and report\xad\ning that confirms that the system is safe and effective, including re\xad\nporting of steps taken to mitigate potential harms, should be per\xad\nformed and the results made public whenever possible. 
\n15\n']","The purpose of pre-deployment testing in the development of automated systems is to identify risks and potential impacts of the system, ensuring that it is safe and effective based on its intended use, and to mitigate unsafe outcomes, including those beyond the intended use.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 14, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of the AI Bill of Rights in relation to the Executive Order on trustworthy artificial intelligence?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles—while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and \nrespectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) \nsafe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. \nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. 
\nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate \nrequirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-\nBased Participatory Research Program developed by the National Institutes of Health and the participatory \ntechnology assessments developed by the National Oceanic and Atmospheric Administration.18\nThe National Institute of Standards and Technology (NIST) is developing a risk \nmanagement framework to better manage risks posed to individuals, organizations, and \nsociety by AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended for \nvoluntary use to help incorporate trustworthiness considerations into the design, development, use, and \nevaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-\ndriven, open, transparent, and collaborative process that includes workshops and other opportunities to provide \ninput. The NIST framework aims to foster the development of innovative approaches to address \ncharacteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, \nrobustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of \nharmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. 
\n21\n']","The Blueprint for an AI Bill of Rights is consistent with the Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which requires federal agencies to adhere to nine principles when using AI.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 20, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are some examples of how data privacy principles aim to protect against identity theft?,"["" \n \n \n \nDATA PRIVACY \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \n•\nAn insurer might collect data from a person's social media presence as part of deciding what life\ninsurance rates they should be offered.64\n•\nA data broker harvested large amounts of personal data and then suffered a breach, exposing hundreds of\nthousands of people to potential identity theft. 65\n•\nA local public housing authority installed a facial recognition system at the entrance to housing complexes to\nassist law enforcement with identifying individuals viewed via camera when police reports are filed, leading\nthe community, both those living in the housing complex and not, to have videos of them sent to the local\npolice department and made available for scanning by its facial recognition software.66\n•\nCompanies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32\n""]","Examples of how data privacy principles aim to protect against identity theft include: a data broker harvesting large amounts of personal data and suffering a breach that exposes individuals to potential identity theft, and an insurer collecting data from a person's social media presence to determine life insurance rates, which could lead to misuse of personal information.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 31, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the concerns associated with unsafe diffusion in the context of AI-generated content?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. 
NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n', ' \n57 \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix B: \nHow AI Risks Differ from Traditional Software Risks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B \nNational Institute of Standards and Technology (2023) AI RMF Playbook. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook \nNational Institue of Standards and Technology (2023) Framing Risk \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/1-sec-risk \nNational Institute of Standards and Technology (2023) The Language of Trustworthy AI: An In-Depth \nGlossary of Terms https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary \nNational Institue of Standards and Technology (2022) Towards a Standard for Identifying and Managing \nBias in Artificial Intelligence https://www.nist.gov/publications/towards-standard-identifying-and-\nmanaging-bias-artificial-intelligence \nNorthcutt, C. et al. (2021) Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. \narXiv. https://arxiv.org/pdf/2103.14749 \nOECD (2023) ""Advancing accountability in AI: Governing and managing risks throughout the lifecycle for \ntrustworthy AI"", OECD Digital Economy Papers, No. 349, OECD Publishing, Paris. \nhttps://doi.org/10.1787/2448f04b-en \nOECD (2024) ""Defining AI incidents and related terms"" OECD Artificial Intelligence Papers, No. 16, OECD \nPublishing, Paris. https://doi.org/10.1787/d1a8d965-en \nOpenAI (2023) GPT-4 System Card. 
https://cdn.openai.com/papers/gpt-4-system-card.pdf \nOpenAI (2024) GPT-4 Technical Report. https://arxiv.org/pdf/2303.08774 \nPadmakumar, V. et al. (2024) Does writing with language models reduce content diversity? ICLR. \nhttps://arxiv.org/pdf/2309.05196 \nPark, P. et. al. (2024) AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5). \narXiv. https://arxiv.org/pdf/2308.14752 \nPartnership on AI (2023) Building a Glossary for Synthetic Media Transparency Methods, Part 1: Indirect \nDisclosure. https://partnershiponai.org/glossary-for-synthetic-media-transparency-methods-part-1-\nindirect-disclosure/ \nQu, Y. et al. (2023) Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-\nTo-Image Models. arXiv. https://arxiv.org/pdf/2305.13873 \nRafat, K. et al. (2023) Mitigating carbon footprint for knowledge distillation based deep learning model \ncompression. PLOS One. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285668 \nSaid, I. et al. (2022) Nonconsensual Distribution of Intimate Images: Exploring the Role of Legal Attitudes \nin Victimization and Perpetration. Sage. \nhttps://journals.sagepub.com/doi/full/10.1177/08862605221122834#bibr47-08862605221122834 \nSandbrink, J. (2023) Artificial intelligence and biological misuse: Differentiating risks of language models \nand biological design tools. arXiv. https://arxiv.org/pdf/2306.13952 \n']",The answer to given question is not present in context,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 60, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What measures should entities take to maintain data quality in sensitive domains?,"[' \n \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \xad\xad\xad\xad\xad\xad\nIn addition to the privacy expectations above for general non-sensitive data, any system collecting, using, shar-\ning, or storing sensitive data should meet the expectations below. Depending on the technological use case and \nbased on an ethical assessment, consent for sensitive data may need to be acquired from a guardian and/or child. \nProvide enhanced protections for data related to sensitive domains \nNecessary functions only. 
Sensitive data should only be used for functions strictly necessary for that \ndomain or for functions that are required for administrative reasons (e.g., school attendance records), unless \nconsent is acquired, if appropriate, and the additional expectations in this section are met. Consent for non-\nnecessary functions should be optional, i.e., should not be required, incentivized, or coerced in order to \nreceive opportunities or access to services. In cases where data is provided to an entity (e.g., health insurance \ncompany) in order to facilitate payment for such a need, that data should only be used for that purpose. \nEthical review and use prohibitions. Any use of sensitive data or decision process based in part on sensi-\ntive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go \nthrough a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an indepen-\ndent ethics committee or similarly robust process). In some cases, this ethical review may determine that data \nshould not be used or shared for specific uses even with consent. Some novel uses of automated systems in this \ncontext, where the algorithm is dynamically developing and where the science behind the use case is not well \nestablished, may also count as human subject experimentation, and require special review under organizational \ncompliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and \ngovernance procedures. \nData quality. In sensitive domains, entities should be especially careful to maintain the quality of data to \navoid adverse consequences arising from decision-making based on flawed or inaccurate data. Such care is \nnecessary in a fragmented, complex data ecosystem and for datasets that have limited access such as for fraud \nprevention and law enforcement. It should be not left solely to individuals to carry the burden of reviewing and \ncorrecting data. Entities should conduct regular, independent audits and take prompt corrective measures to \nmaintain accurate, timely, and complete data. \nLimit access to sensitive data and derived data. Sensitive data and derived data should not be sold, \nshared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be \nused to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies \nare expected to keep sensitive data private. Access to such data should be limited based on necessity and based \non a principle of local control, such that those individuals closest to the data subject have more access while \nthose who are less proximate do not (e.g., a teacher has access to their students’ daily progress data while a \nsuperintendent does not). \nReporting. 
In addition to the reporting on data privacy (as listed above for non-sensitive data), entities devel-\noping technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data \nshould, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches \nthat resulted in sensitive data leaks; the number, type, and outcomes of ethical pre-reviews undertaken; a \ndescription of any data sold, shared, or made public, and how that data was assessed to determine it did not pres-\nent a sensitive data risk; and ongoing risk identification and management procedures, and any mitigation added \nbased on these procedures. Reporting should be provided in a clear and machine-readable manner. \n38\n']","Entities should be especially careful to maintain the quality of data in sensitive domains to avoid adverse consequences arising from decision-making based on flawed or inaccurate data. This includes conducting regular, independent audits and taking prompt corrective measures to maintain accurate, timely, and complete data.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 37, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of implementing a supplier risk assessment framework in evaluating third-party entities?,"[' \n21 \nGV-6.1-005 \nImplement a use-cased based supplier risk assessment framework to evaluate and \nmonitor third-party entities’ performance and adherence to content provenance \nstandards and technologies to detect anomalies and unauthorized changes; \nservices acquisition and value chain risk management; and legal compliance. \nData Privacy; Information \nIntegrity; Information Security; \nIntellectual Property; Value Chain \nand Component Integration \nGV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party \nGAI processes and standards. \nInformation Integrity \nGV-6.1-007 Inventory all third-party entities with access to organizational content and \nestablish approved GAI technology and service provider lists. \nValue Chain and Component \nIntegration \nGV-6.1-008 Maintain records of changes to content made by third parties to promote content \nprovenance, including sources, timestamps, metadata. \nInformation Integrity; Value Chain \nand Component Integration; \nIntellectual Property \nGV-6.1-009 \nUpdate and integrate due diligence processes for GAI acquisition and \nprocurement vendor assessments to include intellectual property, data privacy, \nsecurity, and other risks. For example, update processes to: Address solutions that \nmay rely on embedded GAI technologies; Address ongoing monitoring, \nassessments, and alerting, dynamic risk assessments, and real-time reporting \ntools for monitoring third-party GAI risks; Consider policy adjustments across GAI \nmodeling libraries, tools and APIs, fine-tuned models, and embedded tools; \nAssess GAI vendors, open-source or proprietary GAI tools, or GAI service \nproviders against incident or vulnerability databases. 
\nData Privacy; Human-AI \nConfiguration; Information \nSecurity; Intellectual Property; \nValue Chain and Component \nIntegration; Harmful Bias and \nHomogenization \nGV-6.1-010 \nUpdate GAI acceptable use policies to address proprietary and open-source GAI \ntechnologies and data, and contractors, consultants, and other third-party \npersonnel. \nIntellectual Property; Value Chain \nand Component Integration \nAI Actor Tasks: Operation and Monitoring, Procurement, Third-party entities \n \nGOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be \nhigh-risk. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.2-001 \nDocument GAI risks associated with system value chain to identify over-reliance \non third-party data and to identify fallbacks. \nValue Chain and Component \nIntegration \nGV-6.2-002 \nDocument incidents involving third-party GAI data and systems, including open-\ndata and open-source software. \nIntellectual Property; Value Chain \nand Component Integration \n']","The purpose of implementing a supplier risk assessment framework in evaluating third-party entities is to assess and monitor their performance and adherence to content provenance standards, detect anomalies and unauthorized changes, manage services acquisition and value chain risks, and ensure legal compliance.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 24, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of creating measurement error models for pre-deployment metrics in the context of TEVV processes?,"[' \n38 \nMEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.13-001 \nCreate measurement error models for pre-deployment metrics to demonstrate \nconstruct validity for each metric (i.e., does the metric effectively operationalize \nthe desired concept): Measure or estimate, and document, biases or statistical \nvariance in applied metrics or structured human feedback processes; Leverage \ndomain expertise when modeling complex societal constructs such as hateful \ncontent. \nConfabulation; Information \nIntegrity; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV \n \nMEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available \nmeasurement techniques or where metrics are not yet available. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.2-001 \nEstablish processes for identifying emergent GAI system risks including \nconsulting with external AI Actors. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are \nestablished and integrated into AI system evaluation metrics. 
\nAction ID \nSuggested Action \nGAI Risks \nMS-3.3-001 \nConduct impact assessments on how AI-generated content might affect \ndifferent social, economic, and cultural groups. \nHarmful Bias and Homogenization \nMS-3.3-002 \nConduct studies to understand how end users perceive and interact with GAI \ncontent and accompanying content provenance within context of use. Assess \nwhether the content aligns with their expectations and how they may act upon \nthe information presented. \nHuman-AI Configuration; \nInformation Integrity \nMS-3.3-003 \nEvaluate potential biases and stereotypes that could emerge from the AI-\ngenerated content using appropriate methodologies including computational \ntesting methods as well as evaluating structured feedback input. \nHarmful Bias and Homogenization \n']","The purpose of creating measurement error models for pre-deployment metrics in the context of TEVV processes is to demonstrate construct validity for each metric, ensuring that the metric effectively operationalizes the desired concept. This involves measuring or estimating and documenting biases or statistical variance in applied metrics or structured human feedback processes, while leveraging domain expertise when modeling complex societal constructs such as hateful content.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 41, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do legal protections play in addressing algorithmic discrimination?,"["" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. 
Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n"", ' \xad\xad\xad\xad\xad\xad\xad\nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable \nway. 
\nAlgorithmic \ndiscrimination \noccurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex \n(including \npregnancy, \nchildbirth, \nand \nrelated \nmedical \nconditions, \ngender \nidentity, \nintersex \nstatus, \nand \nsexual \norientation), religion, age, national origin, disability, veteran status, \ngenetic infor-mation, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n']","The context mentions that algorithmic discrimination may violate legal protections, indicating that legal protections play a role in addressing algorithmic discrimination by providing a framework that designers, developers, and deployers of automated systems must adhere to in order to protect individuals and communities from unjustified different treatment based on various classifications.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the potential risks associated with the production and access to obscene and abusive content?,"[' \n5 \noperations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may \ncompromise a system’s availability or the confidentiality or integrity of training data, code, or \nmodel weights. \n10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or \nlicensed content without authorization (possibly in situations which do not fall under fair use); \neased exposure of trade secrets; or plagiarism or illegal replication. \n11. 
Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, \ndegrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse \nmaterial (CSAM), and nonconsensual intimate images (NCII) of adults. \n12. Value Chain and Component Integration: Non-transparent or untraceable integration of \nupstream third-party components, including data that has been improperly obtained or not \nprocessed and cleaned due to increased automation from GAI; improper supplier vetting across \nthe AI lifecycle; or other issues that diminish transparency or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities \nIn the future, GAI may enable malicious actors to more easily access CBRN weapons and/or relevant \nknowledge, information, materials, tools, or technologies that could be misused to assist in the design, \ndevelopment, production, or use of CBRN weapons or other dangerous materials or agents. While \nrelevant biological and chemical threat knowledge and information is often publicly accessible, LLMs \ncould facilitate its analysis or synthesis, particularly by individuals without formal scientific training or \nexpertise. \nRecent research on this topic found that LLM outputs regarding biological threat creation and attack \nplanning provided minimal assistance beyond traditional search engine queries, suggesting that state-of-\nthe-art LLMs at the time these studies were conducted do not substantially increase the operational \nlikelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. \nFurthermore, chemical and biological design tools (BDTs) – highly specialized AI systems trained on \nscientific data that aid in chemical and biological design – may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents. \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable \n']","The potential risks associated with the production and access to obscene and abusive content include eased production of and access to obscene, degrading, and/or abusive imagery, which can cause harm. 
This includes synthetic child sexual abuse material (CSAM) and nonconsensual intimate images (NCII) of adults.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 8, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What efforts is NIST making to ensure the development of safe and trustworthy AI?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n']","NIST is making efforts to ensure the development of safe and trustworthy AI by developing measurements, technology, tools, and standards that advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence. They have established the U.S. 
AI Safety Institute and the AI Safety Institute Consortium to build the necessary science for safe, secure, and trustworthy development and use of AI, in alignment with the 2023 Executive Order on Safe, Secure, and Trustworthy AI.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What considerations are important for governing across the AI value chain in the context of generative AI?,"[' \n47 \nAppendix A. Primary GAI Considerations \nThe following primary considerations were derived as overarching themes from the GAI PWG \nconsultation process. These considerations (Governance, Pre-Deployment Testing, Content Provenance, \nand Incident Disclosure) are relevant for voluntary use by any organization designing, developing, and \nusing GAI and also inform the Actions to Manage GAI risks. Information included about the primary \nconsiderations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG. \nAcknowledgments: These considerations could not have been surfaced without the helpful analysis and \ncontributions from the community and NIST staff GAI PWG leads: George Awad, Luca Belli, Harold Booth, \nMat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley, and Kyra Yee. \nA.1. Governance \nA.1.1. Overview \nLike any other technology system, governance principles and techniques can be used to manage risks \nrelated to generative AI models, capabilities, and applications. Organizations may choose to apply their \nexisting risk tiering to GAI systems, or they may opt to revise or update AI system risk levels to address \nthese unique GAI risks. This section describes how organizational governance regimes may be re-\nevaluated and adjusted for GAI contexts. It also addresses third-party considerations for governing across \nthe AI value chain. \nA.1.2. Organizational Governance \nGAI opportunities, risks and long-term performance characteristics are typically less well-understood \nthan non-generative AI tools and may be perceived and acted upon by humans in ways that vary greatly. \nAccordingly, GAI may call for different levels of oversight from AI Actors or different human-AI \nconfigurations in order to manage their risks effectively. Organizations’ use of GAI systems may also \nwarrant additional human review, tracking and documentation, and greater management oversight. \nAI technology can produce varied outputs in multiple modalities and present many classes of user \ninterfaces. This leads to a broader set of AI Actors interacting with GAI systems for widely differing \napplications and contexts of use. These can include data labeling and preparation, development of GAI \nmodels, content moderation, code generation and review, text generation and editing, image and video \ngeneration, summarization, search, and chat. These activities can take place within organizational \nsettings or in the public domain. 
\nOrganizations can restrict AI applications that cause harm, exceed stated risk tolerances, or that conflict \nwith their tolerances or values. Governance tools and protocols that are applied to other types of AI \nsystems can be applied to GAI systems. These plans and actions include: \n• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance \n']","The important considerations for governing across the AI value chain in the context of generative AI include organizational governance, oversight levels, human-AI configurations, human review, tracking and documentation, and management oversight. Additionally, governance tools and protocols that apply to other types of AI systems can also be applied to generative AI systems, including accessibility, AI actor credentials, alignment to organizational values, auditing, change-management controls, commercial use, and data provenance.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 50, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the suggested actions to address confabulation in GAI systems?,"[' \n31 \nMS-2.3-004 \nUtilize a purpose-built testing environment such as NIST Dioptra to empirically \nevaluate GAI trustworthy characteristics. \nCBRN Information or Capabilities; \nData Privacy; Confabulation; \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, TEVV \n \nMEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the \nconditions under which the technology was developed are documented. \nAction ID \nSuggested Action \nRisks \nMS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-\nsystematic, and anecdotal assessments. \nHuman-AI Configuration; \nConfabulation \nMS-2.5-002 \nDocument the extent to which human domain knowledge is employed to \nimprove GAI system performance, via, e.g., RLHF, fine-tuning, retrieval-\naugmented generation, content moderation, business rules. \nHuman-AI Configuration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. \nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Configuration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. 
\nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","The suggested actions to address confabulation in GAI systems include: 1) Avoid extrapolating GAI system performance or capabilities from narrow, non-systematic, and anecdotal assessments (MS-2.5-001). 2) Review and verify sources and citations in GAI system outputs during pre-deployment risk measurement and ongoing monitoring activities (MS-2.5-003). 
3) Evaluate GAI system performance in real-world scenarios to observe its behavior in practical environments and reveal issues that might not surface in controlled and optimized testing environments (MS-4.2-002).",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 34, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the implications of bias and discrimination in automated systems on the rights of the American public?,"[' \nSECTION TITLE\xad\nFOREWORD\nAmong the great challenges posed to democracy today is the use of technology, data, and automated systems in \nways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and \nprevent our access to critical resources or services. These problems are well documented. In America and around \nthe world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used \nin hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s \nopportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or \nconsent. \nThese outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraor-\ndinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm \npaths, to algorithms that can identify diseases in patients. These tools now drive important decisions across \nsectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, \nthese tools hold the potential to redefine every part of our society and make life better for everyone. \nThis important progress must not come at the price of civil rights or democratic values, foundational American \nprinciples that President Biden has affirmed as a cornerstone of his Administration. 
On his first day in office, the \nPresident ordered the full Federal government to work to root out inequity, embed fairness in decision-\nmaking processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The \nPresident has spoken forcefully about the urgent challenges posed to democracy today and has regularly called \non people of conscience to act to preserve civil rights—including the right to privacy, which he has called “the \nbasis for so many more rights that we have come to take for granted that are ingrained in the fabric of this \ncountry.”2\nTo advance President Biden’s vision, the White House Office of Science and Technology Policy has identified \nfive principles that should guide the design, use, and deployment of automated systems to protect the American \npublic in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that \nprotects all people from these threats—and uses technologies in ways that reinforce our highest values. \nResponding to the experiences of the American public, and informed by insights from researchers, \ntechnologists, advocates, journalists, and policymakers, this framework is accompanied by a technical \ncompanion—a handbook for anyone seeking to incorporate these protections into policy and practice, including \ndetailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3\n']","The implications of bias and discrimination in automated systems on the rights of the American public include limiting opportunities, preventing access to critical resources or services, and reflecting and reproducing existing unwanted inequities. These outcomes can undermine civil rights and democratic values, which are foundational American principles.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 2, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What was the purpose of the Request For Information (RFI) issued by OSTP regarding biometric technologies?,"['APPENDIX\nSummaries of Additional Engagements: \n• OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of\nartificial intelligence and other data-driven technologies in their lives.\n• OSTP issued a Request For Information (RFI) on the use and governance of biometric technologies.113 The\npurpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or\nplanned use; the domains in which these technologies are being used; the entities making use of them; current\nprinciples, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their\nuse or regulation. 
The 130 responses to this RFI are available in full online114 and were submitted by the below\nlisted organizations and individuals:\nAccenture \nAccess Now \nACT | The App Association \nAHIP \nAIethicist.org \nAirlines for America \nAlliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium \nat Computing Research Association \nConnected Health Initiative \nConsumer Technology Association \nCourtney Radsch \nCoworker \nCyber Farm Labs \nData & Society Research Institute \nData for Black Lives \nData to Actionable Knowledge Lab \nat Harvard University \nDeloitte \nDev Technology Group \nDigital Therapeutics Alliance \nDigital Welfare State & Human \nRights Project and Center for \nHuman Rights and Global Justice at \nNew York University School of \nLaw, and Temple University \nInstitute for Law, Innovation & \nTechnology \nDignari \nDouglas Goddard \nEdgar Dworsky \nElectronic Frontier Foundation \nElectronic Privacy Information \nCenter, Center for Digital \nDemocracy, and Consumer \nFederation of America \nFaceTec \nFight for the Future \nGanesh Mani \nGeorgia Tech Research Institute \nGoogle \nHealth Information Technology \nResearch and Development \nInteragency Working Group \nHireVue \nHR Policy Association \nID.me \nIdentity and Data Sciences \nLaboratory at Science Applications \nInternational Corporation \nInformation Technology and \nInnovation Foundation \nInformation Technology Industry \nCouncil \nInnocence Project \nInstitute for Human-Centered \nArtificial Intelligence at Stanford \nUniversity \nIntegrated Justice Information \nSystems Institute \nInternational Association of Chiefs \nof Police \nInternational Biometrics + Identity \nAssociation \nInternational Business Machines \nCorporation \nInternational Committee of the Red \nCross \nInventionphysics \niProov \nJacob Boudreau \nJennifer K. 
Wagner, Dan Berger, \nMargaret Hu, and Sara Katsanis \nJonathan Barry-Blocker \nJoseph Turow \nJoy Buolamwini \nJoy Mack \nKaren Bureau \nLamont Gholston \nLawyers’ Committee for Civil \nRights Under Law \n60\n']","The purpose of the Request For Information (RFI) issued by OSTP regarding biometric technologies was to understand the extent and variety of biometric technologies in past, current, or planned use; the domains in which these technologies are being used; the entities making use of them; current principles, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their use or regulation.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 59, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What actions are suggested to address risks associated with intellectual property infringement in organizational GAI systems?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. 
\nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']","The suggested action to address risks associated with intellectual property infringement in organizational GAI systems is to compile statistics on actual policy violations, take-down requests, and intellectual property infringement, and analyze transparency reports across demographic and language groups.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role does human-AI integration play in enhancing customer service?,"["" \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nHealthcare “navigators” help people find their way through online signup forms to choose \nand obtain healthcare. A Navigator is “an individual or organization that's trained and able to help \nconsumers, small businesses, and their employees as they look for health coverage options through the \nMarketplace (a government web site), including completing eligibility and enrollment forms.”106 For \nthe 2022 plan year, the Biden-Harris Administration increased funding so that grantee organizations could \n“train and certify more than 1,500 Navigators to help uninsured consumers find affordable and comprehensive \nhealth coverage.”107\nThe customer service industry has successfully integrated automated services such as \nchat-bots and AI-driven call response systems with escalation to a human support \nteam.108 Many businesses now use partially automated customer service platforms that help answer customer \nquestions and compile common problems for human agents to review. These integrated human-AI \nsystems allow companies to provide faster customer care while maintaining human agents to answer \ncalls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to \nsuccessful customer service.109\nBallot curing laws in at least 24 states require a fallback system that allows voters to \ncorrect their ballot and have it counted in the case that a voter signature matching \nalgorithm incorrectly flags their ballot as invalid or there is another issue with their \nballot, and review by an election official does not rectify the problem. Some federal \ncourts have found that such cure procedures are constitutionally required.110 \nBallot \ncuring processes vary among states, and include direct phone calls, emails, or mail contact by election \nofficials.111 Voters are asked to provide alternative information or a new signature to verify the validity of their \nballot. 
\n52\n""]","Human-AI integration plays a key role in enhancing customer service by allowing companies to provide faster customer care through partially automated customer service platforms. These systems help answer customer questions and compile common problems for human agents to review, while maintaining human agents to respond to complicated requests. This integration is viewed as essential for successful customer service.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 51, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of the NIST AI Risk Management Framework?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles—while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and \nrespectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) \nsafe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. \nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. 
\nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate \nrequirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-\nBased Participatory Research Program developed by the National Institutes of Health and the participatory \ntechnology assessments developed by the National Oceanic and Atmospheric Administration.18\nThe National Institute of Standards and Technology (NIST) is developing a risk \nmanagement framework to better manage risks posed to individuals, organizations, and \nsociety by AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended for \nvoluntary use to help incorporate trustworthiness considerations into the design, development, use, and \nevaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-\ndriven, open, transparent, and collaborative process that includes workshops and other opportunities to provide \ninput. The NIST framework aims to foster the development of innovative approaches to address \ncharacteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, \nrobustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of \nharmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. \n21\n']","The purpose of the NIST AI Risk Management Framework is to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. 
It aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 20, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the different stages of the AI lifecycle where risks can arise?,"[' \n2 \nThis work was informed by public feedback and consultations with diverse stakeholder groups as part of NIST’s \nGenerative AI Public Working Group (GAI PWG). The GAI PWG was an open, transparent, and collaborative \nprocess, facilitated via a virtual workspace, to obtain multistakeholder input on GAI risk management and to \ninform NIST’s approach. \nThe focus of the GAI PWG was limited to four primary considerations relevant to GAI: Governance, Content \nProvenance, Pre-deployment Testing, and Incident Disclosure (further described in Appendix A). As such, the \nsuggested actions in this document primarily address these considerations. \nFuture revisions of this profile will include additional AI RMF subcategories, risks, and suggested actions based \non additional considerations of GAI as the space evolves and empirical evidence indicates additional risks. A \nglossary of terms pertinent to GAI risk management will be developed and hosted on NIST’s Trustworthy & \nResponsible AI Resource Center (AIRC), and added to The Language of Trustworthy AI: An In-Depth Glossary of \nTerms. \nThis document was also informed by public comments and consultations from several Requests for Information. \n \n2. \nOverview of Risks Unique to or Exacerbated by GAI \nIn the context of the AI RMF, risk refers to the composite measure of an event’s probability (or \nlikelihood) of occurring and the magnitude or degree of the consequences of the corresponding event. \nSome risks can be assessed as likely to materialize in a given context, particularly those that have been \nempirically demonstrated in similar contexts. Other risks may be unlikely to materialize in a given \ncontext, or may be more speculative and therefore uncertain. \nAI risks can differ from or intensify traditional software risks. Likewise, GAI can exacerbate existing AI \nrisks, and creates unique risks. GAI risks can vary along many dimensions: \n• \nStage of the AI lifecycle: Risks can arise during design, development, deployment, operation, \nand/or decommissioning. \n• \nScope: Risks may exist at individual model or system levels, at the application or implementation \nlevels (i.e., for a specific use case), or at the ecosystem level – that is, beyond a single system or \norganizational context. 
Examples of the latter include the expansion of “algorithmic \nmonocultures,3” resulting from repeated use of the same model, or impacts on access to \nopportunity, labor markets, and the creative economies.4 \n• \nSource of risk: Risks may emerge from factors related to the design, training, or operation of the \nGAI model itself, stemming in some cases from GAI model or system inputs, and in other cases, \nfrom GAI system outputs. Many GAI risks, however, originate from human behavior, including \n \n \n3 “Algorithmic monocultures” refers to the phenomenon in which repeated use of the same model or algorithm in \nconsequential decision-making settings like employment and lending can result in increased susceptibility by \nsystems to correlated failures (like unexpected shocks), due to multiple actors relying on the same algorithm. \n4 Many studies have projected the impact of AI on the workforce and labor markets. Fewer studies have examined \nthe impact of GAI on the labor market, though some industry surveys indicate that that both employees and \nemployers are pondering this disruption. \n']","Risks can arise during the design, development, deployment, operation, and/or decommissioning stages of the AI lifecycle.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 5, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do technical protections play in the implementation of the Blueprint for an AI Bill of Rights?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n- \nUSING THIS TECHNICAL COMPANION\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, \nuse, and deployment of automated systems to protect the rights of the American public in the age of artificial \nintelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and \nprovides examples and concrete steps for communities, industry, governments, and others to take in order to \nbuild these protections into policy, practice, or the technological design process. \nTaken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help \nguard the American public against many of the potential and actual harms identified by researchers, technolo\xad\ngists, advocates, journalists, policymakers, and communities in the United States and around the world. This \ntechnical companion is intended to be used as a reference by people across many circumstances – anyone \nimpacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to \ngovern the use of an automated system. \nEach principle is accompanied by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \nThis section provides a brief summary of the problems that the principle seeks to address and protect against, including \nillustrative examples. 
\nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems are meant to serve as a blueprint for the development of additional technical\nstandards and practices that should be tailored for particular sectors and contexts.\n• This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The \nexpectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing \nmonitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer \nconcrete directions for how those changes can be made. \n• Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can \nbe provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should \nbe made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law \nenforcement, or national security considerations may prevent public release. Where public reports are not possible, the \ninformation should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguard \ning individuals’ rights. These reporting expectations are important for transparency, so the American people can have\nconfidence that their rights, opportunities, and access as well as their expectations about technologies are respected. \n3\nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: \nThis section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. \nIt describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help \nprovide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these \nprocesses require the cooperation of and collaboration among industry, civil society, researchers, policymakers, \ntechnologists, and the public. \n14\n']","Technical protections and practices laid out in the Blueprint for an AI Bill of Rights help guard the American public against many potential and actual harms associated with automated systems. They provide a framework for the design, use, and deployment of these systems to protect the rights of individuals, ensuring transparency and accountability in their operation.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 13, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What protections does the AI Bill of Rights provide against algorithmic discrimination?,"[' AI BILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. 
Automated systems should be \ncommunities, stakeholders, and domain experts to identify \nSystems should undergo pre-deployment testing, risk \nthat demonstrate they are safe and effective based on \nincluding those beyond the intended use, and adherence to \nprotective measures should include the possibility of not \nAutomated systems should not be designed with an intent \nreasonably foreseeable possibility of endangering your safety or the safety of your community. They should \nstemming from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION TITLE\nBLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped with consultation from diverse \nconcerns, risks, and potential impacts of the system. \nidentification and mitigation, and ongoing monitoring \ntheir intended use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of these \ndeploying the system or removing a system from use. \nor \nbe designed to proactively protect you from harms \nimpacts of automated systems. You should be protected from inappropriate or irrelevant data use in the \ndesign, development, and deployment of automated systems, and from the compounded harm of its reuse. \nIndependent evaluation and reporting that confirms that the system is safe and effective, including reporting of \nsteps taken to mitigate potential harms, should be performed and the results made public whenever possible. \nALGORITHMIC DISCRIMINATION PROTECTIONS\nYou should not face discrimination by algorithms and systems should be used and designed in \nan equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified \ndifferent treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including \npregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other \nclassification protected by law. Depending on the specific circumstances, such algorithmic discrimination \nmay violate legal protections. Designers, developers, and deployers of automated systems should take \nproactive \nand \ncontinuous \nmeasures \nto \nprotect \nindividuals \nand \ncommunities \nfrom algorithmic \ndiscrimination and to use and design systems in an equitable way. This protection should include proactive \nequity assessments as part of the system design, use of representative data and protection against proxies \nfor demographic features, ensuring accessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent \nevaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5\n']","The AI Bill of Rights provides protections against algorithmic discrimination by ensuring that individuals should not face discrimination by algorithms. It mandates that systems should be designed and used in an equitable way, taking proactive and continuous measures to protect individuals and communities from algorithmic discrimination. 
This includes conducting proactive equity assessments, using representative data, ensuring accessibility for people with disabilities, performing pre-deployment and ongoing disparity testing, and providing clear organizational oversight. Additionally, independent evaluation and reporting, including algorithmic impact assessments and disparity testing results, should be made public whenever possible to confirm these protections.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 4, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role does the 2023 Executive Order on Safe AI play in NIST's efforts to develop trustworthy artificial intelligence?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. 
\n \n']","The 2023 Executive Order on Safe, Secure, and Trustworthy AI plays a significant role in NIST's efforts by guiding the establishment of the U.S. AI Safety Institute and the AI Safety Institute Consortium, which are aimed at building the necessary science for the safe, secure, and trustworthy development and use of AI.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the importance of transparency in the context of watch lists used by predictive policing systems?,"[' \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \n•\nA predictive policing system claimed to identify individuals at greatest risk to commit or become the victim of\ngun violence (based on automated analysis of social ties to gang members, criminal histories, previous experi\xad\nences of gun violence, and other factors) and led to individuals being placed on a watch list with no\nexplanation or public transparency regarding how the system came to its conclusions.85 Both police and\nthe public deserve to understand why and how such a system is making these determinations.\n•\nA system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry\nerrors and other system flaws. These flaws were only revealed when an explanation of the system\nwas demanded and produced.86 The lack of an explanation made it harder for errors to be corrected in a\ntimely manner.\n42\n']","Transparency is important in the context of watch lists used by predictive policing systems because both police and the public deserve to understand why and how the system makes its determinations. Without transparency, individuals may be placed on a watch list without explanation, leading to a lack of accountability and understanding of the system's conclusions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 41, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of establishing feedback processes for end users and impacted communities in AI system evaluation metrics?,"[' \n38 \nMEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and \ndocumented. 
\nAction ID \nSuggested Action \nGAI Risks \nMS-2.13-001 \nCreate measurement error models for pre-deployment metrics to demonstrate \nconstruct validity for each metric (i.e., does the metric effectively operationalize \nthe desired concept): Measure or estimate, and document, biases or statistical \nvariance in applied metrics or structured human feedback processes; Leverage \ndomain expertise when modeling complex societal constructs such as hateful \ncontent. \nConfabulation; Information \nIntegrity; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV \n \nMEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available \nmeasurement techniques or where metrics are not yet available. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.2-001 \nEstablish processes for identifying emergent GAI system risks including \nconsulting with external AI Actors. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are \nestablished and integrated into AI system evaluation metrics. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.3-001 \nConduct impact assessments on how AI-generated content might affect \ndifferent social, economic, and cultural groups. \nHarmful Bias and Homogenization \nMS-3.3-002 \nConduct studies to understand how end users perceive and interact with GAI \ncontent and accompanying content provenance within context of use. Assess \nwhether the content aligns with their expectations and how they may act upon \nthe information presented. \nHuman-AI Configuration; \nInformation Integrity \nMS-3.3-003 \nEvaluate potential biases and stereotypes that could emerge from the AI-\ngenerated content using appropriate methodologies including computational \ntesting methods as well as evaluating structured feedback input. \nHarmful Bias and Homogenization \n']","The purpose of establishing feedback processes for end users and impacted communities in AI system evaluation metrics is to allow these groups to report problems and appeal system outcomes, ensuring that the impact of AI-generated content on different social, economic, and cultural groups is assessed and understood.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 41, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What measures are suggested to ensure information integrity in the context of AI systems?,"[' \n28 \nMAP 5.2: Practices and personnel for supporting regular engagement with relevant AI Actors and integrating feedback about \npositive, negative, and unanticipated impacts are in place and documented. 
\nAction ID \nSuggested Action \nGAI Risks \nMP-5.2-001 \nDetermine context-based measures to identify if new impacts are present due to \nthe GAI system, including regular engagements with downstream AI Actors to \nidentify and quantify new contexts of unanticipated impacts of GAI systems. \nHuman-AI Configuration; Value \nChain and Component Integration \nMP-5.2-002 \nPlan regular engagements with AI Actors responsible for inputs to GAI systems, \nincluding third-party data and algorithms, to review and evaluate unanticipated \nimpacts. \nHuman-AI Configuration; Value \nChain and Component Integration \nAI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-\nUsers, Human Factors, Operation and Monitoring \n \nMEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for \nimplementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be \nmeasured are properly documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate tools designed to analyze content provenance and detect data \nanomalies, verify the authenticity of digital signatures, and identify patterns \nassociated with misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate evaluation metrics by demographic factors to identify any \ndiscrepancies in how content provenance mechanisms work across diverse \npopulations. \nInformation Integrity; Harmful \nBias and Homogenization \nMS-1.1-004 Develop a suite of metrics to evaluate structured public feedback exercises \ninformed by representative AI Actors. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.1-005 \nEvaluate novel methods and technologies for the measurement of GAI-related \nrisks including in content provenance, offensive cyber, and CBRN, while \nmaintaining the models’ ability to produce valid, reliable, and factually accurate \noutputs. \nInformation Integrity; CBRN \nInformation or Capabilities; \nObscene, Degrading, and/or \nAbusive Content \n']","Suggested measures to ensure information integrity in the context of AI systems include employing methods to trace the origin and modifications of digital content, integrating tools designed to analyze content provenance and detect data anomalies, verifying the authenticity of digital signatures, and identifying patterns associated with misinformation or manipulation. 
Additionally, it is recommended to disaggregate evaluation metrics by demographic factors to identify discrepancies in how content provenance mechanisms work across diverse populations.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 31, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the limitations of current pre-deployment testing approaches for GAI applications?,"[' \n49 \nearly lifecycle TEVV approaches are developed and matured for GAI, organizations may use \nrecommended “pre-deployment testing” practices to measure performance, capabilities, limits, risks, \nand impacts. This section describes risk measurement and estimation as part of pre-deployment TEVV, \nand examines the state of play for pre-deployment testing methodologies. \nLimitations of Current Pre-deployment Test Approaches \nCurrently available pre-deployment TEVV processes used for GAI applications may be inadequate, non-\nsystematically applied, or fail to reflect or mismatched to deployment contexts. For example, the \nanecdotal testing of GAI system capabilities through video games or standardized tests designed for \nhumans (e.g., intelligence tests, professional licensing exams) does not guarantee GAI system validity or \nreliability in those domains. Similarly, jailbreaking or prompt engineering tests may not systematically \nassess validity or reliability risks. \nMeasurement gaps can arise from mismatches between laboratory and real-world settings. Current \ntesting approaches often remain focused on laboratory conditions or restricted to benchmark test \ndatasets and in silico techniques that may not extrapolate well to—or directly assess GAI impacts in real-\nworld conditions. For example, current measurement gaps for GAI make it difficult to precisely estimate \nits potential ecosystem-level or longitudinal risks and related political, social, and economic impacts. \nGaps between benchmarks and real-world use of GAI systems may likely be exacerbated due to prompt \nsensitivity and broad heterogeneity of contexts of use. \nA.1.5. Structured Public Feedback \nStructured public feedback can be used to evaluate whether GAI systems are performing as intended \nand to calibrate and verify traditional measurement methods. Examples of structured feedback include, \nbut are not limited to: \n• \nParticipatory Engagement Methods: Methods used to solicit feedback from civil society groups, \naffected communities, and users, including focus groups, small user studies, and surveys. \n• \nField Testing: Methods used to determine how people interact with, consume, use, and make \nsense of AI-generated information, and subsequent actions and effects, including UX, usability, \nand other structured, randomized experiments. \n• \nAI Red-teaming: A structured testing exercise used to probe an AI system to find flaws and \nvulnerabilities such as inaccurate, harmful, or discriminatory outputs, often in a controlled \nenvironment and in collaboration with system developers. 
\nInformation gathered from structured public feedback can inform design, implementation, deployment \napproval, maintenance, or decommissioning decisions. Results and insights gleaned from these exercises \ncan serve multiple purposes, including improving data quality and preprocessing, bolstering governance \ndecision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation. \n']","Current pre-deployment TEVV processes used for GAI applications may be inadequate, non-systematically applied, or fail to reflect or be mismatched to deployment contexts. Anecdotal testing of GAI system capabilities through video games or standardized tests designed for humans does not guarantee GAI system validity or reliability. Additionally, jailbreaking or prompt engineering tests may not systematically assess validity or reliability risks. Measurement gaps can arise from mismatches between laboratory and real-world settings, and current testing approaches often remain focused on laboratory conditions or restricted to benchmark test datasets that may not extrapolate well to real-world conditions.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 52, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What measures are suggested to ensure effective human-AI configuration in the context of GAI systems?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. 
\nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']",The suggested measures to ensure effective human-AI configuration in the context of GAI systems include documenting the instructions given to data annotators or AI red-teamers (MS-2.8-002) and verifying the adequacy of GAI system user instructions through user testing (MS-2.8-004).,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What issues does the automated sentiment analyzer address regarding bias in online statements?,"[' \n \n \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \xad\xad\xad\n•\nAn automated sentiment analyzer, a tool often used by technology platforms to determine whether a state-\nment posted online expresses a positive or negative sentiment, was found to be biased against Jews and gay\npeople. For example, the analyzer marked the statement “I’m a Jew” as representing a negative sentiment,\nwhile “I’m a Christian” was identified as expressing a positive sentiment.36 This could lead to the\npreemptive blocking of social media comments such as: “I’m gay.” A related company with this bias concern\nhas made their data public to encourage researchers to help address the issue37 and has released reports\nidentifying and measuring this problem as well as detailing attempts to address it.38\n•\nSearches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly39 sexualized content, rather\nthan role models, toys, or activities.40 Some search engines have been working to reduce the prevalence of\nthese results, but the problem remains.41\n•\nAdvertisement delivery systems that predict who is most likely to click on a job advertisement end up deliv-\nering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermar-\nket cashier ads to women and jobs with taxi companies to primarily Black people.42\xad\n•\nBody scanners, used by TSA at airport checkpoints, require the operator to select a “male” or “female”\nscanning setting based on the passenger’s sex, but the setting is chosen based on the operator’s perception of\nthe passenger’s gender identity. These scanners are more likely to flag transgender travelers as requiring\nextra screening done by a person. Transgender travelers have described degrading experiences associated\nwith these extra screenings.43 TSA has recently announced plans to implement a gender-neutral algorithm44 \nwhile simultaneously enhancing the security effectiveness capabilities of the existing technology. 
\n•\nThe National Disabled Law Students Association expressed concerns that individuals with disabilities were\nmore likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disabili-\nty-specific access needs such as needing longer breaks or using screen readers or dictation software.45 \n•\nAn algorithm designed to identify patients with high needs for healthcare systematically assigned lower\nscores (indicating that they were not as high need) to Black patients than to those of white patients, even\nwhen those patients had similar numbers of chronic conditions and other markers of health.46 In addition,\nhealthcare clinical algorithms that are used by physicians to guide clinical decisions may include\nsociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or\nethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections \n']","The automated sentiment analyzer addresses bias in online statements by identifying that it was found to be biased against Jews and gay people. For instance, it marked the statement 'I’m a Jew' as negative while identifying 'I’m a Christian' as positive. This bias could lead to the preemptive blocking of social media comments such as 'I’m gay.'",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 24, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the expectations for automated systems regarding safety and effectiveness?,"[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. 
Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20\n']","The expectations for automated systems regarding safety and effectiveness include the need for independent evaluation, where evaluators should have access to the system and associated data to perform evaluations. Additionally, entities responsible for automated systems should provide regularly-updated reports that cover an overview of the system, data used for training, risk management assessments, performance testing results, and ongoing monitoring procedures. 
These reports should be presented in plain language and a machine-readable format.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What criteria are used to measure AI system performance or assurance in deployment settings?,"[' \n30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are \nrepresentative of the relevant population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and manage statistical biases related to GAI content provenance through \ntechniques such as re-sampling, re-weighting, or adversarial training. \nInformation Integrity; Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument how content provenance data is tracked and how that data interacts \nwith privacy and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects; Leveraging privacy output filters; Removing any personally \nidentifiable information (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration; Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation or revoke their \nconsent for present or future use of their data in GAI applications. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004 \nUse techniques such as anonymization, differential privacy or other privacy-\nenhancing technologies to minimize the risks associated with linking AI-generated \ncontent back to individual human subjects. \nData Privacy; Human-AI \nConfiguration \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for \nconditions similar to deployment setting(s). Measures are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a \nmodel for fine tuning or enhancement with retrieval-augmented generation. \nInformation Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities using empirically validated methods. \nConfabulation; Information \nSecurity \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those \nwith system release approval authority. \nHuman-AI Configuration \n']",AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). 
Measures are documented.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 33, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What threat does automatic signature verification software pose to U.S. voters?,"[' \nENDNOTES\n96. National Science Foundation. NSF Program on Fairness in Artificial Intelligence in Collaboration\nwith Amazon (FAI). Accessed July 20, 2022.\nhttps://www.nsf.gov/pubs/2021/nsf21585/nsf21585.htm\n97. Kyle Wiggers. Automatic signature verification software threatens to disenfranchise U.S. voters.\nVentureBeat. Oct. 25, 2020.\nhttps://venturebeat.com/2020/10/25/automatic-signature-verification-software-threatens-to\xad\ndisenfranchise-u-s-voters/\n98. Ballotpedia. Cure period for absentee and mail-in ballots. Article retrieved Apr 18, 2022.\nhttps://ballotpedia.org/Cure_period_for_absentee_and_mail-in_ballots\n99. Larry Buchanan and Alicia Parlapiano. Two of these Mail Ballot Signatures are by the Same Person.\nWhich Ones? New York Times. Oct. 7, 2020.\nhttps://www.nytimes.com/interactive/2020/10/07/upshot/mail-voting-ballots-signature\xad\nmatching.html\n100. Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020.\nhttps://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/\n101. Andrew Kenney. \'I\'m shocked that they need to have a smartphone\': System for unemployment\nbenefits exposes digital divide. USA Today. May 2, 2021.\nhttps://www.usatoday.com/story/tech/news/2021/05/02/unemployment-benefits-system-leaving\xad\npeople-behind/4915248001/\n102. Allie Gross. UIA lawsuit shows how the state criminalizes the unemployed. Detroit Metro-Times.\nSep. 18, 2015.\nhttps://www.metrotimes.com/news/uia-lawsuit-shows-how-the-state-criminalizes-the\xad\nunemployed-2369412\n103. Maia Szalavitz. The Pain Was Unbearable. So Why Did Doctors Turn Her Away? Wired. Aug. 11,\n2021. https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/\n104. Spencer Soper. Fired by Bot at Amazon: ""It\'s You Against the Machine"". Bloomberg, Jun. 28, 2021.\nhttps://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine\xad\nmanagers-and-workers-are-losing-out\n105. Definitions of ‘equity’ and ‘underserved communities’ can be found in the Definitions section of\nthis document as well as in Executive Order on Advancing Racial Equity and Support for Underserved\nCommunities Through the Federal Government:\nhttps://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order\xad\nadvancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/\n106. HealthCare.gov. Navigator - HealthCare.gov Glossary. Accessed May 2, 2022.\nhttps://www.healthcare.gov/glossary/navigator/\n72\n']",Automatic signature verification software threatens to disenfranchise U.S. 
voters.,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 71, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures are being taken to ensure equitable design in automated systems to protect against algorithmic discrimination?,"["" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. 
\n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n""]","Many companies, non-profits, and federal government agencies are taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, which has led to changes or the prevention of product launches to avoid public harm. Federal government agencies are developing standards and guidance for the use of automated systems to help prevent bias. Non-profits and companies have created best practices for audits and impact assessments to identify potential algorithmic discrimination and provide transparency in mitigating such biases.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What factors should be considered to ensure information integrity in the context of GAI risk management?,"[' \n14 \nGOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. 
\nAction ID \nSuggested Action \nGAI Risks \nGV-1.2-001 \nEstablish transparency policies and processes for documenting the origin and \nhistory of training data and generated data for GAI applications to advance digital \ncontent transparency, while balancing the proprietary nature of training \napproaches. \nData Privacy; Information \nIntegrity; Intellectual Property \nGV-1.2-002 \nEstablish policies to evaluate risk-relevant capabilities of GAI and robustness of \nsafety measures, both prior to deployment and on an ongoing basis, through \ninternal and external evaluations. \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based \non the organization’s risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. \nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002 \nEstablish minimum thresholds for performance or assurance criteria and review as \npart of deployment approval (“go/”no-go”) policies, procedures, and processes, \nwith reviewed processes and approval thresholds reflecting measurement of GAI \ncapabilities and risks. \nCBRN Information or Capabilities; \nConfabulation; Dangerous, \nViolent, or Hateful Content \nGV-1.3-003 \nEstablish a test plan and response policy, before developing highly capable models, \nto periodically evaluate whether the model may misuse CBRN information or \ncapabilities and/or offensive cyber capabilities. 
\nCBRN Information or Capabilities; \nInformation Security \n']","Factors to consider to ensure information integrity in the context of GAI risk management include abuses and impacts to information integrity, dependencies between GAI and other IT or data systems, harm to fundamental rights or public safety, presentation of obscene, objectionable, offensive, discriminatory, invalid or untruthful output, psychological impacts to humans, possibility for malicious use, introduction of significant new security vulnerabilities, anticipated system impact on some groups compared to others, and unreliable decision-making capabilities.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 17, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the reasons for implementing enhanced data protections in sensitive domains?,"[' \n \n \n \nDATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\nSome domains, including health, employment, education, criminal justice, and personal finance, have long been \nsingled out as sensitive domains deserving of enhanced data protections. This is due to the intimate nature of these \ndomains as well as the inability of individuals to opt out of these domains in any meaningful way, and the \nhistorical discrimination that has often accompanied data knowledge.69 Domains understood by the public to be \nsensitive also change over time, including because of technological developments. Tracking and monitoring \ntechnologies, personal tracking devices, and our extensive data footprints are used and misused more than ever \nbefore; as such, the protections afforded by current legal guidelines may be inadequate. The American public \ndeserves assurances that data related to such sensitive domains is protected and used appropriately and only in \nnarrowly defined contexts with clear benefits to the individual and/or society. \nTo this end, automated systems that collect, use, share, or store data related to these sensitive domains should meet \nadditional expectations. Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined \nbelow); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or \nsensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, \ngeolocation data, data related to interaction with the criminal justice system, relationship history and legal status such \nas custody and divorce information, and home, work, or school environmental data); or have the reasonable potential \nto be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm \ndue to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even \nif not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video \ndata. 
“Sensitive domains” are those in which activities being conducted can cause material harms, including signifi\xad\ncant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains \nthat have historically been singled out as deserving of enhanced data protections or where such enhanced protections \nare reasonably expected by the public include, but are not limited to, health, family planning and care, employment, \neducation, criminal justice, and personal finance. In the context of this framework, such domains are considered \nsensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains \nand data that are considered sensitive are understood to change over time based on societal norms and context. \n36\n']","Enhanced data protections in sensitive domains are implemented due to the intimate nature of these domains, the inability of individuals to opt out meaningfully, and the historical discrimination that has often accompanied data knowledge. Additionally, the protections afforded by current legal guidelines may be inadequate given the misuse of tracking technologies and the extensive data footprints individuals leave behind. The American public deserves assurances that data related to sensitive domains is protected and used appropriately, only in narrowly defined contexts with clear benefits to individuals and society.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 35, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are some of the potential harms associated with automated systems?,"[' \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nWhile technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can \nalso lead to its use in situations where it has not yet been proven to work—either at all or within an acceptable range \nof error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. \nAutomated systems sometimes rely on data from other systems, including historical data, allowing irrelevant informa\xad\ntion from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposeful\xad\nly designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended \nor unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. 
Governments at all levels have existing public consulta\xad\ntion processes that may be applied when considering the use of new automated systems, and existing product develop\xad\nment and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Inno\xad\nvators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections \nfrom unsafe outcomes. All can benefit from assurances that automated systems will be designed, tested, and consis\xad\ntently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harm\xad\nful outcomes. \n•\nA proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was imple\xad\nmented at hundreds of hospitals around the country. An independent study showed that the model predictions\nunderperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting\nlikelihood of sepsis.6\n•\nOn social media, Black people who quote and criticize racist messages have had their own speech silenced when\na platform’s automated moderation system failed to distinguish this “counter speech” (or other critique\nand journalism) from the original hateful messages to which such speech responded.7\n•\nA device originally developed to help people track and find lost items has been used as a tool by stalkers to track\nvictims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to\nprotect people from unwanted tracking by alerting people on their phones when a device is found to be moving\nwith them over time and also by having the device make an occasional noise, but not all phones are able\nto receive the notification and the devices remain a safety concern due to their misuse.8 \n•\nAn algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit,\neven if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions\nwere the result of a feedback loop generated from the reuse of data from previous arrests and algorithm\npredictions.9\n16\n']","Some potential harms associated with automated systems include: reliance on unproven technologies that may not work as intended, causing substantial and unjustified harm; the use of historical data that can lead to irrelevant information affecting decision-making; technologies designed to violate safety, such as those facilitating stalking; unintended harms from intended or unintended uses; and issues like alert fatigue from false alerts, as seen in a sepsis prediction model. 
Additionally, automated moderation systems may fail to distinguish between counter-speech and hateful messages, silencing critics.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 15, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the significance of human-AI configuration in managing GAI risks and ensuring information integrity?,"[' \n25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. \nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. 
\nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring \n \n']","The significance of human-AI configuration in managing GAI risks and ensuring information integrity lies in its role in evaluating content lineage and origin, adapting training programs for digital content transparency, developing certification programs for managing GAI risks, delineating human proficiency tests from GAI capabilities, and implementing systems to monitor and track outcomes of human-GAI configurations for future improvements. Involving end-users, practitioners, and operators in prototyping and testing activities is also crucial, especially in various scenarios including crisis situations or ethically sensitive contexts.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 28, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the key oversight functions involved in the GAI lifecycle?,"[' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. \nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. 
\nInformation Security \n']","The key oversight functions involved in the GAI lifecycle include senior leadership, legal, compliance, and internal evaluation.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of the AI Safety Institute established by NIST?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. 
\n \n']","The purpose of the AI Safety Institute established by NIST is to continue efforts to build the science necessary for safe, secure, and trustworthy development and use of artificial intelligence (AI), in alignment with the 2023 Executive Order on Safe, Secure, and Trustworthy AI.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of using structured feedback mechanisms in relation to AI-generated content?,"[' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. 
\nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration \n']",The purpose of using structured feedback mechanisms in relation to AI-generated content is to solicit and capture user input about the content to detect subtle shifts in quality or alignment with community and societal values.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What measures are suggested to mitigate risks related to harmful bias in generative AI systems?,"[' \n43 \nMG-3.1-005 Review various transparency artifacts (e.g., system cards and model cards) for \nthird-party models. \nInformation Integrity; Information \nSecurity; Value Chain and \nComponent Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring, Third-party entities \n \nMANAGE 3.2: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and \nmaintenance. \nAction ID \nSuggested Action \nGAI Risks \nMG-3.2-001 \nApply explainable AI (XAI) techniques (e.g., analysis of embeddings, model \ncompression/distillation, gradient-based attributions, occlusion/term reduction, \ncounterfactual prompts, word clouds) as part of ongoing continuous \nimprovement processes to mitigate risks related to unexplainable GAI systems. \nHarmful Bias and Homogenization \nMG-3.2-002 \nDocument how pre-trained models have been adapted (e.g., fine-tuned, or \nretrieval-augmented generation) for the specific generative task, including any \ndata augmentations, parameter adjustments, or other modifications. Access to \nun-tuned (baseline) models supports debugging the relative influence of the pre-\ntrained weights compared to the fine-tuned model weights or other system \nupdates. \nInformation Integrity; Data Privacy \nMG-3.2-003 \nDocument sources and types of training data and their origins, potential biases \npresent in the data related to the GAI application and its content provenance, \narchitecture, training process of the pre-trained model including information on \nhyperparameters, training duration, and any fine-tuning or retrieval-augmented \ngeneration processes applied. \nInformation Integrity; Harmful Bias \nand Homogenization; Intellectual \nProperty \nMG-3.2-004 Evaluate user reported problematic content and integrate feedback into system \nupdates. \nHuman-AI Configuration, \nDangerous, Violent, or Hateful \nContent \nMG-3.2-005 \nImplement content filters to prevent the generation of inappropriate, harmful, \nfalse, illegal, or violent content related to the GAI application, including for CSAM \nand NCII. These filters can be rule-based or leverage additional machine learning \nmodels to flag problematic inputs and outputs. 
\nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMG-3.2-006 \nImplement real-time monitoring processes for analyzing generated content \nperformance and trustworthiness characteristics related to content provenance \nto identify deviations from the desired standards and trigger alerts for human \nintervention. \nInformation Integrity \n']","To mitigate risks related to harmful bias in generative AI systems, the suggested measures include applying explainable AI (XAI) techniques as part of ongoing continuous improvement processes, documenting how pre-trained models have been adapted for specific generative tasks, and documenting sources and types of training data along with potential biases present in the data.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 46, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the implications of bias and discrimination in automated systems on the rights of the American public?,"[' \nSECTION TITLE\xad\nFOREWORD\nAmong the great challenges posed to democracy today is the use of technology, data, and automated systems in \nways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and \nprevent our access to critical resources or services. These problems are well documented. In America and around \nthe world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used \nin hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s \nopportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or \nconsent. \nThese outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraor-\ndinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm \npaths, to algorithms that can identify diseases in patients. These tools now drive important decisions across \nsectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, \nthese tools hold the potential to redefine every part of our society and make life better for everyone. \nThis important progress must not come at the price of civil rights or democratic values, foundational American \nprinciples that President Biden has affirmed as a cornerstone of his Administration. 
On his first day in office, the \nPresident ordered the full Federal government to work to root out inequity, embed fairness in decision-\nmaking processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The \nPresident has spoken forcefully about the urgent challenges posed to democracy today and has regularly called \non people of conscience to act to preserve civil rights—including the right to privacy, which he has called “the \nbasis for so many more rights that we have come to take for granted that are ingrained in the fabric of this \ncountry.”2\nTo advance President Biden’s vision, the White House Office of Science and Technology Policy has identified \nfive principles that should guide the design, use, and deployment of automated systems to protect the American \npublic in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that \nprotects all people from these threats—and uses technologies in ways that reinforce our highest values. \nResponding to the experiences of the American public, and informed by insights from researchers, \ntechnologists, advocates, journalists, and policymakers, this framework is accompanied by a technical \ncompanion—a handbook for anyone seeking to incorporate these protections into policy and practice, including \ndetailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3\n']","The implications of bias and discrimination in automated systems on the rights of the American public include limiting opportunities, preventing access to critical resources or services, and reflecting or reproducing existing unwanted inequities. These outcomes can threaten people's opportunities, undermine their privacy, and lead to pervasive tracking of their activity, often without their knowledge or consent.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 2, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the main principles outlined in the AI Bill of Rights and how do they aim to protect the rights of the American public?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n- \nUSING THIS TECHNICAL COMPANION\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, \nuse, and deployment of automated systems to protect the rights of the American public in the age of artificial \nintelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and \nprovides examples and concrete steps for communities, industry, governments, and others to take in order to \nbuild these protections into policy, practice, or the technological design process. 
\nTaken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help \nguard the American public against many of the potential and actual harms identified by researchers, technolo\xad\ngists, advocates, journalists, policymakers, and communities in the United States and around the world. This \ntechnical companion is intended to be used as a reference by people across many circumstances – anyone \nimpacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to \ngovern the use of an automated system. \nEach principle is accompanied by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \nThis section provides a brief summary of the problems that the principle seeks to address and protect against, including \nillustrative examples. \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems are meant to serve as a blueprint for the development of additional technical\nstandards and practices that should be tailored for particular sectors and contexts.\n• This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The \nexpectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing \nmonitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer \nconcrete directions for how those changes can be made. \n• Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can \nbe provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should \nbe made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law \nenforcement, or national security considerations may prevent public release. Where public reports are not possible, the \ninformation should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguard \ning individuals’ rights. These reporting expectations are important for transparency, so the American people can have\nconfidence that their rights, opportunities, and access as well as their expectations about technologies are respected. \n3\nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: \nThis section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. \nIt describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help \nprovide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these \nprocesses require the cooperation of and collaboration among industry, civil society, researchers, policymakers, \ntechnologists, and the public. \n14\n']","The main principles outlined in the AI Bill of Rights are not explicitly listed in the provided context. However, the context discusses the Blueprint for an AI Bill of Rights, which consists of five principles aimed at guiding the design, use, and deployment of automated systems to protect the rights of the American public. 
It emphasizes the importance of technical protections and practices to guard against potential harms and outlines expectations for automated systems, including transparency and reporting.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 13, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures are suggested to assess the environmental impact of AI model training and management activities?,"[' \n37 \nMS-2.11-005 \nAssess the proportion of synthetic to non-synthetic training data and verify \ntraining data is not overly homogenous or GAI-produced to mitigate concerns of \nmodel collapse. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-Users, \nOperation and Monitoring, TEVV \n \nMEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP \nfunction – are assessed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.12-001 Assess safety to physical environments when deploying GAI systems. \nDangerous, Violent, or Hateful \nContent \nMS-2.12-002 Document anticipated environmental impacts of model development, \nmaintenance, and deployment in product design decisions. \nEnvironmental \nMS-2.12-003 \nMeasure or estimate environmental impacts (e.g., energy and water \nconsumption) for training, fine tuning, and deploying models: Verify tradeoffs \nbetween resources used at inference time versus additional resources required \nat training time. \nEnvironmental \nMS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and \napplications, and address green-washing concerns. 
\nEnvironmental \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']","The suggested measures to assess the environmental impact of AI model training and management activities include: 1) Assessing safety to physical environments when deploying GAI systems, 2) Documenting anticipated environmental impacts of model development, maintenance, and deployment in product design decisions, 3) Measuring or estimating environmental impacts such as energy and water consumption for training, fine-tuning, and deploying models, and verifying trade-offs between resources used at inference time versus additional resources required at training time, and 4) Verifying the effectiveness of carbon capture or offset programs for GAI training and applications, while addressing green-washing concerns.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 40, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do fraud detection algorithms play in the adjudication of benefits and penalties?,"['APPENDIX\nSystems that impact the safety of communities such as automated traffic control systems, elec \n-ctrical grid controls, smart city technologies, and industrial emissions and environmental\nimpact control algorithms; and\nSystems related to access to benefits or services or assignment of penalties such as systems that\nsupport decision-makers who adjudicate benefits such as collating or analyzing information or\nmatching records, systems which similarly assist in the adjudication of administrative or criminal\npenalties, fraud detection algorithms, services or benefits access control algorithms, biometric\nsystems used as access control, and systems which make benefits or services related decisions on a\nfully or partially autonomous basis (such as a determination to revoke benefits).\n54\n']",Fraud detection algorithms assist in the adjudication of benefits and penalties by analyzing information and matching records to support decision-makers.,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 53, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role does community participation play in the design of technology for democratic values?,"["" \n \n \n \n \nAPPENDIX\nPanel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in \nthe design of technology that can help support a democratic vision for AI. 
It included discussion of the \ntechnical aspects \nof \ndesigning \nnon-discriminatory \ntechnology, \nexplainable \nAI, \nhuman-computer \ninteraction with an emphasis on community participation, and privacy-aware design. \nWelcome:\n•\nSorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nJ. Bob Alotta, Vice President for Global Programs, Mozilla Foundation\n•\nNavrina Singh, Board Member, Mozilla Foundation\nModerator: Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S \nFederal Trade Commission. \nPanelists: \n•\nLiz O’Sullivan, CEO, Parity AI\n•\nTimnit Gebru, Independent Scholar\n•\nJennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City\n•\nPamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director,\nSocio-technical Interaction Research (STIR) Lab\n•\nSeny Kamara, Associate Professor of Computer Science, Brown University\nEach panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for \nbiased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and \nunderstanding of the algorithmic systems. The interventions and key needs various panelists put forward as \nnecessary to the future design of critical AI systems included ongoing transparency, value sensitive and \nparticipatory design, explanations designed for relevant stakeholders, and public consultation. \nVarious \npanelists emphasized the importance of placing trust in people, not technologies, and in engaging with \nimpacted communities to understand the potential harms of technologies and build protection by design into \nfuture systems. \nPanel 5: Social Welfare and Development. This event explored current and emerging uses of technology to \nimplement or improve social welfare systems, social development programs, and other systems that can impact \nlife chances. \nWelcome:\n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nAnne-Marie Slaughter, CEO, New America\nModerator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance \nModernization, Office of the Secretary, Department of Labor \nPanelists:\n•\nBlake Hall, CEO and Founder, ID.Me\n•\nKarrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign\n•\nChristiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's\nCenter for Human Rights and Global Justice\n58\n""]","Community participation plays a crucial role in the design of technology for democratic values by emphasizing human-computer interaction that involves the community, ensuring that the technology is non-discriminatory, explainable, and privacy-aware. 
Engaging with impacted communities helps to understand the potential harms of technologies and build protection by design into future systems.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 57, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the policies and procedures related to human-AI configuration in the oversight of AI systems?,"[' \n18 \nGOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations \nand oversight of AI systems. \nAction ID \nSuggested Action \nGAI Risks \nGV-3.2-001 \nPolicies are in place to bolster oversight of GAI systems with independent \nevaluations or assessments of GAI models or systems where the type and \nrobustness of evaluations are proportional to the identified risks. \nCBRN Information or Capabilities; \nHarmful Bias and Homogenization \nGV-3.2-002 \nConsider adjustment of organizational roles and components across lifecycle \nstages of large or complex GAI systems, including: Test and evaluation, validation, \nand red-teaming of GAI systems; GAI content moderation; GAI system \ndevelopment and engineering; Increased accessibility of GAI tools, interfaces, and \nsystems, Incident response and containment. \nHuman-AI Configuration; \nInformation Security; Harmful Bias \nand Homogenization \nGV-3.2-003 \nDefine acceptable use policies for GAI interfaces, modalities, and human-AI \nconfigurations (i.e., for chatbots and decision-making tasks), including criteria for \nthe kinds of queries GAI applications should refuse to respond to. \nHuman-AI Configuration \nGV-3.2-004 \nEstablish policies for user feedback mechanisms for GAI systems which include \nthorough instructions and any mechanisms for recourse. \nHuman-AI Configuration \nGV-3.2-005 \nEngage in threat modeling to anticipate potential risks from GAI systems. \nCBRN Information or Capabilities; \nInformation Security \nAI Actors: AI Design \n \nGOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, \ndevelopment, deployment, and uses of AI systems to minimize potential negative impacts. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.1-001 \nEstablish policies and procedures that address continual improvement processes \nfor GAI risk measurement. Address general risks associated with a lack of \nexplainability and transparency in GAI systems by using ample documentation and \ntechniques such as: application of gradient-based attributions, occlusion/term \nreduction, counterfactual prompts and prompt engineering, and analysis of \nembeddings; Assess and update risk measurement approaches at regular \ncadences. \nConfabulation \nGV-4.1-002 \nEstablish policies, procedures, and processes detailing risk measurement in \ncontext of use with standardized measurement protocols and structured public \nfeedback exercises such as AI red-teaming or independent external evaluations. 
\nCBRN Information and Capability; \nValue Chain and Component \nIntegration \n']","Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems. This includes establishing acceptable use policies for GAI interfaces, modalities, and human-AI configurations, as well as defining criteria for the kinds of queries GAI applications should refuse to respond to.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 21, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of the AI Risk Management Framework for Generative AI?,"[' \n1 \n1. \nIntroduction \nThis document is a cross-sectoral profile of and companion resource for the AI Risk Management \nFramework (AI RMF 1.0) for Generative AI,1 pursuant to President Biden’s Executive Order (EO) 14110 on \nSafe, Secure, and Trustworthy Artificial Intelligence.2 The AI RMF was released in January 2023, and is \nintended for voluntary use and to improve the ability of organizations to incorporate trustworthiness \nconsiderations into the design, development, use, and evaluation of AI products, services, and systems. \nA profile is an implementation of the AI RMF functions, categories, and subcategories for a specific \nsetting, application, or technology – in this case, Generative AI (GAI) – based on the requirements, risk \ntolerance, and resources of the Framework user. AI RMF profiles assist organizations in deciding how to \nbest manage AI risks in a manner that is well-aligned with their goals, considers legal/regulatory \nrequirements and best practices, and reflects risk management priorities. Consistent with other AI RMF \nprofiles, this profile offers insights into how risk can be managed across various stages of the AI lifecycle \nand for GAI as a technology. \nAs GAI covers risks of models or applications that can be used across use cases or sectors, this document \nis an AI RMF cross-sectoral profile. Cross-sectoral profiles can be used to govern, map, measure, and \nmanage risks associated with activities or business processes common across sectors, such as the use of \nlarge language models (LLMs), cloud-based services, or acquisition. \nThis document defines risks that are novel to or exacerbated by the use of GAI. After introducing and \ndescribing these risks, the document provides a set of suggested actions to help organizations govern, \nmap, measure, and manage these risks. \n \n \n1 EO 14110 defines Generative AI as “the class of AI models that emulate the structure and characteristics of input \ndata in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital \ncontent.” While not all GAI is derived from foundation models, for purposes of this document, GAI generally refers \nto generative foundation models. 
The foundation model subcategory of “dual-use foundation models” is defined by \nEO 14110 as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of \nbillions of parameters; is applicable across a wide range of contexts.” \n2 This profile was developed per Section 4.1(a)(i)(A) of EO 14110, which directs the Secretary of Commerce, acting \nthrough the Director of the National Institute of Standards and Technology (NIST), to develop a companion \nresource to the AI RMF, NIST AI 100–1, for generative AI. \n']","The purpose of the AI Risk Management Framework (AI RMF) for Generative AI is to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. It assists organizations in deciding how to best manage AI risks in alignment with their goals, legal/regulatory requirements, and best practices.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 4, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What does the term 'underserved communities' refer to in the context of the AI Bill of Rights?,"[' \n \n \nApplying The Blueprint for an AI Bill of Rights \nSENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain \n(defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a \nsensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric \ndata, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship \nhistory and legal status such as custody and divorce information, and home, work, or school environmental \ndata); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful \nharm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about \nthose who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, \nbut is not limited to, numerical, text, image, audio, or video data. \nSENSITIVE DOMAINS: “Sensitive domains” are those in which activities being conducted can cause material \nharms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liber\xad\nties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections \nor where such enhanced protections are reasonably expected by the public include, but are not limited to, \nhealth, family planning and care, employment, education, criminal justice, and personal finance. In the context \nof this framework, such domains are considered sensitive whether or not the specifics of a system context \nwould necessitate coverage under existing law, and domains and data that are considered sensitive are under\xad\nstood to change over time based on societal norms and context. 
\nSURVEILLANCE TECHNOLOGY: “Surveillance technology” refers to products or services marketed for \nor that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or \nretain data, identifying information, or communications concerning individuals or groups. This framework \nlimits its focus to both government and commercial use of surveillance technologies when juxtaposed with \nreal-time or subsequent automated analysis and when such systems have a potential for meaningful impact \non individuals’ or communities’ rights, opportunities, or access. \nUNDERSERVED COMMUNITIES: The term “underserved communities” refers to communities that have \nbeen systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as \nexemplified by the list in the preceding definition of “equity.” \n11\n']","The term 'underserved communities' refers to communities that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 10, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the challenges associated with value chain and component integration in GAI systems?,"[' \n12 \nCSAM. Even when trained on “clean” data, increasingly capable GAI models can synthesize or produce \nsynthetic NCII and CSAM. Websites, mobile apps, and custom-built models that generate synthetic NCII \nhave moved from niche internet forums to mainstream, automated, and scaled online businesses. \nTrustworthy AI Characteristics: Fair with Harmful Bias Managed, Safe, Privacy Enhanced \n2.12. \nValue Chain and Component Integration \nGAI value chains involve many third-party components such as procured datasets, pre-trained models, \nand software libraries. These components might be improperly obtained or not properly vetted, leading \nto diminished transparency or accountability for downstream users. While this is a risk for traditional AI \nsystems and some other digital technologies, the risk is exacerbated for GAI due to the scale of the \ntraining data, which may be too large for humans to vet; the difficulty of training foundation models, \nwhich leads to extensive reuse of limited numbers of models; and the extent to which GAI may be \nintegrated into other devices and services. As GAI systems often involve many distinct third-party \ncomponents and data sources, it may be difficult to attribute issues in a system’s behavior to any one of \nthese sources. \nErrors in third-party GAI components can also have downstream impacts on accuracy and robustness. \nFor example, test datasets commonly used to benchmark or validate models can contain label errors. \nInaccuracies in these labels can impact the “stability” or robustness of these benchmarks, which many \nGAI practitioners consider during the model selection process. 
\nTrustworthy AI Characteristics: Accountable and Transparent, Explainable and Interpretable, Fair with \nHarmful Bias Managed, Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n3. \nSuggested Actions to Manage GAI Risks \nThe following suggested actions target risks unique to or exacerbated by GAI. \nIn addition to the suggested actions below, AI risk management activities and actions set forth in the AI \nRMF 1.0 and Playbook are already applicable for managing GAI risks. Organizations are encouraged to \napply the activities suggested in the AI RMF and its Playbook when managing the risk of GAI systems. \nImplementation of the suggested actions will vary depending on the type of risk, characteristics of GAI \nsystems, stage of the GAI lifecycle, and relevant AI actors involved. \nSuggested actions to manage GAI risks can be found in the tables below: \n• \nThe suggested actions are organized by relevant AI RMF subcategories to streamline these \nactivities alongside implementation of the AI RMF. \n• \nNot every subcategory of the AI RMF is included in this document.13 Suggested actions are \nlisted for only some subcategories. \n \n \n13 As this document was focused on the GAI PWG efforts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later. \n']","Challenges associated with value chain and component integration in GAI systems include the improper acquisition or vetting of third-party components such as datasets, pre-trained models, and software libraries, which can lead to diminished transparency and accountability. The scale of training data may be too large for humans to vet, and the difficulty of training foundation models can result in extensive reuse of a limited number of models. Additionally, it may be difficult to attribute issues in a system's behavior to any one of these sources, and errors in third-party GAI components can have downstream impacts on accuracy and robustness.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 15, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What should entities do to proactively identify and manage risks associated with collecting, using, sharing, or storing sensitive data?","[' \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTraditional terms of service—the block of text that the public is accustomed to clicking through when using a web\xad\nsite or digital app—are not an adequate mechanism for protecting privacy. The American public should be protect\xad\ned via built-in privacy protections, data minimization, use and collection limitations, and transparency, in addition \nto being entitled to clear mechanisms to control access to and use of their data—including their metadata—in a \nproactive, informed, and ongoing way. 
Any automated system collecting, using, sharing, or storing personal data \nshould meet these expectations. \nProtect privacy by design and by default \nPrivacy by design and by default. Automated systems should be designed and built with privacy protected by default. Privacy risks should be assessed throughout the development life cycle, including privacy risks \nfrom reidentification, and appropriate technical and policy mitigation measures should be implemented. This \nincludes potential harms to those who are not users of the automated system, but who may be harmed by \ninferred data, purposeful privacy violations, or community surveillance or other community harms. Data \ncollection should be minimized and clearly communicated to the people whose data is collected. Data should \nonly be collected or used for the purposes of training or testing machine learning models if such collection and \nuse is legal and consistent with the expectations of the people whose data is collected. User experience \nresearch should be conducted to confirm that people understand what data is being collected about them and \nhow it will be used, and that this collection matches their expectations and desires. \nData collection and use-case scope limits. Data collection should be limited in scope, with specific, \nnarrow identified goals, to avoid ""mission creep."" Anticipated data collection should be determined to be \nstrictly necessary to the identified goals and should be minimized as much as possible. Data collected based on \nthese identified goals and for a specific context should not be used in a different context without assessing for \nnew privacy risks and implementing appropriate mitigation measures, which may include express consent. \nClear timelines for data retention should be established, with data deleted as soon as possible in accordance \nwith legal or policy-based limitations. Determined data retention timelines should be documented and justified. \nRisk identification and mitigation. Entities that collect, use, share, or store sensitive data should \nattempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropriately to identified risks. Appropriate responses include determining not to process data when the privacy risks \noutweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not \ninclude sharing or transferring the privacy risks to users via notice or consent requests where users could not \nreasonably be expected to understand the risks without further support. \nPrivacy-preserving security. Entities creating, using, or governing automated systems should follow \nprivacy and security best practices designed to ensure data and metadata do not leak beyond the specific \nconsented use case. Best practices could include using privacy-enhancing cryptography or other types of \nprivacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with \nconventional system security protocols. \n33\n']","Entities that collect, use, share, or store sensitive data should attempt to proactively identify harms and seek to manage them to avoid, mitigate, and respond appropriately to identified risks. 
Appropriate responses include determining not to process data when the privacy risks outweigh the benefits or implementing measures to mitigate acceptable risks.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 32, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role do algorithmic harms play in shaping the principles of the Blueprint for an AI Bill of Rights?,"[' \n \n \nABOUT THIS FRAMEWORK\xad\xad\xad\xad\xad\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the \ndesign, use, and deployment of automated systems to protect the rights of the American public in the age of \nartificial intel-ligence. Developed through extensive consultation with the American public, these principles are \na blueprint for building and deploying automated systems that are aligned with democratic values and protect \ncivil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights includes this Foreword, the five \nprinciples, notes on Applying the The Blueprint for an AI Bill of Rights, and a Technical Companion that gives \nconcrete steps that can be taken by many kinds of organizations—from governments at all levels to companies of \nall sizes—to uphold these values. Experts from across the private sector, governments, and international \nconsortia have published principles and frameworks to guide the responsible use of automated systems; this \nframework provides a national values statement and toolkit that is sector-agnostic to inform building these \nprotections into policy, practice, or the technological design process. Where existing law or policy—such as \nsector-specific privacy laws and oversight requirements—do not already provide guidance, the Blueprint for an \nAI Bill of Rights should be used to inform policy decisions.\nLISTENING TO THE AMERICAN PUBLIC\nThe White House Office of Science and Technology Policy has led a year-long process to seek and distill input \nfrom people across the country—from impacted communities and industry stakeholders to technology develop-\ners and other experts across fields and sectors, as well as policymakers throughout the Federal government—on \nthe issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listen-\ning sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized \nemail address, people throughout the United States, public servants across Federal agencies, and members of the \ninternational community spoke up about both the promises and potential harms of these technologies, and \nplayed a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these \ndiscussions include that AI has transformative potential to improve Americans’ lives, and that preventing the \nharms of these technologies is both necessary and achievable. The Appendix includes a full list of public engage-\nments. 
\n4\n']",The answer to given question is not present in context,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 3, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of reviewing transparency artifacts in the context of third-party models?,"[' \n43 \nMG-3.1-005 Review various transparency artifacts (e.g., system cards and model cards) for \nthird-party models. \nInformation Integrity; Information \nSecurity; Value Chain and \nComponent Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring, Third-party entities \n \nMANAGE 3.2: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and \nmaintenance. \nAction ID \nSuggested Action \nGAI Risks \nMG-3.2-001 \nApply explainable AI (XAI) techniques (e.g., analysis of embeddings, model \ncompression/distillation, gradient-based attributions, occlusion/term reduction, \ncounterfactual prompts, word clouds) as part of ongoing continuous \nimprovement processes to mitigate risks related to unexplainable GAI systems. \nHarmful Bias and Homogenization \nMG-3.2-002 \nDocument how pre-trained models have been adapted (e.g., fine-tuned, or \nretrieval-augmented generation) for the specific generative task, including any \ndata augmentations, parameter adjustments, or other modifications. Access to \nun-tuned (baseline) models supports debugging the relative influence of the pre-\ntrained weights compared to the fine-tuned model weights or other system \nupdates. \nInformation Integrity; Data Privacy \nMG-3.2-003 \nDocument sources and types of training data and their origins, potential biases \npresent in the data related to the GAI application and its content provenance, \narchitecture, training process of the pre-trained model including information on \nhyperparameters, training duration, and any fine-tuning or retrieval-augmented \ngeneration processes applied. \nInformation Integrity; Harmful Bias \nand Homogenization; Intellectual \nProperty \nMG-3.2-004 Evaluate user reported problematic content and integrate feedback into system \nupdates. \nHuman-AI Configuration, \nDangerous, Violent, or Hateful \nContent \nMG-3.2-005 \nImplement content filters to prevent the generation of inappropriate, harmful, \nfalse, illegal, or violent content related to the GAI application, including for CSAM \nand NCII. These filters can be rule-based or leverage additional machine learning \nmodels to flag problematic inputs and outputs. \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMG-3.2-006 \nImplement real-time monitoring processes for analyzing generated content \nperformance and trustworthiness characteristics related to content provenance \nto identify deviations from the desired standards and trigger alerts for human \nintervention. 
\nInformation Integrity \n']","The purpose of reviewing transparency artifacts in the context of third-party models is to ensure information integrity, security, and effective value chain and component integration.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 46, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What types of automated systems should be covered by the AI Bill of Rights?,"[' \n \n \n \n \n \n \n \n \nAPPENDIX\nExamples of Automated Systems \nThe below examples are meant to illustrate the breadth of automated systems that, insofar as they have the \npotential to meaningfully impact rights, opportunities, or access to critical resources or services, should \nbe covered by the Blueprint for an AI Bill of Rights. These examples should not be construed to limit that \nscope, which includes automated systems that may not yet exist, but which fall under these criteria. \nExamples of automated systems for which the Blueprint for an AI Bill of Rights should be considered include \nthose that have the potential to meaningfully impact: \n• Civil rights, civil liberties, or privacy, including but not limited to:\nSpeech-related systems such as automated content moderation tools; \nSurveillance and criminal justice system algorithms such as risk assessments, predictive \n policing, automated license plate readers, real-time facial recognition systems (especially \n those used in public places or during protected activities like peaceful protests), social media \n monitoring, and ankle monitoring devices; \nVoting-related systems such as signature matching tools; \nSystems with a potential privacy impact such as smart home systems and associated data, \n systems that use or collect health-related data, systems that use or collect education-related \n data, criminal justice system data, ad-targeting systems, and systems that perform big data \n analytics in order to build profiles or infer personal information about individuals; and \nAny system that has the meaningful potential to lead to algorithmic discrimination. \n• Equal opportunities, including but not limited to:\nEducation-related systems such as algorithms that purport to detect student cheating or \n plagiarism, admissions algorithms, online or virtual reality student monitoring systems, \nprojections of student progress or outcomes, algorithms that determine access to resources or \n rograms, and surveillance of classes (whether online or in-person); \nHousing-related systems such as tenant screening algorithms, automated valuation systems that \n estimate the value of homes used in mortgage underwriting or home insurance, and automated \n valuations from online aggregator websites; and \nEmployment-related systems such as workplace algorithms that inform all aspects of the terms \n and conditions of employment including, but not limited to, pay or promotion, hiring or termina- \n tion algorithms, virtual or augmented reality workplace training programs, and electronic work \nplace surveillance and management systems. 
\n• Access to critical resources and services, including but not limited to:\nHealth and health insurance technologies such as medical AI systems and devices, AI-assisted \n diagnostic tools, algorithms or predictive models used to support clinical decision making, medical \n or insurance health risk assessments, drug addiction risk assessments and associated access alg \n-orithms, wearable technologies, wellness apps, insurance care allocation algorithms, and health\ninsurance cost and underwriting algorithms;\nFinancial system algorithms such as loan allocation algorithms, financial system access determi-\nnation algorithms, credit scoring systems, insurance algorithms including risk assessments, auto\n-mated interest rate determinations, and financial algorithms that apply penalties (e.g., that can\ngarnish wages or withhold tax returns);\n53\n']","The types of automated systems that should be covered by the AI Bill of Rights include those that have the potential to meaningfully impact civil rights, civil liberties, or privacy, equal opportunities, and access to critical resources and services. Examples include speech-related systems, surveillance and criminal justice algorithms, voting-related systems, education-related systems, housing-related systems, employment-related systems, health technologies, and financial system algorithms.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 52, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the significance of content provenance in managing risks associated with AI-generated synthetic content?,"[' \n51 \ngeneral public participants. For example, expert AI red-teamers could modify or verify the \nprompts written by general public AI red-teamers. These approaches may also expand coverage \nof the AI risk attack surface. \n• \nHuman / AI: Performed by GAI in combination with specialist or non-specialist human teams. \nGAI-led red-teaming can be more cost effective than human red-teamers alone. Human or GAI-\nled AI red-teaming may be better suited for eliciting different types of harms. \n \nA.1.6. Content Provenance \nOverview \nGAI technologies can be leveraged for many applications such as content generation and synthetic data. \nSome aspects of GAI outputs, such as the production of deepfake content, can challenge our ability to \ndistinguish human-generated content from AI-generated synthetic content. To help manage and mitigate \nthese risks, digital transparency mechanisms like provenance data tracking can trace the origin and \nhistory of content. Provenance data tracking and synthetic content detection can help facilitate greater \ninformation access about both authentic and synthetic content to users, enabling better knowledge of \ntrustworthiness in AI systems. When combined with other organizational accountability mechanisms, \ndigital content transparency approaches can enable processes to trace negative outcomes back to their \nsource, improve information integrity, and uphold public trust. 
Provenance data tracking and synthetic \ncontent detection mechanisms provide information about the origin and history of content to assist in \nGAI risk management efforts. \nProvenance metadata can include information about GAI model developers or creators of GAI content, \ndate/time of creation, location, modifications, and sources. Metadata can be tracked for text, images, \nvideos, audio, and underlying datasets. The implementation of provenance data tracking techniques can \nhelp assess the authenticity, integrity, intellectual property rights, and potential manipulations in digital \ncontent. Some well-known techniques for provenance data tracking include digital watermarking, \nmetadata recording, digital fingerprinting, and human authentication, among others. \nProvenance Data Tracking Approaches \nProvenance data tracking techniques for GAI systems can be used to track the history and origin of data \ninputs, metadata, and synthetic content. Provenance data tracking records the origin and history for \ndigital content, allowing its authenticity to be determined. It consists of techniques to record metadata \nas well as overt and covert digital watermarks on content. Data provenance refers to tracking the origin \nand history of input data through metadata and digital watermarking techniques. Provenance data \ntracking processes can include and assist AI Actors across the lifecycle who may not have full visibility or \ncontrol over the various trade-offs and cascading impacts of early-stage model decisions on downstream \nperformance and synthetic outputs. For example, by selecting a watermarking model to prioritize \nrobustness (the durability of a watermark), an AI actor may inadvertently diminish computational \ncomplexity (the resources required to implement watermarking). Organizational risk management \nefforts for enhancing content provenance include: \n• \nTracking provenance of training data and metadata for GAI systems; \n• \nDocumenting provenance data limitations within GAI systems; \n']","Content provenance is significant in managing risks associated with AI-generated synthetic content as it involves digital transparency mechanisms like provenance data tracking, which can trace the origin and history of content. This helps in distinguishing human-generated content from AI-generated synthetic content, facilitating greater information access about both authentic and synthetic content. Provenance data tracking can assist in assessing authenticity, integrity, intellectual property rights, and potential manipulations in digital content, thereby improving information integrity and upholding public trust.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 54, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do legal protections play in addressing algorithmic discrimination?,"[' \xad\xad\xad\xad\xad\xad\xad\nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable \nway. 
\nAlgorithmic \ndiscrimination \noccurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex \n(including \npregnancy, \nchildbirth, \nand \nrelated \nmedical \nconditions, \ngender \nidentity, \nintersex \nstatus, \nand \nsexual \norientation), religion, age, national origin, disability, veteran status, \ngenetic infor-mation, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n']","The context mentions that algorithmic discrimination may violate legal protections depending on specific circumstances, indicating that legal protections play a role in addressing algorithmic discrimination.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures should be taken to ensure that surveillance technologies do not infringe on privacy and civil liberties?,"[' \n \n \n \n \nSECTION TITLE\nDATA PRIVACY\nYou should be protected from abusive data practices via built-in protections and you \nshould have agency over how data about you is used. You should be protected from violations of \nprivacy through design choices that ensure such protections are included by default, including ensuring that \ndata collection conforms to reasonable expectations and that only data strictly necessary for the specific \ncontext is collected. Designers, developers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \nways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \nused. Systems should not employ user experience and design decisions that obfuscate user choice or burden \nusers with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases \nwhere it can be appropriately and meaningfully given. 
Any consent requests should be brief, be understandable \nin plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \nrestrictions for data and inferences related to sensitive domains, including health, work, education, criminal \njustice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \nrelated inferences should only be used for necessary functions, and you should be protected by ethical review \nand use prohibitions. You and your communities should be free from unchecked surveillance; surveillance \ntechnologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \npotential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the use of such surveillance \ntechnologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to \nreporting that confirms your data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or access. \nNOTICE AND EXPLANATION\nYou should know that an automated system is being used and understand how and why it \ncontributes to outcomes that impact you. Designers, developers, and deployers of automated systems \nshould provide generally accessible plain language documentation including clear descriptions of the overall \nsystem functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice \nshould be kept up-to-date and people impacted by the system should be notified of significant use case or key \nfunctionality changes. You should know how and why an outcome impacting you was determined by an \nautomated system, including when the automated system is not the sole input determining the outcome. \nAutomated systems should provide explanations that are technically valid, meaningful and useful to you and to \nany operators or others who need to understand the system, and calibrated to the level of risk based on the \ncontext. Reporting that includes summary information about these automated systems in plain language and \nassessments of the clarity and quality of the notice and explanations should be made public whenever possible. \n6\n']","Surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. 
Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 5, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the requirements for employers regarding workplace surveillance during a labor dispute?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nDATA PRIVACY \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe Privacy Act of 1974 requires privacy protections for personal information in federal \nrecords systems, including limits on data retention, and also provides individuals a general \nright to access and correct their data. Among other things, the Privacy Act limits the storage of individual \ninformation in federal systems of records, illustrating the principle of limiting the scope of data retention. Under \nthe Privacy Act, federal agencies may only retain data about an individual that is “relevant and necessary” to \naccomplish an agency’s statutory purpose or to comply with an Executive Order of the President. The law allows \nfor individuals to be able to access any of their individual information stored in a federal system of records, if not \nincluded under one of the systems of records exempted pursuant to the Privacy Act. In these cases, federal agen\xad\ncies must provide a method for an individual to determine if their personal information is stored in a particular \nsystem of records, and must provide procedures for an individual to contest the contents of a record about them. \nFurther, the Privacy Act allows for a cause of action for an individual to seek legal relief if a federal agency does not \ncomply with the Privacy Act’s requirements. Among other things, a court may order a federal agency to amend or \ncorrect an individual’s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, \nor incomplete record results in an adverse determination about an individual’s “qualifications, character, rights, … \nopportunities…, or benefits.” \nNIST’s Privacy Framework provides a comprehensive, detailed and actionable approach for \norganizations to manage privacy risks. The NIST Framework gives organizations ways to identify and \ncommunicate their privacy risks and goals to support ethical decision-making in system, product, and service \ndesign or deployment, as well as the measures they are taking to demonstrate compliance with applicable laws \nor regulations. 
It has been voluntarily adopted by organizations across many different sectors around the world.78\nA school board’s attempt to surveil public school students—undertaken without \nadequate community input—sparked a state-wide biometrics moratorium.79 Reacting to a plan in \nthe city of Lockport, New York, the state’s legislature banned the use of facial recognition systems and other \n“biometric identifying technology” in schools until July 1, 2022.80 The law additionally requires that a report on \nthe privacy, civil rights, and civil liberties implications of the use of such technologies be issued before \nbiometric identification technologies can be used in New York schools. \nFederal law requires employers, and any consultants they may retain, to report the costs \nof surveilling employees in the context of a labor dispute, providing a transparency \nmechanism to help protect worker organizing. Employers engaging in workplace surveillance ""where \nan object there-of, directly or indirectly, is […] to obtain information concerning the activities of employees or a \nlabor organization in connection with a labor dispute"" must report expenditures relating to this surveillance to \nthe Department of Labor Office of Labor-Management Standards, and consultants who employers retain for \nthese purposes must also file reports regarding their activities.81\nPrivacy choices on smartphones show that when technologies are well designed, privacy \nand data agency can be meaningful and not overwhelming. These choices—such as contextual, timely \nalerts about location tracking—are brief, direct, and use-specific. Many of the expectations listed here for \nprivacy by design and use-specific consent mirror those distributed to developers as best practices when \ndeveloping for smart phone devices,82 such as being transparent about how user data will be used, asking for app \npermissions during their use so that the use-context will be clear to users, and ensuring that the app will still \nwork if users deny (or later revoke) some permissions. \n39\n']","Federal law requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute. Employers engaging in workplace surveillance aimed at obtaining information concerning the activities of employees or a labor organization in connection with a labor dispute must report expenditures relating to this surveillance to the Department of Labor Office of Labor-Management Standards, and consultants who employers retain for these purposes must also file reports regarding their activities.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 38, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the importance of documenting roles and responsibilities related to managing AI risks within an organization?,"[' \n17 \nGOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that \ndoes not increase risks or decrease the organization’s trustworthiness. 
\nAction ID \nSuggested Action \nGAI Risks \nGV-1.7-001 Protocols are put in place to ensure GAI systems are able to be deactivated when \nnecessary. \nInformation Security; Value Chain \nand Component Integration \nGV-1.7-002 \nConsider the following factors when decommissioning GAI systems: Data \nretention requirements; Data security, e.g., containment, protocols, Data leakage \nafter decommissioning; Dependencies between upstream, downstream, or other \ndata, internet of things (IOT) or AI systems; Use of open-source data or models; \nUsers’ emotional entanglement with GAI functions. \nHuman-AI Configuration; \nInformation Security; Value Chain \nand Component Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nGOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are \ndocumented and are clear to individuals and teams throughout the organization. \nAction ID \nSuggested Action \nGAI Risks \nGV-2.1-001 \nEstablish organizational roles, policies, and procedures for communicating GAI \nincidents and performance to AI Actors and downstream stakeholders (including \nthose potentially impacted), via community or official resources (e.g., AI incident \ndatabase, AVID, CVE, NVD, or OECD AI incident monitor). \nHuman-AI Configuration; Value \nChain and Component Integration \nGV-2.1-002 Establish procedures to engage teams for GAI system incident response with \ndiverse composition and responsibilities based on the particular incident type. \nHarmful Bias and Homogenization \nGV-2.1-003 Establish processes to verify the AI Actors conducting GAI incident response tasks \ndemonstrate and maintain the appropriate skills and training. \nHuman-AI Configuration \nGV-2.1-004 When systems may raise national security risks, involve national security \nprofessionals in mapping, measuring, and managing those risks. \nCBRN Information or Capabilities; \nDangerous, Violent, or Hateful \nContent; Information Security \nGV-2.1-005 \nCreate mechanisms to provide protections for whistleblowers who report, based \non reasonable belief, when the organization violates relevant laws or poses a \nspecific and empirically well-substantiated negative risk to public safety (or has \nalready caused harm). \nCBRN Information or Capabilities; \nDangerous, Violent, or Hateful \nContent \nAI Actor Tasks: Governance and Oversight \n \n']","The importance of documenting roles and responsibilities related to managing AI risks within an organization is to ensure that these roles and lines of communication are clear to individuals and teams throughout the organization. 
This clarity helps in mapping, measuring, and managing AI risks effectively.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 20, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the importance of assessing the proportion of synthetic to non-synthetic training data in AI model development?,"[' \n37 \nMS-2.11-005 \nAssess the proportion of synthetic to non-synthetic training data and verify \ntraining data is not overly homogenous or GAI-produced to mitigate concerns of \nmodel collapse. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-Users, \nOperation and Monitoring, TEVV \n \nMEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP \nfunction – are assessed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.12-001 Assess safety to physical environments when deploying GAI systems. \nDangerous, Violent, or Hateful \nContent \nMS-2.12-002 Document anticipated environmental impacts of model development, \nmaintenance, and deployment in product design decisions. \nEnvironmental \nMS-2.12-003 \nMeasure or estimate environmental impacts (e.g., energy and water \nconsumption) for training, fine tuning, and deploying models: Verify tradeoffs \nbetween resources used at inference time versus additional resources required \nat training time. \nEnvironmental \nMS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and \napplications, and address green-washing concerns. \nEnvironmental \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']","The importance of assessing the proportion of synthetic to non-synthetic training data in AI model development is to verify that the training data is not overly homogenous or generated by Generative AI (GAI), which helps mitigate concerns of model collapse.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 40, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the significance of technological diffusion in the context of integrating AI technologies within communities?,"["" \n \n \n \nAPPENDIX\nPanelists discussed the benefits of AI-enabled systems and their potential to build better and more \ninnovative infrastructure. 
They individually noted that while AI technologies may be new, the process of \ntechnological diffusion is not, and that it was critical to have thoughtful and responsible development and \nintegration of technology within communities. Some panelists suggested that the integration of technology \ncould benefit from examining how technological diffusion has worked in the realm of urban planning: \nlessons learned from successes and failures there include the importance of balancing ownership rights, use \nrights, and community health, safety and welfare, as well ensuring better representation of all voices, \nespecially those traditionally marginalized by technological advances. Some panelists also raised the issue of \npower structures – providing examples of how strong transparency requirements in smart city projects \nhelped to reshape power and give more voice to those lacking the financial or political power to effect change. \nIn discussion of technical and governance interventions that that are needed to protect against the harms \nof these technologies, various panelists emphasized the need for transparency, data collection, and \nflexible and reactive policy development, analogous to how software is continuously updated and deployed. \nSome panelists pointed out that companies need clear guidelines to have a consistent environment for \ninnovation, with principles and guardrails being the key to fostering responsible innovation. \nPanel 2: The Criminal Justice System. This event explored current and emergent uses of technology in \nthe criminal justice system and considered how they advance or undermine public safety, justice, and \ndemocratic values. \nWelcome: \n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nBen Winters, Counsel, Electronic Privacy Information Center\nModerator: Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity \nPanelists: \n•\nSean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab\n•\nKristian Lum, Researcher\n•\nJumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers\n•\nStanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University\nCollege of Medicine\n•\nMyaisha Hayes, Campaign Strategies Director, MediaJustice\nPanelists discussed uses of technology within the criminal justice system, including the use of predictive \npolicing, pretrial risk assessments, automated license plate readers, and prison communication tools. The \ndiscussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; \nsuch strategies might include data-driven approaches, but the focus on safety should be primary, and \ntechnology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. 
Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, various panelists emphasized that transparency is important but is not enough to achieve \naccountability. Some panelists discussed their individual views on additional system needs for validity, and \nagreed upon the importance of advisory boards and compensated community input early in the design process \n(before the technology is built and instituted). Various panelists also emphasized the importance of regulation \nthat includes limits to the type and cost of such technologies. \n56\n"", "" \n \n \n \n \nAPPENDIX\nPanel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in \nthe design of technology that can help support a democratic vision for AI. It included discussion of the \ntechnical aspects \nof \ndesigning \nnon-discriminatory \ntechnology, \nexplainable \nAI, \nhuman-computer \ninteraction with an emphasis on community participation, and privacy-aware design. \nWelcome:\n•\nSorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nJ. Bob Alotta, Vice President for Global Programs, Mozilla Foundation\n•\nNavrina Singh, Board Member, Mozilla Foundation\nModerator: Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S \nFederal Trade Commission. \nPanelists: \n•\nLiz O’Sullivan, CEO, Parity AI\n•\nTimnit Gebru, Independent Scholar\n•\nJennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City\n•\nPamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director,\nSocio-technical Interaction Research (STIR) Lab\n•\nSeny Kamara, Associate Professor of Computer Science, Brown University\nEach panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for \nbiased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and \nunderstanding of the algorithmic systems. The interventions and key needs various panelists put forward as \nnecessary to the future design of critical AI systems included ongoing transparency, value sensitive and \nparticipatory design, explanations designed for relevant stakeholders, and public consultation. \nVarious \npanelists emphasized the importance of placing trust in people, not technologies, and in engaging with \nimpacted communities to understand the potential harms of technologies and build protection by design into \nfuture systems. \nPanel 5: Social Welfare and Development. This event explored current and emerging uses of technology to \nimplement or improve social welfare systems, social development programs, and other systems that can impact \nlife chances. 
\nWelcome:\n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nAnne-Marie Slaughter, CEO, New America\nModerator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance \nModernization, Office of the Secretary, Department of Labor \nPanelists:\n•\nBlake Hall, CEO and Founder, ID.Me\n•\nKarrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign\n•\nChristiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's\nCenter for Human Rights and Global Justice\n58\n""]","Technological diffusion is significant in the context of integrating AI technologies within communities as it emphasizes the importance of thoughtful and responsible development and integration of technology. Panelists noted that examining how technological diffusion has worked in urban planning can provide lessons on balancing ownership rights, use rights, and community health, safety, and welfare, ensuring better representation of all voices, especially those traditionally marginalized by technological advances.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 55, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 57, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of ballot curing laws in the voting process?,"["" \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nHealthcare “navigators” help people find their way through online signup forms to choose \nand obtain healthcare. 
A Navigator is “an individual or organization that's trained and able to help \nconsumers, small businesses, and their employees as they look for health coverage options through the \nMarketplace (a government web site), including completing eligibility and enrollment forms.”106 For \nthe 2022 plan year, the Biden-Harris Administration increased funding so that grantee organizations could \n“train and certify more than 1,500 Navigators to help uninsured consumers find affordable and comprehensive \nhealth coverage.”107\nThe customer service industry has successfully integrated automated services such as \nchat-bots and AI-driven call response systems with escalation to a human support \nteam.108 Many businesses now use partially automated customer service platforms that help answer customer \nquestions and compile common problems for human agents to review. These integrated human-AI \nsystems allow companies to provide faster customer care while maintaining human agents to answer \ncalls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to \nsuccessful customer service.109\nBallot curing laws in at least 24 states require a fallback system that allows voters to \ncorrect their ballot and have it counted in the case that a voter signature matching \nalgorithm incorrectly flags their ballot as invalid or there is another issue with their \nballot, and review by an election official does not rectify the problem. Some federal \ncourts have found that such cure procedures are constitutionally required.110 \nBallot \ncuring processes vary among states, and include direct phone calls, emails, or mail contact by election \nofficials.111 Voters are asked to provide alternative information or a new signature to verify the validity of their \nballot. \n52\n""]","Ballot curing laws are designed to allow voters to correct their ballot and have it counted in cases where a voter signature matching algorithm incorrectly flags their ballot as invalid or when there are other issues with their ballot. These laws ensure that voters have a fallback system to verify the validity of their ballot, which may include direct contact from election officials.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 51, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role does technology play in implementing or improving social welfare systems?,"["" \n \n \n \n \nAPPENDIX\nPanel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in \nthe design of technology that can help support a democratic vision for AI. It included discussion of the \ntechnical aspects \nof \ndesigning \nnon-discriminatory \ntechnology, \nexplainable \nAI, \nhuman-computer \ninteraction with an emphasis on community participation, and privacy-aware design. \nWelcome:\n•\nSorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nJ. 
Bob Alotta, Vice President for Global Programs, Mozilla Foundation\n•\nNavrina Singh, Board Member, Mozilla Foundation\nModerator: Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S \nFederal Trade Commission. \nPanelists: \n•\nLiz O’Sullivan, CEO, Parity AI\n•\nTimnit Gebru, Independent Scholar\n•\nJennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City\n•\nPamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director,\nSocio-technical Interaction Research (STIR) Lab\n•\nSeny Kamara, Associate Professor of Computer Science, Brown University\nEach panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for \nbiased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and \nunderstanding of the algorithmic systems. The interventions and key needs various panelists put forward as \nnecessary to the future design of critical AI systems included ongoing transparency, value sensitive and \nparticipatory design, explanations designed for relevant stakeholders, and public consultation. \nVarious \npanelists emphasized the importance of placing trust in people, not technologies, and in engaging with \nimpacted communities to understand the potential harms of technologies and build protection by design into \nfuture systems. \nPanel 5: Social Welfare and Development. This event explored current and emerging uses of technology to \nimplement or improve social welfare systems, social development programs, and other systems that can impact \nlife chances. \nWelcome:\n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nAnne-Marie Slaughter, CEO, New America\nModerator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance \nModernization, Office of the Secretary, Department of Labor \nPanelists:\n•\nBlake Hall, CEO and Founder, ID.Me\n•\nKarrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign\n•\nChristiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's\nCenter for Human Rights and Global Justice\n58\n""]",The answer to given question is not present in context,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 57, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What actions are suggested to address risks associated with intellectual property infringement in organizational GAI systems?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. 
\nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']","The suggested action to address risks associated with intellectual property infringement in organizational GAI systems is to compile statistics on actual policy violations, take-down requests, and intellectual property infringement, and analyze transparency reports across demographic and language groups.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What problems does AI-enabled nudification technology seek to address and protect against?,"[' \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \n•\nAI-enabled “nudification” technology that creates images where people appear to be nude—including apps that\nenable non-technical users to create or alter images of individuals without their consent—has proliferated at an\nalarming rate. Such technology is becoming a common form of image-based abuse that disproportionately\nimpacts women. As these tools become more sophisticated, they are producing altered images that are increasing\xad\nly realistic and are difficult for both humans and AI to detect as inauthentic. Regardless of authenticity, the expe\xad\nrience of harm to victims of non-consensual intimate images can be devastatingly real—affecting their personal\nand professional lives, and impacting their mental and physical health.10\n•\nA company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its driv\xad\ners, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond\ntheir control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.11\n17\n']","AI-enabled nudification technology seeks to address and protect against image-based abuse, particularly the creation of non-consensual intimate images that disproportionately impact women. 
It aims to combat the proliferation of apps that allow users to create or alter images of individuals without their consent, which can lead to devastating harm to victims.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 16, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What precautions should be taken when using derived data sources in automated systems?,"[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. 
Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20\n']","Precautions that should be taken when using derived data sources in automated systems include careful tracking and validation of derived data, as it may be high-risk and could lead to feedback loops, compounded harm, or inaccurate results. Such data should be validated against the risk of collateral consequences.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are indirect prompt injection attacks and how do they exploit vulnerabilities in GAI-integrated applications?,"[' \n11 \nvalue chain (e.g., data inputs, processing, GAI training, or deployment environments), conventional \ncybersecurity practices may need to adapt or evolve. \nFor instance, prompt injection involves modifying what input is provided to a GAI system so that it \nbehaves in unintended ways. In direct prompt injections, attackers might craft malicious prompts and \ninput them directly to a GAI system, with a variety of downstream negative consequences to \ninterconnected systems. Indirect prompt injection attacks occur when adversaries remotely (i.e., without \na direct interface) exploit LLM-integrated applications by injecting prompts into data likely to be \nretrieved. Security researchers have already demonstrated how indirect prompt injections can exploit \nvulnerabilities by stealing proprietary data or running malicious code remotely on a machine. 
Merely \nquerying a closed production model can elicit previously undisclosed information about that model. \nAnother cybersecurity risk to GAI is data poisoning, in which an adversary compromises a training \ndataset used by a model to manipulate its outputs or operation. Malicious tampering with data or parts \nof the model could exacerbate risks associated with GAI system outputs. \nTrustworthy AI Characteristics: Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n2.10. \nIntellectual Property \nIntellectual property risks from GAI systems may arise where the use of copyrighted works is not a fair \nuse under the fair use doctrine. If a GAI system’s training data included copyrighted material, GAI \noutputs displaying instances of training data memorization (see Data Privacy above) could infringe on \ncopyright. \nHow GAI relates to copyright, including the status of generated content that is similar to but does not \nstrictly copy work protected by copyright, is currently being debated in legal fora. Similar discussions are \ntaking place regarding the use or emulation of personal identity, likeness, or voice without permission. \nTrustworthy AI Characteristics: Accountable and Transparent, Fair with Harmful Bias Managed, Privacy \nEnhanced \n2.11. \nObscene, Degrading, and/or Abusive Content \nGAI can ease the production of and access to illegal non-consensual intimate imagery (NCII) of adults, \nand/or child sexual abuse material (CSAM). GAI-generated obscene, abusive or degrading content can \ncreate privacy, psychological and emotional, and even physical harms, and in some cases may be illegal. \nGenerated explicit or obscene AI content may include highly realistic “deepfakes” of real individuals, \nincluding children. The spread of this kind of material can have downstream negative consequences: in \nthe context of CSAM, even if the generated images do not resemble specific individuals, the prevalence \nof such images can divert time and resources from efforts to find real-world victims. Outside of CSAM, \nthe creation and spread of NCII disproportionately impacts women and sexual minorities, and can have \nsubsequent negative consequences including decline in overall mental health, substance abuse, and \neven suicidal thoughts. \nData used for training GAI models may unintentionally include CSAM and NCII. A recent report noted \nthat several commonly used GAI training datasets were found to contain hundreds of known images of \n']",Indirect prompt injection attacks occur when adversaries remotely exploit LLM-integrated applications by injecting prompts into data likely to be retrieved. 
These attacks can exploit vulnerabilities by stealing proprietary data or running malicious code remotely on a machine.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 14, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the significance of digital content transparency in relation to the societal impacts of AI?,"["" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. 
\nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","The significance of digital content transparency in relation to the societal impacts of AI lies in providing input for training materials about the capabilities and limitations of GAI systems. This transparency is crucial for AI actors, professionals, and the public to understand the societal impacts of AI and the role of diverse and inclusive content generation.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of engaging in threat modeling for GAI systems?,"[' \n18 \nGOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations \nand oversight of AI systems. \nAction ID \nSuggested Action \nGAI Risks \nGV-3.2-001 \nPolicies are in place to bolster oversight of GAI systems with independent \nevaluations or assessments of GAI models or systems where the type and \nrobustness of evaluations are proportional to the identified risks. \nCBRN Information or Capabilities; \nHarmful Bias and Homogenization \nGV-3.2-002 \nConsider adjustment of organizational roles and components across lifecycle \nstages of large or complex GAI systems, including: Test and evaluation, validation, \nand red-teaming of GAI systems; GAI content moderation; GAI system \ndevelopment and engineering; Increased accessibility of GAI tools, interfaces, and \nsystems, Incident response and containment. \nHuman-AI Configuration; \nInformation Security; Harmful Bias \nand Homogenization \nGV-3.2-003 \nDefine acceptable use policies for GAI interfaces, modalities, and human-AI \nconfigurations (i.e., for chatbots and decision-making tasks), including criteria for \nthe kinds of queries GAI applications should refuse to respond to. \nHuman-AI Configuration \nGV-3.2-004 \nEstablish policies for user feedback mechanisms for GAI systems which include \nthorough instructions and any mechanisms for recourse. \nHuman-AI Configuration \nGV-3.2-005 \nEngage in threat modeling to anticipate potential risks from GAI systems. \nCBRN Information or Capabilities; \nInformation Security \nAI Actors: AI Design \n \nGOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, \ndevelopment, deployment, and uses of AI systems to minimize potential negative impacts. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.1-001 \nEstablish policies and procedures that address continual improvement processes \nfor GAI risk measurement. Address general risks associated with a lack of \nexplainability and transparency in GAI systems by using ample documentation and \ntechniques such as: application of gradient-based attributions, occlusion/term \nreduction, counterfactual prompts and prompt engineering, and analysis of \nembeddings; Assess and update risk measurement approaches at regular \ncadences. 
\nConfabulation \nGV-4.1-002 \nEstablish policies, procedures, and processes detailing risk measurement in \ncontext of use with standardized measurement protocols and structured public \nfeedback exercises such as AI red-teaming or independent external evaluations. \nCBRN Information and Capability; \nValue Chain and Component \nIntegration \n']",Engaging in threat modeling for GAI systems is intended to anticipate potential risks from these systems.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 21, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do GAI systems play in augmenting cybersecurity attacks?,"[' \n10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations. \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. \nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning. 
\nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published. \n']","GAI systems may augment cybersecurity attacks by advancing offensive cyber capabilities such as hacking, malware, and phishing. Reports indicate that large language models (LLMs) can discover vulnerabilities in systems and write code to exploit them. Sophisticated threat actors might develop GAI-powered security co-pilots to inform attackers on how to evade threat detection and escalate privileges after gaining system access.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 13, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role does user consent play in the collection and use of personal data?,"['You should be protected from abusive data practices via built-in \nprotections and you should have agency over how data about \nyou is used. You should be protected from violations of privacy through \ndesign choices that ensure such protections are included by default, including \nensuring that data collection conforms to reasonable expectations and that \nonly data strictly necessary for the specific context is collected. Designers, de\xad\nvelopers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and de\xad\nletion of your data in appropriate ways and to the greatest extent possible; \nwhere not possible, alternative privacy by design safeguards should be used. \nSystems should not employ user experience and design decisions that obfus\xad\ncate user choice or burden users with defaults that are privacy invasive. Con\xad\nsent should only be used to justify collection of data in cases where it can be \nappropriately and meaningfully given. Any consent requests should be brief, \nbe understandable in plain language, and give you agency over data collection \nand the specific context of use; current hard-to-understand no\xad\ntice-and-choice practices for broad uses of data should be changed. 
Enhanced \nprotections and restrictions for data and inferences related to sensitive do\xad\nmains, including health, work, education, criminal justice, and finance, and \nfor data pertaining to youth should put you first. In sensitive domains, your \ndata and related inferences should only be used for necessary functions, and \nyou should be protected by ethical review and use prohibitions. You and your \ncommunities should be free from unchecked surveillance; surveillance tech\xad\nnologies should be subject to heightened oversight that includes at least \npre-deployment assessment of their potential harms and scope limits to pro\xad\ntect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the \nuse of such surveillance technologies is likely to limit rights, opportunities, or \naccess. Whenever possible, you should have access to reporting that confirms \nyour data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or \naccess. \nDATA PRIVACY\n30\n']","User consent plays a crucial role in the collection and use of personal data, as it should only be used to justify data collection in cases where it can be appropriately and meaningfully given. Consent requests should be brief, understandable in plain language, and provide individuals with agency over data collection and its specific context of use.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 29, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role do algorithmic impact assessments play in the expectations for automated systems?,"["" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and \nexplanations as to how and why a decision was made or an action was taken by the system. These expectations are \nexplained below. \nProvide clear, timely, understandable, and accessible notice of use and explanations \xad\nGenerally accessible plain language documentation. The entity responsible for using the automated \nsystem should ensure that documentation describing the overall system (including any human components) is \npublic and easy to find. The documentation should describe, in plain language, how the system works and how \nany automated component is used to determine an action or decision. It should also include expectations about \nreporting described throughout this framework, such as the algorithmic impact assessments described as \npart of Algorithmic Discrimination Protections. \nAccountable. Notices should clearly identify the entity responsible for designing each component of the \nsystem and the entity using it. \nTimely and up-to-date. 
Users should receive notice of the use of automated systems in advance of using or \nwhile being impacted by the technology. An explanation should be available with the decision itself, or soon \nthereafter. Notice should be kept up-to-date and people impacted by the system should be notified of use case \nor key functionality changes. \nBrief and clear. Notices and explanations should be assessed, such as by research on users’ experiences, \nincluding user testing, to ensure that the people using or impacted by the automated system are able to easily \nfind notices and explanations, read them quickly, and understand and act on them. This includes ensuring that \nnotices and explanations are accessible to users with disabilities and are available in the language(s) and read-\ning level appropriate for the audience. Notices and explanations may need to be available in multiple forms, \n(e.g., on paper, on a physical sign, or online), in order to meet these expectations and to be accessible to the \nAmerican public. \nProvide explanations as to how and why a decision was made or an action was taken by an \nautomated system \nTailored to the purpose. Explanations should be tailored to the specific purpose for which the user is \nexpected to use the explanation, and should clearly state that purpose. An informational explanation might \ndiffer from an explanation provided to allow for the possibility of recourse, an appeal, or one provided in the \ncontext of a dispute or contestation process. For the purposes of this framework, 'explanation' should be \nconstrued broadly. An explanation need not be a plain-language statement about causality but could consist of \nany mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the \nstated purpose. Tailoring should be assessed (e.g., via user experience research). \nTailored to the target of the explanation. Explanations should be targeted to specific audiences and \nclearly state that audience. An explanation provided to the subject of a decision might differ from one provided \nto an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience \nresearch). \n43\n""]",The answer to given question is not present in context,simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 42, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the purpose of establishing transparency policies for GAI applications?,"[' \n14 \nGOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.2-001 \nEstablish transparency policies and processes for documenting the origin and \nhistory of training data and generated data for GAI applications to advance digital \ncontent transparency, while balancing the proprietary nature of training \napproaches. 
\nData Privacy; Information \nIntegrity; Intellectual Property \nGV-1.2-002 \nEstablish policies to evaluate risk-relevant capabilities of GAI and robustness of \nsafety measures, both prior to deployment and on an ongoing basis, through \ninternal and external evaluations. \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based \non the organization’s risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. \nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002 \nEstablish minimum thresholds for performance or assurance criteria and review as \npart of deployment approval (“go/”no-go”) policies, procedures, and processes, \nwith reviewed processes and approval thresholds reflecting measurement of GAI \ncapabilities and risks. \nCBRN Information or Capabilities; \nConfabulation; Dangerous, \nViolent, or Hateful Content \nGV-1.3-003 \nEstablish a test plan and response policy, before developing highly capable models, \nto periodically evaluate whether the model may misuse CBRN information or \ncapabilities and/or offensive cyber capabilities. \nCBRN Information or Capabilities; \nInformation Security \n']","The purpose of establishing transparency policies for GAI applications is to document the origin and history of training data and generated data, which advances digital content transparency while balancing the proprietary nature of training approaches.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 17, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of the NIST AI Risk Management Framework?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. 
\xad\xad\nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles—while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and \nrespectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) \nsafe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. \nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. \nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate \nrequirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-\nBased Participatory Research Program developed by the National Institutes of Health and the participatory \ntechnology assessments developed by the National Oceanic and Atmospheric Administration.18\nThe National Institute of Standards and Technology (NIST) is developing a risk \nmanagement framework to better manage risks posed to individuals, organizations, and \nsociety by AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended for \nvoluntary use to help incorporate trustworthiness considerations into the design, development, use, and \nevaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-\ndriven, open, transparent, and collaborative process that includes workshops and other opportunities to provide \ninput. 
The NIST framework aims to foster the development of innovative approaches to address \ncharacteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, \nrobustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of \nharmful uses. The NIST framework will consider and encompass principles such as \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. \n21\n']","The purpose of the NIST AI Risk Management Framework is to help manage risks posed to individuals, organizations, and society by AI. It aims to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 20, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What issues related to bias and discrimination are associated with the use of automated systems in decision-making?,"[' \nSECTION TITLE\xad\nFOREWORD\nAmong the great challenges posed to democracy today is the use of technology, data, and automated systems in \nways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and \nprevent our access to critical resources or services. These problems are well documented. In America and around \nthe world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used \nin hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s \nopportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or \nconsent. \nThese outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraor-\ndinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm \npaths, to algorithms that can identify diseases in patients. These tools now drive important decisions across \nsectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, \nthese tools hold the potential to redefine every part of our society and make life better for everyone. \nThis important progress must not come at the price of civil rights or democratic values, foundational American \nprinciples that President Biden has affirmed as a cornerstone of his Administration. 
On his first day in office, the \nPresident ordered the full Federal government to work to root out inequity, embed fairness in decision-\nmaking processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The \nPresident has spoken forcefully about the urgent challenges posed to democracy today and has regularly called \non people of conscience to act to preserve civil rights—including the right to privacy, which he has called “the \nbasis for so many more rights that we have come to take for granted that are ingrained in the fabric of this \ncountry.”2\nTo advance President Biden’s vision, the White House Office of Science and Technology Policy has identified \nfive principles that should guide the design, use, and deployment of automated systems to protect the American \npublic in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that \nprotects all people from these threats—and uses technologies in ways that reinforce our highest values. \nResponding to the experiences of the American public, and informed by insights from researchers, \ntechnologists, advocates, journalists, and policymakers, this framework is accompanied by a technical \ncompanion—a handbook for anyone seeking to incorporate these protections into policy and practice, including \ndetailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3\n']","Automated systems in decision-making have been associated with issues such as reflecting and reproducing existing unwanted inequities, embedding new harmful bias and discrimination, and being unsafe or ineffective in areas like patient care, hiring, and credit decisions.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 2, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the importance of pre-deployment testing in the AI lifecycle?,"[' \n48 \n• Data protection \n• Data retention \n• Consistency in use of defining key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n• Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV practices \n• Secure software development practices \n• Stakeholder engagement \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \nas well as different levels of human-AI configurations can help to decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment between systems and users. These practices are just \none example of adapting existing governance protocols for GAI contexts. \nA.1.3. 
Third-Party Considerations \nOrganizations may seek to acquire, embed, incorporate, or use open-source or proprietary third-party \nGAI models, systems, or generated data for various applications across an enterprise. Use of these GAI \ntools and inputs has implications for all functions of the organization – including but not limited to \nacquisition, human resources, legal, compliance, and IT services – regardless of whether they are carried \nout by employees or third parties. Many of the actions cited above are relevant and options for \naddressing third-party considerations. \nThird party GAI integrations may give rise to increased intellectual property, data privacy, or information \nsecurity risks, pointing to the need for clear guidelines for transparency and risk management regarding \nthe collection and use of third-party data for model inputs. Organizations may consider varying risk \ncontrols for foundation models, fine-tuned models, and embedded tools, enhanced processes for \ninteracting with external GAI technologies or service providers. Organizations can apply standard or \nexisting risk controls and processes to proprietary or open-source GAI technologies, data, and third-party \nservice providers, including acquisition and procurement due diligence, requests for software bills of \nmaterials (SBOMs), application of service level agreements (SLAs), and statement on standards for \nattestation engagement (SSAE) reports to help with third-party transparency and risk management for \nGAI systems. \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in which GAI systems may be developed, used, and repurposed \ncomplicates risk mapping and pre-deployment measurement efforts. Robust test, evaluation, validation, \nand verification (TEVV) processes can be iteratively applied – and documented – in early stages of the AI \nlifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous \n']","Pre-deployment testing is important because the diverse ways and contexts in which GAI systems may be developed, used, and repurposed complicate risk mapping and pre-deployment measurement efforts. Robust test, evaluation, validation, and verification (TEVV) processes can be iteratively applied and documented in the early stages of the AI lifecycle, ensuring that systems are properly assessed before deployment.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 51, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What role do civil liberties play in the context of surveillance systems?,"[' \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nProtect the public from unchecked surveillance \nHeightened oversight of surveillance. 
Surveillance or monitoring systems should be subject to \nheightened oversight that includes at a minimum assessment of potential harms during design (before deploy\xad\nment) and in an ongoing manner, to ensure that the American public’s rights, opportunities, and access are \nprotected. This assessment should be done before deployment and should give special attention to ensure \nthere is not algorithmic discrimination, especially based on community membership, when deployed in a \nspecific real-world context. Such assessment should then be reaffirmed in an ongoing manner as long as the \nsystem is in use. \nLimited and proportionate surveillance. Surveillance should be avoided unless it is strictly necessary \nto achieve a legitimate purpose and it is proportionate to the need. Designers, developers, and deployers of \nsurveillance systems should use the least invasive means of monitoring available and restrict monitoring to the \nminimum number of subjects possible. To the greatest extent possible consistent with law enforcement and \nnational security needs, individuals subject to monitoring should be provided with clear and specific notice \nbefore it occurs and be informed about how the data gathered through surveillance will be used. \nScope limits on surveillance to protect rights and democratic values. Civil liberties and civil \nrights must not be limited by the threat of surveillance or harassment facilitated or aided by an automated \nsystem. Surveillance systems should not be used to monitor the exercise of democratic rights, such as voting, \nprivacy, peaceful assembly, speech, or association, in a way that limits the exercise of civil rights or civil liber\xad\nties. Information about or algorithmically-determined assumptions related to identity should be carefully \nlimited if used to target or guide surveillance systems in order to avoid algorithmic discrimination; such iden\xad\ntity-related information includes group characteristics or affiliations, geographic designations, location-based \nand association-based inferences, social networks, and biometrics. Continuous surveillance and monitoring \nsystems should not be used in physical or digital workplaces (regardless of employment status), public educa\xad\ntional institutions, and public accommodations. Continuous surveillance and monitoring systems should not \nbe used in a way that has the effect of limiting access to critical resources or services or suppressing the exer\xad\ncise of rights, even where the organization is not under a particular duty to protect those rights. \nProvide the public with mechanisms for appropriate and meaningful consent, access, and \ncontrol over their data \nUse-specific consent. Consent practices should not allow for abusive surveillance practices. Where data \ncollectors or automated systems seek consent, they should seek it for specific, narrow use contexts, for specif\xad\nic time durations, and for use by specific entities. Consent should not extend if any of these conditions change; \nconsent should be re-acquired before using data if the use case changes, a time limit elapses, or data is trans\xad\nferred to another entity (including being shared or sold). Consent requested should be limited in scope and \nshould not request consent beyond what is required. Refusal to provide consent should be allowed, without \nadverse effects, to the greatest extent possible based on the needs of the use case. \nBrief and direct consent requests. 
When seeking consent from users short, plain language consent \nrequests should be used so that users understand for what use contexts, time span, and entities they are \nproviding data and metadata consent. User experience research should be performed to ensure these consent \nrequests meet performance standards for readability and comprehension. This includes ensuring that consent \nrequests are accessible to users with disabilities and are available in the language(s) and reading level appro\xad\npriate for the audience. User experience design choices that intentionally obfuscate or manipulate user \nchoice (i.e., “dark patterns”) should be not be used. \n34\n']","Civil liberties play a crucial role in the context of surveillance systems by ensuring that civil rights are not limited by the threat of surveillance or harassment facilitated by automated systems. Surveillance systems should not monitor the exercise of democratic rights, such as voting, privacy, peaceful assembly, speech, or association, in a way that restricts these civil liberties. Additionally, information related to identity should be carefully limited to avoid algorithmic discrimination, and continuous surveillance should not be used in ways that suppress the exercise of rights.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 33, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What measures are suggested to assess the environmental impact of AI model training and management activities?,"[' \n37 \nMS-2.11-005 \nAssess the proportion of synthetic to non-synthetic training data and verify \ntraining data is not overly homogenous or GAI-produced to mitigate concerns of \nmodel collapse. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-Users, \nOperation and Monitoring, TEVV \n \nMEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP \nfunction – are assessed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.12-001 Assess safety to physical environments when deploying GAI systems. \nDangerous, Violent, or Hateful \nContent \nMS-2.12-002 Document anticipated environmental impacts of model development, \nmaintenance, and deployment in product design decisions. \nEnvironmental \nMS-2.12-003 \nMeasure or estimate environmental impacts (e.g., energy and water \nconsumption) for training, fine tuning, and deploying models: Verify tradeoffs \nbetween resources used at inference time versus additional resources required \nat training time. \nEnvironmental \nMS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and \napplications, and address green-washing concerns. 
\nEnvironmental \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']","The suggested measures to assess the environmental impact of AI model training and management activities include: 1) Assessing safety to physical environments when deploying GAI systems, 2) Documenting anticipated environmental impacts of model development, maintenance, and deployment in product design decisions, 3) Measuring or estimating environmental impacts such as energy and water consumption for training, fine-tuning, and deploying models, and verifying trade-offs between resources used at inference time versus additional resources required at training time, and 4) Verifying the effectiveness of carbon capture or offset programs for GAI training and applications, while addressing green-washing concerns.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 40, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What should designers and developers provide to ensure clear understanding of system functioning in automated systems?,"[' \nYou should know that an automated system is being used, \nand understand how and why it contributes to outcomes \nthat impact you. Designers, developers, and deployers of automat\xad\ned systems should provide generally accessible plain language docu\xad\nmentation including clear descriptions of the overall system func\xad\ntioning and the role automation plays, notice that such systems are in \nuse, the individual or organization responsible for the system, and ex\xad\nplanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality chang\xad\nes. You should know how and why an outcome impacting you was de\xad\ntermined by an automated system, including when the automated \nsystem is not the sole input determining the outcome. Automated \nsystems should provide explanations that are technically valid, \nmeaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible. 
\nNOTICE AND EXPLANATION\n40\n']","Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation that includes clear descriptions of the overall system functioning and the role automation plays.",simple,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 39, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What is the role of the National Institute of Standards and Technology in advancing artificial intelligence?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. 
\n \n']","The National Institute of Standards and Technology (NIST) develops measurements, technology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence (AI) to realize its full commercial and societal benefits without harm to people or the planet. NIST has conducted both fundamental and applied work on AI for more than a decade and is helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy AI.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What is the purpose of using structured feedback mechanisms in relation to AI-generated content?,"[' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. 
\nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration \n']",The purpose of using structured feedback mechanisms in relation to AI-generated content is to solicit and capture user input about the content to detect subtle shifts in quality or alignment with community and societal values.,simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What measures are suggested to ensure information integrity in the deployment of GAI systems?,"[' \n31 \nMS-2.3-004 \nUtilize a purpose-built testing environment such as NIST Dioptra to empirically \nevaluate GAI trustworthy characteristics. \nCBRN Information or Capabilities; \nData Privacy; Confabulation; \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, TEVV \n \nMEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the \nconditions under which the technology was developed are documented. \nAction ID \nSuggested Action \nRisks \nMS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-\nsystematic, and anecdotal assessments. \nHuman-AI Configuration; \nConfabulation \nMS-2.5-002 \nDocument the extent to which human domain knowledge is employed to \nimprove GAI system performance, via, e.g., RLHF, fine-tuning, retrieval-\naugmented generation, content moderation, business rules. \nHuman-AI Configuration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. \nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Configuration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. \nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV \n \n']","Suggested measures to ensure information integrity in the deployment of GAI systems include verifying GAI system training data and TEVV data provenance, and ensuring that fine-tuning or retrieval-augmented generation data is grounded. 
Additionally, it is recommended to review and verify sources and citations in GAI system outputs during pre-deployment risk measurement and ongoing monitoring activities.",simple,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 34, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What steps should automated systems take to avoid bias and support equity for marginalized groups?,"[' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy. \nReporting. Entities responsible for the development or use of automated systems should provide \nreporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who \nperforms the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in \nresponse to the assessment. This algorithmic impact assessment should include at least: the results of any \nconsultation, design stage equity assessments (potentially including qualitative analysis), accessibility \ndesigns and testing, disparity testing, document any remaining disparities, and detail any mitigation \nimplementation and assessments. This algorithmic impact assessment should be made public whenever \npossible. Reporting should be provided in a clear and machine-readable manner using plain language to \nallow for more straightforward public accountability. \n28\nAlgorithmic \nDiscrimination \nProtections \n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. 
The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. 
\n26\nAlgorithmic \nDiscrimination \nProtections \n']","Automated systems should take several steps to avoid bias and support equity for marginalized groups, including conducting proactive equity assessments during the design phase to identify potential discrimination, using representative and robust data that reflects local communities, and guarding against the use of demographic proxies that could lead to algorithmic discrimination. These steps should be integrated throughout the design, development, and deployment processes to ensure ongoing protection against algorithmic discrimination.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 27, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 25, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True Why is user consent important for protecting personal data?,"[' \n \n \n \n \nSECTION TITLE\nDATA PRIVACY\nYou should be protected from abusive data practices via built-in protections and you \nshould have agency over how data about you is used. You should be protected from violations of \nprivacy through design choices that ensure such protections are included by default, including ensuring that \ndata collection conforms to reasonable expectations and that only data strictly necessary for the specific \ncontext is collected. Designers, developers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \nways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \nused. Systems should not employ user experience and design decisions that obfuscate user choice or burden \nusers with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases \nwhere it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable \nin plain language, and give you agency over data collection and the specific context of use; current hard-to\xad\nunderstand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \nrestrictions for data and inferences related to sensitive domains, including health, work, education, criminal \njustice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \nrelated inferences should only be used for necessary functions, and you should be protected by ethical review \nand use prohibitions. 
You and your communities should be free from unchecked surveillance; surveillance \ntechnologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \npotential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the use of such surveillance \ntechnologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to \nreporting that confirms your data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or access. \nNOTICE AND EXPLANATION\nYou should know that an automated system is being used and understand how and why it \ncontributes to outcomes that impact you. Designers, developers, and deployers of automated systems \nshould provide generally accessible plain language documentation including clear descriptions of the overall \nsystem functioning and the role automation plays, notice that such systems are in use, the individual or organiza\xad\ntion responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice \nshould be kept up-to-date and people impacted by the system should be notified of significant use case or key \nfunctionality changes. You should know how and why an outcome impacting you was determined by an \nautomated system, including when the automated system is not the sole input determining the outcome. \nAutomated systems should provide explanations that are technically valid, meaningful and useful to you and to \nany operators or others who need to understand the system, and calibrated to the level of risk based on the \ncontext. Reporting that includes summary information about these automated systems in plain language and \nassessments of the clarity and quality of the notice and explanations should be made public whenever possible. \n6\n', 'You should be protected from abusive data practices via built-in \nprotections and you should have agency over how data about \nyou is used. You should be protected from violations of privacy through \ndesign choices that ensure such protections are included by default, including \nensuring that data collection conforms to reasonable expectations and that \nonly data strictly necessary for the specific context is collected. Designers, de\xad\nvelopers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and de\xad\nletion of your data in appropriate ways and to the greatest extent possible; \nwhere not possible, alternative privacy by design safeguards should be used. \nSystems should not employ user experience and design decisions that obfus\xad\ncate user choice or burden users with defaults that are privacy invasive. Con\xad\nsent should only be used to justify collection of data in cases where it can be \nappropriately and meaningfully given. Any consent requests should be brief, \nbe understandable in plain language, and give you agency over data collection \nand the specific context of use; current hard-to-understand no\xad\ntice-and-choice practices for broad uses of data should be changed. 
Enhanced \nprotections and restrictions for data and inferences related to sensitive do\xad\nmains, including health, work, education, criminal justice, and finance, and \nfor data pertaining to youth should put you first. In sensitive domains, your \ndata and related inferences should only be used for necessary functions, and \nyou should be protected by ethical review and use prohibitions. You and your \ncommunities should be free from unchecked surveillance; surveillance tech\xad\nnologies should be subject to heightened oversight that includes at least \npre-deployment assessment of their potential harms and scope limits to pro\xad\ntect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the \nuse of such surveillance technologies is likely to limit rights, opportunities, or \naccess. Whenever possible, you should have access to reporting that confirms \nyour data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or \naccess. \nDATA PRIVACY\n30\n']","User consent is important for protecting personal data because it ensures that data collection is justified only in cases where consent can be appropriately and meaningfully given. Consent requests should be brief, understandable in plain language, and provide individuals with agency over their data collection and its specific context of use. This approach helps to prevent abusive data practices and ensures that individuals have control over how their data is used.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 5, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 29, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What steps, like bias testing, ensure fair automated systems?","["" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. 
Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. 
The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n"", ' \nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an equitable way. \nAlgorithmic discrimination occurs when automated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, \nand related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, \nnational origin, disability, veteran status, genetic information, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n']","Many companies, non-profits, and federal government agencies are taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, which has led to products being changed or not launched to prevent harm. Federal government agencies are developing standards and guidance for the use of automated systems to help prevent bias. 
Non-profits and companies have developed best practices for audits and impact assessments to identify potential algorithmic discrimination and provide transparency in mitigating such biases.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does the NAACP impact civil rights in tech governance?,"['APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation \nStephanie Dinkins and the Future \nHistories Studio at Stony Brook \nUniversity \nTechNet \nThe Alliance for Media Arts and \nCulture, MIT Open Documentary \nLab and Co-Creation Studio, and \nImmerse \nThe International Brotherhood of \nTeamsters \nThe Leadership Conference on \nCivil and Human Rights \nThorn \nU.S. Chamber of Commerce’s \nTechnology Engagement Center \nUber Technologies \nUniversity of Pittsburgh \nUndergraduate Student \nCollaborative \nUpturn \nUS Technology Policy Committee \nof the Association of Computing \nMachinery \nVirginia Puccio \nVisar Berisha and Julie Liss \nXR Association \nXR Safety Initiative \n• As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions\nfor members of the public. The listening sessions together drew upwards of 300 participants. 
The Science and\nTechnology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61\n', 'APPENDIX\nSummaries of Additional Engagements: \n• OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of\nartificial intelligence and other data-driven technologies in their lives.\n• OSTP issued a Request For Information (RFI) on the use and governance of biometric technologies.113 The\npurpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or\nplanned use; the domains in which these technologies are being used; the entities making use of them; current\nprinciples, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their\nuse or regulation. The 130 responses to this RFI are available in full online114 and were submitted by the below\nlisted organizations and individuals:\nAccenture \nAccess Now \nACT | The App Association \nAHIP \nAIethicist.org \nAirlines for America \nAlliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium \nat Computing Research Association \nConnected Health Initiative \nConsumer Technology Association \nCourtney Radsch \nCoworker \nCyber Farm Labs \nData & Society Research Institute \nData for Black Lives \nData to Actionable Knowledge Lab \nat Harvard University \nDeloitte \nDev Technology Group \nDigital Therapeutics Alliance \nDigital Welfare State & Human \nRights Project and Center for \nHuman Rights and Global Justice at \nNew York University School of \nLaw, and Temple University \nInstitute for Law, Innovation & \nTechnology \nDignari \nDouglas Goddard \nEdgar Dworsky \nElectronic Frontier Foundation \nElectronic Privacy Information \nCenter, Center for Digital \nDemocracy, and Consumer \nFederation of America \nFaceTec \nFight for the Future \nGanesh Mani \nGeorgia Tech Research Institute \nGoogle \nHealth Information Technology \nResearch and Development \nInteragency Working Group \nHireVue \nHR Policy Association \nID.me \nIdentity and Data Sciences \nLaboratory at Science Applications \nInternational Corporation \nInformation Technology and \nInnovation Foundation \nInformation Technology Industry \nCouncil \nInnocence Project \nInstitute for Human-Centered \nArtificial Intelligence at Stanford \nUniversity \nIntegrated Justice Information \nSystems Institute \nInternational Association of Chiefs \nof Police \nInternational Biometrics + Identity \nAssociation \nInternational Business 
Machines \nCorporation \nInternational Committee of the Red \nCross \nInventionphysics \niProov \nJacob Boudreau \nJennifer K. Wagner, Dan Berger, \nMargaret Hu, and Sara Katsanis \nJonathan Barry-Blocker \nJoseph Turow \nJoy Buolamwini \nJoy Mack \nKaren Bureau \nLamont Gholston \nLawyers’ Committee for Civil \nRights Under Law \n60\n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 60, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 59, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does DARPA's XAI tackle opaque AI decision-making challenges?,"["" \n \nENDNOTES\n85. Mick Dumke and Frank Main. A look inside the watch list Chicago police fought to keep secret. The\nChicago Sun Times. May 18, 2017.\nhttps://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought\xad\nto-keep-secret\n86. Jay Stanley. Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case.\nACLU. Jun. 2, 2017.\nhttps://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking\xad\nhighlighted-idaho-aclu-case\n87. Illinois General Assembly. Biometric Information Privacy Act. Effective Oct. 3, 2008.\nhttps://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57\n88. Partnership on AI. ABOUT ML Reference Document. Accessed May 2, 2022.\nhttps://partnershiponai.org/paper/about-ml-reference-document/1/\n89. See, e.g., the model cards framework: Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker\nBarnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru.\nModel Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and\nTransparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 220–229. https://\ndl.acm.org/doi/10.1145/3287560.3287596\n90. Sarah Ammermann. Adverse Action Notice Requirements Under the ECOA and the FCRA. Consumer\nCompliance Outlook. Second Quarter 2013.\nhttps://consumercomplianceoutlook.org/2013/second-quarter/adverse-action-notice-requirements\xad\nunder-ecoa-fcra/\n91. Federal Trade Commission. Using Consumer Reports for Credit Decisions: What to Know About\nAdverse Action and Risk-Based Pricing Notices. Accessed May 2, 2022.\nhttps://www.ftc.gov/business-guidance/resources/using-consumer-reports-credit-decisions-what\xad\nknow-about-adverse-action-risk-based-pricing-notices#risk\n92. Consumer Financial Protection Bureau. CFPB Acts to Protect the Public from Black-Box Credit\nModels Using Complex Algorithms. 
May 26, 2022.\nhttps://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black\xad\nbox-credit-models-using-complex-algorithms/\n93. Anthony Zaller. California Passes Law Regulating Quotas In Warehouses – What Employers Need to\nKnow About AB 701. Zaller Law Group California Employment Law Report. Sept. 24, 2021.\nhttps://www.californiaemploymentlawreport.com/2021/09/california-passes-law-regulating-quotas\xad\nin-warehouses-what-employers-need-to-know-about-ab-701/\n94. National Institute of Standards and Technology. AI Fundamental Research – Explainability.\nAccessed Jun. 4, 2022.\nhttps://www.nist.gov/artificial-intelligence/ai-fundamental-research-explainability\n95. DARPA. Explainable Artificial Intelligence (XAI). Accessed July 20, 2022.\nhttps://www.darpa.mil/program/explainable-artificial-intelligence\n71\n""]",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 70, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What concerns did panelists raise about AI in policing and its impact on safety and democracy?,"["" \n \n \n \nAPPENDIX\nPanelists discussed the benefits of AI-enabled systems and their potential to build better and more \ninnovative infrastructure. They individually noted that while AI technologies may be new, the process of \ntechnological diffusion is not, and that it was critical to have thoughtful and responsible development and \nintegration of technology within communities. Some panelists suggested that the integration of technology \ncould benefit from examining how technological diffusion has worked in the realm of urban planning: \nlessons learned from successes and failures there include the importance of balancing ownership rights, use \nrights, and community health, safety and welfare, as well ensuring better representation of all voices, \nespecially those traditionally marginalized by technological advances. Some panelists also raised the issue of \npower structures – providing examples of how strong transparency requirements in smart city projects \nhelped to reshape power and give more voice to those lacking the financial or political power to effect change. \nIn discussion of technical and governance interventions that that are needed to protect against the harms \nof these technologies, various panelists emphasized the need for transparency, data collection, and \nflexible and reactive policy development, analogous to how software is continuously updated and deployed. \nSome panelists pointed out that companies need clear guidelines to have a consistent environment for \ninnovation, with principles and guardrails being the key to fostering responsible innovation. \nPanel 2: The Criminal Justice System. This event explored current and emergent uses of technology in \nthe criminal justice system and considered how they advance or undermine public safety, justice, and \ndemocratic values. 
\nWelcome: \n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nBen Winters, Counsel, Electronic Privacy Information Center\nModerator: Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity \nPanelists: \n•\nSean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab\n•\nKristian Lum, Researcher\n•\nJumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers\n•\nStanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University\nCollege of Medicine\n•\nMyaisha Hayes, Campaign Strategies Director, MediaJustice\nPanelists discussed uses of technology within the criminal justice system, including the use of predictive \npolicing, pretrial risk assessments, automated license plate readers, and prison communication tools. The \ndiscussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; \nsuch strategies might include data-driven approaches, but the focus on safety should be primary, and \ntechnology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, various panelists emphasized that transparency is important but is not enough to achieve \naccountability. Some panelists discussed their individual views on additional system needs for validity, and \nagreed upon the importance of advisory boards and compensated community input early in the design process \n(before the technology is built and instituted). Various panelists also emphasized the importance of regulation \nthat includes limits to the type and cost of such technologies. \n56\n"", "" \n \n \n \n \nAPPENDIX\nPanel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in \nthe design of technology that can help support a democratic vision for AI. It included discussion of the \ntechnical aspects \nof \ndesigning \nnon-discriminatory \ntechnology, \nexplainable \nAI, \nhuman-computer \ninteraction with an emphasis on community participation, and privacy-aware design. \nWelcome:\n•\nSorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nJ. Bob Alotta, Vice President for Global Programs, Mozilla Foundation\n•\nNavrina Singh, Board Member, Mozilla Foundation\nModerator: Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S \nFederal Trade Commission. 
\nPanelists: \n•\nLiz O’Sullivan, CEO, Parity AI\n•\nTimnit Gebru, Independent Scholar\n•\nJennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City\n•\nPamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director,\nSocio-technical Interaction Research (STIR) Lab\n•\nSeny Kamara, Associate Professor of Computer Science, Brown University\nEach panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for \nbiased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and \nunderstanding of the algorithmic systems. The interventions and key needs various panelists put forward as \nnecessary to the future design of critical AI systems included ongoing transparency, value sensitive and \nparticipatory design, explanations designed for relevant stakeholders, and public consultation. \nVarious \npanelists emphasized the importance of placing trust in people, not technologies, and in engaging with \nimpacted communities to understand the potential harms of technologies and build protection by design into \nfuture systems. \nPanel 5: Social Welfare and Development. This event explored current and emerging uses of technology to \nimplement or improve social welfare systems, social development programs, and other systems that can impact \nlife chances. \nWelcome:\n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nAnne-Marie Slaughter, CEO, New America\nModerator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance \nModernization, Office of the Secretary, Department of Labor \nPanelists:\n•\nBlake Hall, CEO and Founder, ID.Me\n•\nKarrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign\n•\nChristiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's\nCenter for Human Rights and Global Justice\n58\n""]","Panelists raised concerns about the validity of AI systems used in policing, noting that adverse or irrelevant data can lead to a replication of unjust outcomes. They highlighted issues such as confirmation bias and the tendency to defer to potentially inaccurate automated systems. The impact of these systems on individuals and communities is seen as potentially severe, with concerns that they lack individualization, undermine the belief in people's ability to change for the better, and can lead to job loss and custody issues. Additionally, surveillance technologies can create chilling effects in communities and send negative signals about how community members are viewed. 
Panelists emphasized that while transparency is important, it is not sufficient for achieving accountability.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 55, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 57, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role does the OSTP play in the AI Bill of Rights regarding public input and civil liberties?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nAbout this Document \nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was \npublished by the White House Office of Science and Technology Policy in October 2022. This framework was \nreleased one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered \nworld.” Its release follows a year of public engagement to inform this initiative. The framework is available \nonline at: https://www.whitehouse.gov/ostp/ai-bill-of-rights \nAbout the Office of Science and Technology Policy \nThe Office of Science and Technology Policy (OSTP) was established by the National Science and Technology \nPolicy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office \nof the President with advice on the scientific, engineering, and technological aspects of the economy, national \nsecurity, health, foreign relations, the environment, and the technological recovery and use of resources, among \nother topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of \nManagement and Budget (OMB) with an annual review and analysis of Federal research and development in \nbudgets, and serves as a source of scientific and technological analysis and judgment for the President with \nrespect to major policies, plans, and programs of the Federal Government. \nLegal Disclaimer \nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper \npublished by the White House Office of Science and Technology Policy. It is intended to support the \ndevelopment of policies and practices that protect civil rights and promote democratic values in the building, \ndeployment, and governance of automated systems. \nThe Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It \ndoes not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. 
It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles \nin whole or in part may not be appropriate given the intended use of automated systems to achieve government \nagency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of \nautomated systems in certain settings such as AI systems used as part of school building security or automated \nhealth diagnostic systems. \nThe Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of \nequities, for example, between the protection of sensitive law enforcement information and the principle of \nnotice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and \nother law enforcement equities. Even in contexts where these principles may not apply in whole or in part, \nfederal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as \nexisting policies and safeguards that govern automated systems, including, for example, Executive Order 13960, \nPromoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). \nThis white paper recognizes that national security (which includes certain law enforcement and \nhomeland security activities) and defense activities are of increased sensitivity and interest to our nation’s \nadversaries and are often subject to special requirements, such as those governing classified information and \nother protected data. Such activities require alternative, compatible safeguards through existing policies that \ngovern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and \nResponsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and \nFramework. The implementation of these policies to national security and defense activities can be informed by \nthe Blueprint for an AI Bill of Rights where feasible. \nThe Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or \n', ' \n \n \nABOUT THIS FRAMEWORK\xad\xad\xad\xad\xad\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the \ndesign, use, and deployment of automated systems to protect the rights of the American public in the age of \nartificial intel-ligence. Developed through extensive consultation with the American public, these principles are \na blueprint for building and deploying automated systems that are aligned with democratic values and protect \ncivil rights, civil liberties, and privacy. 
The Blueprint for an AI Bill of Rights includes this Foreword, the five \nprinciples, notes on Applying the The Blueprint for an AI Bill of Rights, and a Technical Companion that gives \nconcrete steps that can be taken by many kinds of organizations—from governments at all levels to companies of \nall sizes—to uphold these values. Experts from across the private sector, governments, and international \nconsortia have published principles and frameworks to guide the responsible use of automated systems; this \nframework provides a national values statement and toolkit that is sector-agnostic to inform building these \nprotections into policy, practice, or the technological design process. Where existing law or policy—such as \nsector-specific privacy laws and oversight requirements—do not already provide guidance, the Blueprint for an \nAI Bill of Rights should be used to inform policy decisions.\nLISTENING TO THE AMERICAN PUBLIC\nThe White House Office of Science and Technology Policy has led a year-long process to seek and distill input \nfrom people across the country—from impacted communities and industry stakeholders to technology develop-\ners and other experts across fields and sectors, as well as policymakers throughout the Federal government—on \nthe issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listen-\ning sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized \nemail address, people throughout the United States, public servants across Federal agencies, and members of the \ninternational community spoke up about both the promises and potential harms of these technologies, and \nplayed a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these \ndiscussions include that AI has transformative potential to improve Americans’ lives, and that preventing the \nharms of these technologies is both necessary and achievable. The Appendix includes a full list of public engage-\nments. \n4\n']","The Office of Science and Technology Policy (OSTP) plays a crucial role in the AI Bill of Rights by leading a year-long process to seek and distill input from various stakeholders, including impacted communities, industry stakeholders, technology developers, and policymakers. 
This engagement informs the development of policies and practices that protect civil rights and promote democratic values in the governance of automated systems.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 1, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 3, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do Model Cards enhance AI transparency and accountability amid privacy issues?,"["" \n \nENDNOTES\n85. Mick Dumke and Frank Main. A look inside the watch list Chicago police fought to keep secret. The\nChicago Sun Times. May 18, 2017.\nhttps://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought\xad\nto-keep-secret\n86. Jay Stanley. Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case.\nACLU. Jun. 2, 2017.\nhttps://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking\xad\nhighlighted-idaho-aclu-case\n87. Illinois General Assembly. Biometric Information Privacy Act. Effective Oct. 3, 2008.\nhttps://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57\n88. Partnership on AI. ABOUT ML Reference Document. Accessed May 2, 2022.\nhttps://partnershiponai.org/paper/about-ml-reference-document/1/\n89. See, e.g., the model cards framework: Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker\nBarnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru.\nModel Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and\nTransparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 220–229. https://\ndl.acm.org/doi/10.1145/3287560.3287596\n90. Sarah Ammermann. Adverse Action Notice Requirements Under the ECOA and the FCRA. Consumer\nCompliance Outlook. Second Quarter 2013.\nhttps://consumercomplianceoutlook.org/2013/second-quarter/adverse-action-notice-requirements\xad\nunder-ecoa-fcra/\n91. Federal Trade Commission. Using Consumer Reports for Credit Decisions: What to Know About\nAdverse Action and Risk-Based Pricing Notices. Accessed May 2, 2022.\nhttps://www.ftc.gov/business-guidance/resources/using-consumer-reports-credit-decisions-what\xad\nknow-about-adverse-action-risk-based-pricing-notices#risk\n92. Consumer Financial Protection Bureau. CFPB Acts to Protect the Public from Black-Box Credit\nModels Using Complex Algorithms. May 26, 2022.\nhttps://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black\xad\nbox-credit-models-using-complex-algorithms/\n93. Anthony Zaller. 
California Passes Law Regulating Quotas In Warehouses – What Employers Need to\nKnow About AB 701. Zaller Law Group California Employment Law Report. Sept. 24, 2021.\nhttps://www.californiaemploymentlawreport.com/2021/09/california-passes-law-regulating-quotas\xad\nin-warehouses-what-employers-need-to-know-about-ab-701/\n94. National Institute of Standards and Technology. AI Fundamental Research – Explainability.\nAccessed Jun. 4, 2022.\nhttps://www.nist.gov/artificial-intelligence/ai-fundamental-research-explainability\n95. DARPA. Explainable Artificial Intelligence (XAI). Accessed July 20, 2022.\nhttps://www.darpa.mil/program/explainable-artificial-intelligence\n71\n"", "" \n65. See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media Profiles in Data Lead: Info\nAppears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020. https://\nwww.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles\xad\nin-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a Single Server. WIRED,\nNov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/\n66. Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash. New York Times.\nSept. 24, 2019.\nhttps://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html\n67. Jo Constantz. ‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance Technology to Bust\nUnions. Newsweek. Dec. 13, 2021.\nhttps://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust\xad\nunions-1658603\n68. See, e.g., enforcement actions by the FTC against the photo storage app Everalbaum\n(https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and\nagainst Weight Watchers and their subsidiary Kurbo\n(https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)\n69. See, e.g., HIPAA, Pub. L 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109\n(1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g), Children's Online\nPrivacy Protection Act of 1998, 15 U.S.C. 6501–6505, and Confidential Information Protection and\nStatistical Efficiency Act (CIPSEA) (116 Stat. 2899)\n70. Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True. ProPublica. Nov.\n21, 2018.\nhttps://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true\n71. Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012.\nhttps://www.nytimes.com/2012/02/19/magazine/shopping-habits.html\n72. Jack Gillum and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology\nSchools are Using to Monitor Students. ProPublica. Jun. 25, 2019.\nhttps://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology\xad\nschools-are-using-to-monitor-students/\n73. Drew Harwell. Cheating-detection companies made millions during the pandemic. Now students are\nfighting back. Washington Post. Nov. 12, 2020.\nhttps://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/\n74. See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage. Government\nTechnology. May 24, 2022.\nhttps://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage;\nLydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. 
Ableism And Disability\nDiscrimination In New Surveillance Technologies: How new surveillance technologies in education,\npolicing, health care, and the workplace disproportionately harm disabled people. Center for Democracy\nand Technology Report. May 24, 2022.\nhttps://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how\xad\nnew-surveillance-technologies-in-education-policing-health-care-and-the-workplace\xad\ndisproportionately-harm-disabled-people/\n69\n""]",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 70, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 68, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What does the AI Bill of Rights suggest for protecting civil rights in tech?,"[' \nSECTION TITLE\xad\nFOREWORD\nAmong the great challenges posed to democracy today is the use of technology, data, and automated systems in \nways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and \nprevent our access to critical resources or services. These problems are well documented. In America and around \nthe world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used \nin hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s \nopportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or \nconsent. \nThese outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraor-\ndinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm \npaths, to algorithms that can identify diseases in patients. These tools now drive important decisions across \nsectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, \nthese tools hold the potential to redefine every part of our society and make life better for everyone. \nThis important progress must not come at the price of civil rights or democratic values, foundational American \nprinciples that President Biden has affirmed as a cornerstone of his Administration. 
On his first day in office, the \nPresident ordered the full Federal government to work to root out inequity, embed fairness in decision-\nmaking processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The \nPresident has spoken forcefully about the urgent challenges posed to democracy today and has regularly called \non people of conscience to act to preserve civil rights—including the right to privacy, which he has called “the \nbasis for so many more rights that we have come to take for granted that are ingrained in the fabric of this \ncountry.”2\nTo advance President Biden’s vision, the White House Office of Science and Technology Policy has identified \nfive principles that should guide the design, use, and deployment of automated systems to protect the American \npublic in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that \nprotects all people from these threats—and uses technologies in ways that reinforce our highest values. \nResponding to the experiences of the American public, and informed by insights from researchers, \ntechnologists, advocates, journalists, and policymakers, this framework is accompanied by a technical \ncompanion—a handbook for anyone seeking to incorporate these protections into policy and practice, including \ndetailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3\n', ' \n \n \n \n \n \n \n \n \n \nBLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022 \n']","The AI Bill of Rights suggests guiding the design, use, and deployment of automated systems to protect the American public, ensuring that these technologies reinforce civil rights and democratic values. 
It emphasizes the need to root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 2, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 0, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What steps are taken to ensure fair use of automated systems?,"["" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. 
The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n"", ' \xad\xad\xad\xad\xad\xad\xad\nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable \nway. \nAlgorithmic \ndiscrimination \noccurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex \n(including \npregnancy, \nchildbirth, \nand \nrelated \nmedical \nconditions, \ngender \nidentity, \nintersex \nstatus, \nand \nsexual \norientation), religion, age, national origin, disability, veteran status, \ngenetic infor-mation, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. 
This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n']","Many companies, non-profits, and federal government agencies are taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, which has led to changes or prevented harmful product launches. Federal agencies are developing standards and guidance for the use of automated systems to help prevent bias. Non-profits and companies have developed best practices for audits and impact assessments to identify potential algorithmic discrimination and provide transparency in mitigating such biases.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the AI ethics for intel and their alignment with NIST standards?,"[' \nENDNOTES\n12. Expectations about reporting are intended for the entity developing or using the automated system. The\nresulting reports can be provided to the public, regulators, auditors, industry standards groups, or others\nengaged in independent review, and should be made public as much as possible consistent with law,\nregulation, and policy, and noting that intellectual property or law enforcement considerations may prevent\npublic release. These reporting expectations are important for transparency, so the American people can\nhave confidence that their rights, opportunities, and access as well as their expectations around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15. See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional Engineers and NHTSA. Public\nAdministration Review. Vol. 39, No. 4. Jul.-Aug., 1979. https://www.jstor.org/stable/976213?seq=1\n16. 
The US Department of Transportation has publicly described the health and other benefits of these\n“traffic calming” measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow Vehicle\nSpeeds. Accessed Apr. 17, 2022. https://www.transportation.gov/mission/health/Traffic-Calming-to-Slow\xad\nVehicle-Speeds\n17. Karen Hao. Worried about your firm’s AI ethics? These startups are here to help.\nA growing ecosystem of “responsible AI” ventures promise to help organizations monitor and fix their AI\nmodels. MIT Technology Review. Jan 15., 2021.\nhttps://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top Progressive\nCompanies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021. https://\nwww.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for\xad\nin-2021/ https://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top\nProgressive Companies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021.\n18. Office of Management and Budget. Study to Identify Methods to Assess Equity: Report to the President.\nAug. 2021. https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985\xad\nImplementation_508-Compliant-Secure-v1.1.pdf\n19. National Institute of Standards and Technology. AI Risk Management Framework. Accessed May 23,\n2022. https://www.nist.gov/itl/ai-risk-management-framework\n20. U.S. Department of Energy. U.S. Department of Energy Establishes Artificial Intelligence Advancement\nCouncil. U.S. Department of Energy Artificial Intelligence and Technology Office. April 18, 2022. https://\nwww.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council\n21. Department of Defense. U.S Department of Defense Responsible Artificial Intelligence Strategy and\nImplementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/\nDepartment-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation\xad\nPathway.PDF\n22. Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence\nCommunity. https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for\xad\nthe-intelligence-community\n64\n', ' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. 
\nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 63, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What's the role of incident response plans in assessing GAI performance and AI Actor communication during incidents?,"[' \n45 \nMG-4.1-007 \nVerify that AI Actors responsible for monitoring reported issues can effectively \nevaluate GAI system performance including the application of content \nprovenance data tracking techniques, and promptly escalate issues for response. \nHuman-AI Configuration; \nInformation Integrity \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, Domain Experts, End-Users, Human Factors, Operation and \nMonitoring \n \nMANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular \nengagement with interested parties, including relevant AI Actors. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.2-001 Conduct regular monitoring of GAI systems and publish reports detailing the \nperformance, feedback received, and improvements made. 
\nHarmful Bias and Homogenization \nMG-4.2-002 \nPractice and follow incident response plans for addressing the generation of \ninappropriate or harmful content and adapt processes based on findings to \nprevent future occurrences. Conduct post-mortem analyses of incidents with \nrelevant AI Actors, to understand the root causes and implement preventive \nmeasures. \nHuman-AI Configuration; \nDangerous, Violent, or Hateful \nContent \nMG-4.2-003 Use visualizations or other methods to represent GAI model behavior to ease \nnon-technical stakeholders understanding of GAI system functionality. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Affected Individuals and Communities, End-Users, Operation and \nMonitoring, TEVV \n \nMANAGE 4.3: Incidents and errors are communicated to relevant AI Actors, including affected communities. Processes for tracking, \nresponding to, and recovering from incidents and errors are followed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.3-001 \nConduct after-action assessments for GAI system incidents to verify incident \nresponse and recovery processes are followed and effective, including to follow \nprocedures for communicating incidents to relevant AI Actors and where \napplicable, relevant legal and regulatory bodies. \nInformation Security \nMG-4.3-002 Establish and maintain policies and procedures to record and track GAI system \nreported errors, near-misses, and negative impacts. \nConfabulation; Information \nIntegrity \n', ' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. 
\nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration \n']","Incident response plans play a crucial role in assessing GAI performance by providing structured procedures for addressing the generation of inappropriate or harmful content. They ensure that incidents are communicated to relevant AI Actors, including affected communities, and that processes for tracking, responding to, and recovering from incidents are followed and documented. This structured approach helps in understanding the root causes of incidents and implementing preventive measures, thereby enhancing overall AI Actor communication during such events.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 48, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How do GAI incident docs help AI Actors assess and manage system performance?,"[' \n53 \nDocumenting, reporting, and sharing information about GAI incidents can help mitigate and prevent \nharmful outcomes by assisting relevant AI Actors in tracing impacts to their source. Greater awareness \nand standardization of GAI incident reporting could promote this transparency and improve GAI risk \nmanagement across the AI ecosystem. \nDocumentation and Involvement of AI Actors \nAI Actors should be aware of their roles in reporting AI incidents. To better understand previous incidents \nand implement measures to prevent similar ones in the future, organizations could consider developing \nguidelines for publicly available incident reporting which include information about AI actor \nresponsibilities. These guidelines would help AI system operators identify GAI incidents across the AI \nlifecycle and with AI Actors regardless of role. 
Documentation and review of third-party inputs and \nplugins for GAI systems is especially important for AI Actors in the context of incident disclosure; LLM \ninputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents. \n \n', ' \n45 \nMG-4.1-007 \nVerify that AI Actors responsible for monitoring reported issues can effectively \nevaluate GAI system performance including the application of content \nprovenance data tracking techniques, and promptly escalate issues for response. \nHuman-AI Configuration; \nInformation Integrity \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, Domain Experts, End-Users, Human Factors, Operation and \nMonitoring \n \nMANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular \nengagement with interested parties, including relevant AI Actors. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.2-001 Conduct regular monitoring of GAI systems and publish reports detailing the \nperformance, feedback received, and improvements made. \nHarmful Bias and Homogenization \nMG-4.2-002 \nPractice and follow incident response plans for addressing the generation of \ninappropriate or harmful content and adapt processes based on findings to \nprevent future occurrences. Conduct post-mortem analyses of incidents with \nrelevant AI Actors, to understand the root causes and implement preventive \nmeasures. \nHuman-AI Configuration; \nDangerous, Violent, or Hateful \nContent \nMG-4.2-003 Use visualizations or other methods to represent GAI model behavior to ease \nnon-technical stakeholders understanding of GAI system functionality. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Affected Individuals and Communities, End-Users, Operation and \nMonitoring, TEVV \n \nMANAGE 4.3: Incidents and errors are communicated to relevant AI Actors, including affected communities. Processes for tracking, \nresponding to, and recovering from incidents and errors are followed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.3-001 \nConduct after-action assessments for GAI system incidents to verify incident \nresponse and recovery processes are followed and effective, including to follow \nprocedures for communicating incidents to relevant AI Actors and where \napplicable, relevant legal and regulatory bodies. \nInformation Security \nMG-4.3-002 Establish and maintain policies and procedures to record and track GAI system \nreported errors, near-misses, and negative impacts. \nConfabulation; Information \nIntegrity \n']","GAI incident documentation helps AI Actors assess and manage system performance by facilitating smoother sharing of information regarding incidents, which includes logging, recording, and analyzing GAI incidents. This documentation allows AI Actors to trace impacts to their source, understand previous incidents, and implement measures to prevent similar occurrences in the future. 
Additionally, regular information sharing and maintaining change management records empower AI Actors in responding to and managing AI incidents effectively.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 56, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 48, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What principles did the White House OSTP set for civil rights in automated systems, and how was public input involved?","[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nAbout this Document \nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was \npublished by the White House Office of Science and Technology Policy in October 2022. This framework was \nreleased one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered \nworld.” Its release follows a year of public engagement to inform this initiative. The framework is available \nonline at: https://www.whitehouse.gov/ostp/ai-bill-of-rights \nAbout the Office of Science and Technology Policy \nThe Office of Science and Technology Policy (OSTP) was established by the National Science and Technology \nPolicy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office \nof the President with advice on the scientific, engineering, and technological aspects of the economy, national \nsecurity, health, foreign relations, the environment, and the technological recovery and use of resources, among \nother topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of \nManagement and Budget (OMB) with an annual review and analysis of Federal research and development in \nbudgets, and serves as a source of scientific and technological analysis and judgment for the President with \nrespect to major policies, plans, and programs of the Federal Government. \nLegal Disclaimer \nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper \npublished by the White House Office of Science and Technology Policy. It is intended to support the \ndevelopment of policies and practices that protect civil rights and promote democratic values in the building, \ndeployment, and governance of automated systems. \nThe Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. 
It \ndoes not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles \nin whole or in part may not be appropriate given the intended use of automated systems to achieve government \nagency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of \nautomated systems in certain settings such as AI systems used as part of school building security or automated \nhealth diagnostic systems. \nThe Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of \nequities, for example, between the protection of sensitive law enforcement information and the principle of \nnotice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and \nother law enforcement equities. Even in contexts where these principles may not apply in whole or in part, \nfederal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as \nexisting policies and safeguards that govern automated systems, including, for example, Executive Order 13960, \nPromoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). \nThis white paper recognizes that national security (which includes certain law enforcement and \nhomeland security activities) and defense activities are of increased sensitivity and interest to our nation’s \nadversaries and are often subject to special requirements, such as those governing classified information and \nother protected data. Such activities require alternative, compatible safeguards through existing policies that \ngovern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and \nResponsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and \nFramework. The implementation of these policies to national security and defense activities can be informed by \nthe Blueprint for an AI Bill of Rights where feasible. \nThe Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or \n', ' \n \n \nABOUT THIS FRAMEWORK\xad\xad\xad\xad\xad\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the \ndesign, use, and deployment of automated systems to protect the rights of the American public in the age of \nartificial intel-ligence. 
Developed through extensive consultation with the American public, these principles are \na blueprint for building and deploying automated systems that are aligned with democratic values and protect \ncivil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights includes this Foreword, the five \nprinciples, notes on Applying the The Blueprint for an AI Bill of Rights, and a Technical Companion that gives \nconcrete steps that can be taken by many kinds of organizations—from governments at all levels to companies of \nall sizes—to uphold these values. Experts from across the private sector, governments, and international \nconsortia have published principles and frameworks to guide the responsible use of automated systems; this \nframework provides a national values statement and toolkit that is sector-agnostic to inform building these \nprotections into policy, practice, or the technological design process. Where existing law or policy—such as \nsector-specific privacy laws and oversight requirements—do not already provide guidance, the Blueprint for an \nAI Bill of Rights should be used to inform policy decisions.\nLISTENING TO THE AMERICAN PUBLIC\nThe White House Office of Science and Technology Policy has led a year-long process to seek and distill input \nfrom people across the country—from impacted communities and industry stakeholders to technology develop-\ners and other experts across fields and sectors, as well as policymakers throughout the Federal government—on \nthe issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listen-\ning sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized \nemail address, people throughout the United States, public servants across Federal agencies, and members of the \ninternational community spoke up about both the promises and potential harms of these technologies, and \nplayed a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these \ndiscussions include that AI has transformative potential to improve Americans’ lives, and that preventing the \nharms of these technologies is both necessary and achievable. The Appendix includes a full list of public engage-\nments. \n4\n']","The Blueprint for an AI Bill of Rights includes five principles and associated practices to guide the design, use, and deployment of automated systems to protect the rights of the American public. It was developed through extensive consultation with the American public, which involved a year-long process of seeking and distilling input from impacted communities, industry stakeholders, technology developers, and policymakers. 
This public engagement included panel discussions, public listening sessions, and a formal request for information, allowing various voices to shape the principles aimed at preventing algorithmic and data-driven harms.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 1, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 3, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do training and feedback improve understanding of digital content transparency in GAI systems?,"[' \n25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. \nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. 
\nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","Training and feedback improve understanding of digital content transparency in GAI systems by providing input for training materials about the capabilities and limitations of GAI systems related to digital content transparency. 
This includes actively seeking feedback on generated content quality and potential biases, as well as assessing the general awareness among end users and impacted communities about the availability of feedback channels.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 28, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What leads to model collapse in AI, especially with synthetic data and biases?","[' \n9 \nand reduced content diversity). Overly homogenized outputs can themselves be incorrect, or they may \nlead to unreliable decision-making or amplify harmful biases. These phenomena can flow from \nfoundation models to downstream models and systems, with the foundation models acting as \n“bottlenecks,” or single points of failure. \nOverly homogenized content can contribute to “model collapse.” Model collapse can occur when model \ntraining over-relies on synthetic data, resulting in data points disappearing from the distribution of the \nnew model’s outputs. In addition to threatening the robustness of the model overall, model collapse \ncould lead to homogenized outputs, including by amplifying any homogenization from the model used to \ngenerate the synthetic training data. \nTrustworthy AI Characteristics: Fair with Harmful Bias Managed, Valid and Reliable \n2.7. Human-AI Configuration \nGAI system use can involve varying risks of misconfigurations and poor interactions between a system \nand a human who is interacting with it. Humans bring their unique perspectives, experiences, or domain-\nspecific expertise to interactions with AI systems but may not have detailed knowledge of AI systems and \nhow they work. As a result, human experts may be unnecessarily “averse” to GAI systems, and thus \ndeprive themselves or others of GAI’s beneficial uses. \nConversely, due to the complexity and increasing reliability of GAI technology, over time, humans may \nover-rely on GAI systems or may unjustifiably perceive GAI content to be of higher quality than that \nproduced by other sources. This phenomenon is an example of automation bias, or excessive deference \nto automated systems. Automation bias can exacerbate other risks of GAI, such as risks of confabulation \nor risks of bias or homogenization. \nThere may also be concerns about emotional entanglement between humans and GAI systems, which \ncould lead to negative psychological impacts. 
\nTrustworthy AI Characteristics: Accountable and Transparent, Explainable and Interpretable, Fair with \nHarmful Bias Managed, Privacy Enhanced, Safe, Valid and Reliable \n2.8. Information Integrity \nInformation integrity describes the “spectrum of information and associated patterns of its creation, \nexchange, and consumption in society.” High-integrity information can be trusted; “distinguishes fact \nfrom fiction, opinion, and inference; acknowledges uncertainties; and is transparent about its level of \nvetting. This information can be linked to the original source(s) with appropriate evidence. High-integrity \ninformation is also accurate and reliable, can be verified and authenticated, has a clear chain of custody, \nand creates reasonable expectations about when its validity may expire.”11 \n \n \n11 This definition of information integrity is derived from the 2022 White House Roadmap for Researchers on \nPriorities Related to Information Integrity Research and Development. \n', ' \n8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification). \nMethods for creating smaller versions of trained models, such as model distillation or compression, \ncould reduce environmental impacts at inference time, but training and tuning such models may still \ncontribute to their environmental impacts. Currently there is no agreed upon method to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \nBias exists in many forms and can become ingrained in automated systems. AI systems, including GAI \nsystems, can increase the speed and scale at which harmful biases manifest and are acted upon, \npotentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and \nsociety. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current \ntext-to-image models underrepresent women and/or racial minorities, and people with disabilities. \nImage generator models have also produced biased or stereotyped output for various demographic \ngroups and have difficulty producing non-stereotyped content even when the prompt specifically \nrequests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem from their training data, can also cause representational harms or perpetuate or exacerbate \nbias based on race, gender, disability, or other protected classes. 
\nHarmful bias in GAI systems can also lead to harms via disparities between how a model performs for \ndifferent subgroups or languages (e.g., an LLM may perform less well for non-English languages or \ncertain dialects). Such disparities can contribute to discriminatory decision-making or amplification of \nexisting societal biases. In addition, GAI systems may be inappropriately trusted to perform similarly \nacross all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages. \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles \n']","Model collapse in AI can occur when model training over-relies on synthetic data, resulting in data points disappearing from the distribution of the new model's outputs. This phenomenon threatens the robustness of the model overall and can lead to homogenized outputs, amplifying any homogenization from the model used to generate the synthetic training data.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 12, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 11, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are Idaho's rules on pretrial risk assessment transparency and their alignment with federal ethical AI standards?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\nSome U.S government agencies have developed specific frameworks for ethical use of AI \nsystems. 
The Department of Energy (DOE) has activated the AI Advancement Council that oversees coordina-\ntion and advises on implementation of the DOE AI Strategy and addresses issues and/or escalations on the \nethical use and development of AI systems.20 The Department of Defense has adopted Artificial Intelligence \nEthical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its national \nsecurity and defense activities.21 Similarly, the U.S. Intelligence Community (IC) has developed the Principles \nof Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to \ndevelop and use AI in furtherance of the IC\'s mission, as well as an AI Ethics Framework to help implement \nthese principles.22\nThe National Science Foundation (NSF) funds extensive research to help foster the \ndevelopment of automated systems that adhere to and advance their safety, security and \neffectiveness. Multiple NSF programs support research that directly addresses many of these principles: \nthe National AI Research Institutes23 support research on all aspects of safe, trustworthy, fair, and explainable \nAI algorithms and systems; the Cyber Physical Systems24 program supports research on developing safe \nautonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace25 \nprogram supports research on cybersecurity and privacy enhancing technologies in automated systems; the \nFormal Methods in the Field26 program supports research on rigorous formal verification and analysis of \nautomated systems and machine learning, and the Designing Accountable Software Systems27 program supports \nresearch on rigorous and reproducible methodologies for developing software systems with legal and regulatory \ncompliance in mind. \nSome state legislatures have placed strong transparency and validity requirements on \nthe use of pretrial risk assessments. The use of algorithmic pretrial risk assessments has been a \ncause of concern for civil rights groups.28 Idaho Code Section 19-1910, enacted in 2019,29 requires that any \npretrial risk assessment, before use in the state, first be ""shown to be free of bias against any class of \nindividuals protected from discrimination by state or federal law"", that any locality using a pretrial risk \nassessment must first formally validate the claim of its being free of bias, that ""all documents, records, and \ninformation used to build or validate the risk assessment shall be open to public inspection,"" and that assertions \nof trade secrets cannot be used ""to quash discovery in a criminal matter by a party to a criminal case."" \n22\n']","Idaho's rules on pretrial risk assessment transparency require that any pretrial risk assessment be shown to be free of bias against any class of individuals protected from discrimination by state or federal law. Additionally, any locality using a pretrial risk assessment must formally validate the claim of it being free of bias, and all documents, records, and information used to build or validate the risk assessment must be open to public inspection. 
However, the context does not provide specific information on how these rules align with federal ethical AI standards.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 21, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What strategies help engage AI Actors to assess GAI impacts while maintaining AI content integrity?,"[' \n28 \nMAP 5.2: Practices and personnel for supporting regular engagement with relevant AI Actors and integrating feedback about \npositive, negative, and unanticipated impacts are in place and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.2-001 \nDetermine context-based measures to identify if new impacts are present due to \nthe GAI system, including regular engagements with downstream AI Actors to \nidentify and quantify new contexts of unanticipated impacts of GAI systems. \nHuman-AI Configuration; Value \nChain and Component Integration \nMP-5.2-002 \nPlan regular engagements with AI Actors responsible for inputs to GAI systems, \nincluding third-party data and algorithms, to review and evaluate unanticipated \nimpacts. \nHuman-AI Configuration; Value \nChain and Component Integration \nAI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-\nUsers, Human Factors, Operation and Monitoring \n \nMEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for \nimplementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be \nmeasured are properly documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate tools designed to analyze content provenance and detect data \nanomalies, verify the authenticity of digital signatures, and identify patterns \nassociated with misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate evaluation metrics by demographic factors to identify any \ndiscrepancies in how content provenance mechanisms work across diverse \npopulations. \nInformation Integrity; Harmful \nBias and Homogenization \nMS-1.1-004 Develop a suite of metrics to evaluate structured public feedback exercises \ninformed by representative AI Actors. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.1-005 \nEvaluate novel methods and technologies for the measurement of GAI-related \nrisks including in content provenance, offensive cyber, and CBRN, while \nmaintaining the models’ ability to produce valid, reliable, and factually accurate \noutputs. \nInformation Integrity; CBRN \nInformation or Capabilities; \nObscene, Degrading, and/or \nAbusive Content \n', ' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. 
\nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. 
\nHuman-AI Configuration \n']","Strategies to engage AI Actors to assess GAI impacts while maintaining AI content integrity include determining context-based measures to identify new impacts, planning regular engagements with AI Actors responsible for inputs to GAI systems, employing methods to trace the origin and modifications of digital content, integrating tools to analyze content provenance, and using structured feedback mechanisms to capture user input about AI-generated content.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 31, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What strategies are best for managing GAI systems and their lifecycle risks?,"[' \n16 \nGOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and \norganizational roles and responsibilities are clearly defined, including determining the frequency of periodic review. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.5-001 Define organizational responsibilities for periodic review of content provenance \nand incident monitoring for GAI systems. \nInformation Integrity \nGV-1.5-002 \nEstablish organizational policies and procedures for after action reviews of GAI \nsystem incident response and incident disclosures, to identify gaps; Update \nincident response and incident disclosure processes as required. \nHuman-AI Configuration; \nInformation Security \nGV-1.5-003 \nMaintain a document retention policy to keep history for test, evaluation, \nvalidation, and verification (TEVV), and digital content transparency methods for \nGAI. \nInformation Integrity; Intellectual \nProperty \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring \n \nGOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory \nand adjust AI system inventory requirements to account for GAI risks. \nInformation Security \nGV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems \nembedded into application software. 
\nValue Chain and Component \nIntegration \nGV-1.6-003 \nIn addition to general model, governance, and risk information, consider the \nfollowing items in GAI system inventory entries: Data provenance information \n(e.g., source, signatures, versioning, watermarks); Known issues reported from \ninternal bug tracking or external information sharing resources (e.g., AI incident \ndatabase, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles \nand responsibilities; Special rights and considerations for intellectual property, \nlicensed works, or personal, privileged, proprietary or sensitive data; Underlying \nfoundation models, versions of underlying models, and access modes. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity; Intellectual Property; \nValue Chain and Component \nIntegration \nAI Actor Tasks: Governance and Oversight \n \n', ' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. \nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. 
\nInformation Security \n']",The context does not provide specific strategies for managing GAI systems and their lifecycle risks.,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 19, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What confabulation might mislead users about CBRN info or capabilities?,"[' \n4 \n1. CBRN Information or Capabilities: Eased access to or synthesis of materially nefarious \ninformation or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) \nweapons or other dangerous materials or agents. \n2. Confabulation: The production of confidently stated but erroneous or false content (known \ncolloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.6 \n3. Dangerous, Violent, or Hateful Content: Eased production of and access to violent, inciting, \nradicalizing, or threatening content as well as recommendations to carry out self-harm or \nconduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging \nor stereotyping content. \n4. Data Privacy: Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of \nbiometric, health, location, or other personally identifiable information or sensitive data.7 \n5. Environmental Impacts: Impacts due to high compute resource utilization in training or \noperating GAI models, and related outcomes that may adversely impact ecosystems. \n6. Harmful Bias or Homogenization: Amplification and exacerbation of historical, societal, and \nsystemic biases; performance disparities8 between sub-groups or languages, possibly due to \nnon-representative training data, that result in discrimination, amplification of biases, or \nincorrect presumptions about performance; undesired homogeneity that skews system or model \noutputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful \nbiases. \n7. Human-AI Configuration: Arrangements of or interactions between a human and an AI system \nwhich can result in the human inappropriately anthropomorphizing GAI systems or experiencing \nalgorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI \nsystems. \n8. 
Information Integrity: Lowered barrier to entry to generate and support the exchange and \nconsumption of content which may not distinguish fact from opinion or fiction or acknowledge \nuncertainties, or could be leveraged for large-scale dis- and mis-information campaigns. \n9. Information Security: Lowered barriers for offensive cyber capabilities, including via automated \ndiscovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber \n \n \n6 Some commenters have noted that the terms “hallucination” and “fabrication” anthropomorphize GAI, which \nitself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human \nentities. \n7 What is categorized as sensitive data or sensitive PII can be highly contextual based on the nature of the \ninformation, but examples of sensitive information include information that relates to an information subject’s \nmost intimate sphere, including political opinions, sex life, or criminal convictions. \n8 The notion of harm presumes some baseline scenario that the harmful factor (e.g., a GAI model) makes worse. \nWhen the mechanism for potential harm is a disparity between groups, it can be difficult to establish what the \nmost appropriate baseline is to compare against, which can result in divergent views on when a disparity between \nAI behaviors for different subgroups constitutes a harm. In discussing harms from disparities such as biased \nbehavior, this document highlights examples where someone’s situation is worsened relative to what it would have \nbeen in the absence of any AI system, making the outcome unambiguously a harm of the system. \n', ' \n5 \noperations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may \ncompromise a system’s availability or the confidentiality or integrity of training data, code, or \nmodel weights. \n10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or \nlicensed content without authorization (possibly in situations which do not fall under fair use); \neased exposure of trade secrets; or plagiarism or illegal replication. \n11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, \ndegrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse \nmaterial (CSAM), and nonconsensual intimate images (NCII) of adults. \n12. Value Chain and Component Integration: Non-transparent or untraceable integration of \nupstream third-party components, including data that has been improperly obtained or not \nprocessed and cleaned due to increased automation from GAI; improper supplier vetting across \nthe AI lifecycle; or other issues that diminish transparency or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities \nIn the future, GAI may enable malicious actors to more easily access CBRN weapons and/or relevant \nknowledge, information, materials, tools, or technologies that could be misused to assist in the design, \ndevelopment, production, or use of CBRN weapons or other dangerous materials or agents. While \nrelevant biological and chemical threat knowledge and information is often publicly accessible, LLMs \ncould facilitate its analysis or synthesis, particularly by individuals without formal scientific training or \nexpertise. 
\nRecent research on this topic found that LLM outputs regarding biological threat creation and attack \nplanning provided minimal assistance beyond traditional search engine queries, suggesting that state-of-\nthe-art LLMs at the time these studies were conducted do not substantially increase the operational \nlikelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. \nFurthermore, chemical and biological design tools (BDTs) – highly specialized AI systems trained on \nscientific data that aid in chemical and biological design – may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents. \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable \n']",Confabulation in the context of CBRN information or capabilities refers to the production of confidently stated but erroneous or false content that may mislead or deceive users regarding the access to or synthesis of nefarious information or design capabilities related to CBRN weapons or other dangerous materials.,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 7, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 8, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What insights did OSTP seek from the biometric tech RFI, and who provided feedback?","['APPENDIX\nSummaries of Additional Engagements: \n• OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of\nartificial intelligence and other data-driven technologies in their lives.\n• OSTP issued a Request For Information (RFI) on the use and governance of biometric technologies.113 The\npurpose of 
this RFI was to understand the extent and variety of biometric technologies in past, current, or\nplanned use; the domains in which these technologies are being used; the entities making use of them; current\nprinciples, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their\nuse or regulation. The 130 responses to this RFI are available in full online114 and were submitted by the below\nlisted organizations and individuals:\nAccenture \nAccess Now \nACT | The App Association \nAHIP \nAIethicist.org \nAirlines for America \nAlliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium \nat Computing Research Association \nConnected Health Initiative \nConsumer Technology Association \nCourtney Radsch \nCoworker \nCyber Farm Labs \nData & Society Research Institute \nData for Black Lives \nData to Actionable Knowledge Lab \nat Harvard University \nDeloitte \nDev Technology Group \nDigital Therapeutics Alliance \nDigital Welfare State & Human \nRights Project and Center for \nHuman Rights and Global Justice at \nNew York University School of \nLaw, and Temple University \nInstitute for Law, Innovation & \nTechnology \nDignari \nDouglas Goddard \nEdgar Dworsky \nElectronic Frontier Foundation \nElectronic Privacy Information \nCenter, Center for Digital \nDemocracy, and Consumer \nFederation of America \nFaceTec \nFight for the Future \nGanesh Mani \nGeorgia Tech Research Institute \nGoogle \nHealth Information Technology \nResearch and Development \nInteragency Working Group \nHireVue \nHR Policy Association \nID.me \nIdentity and Data Sciences \nLaboratory at Science Applications \nInternational Corporation \nInformation Technology and \nInnovation Foundation \nInformation Technology Industry \nCouncil \nInnocence Project \nInstitute for Human-Centered \nArtificial Intelligence at Stanford \nUniversity \nIntegrated Justice Information \nSystems Institute \nInternational Association of Chiefs \nof Police \nInternational Biometrics + Identity \nAssociation \nInternational Business Machines \nCorporation \nInternational Committee of the Red \nCross \nInventionphysics \niProov \nJacob Boudreau \nJennifer K. 
Wagner, Dan Berger, \nMargaret Hu, and Sara Katsanis \nJonathan Barry-Blocker \nJoseph Turow \nJoy Buolamwini \nJoy Mack \nKaren Bureau \nLamont Gholston \nLawyers’ Committee for Civil \nRights Under Law \n60\n', 'APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation \nStephanie Dinkins and the Future \nHistories Studio at Stony Brook \nUniversity \nTechNet \nThe Alliance for Media Arts and \nCulture, MIT Open Documentary \nLab and Co-Creation Studio, and \nImmerse \nThe International Brotherhood of \nTeamsters \nThe Leadership Conference on \nCivil and Human Rights \nThorn \nU.S. Chamber of Commerce’s \nTechnology Engagement Center \nUber Technologies \nUniversity of Pittsburgh \nUndergraduate Student \nCollaborative \nUpturn \nUS Technology Policy Committee \nof the Association of Computing \nMachinery \nVirginia Puccio \nVisar Berisha and Julie Liss \nXR Association \nXR Safety Initiative \n• As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions\nfor members of the public. The listening sessions together drew upwards of 300 participants. The Science and\nTechnology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61\n']","OSTP sought insights on the extent and variety of biometric technologies in past, current, or planned use; the domains in which these technologies are being used; the entities making use of them; current principles, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their use or regulation. 
Feedback was provided by 130 organizations and individuals, including Accenture, ACLU, Google, Microsoft Corporation, and many others.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 59, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 60, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What causes model collapse in AI, especially with synthetic data and biases?","[' \n9 \nand reduced content diversity). Overly homogenized outputs can themselves be incorrect, or they may \nlead to unreliable decision-making or amplify harmful biases. These phenomena can flow from \nfoundation models to downstream models and systems, with the foundation models acting as \n“bottlenecks,” or single points of failure. \nOverly homogenized content can contribute to “model collapse.” Model collapse can occur when model \ntraining over-relies on synthetic data, resulting in data points disappearing from the distribution of the \nnew model’s outputs. In addition to threatening the robustness of the model overall, model collapse \ncould lead to homogenized outputs, including by amplifying any homogenization from the model used to \ngenerate the synthetic training data. \nTrustworthy AI Characteristics: Fair with Harmful Bias Managed, Valid and Reliable \n2.7. Human-AI Configuration \nGAI system use can involve varying risks of misconfigurations and poor interactions between a system \nand a human who is interacting with it. Humans bring their unique perspectives, experiences, or domain-\nspecific expertise to interactions with AI systems but may not have detailed knowledge of AI systems and \nhow they work. As a result, human experts may be unnecessarily “averse” to GAI systems, and thus \ndeprive themselves or others of GAI’s beneficial uses. \nConversely, due to the complexity and increasing reliability of GAI technology, over time, humans may \nover-rely on GAI systems or may unjustifiably perceive GAI content to be of higher quality than that \nproduced by other sources. This phenomenon is an example of automation bias, or excessive deference \nto automated systems. Automation bias can exacerbate other risks of GAI, such as risks of confabulation \nor risks of bias or homogenization. \nThere may also be concerns about emotional entanglement between humans and GAI systems, which \ncould lead to negative psychological impacts. \nTrustworthy AI Characteristics: Accountable and Transparent, Explainable and Interpretable, Fair with \nHarmful Bias Managed, Privacy Enhanced, Safe, Valid and Reliable \n2.8. 
Information Integrity \nInformation integrity describes the “spectrum of information and associated patterns of its creation, \nexchange, and consumption in society.” High-integrity information can be trusted; “distinguishes fact \nfrom fiction, opinion, and inference; acknowledges uncertainties; and is transparent about its level of \nvetting. This information can be linked to the original source(s) with appropriate evidence. High-integrity \ninformation is also accurate and reliable, can be verified and authenticated, has a clear chain of custody, \nand creates reasonable expectations about when its validity may expire.”11 \n \n \n11 This definition of information integrity is derived from the 2022 White House Roadmap for Researchers on \nPriorities Related to Information Integrity Research and Development. \n', ' \n8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification). \nMethods for creating smaller versions of trained models, such as model distillation or compression, \ncould reduce environmental impacts at inference time, but training and tuning such models may still \ncontribute to their environmental impacts. Currently there is no agreed upon method to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \nBias exists in many forms and can become ingrained in automated systems. AI systems, including GAI \nsystems, can increase the speed and scale at which harmful biases manifest and are acted upon, \npotentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and \nsociety. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current \ntext-to-image models underrepresent women and/or racial minorities, and people with disabilities. \nImage generator models have also produced biased or stereotyped output for various demographic \ngroups and have difficulty producing non-stereotyped content even when the prompt specifically \nrequests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem from their training data, can also cause representational harms or perpetuate or exacerbate \nbias based on race, gender, disability, or other protected classes. \nHarmful bias in GAI systems can also lead to harms via disparities between how a model performs for \ndifferent subgroups or languages (e.g., an LLM may perform less well for non-English languages or \ncertain dialects). Such disparities can contribute to discriminatory decision-making or amplification of \nexisting societal biases. 
In addition, GAI systems may be inappropriately trusted to perform similarly \nacross all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages. \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles \n']","Model collapse in AI can occur when model training over-relies on synthetic data, leading to data points disappearing from the distribution of the new model's outputs. This threatens the robustness of the model overall and can result in homogenized outputs, amplifying any homogenization from the model used to generate the synthetic training data.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 12, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 11, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What standards should automated systems follow for safety and fairness, and how to assess them?","[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. 
Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20\n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. 
In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy. \nReporting. Entities responsible for the development or use of automated systems should provide \nreporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who \nperforms the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in \nresponse to the assessment. This algorithmic impact assessment should include at least: the results of any \nconsultation, design stage equity assessments (potentially including qualitative analysis), accessibility \ndesigns and testing, disparity testing, document any remaining disparities, and detail any mitigation \nimplementation and assessments. This algorithmic impact assessment should be made public whenever \npossible. Reporting should be provided in a clear and machine-readable manner using plain language to \nallow for more straightforward public accountability. \n28\nAlgorithmic \nDiscrimination \nProtections \n']","Automated systems should follow standards that include independent evaluation, regular reporting, and protections against algorithmic discrimination. They should be designed to allow independent evaluators access to assess safety and effectiveness, with regular updates on system performance, data usage, risk management, and independent evaluations. Additionally, entities should conduct algorithmic impact assessments to evaluate potential discrimination and ensure transparency in reporting these assessments.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 27, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What strategies help with privacy and IP risks in AI content?,"[' \n26 \nMAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or \nsoftware – are in place, followed, and documented, as are risks of infringement of a third-party’s intellectual property or other \nrights. \nAction ID \nSuggested Action \nGAI Risks \nMP-4.1-001 Conduct periodic monitoring of AI-generated content for privacy risks; address any \npossible instances of PII or sensitive data exposure. 
\nData Privacy \nMP-4.1-002 Implement processes for responding to potential intellectual property infringement \nclaims or other rights. \nIntellectual Property \nMP-4.1-003 \nConnect new GAI policies, procedures, and processes to existing model, data, \nsoftware development, and IT governance and to legal, compliance, and risk \nmanagement activities. \nInformation Security; Data Privacy \nMP-4.1-004 Document training data curation policies, to the extent possible and according to \napplicable laws and policies. \nIntellectual Property; Data Privacy; \nObscene, Degrading, and/or \nAbusive Content \nMP-4.1-005 \nEstablish policies for collection, retention, and minimum quality of data, in \nconsideration of the following risks: Disclosure of inappropriate CBRN information; \nUse of Illegal or dangerous content; Offensive cyber capabilities; Training data \nimbalances that could give rise to harmful biases; Leak of personally identifiable \ninformation, including facial likenesses of individuals. \nCBRN Information or Capabilities; \nIntellectual Property; Information \nSecurity; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-006 Implement policies and practices defining how third-party intellectual property and \ntraining data will be used, stored, and protected. \nIntellectual Property; Value Chain \nand Component Integration \nMP-4.1-007 Re-evaluate models that were fine-tuned or enhanced on top of third-party \nmodels. \nValue Chain and Component \nIntegration \nMP-4.1-008 \nRe-evaluate risks when adapting GAI models to new domains. Additionally, \nestablish warning systems to determine if a GAI system is being used in a new \ndomain where previous assumptions (relating to context of use or mapped risks \nsuch as security, and safety) may no longer hold. \nCBRN Information or Capabilities; \nIntellectual Property; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-009 Leverage approaches to detect the presence of PII or sensitive data in generated \noutput text, image, video, or audio. \nData Privacy \n', "" \n27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. 
\nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. \nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\nUsers, Operation and Monitoring \n \n""]","Strategies to help with privacy and intellectual property (IP) risks in AI content include conducting periodic monitoring of AI-generated content for privacy risks, implementing processes for responding to potential intellectual property infringement claims, documenting training data curation policies, establishing policies for collection and retention of data, and conducting appropriate diligence on training data use to assess intellectual property and privacy risks.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 29, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "How does risk documentation aid compliance and governance in GAI systems, especially with external feedback?","[' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. 
\nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. \nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. \nInformation Security \n', ' \n20 \nGV-4.3-003 \nVerify information sharing and feedback mechanisms among individuals and \norganizations regarding any negative impact from GAI systems. \nInformation Integrity; Data \nPrivacy \nAI Actor Tasks: AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those \nexternal to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI \nrisks. \nAction ID \nSuggested Action \nGAI Risks \nGV-5.1-001 \nAllocate time and resources for outreach, feedback, and recourse processes in GAI \nsystem development. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nGV-5.1-002 \nDocument interactions with GAI systems to users prior to interactive activities, \nparticularly in contexts involving more significant risks. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Design, AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of \ninfringement of a third-party’s intellectual property or other rights. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.1-001 Categorize different types of GAI content with associated third-party rights (e.g., \ncopyright, intellectual property, data privacy). \nData Privacy; Intellectual \nProperty; Value Chain and \nComponent Integration \nGV-6.1-002 Conduct joint educational activities and events in collaboration with third parties \nto promote best practices for managing GAI risks. 
\nValue Chain and Component \nIntegration \nGV-6.1-003 \nDevelop and validate approaches for measuring the success of content \nprovenance management efforts with third parties (e.g., incidents detected and \nresponse times). \nInformation Integrity; Value Chain \nand Component Integration \nGV-6.1-004 \nDraft and maintain well-defined contracts and service level agreements (SLAs) \nthat specify content ownership, usage rights, quality standards, security \nrequirements, and content provenance expectations for GAI systems. \nInformation Integrity; Information \nSecurity; Intellectual Property \n']","Risk documentation aids compliance and governance in GAI systems by having organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and communicate those impacts more broadly, including downstream impacts such as the use of third-party plugins (GOVERN 4.2). Organizations also put policies in place to collect, consider, prioritize, and integrate feedback from those external to the team regarding potential individual and societal impacts (GOVERN 5.1), and to verify information sharing and feedback mechanisms regarding any negative impact from GAI systems.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 23, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "How does testing ensure the safety of automated systems before deployment, especially regarding community input and risk?","[' \n \n \nSAFE AND EFFECTIVE SYSTEMS \nYou should be protected from unsafe or ineffective sys\xad\ntems. Automated systems should be developed with consultation \nfrom diverse communities, stakeholders, and domain experts to iden\xad\ntify concerns, risks, and potential impacts of the system. Systems \nshould undergo pre-deployment testing, risk identification and miti\xad\ngation, and ongoing monitoring that demonstrate they are safe and \neffective based on their intended use, mitigation of unsafe outcomes \nincluding those beyond the intended use, and adherence to do\xad\nmain-specific standards. Outcomes of these protective measures \nshould include the possibility of not deploying the system or remov\xad\ning a system from use. Automated systems should not be designed \nwith an intent or reasonably foreseeable possibility of endangering \nyour safety or the safety of your community. They should be designed \nto proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, de\xad\nvelopment, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and report\xad\ning that confirms that the system is safe and effective, including re\xad\nporting of steps taken to mitigate potential harms, should be per\xad\nformed and the results made public whenever possible. 
\n15\n', ' \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a \nsystem is introduced or a large change implemented. This consultation should directly engage diverse impact\xad\ned communities to consider concerns and risks that may be unique to those communities, or disproportionate\xad\nly prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakehold\xad\ners may differ depending on the specific automated system and development phase, but should include \nsubject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as \ncivil rights, civil liberties, and privacy experts. For private sector applications, consultations before product \nlaunch may need to be confidential. Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback. \nTesting. Systems should undergo extensive testing before deployment. This testing should follow \ndomain-specific best practices, when available, for ensuring the technology will work in its real-world \ncontext. Such testing should take into account both the specific technology used and the roles of any human \noperators or reviewers who impact system outcomes or effectiveness; testing should include both automated \nsystems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the \nconditions in which the system will be deployed, and new testing may be required for each deployment to \naccount for material differences in conditions from one deployment to another. Following testing, system \nperformance should be compared with the in-place, potentially human-driven, status quo procedures, with \nexisting human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, poten\xad\ntial risks of the automated system should be identified and mitigated. 
Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, mea\xad\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention \nand mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unin\xad\ntended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessi\xad\ntate rollback or significant modification to a launched automated system. \n18\n']","Testing ensures the safety of automated systems before deployment by requiring extensive testing that follows domain-specific best practices, taking into account the specific technology used and the roles of human operators. This testing should mirror real-world conditions and include both automated and human-led testing. Additionally, community input is gathered through consultation during the design and implementation phases, allowing for the identification and mitigation of potential risks that may impact rights and access, particularly for affected communities. Concerns raised during this consultation should be documented and considered in the development process, ensuring that the system is safe and effective based on community feedback.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 14, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 17, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What standards should automated systems follow for safety and fairness, and how to assess them?","[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. 
Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. 
\n20\n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy. \nReporting. Entities responsible for the development or use of automated systems should provide \nreporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who \nperforms the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in \nresponse to the assessment. This algorithmic impact assessment should include at least: the results of any \nconsultation, design stage equity assessments (potentially including qualitative analysis), accessibility \ndesigns and testing, disparity testing, document any remaining disparities, and detail any mitigation \nimplementation and assessments. This algorithmic impact assessment should be made public whenever \npossible. Reporting should be provided in a clear and machine-readable manner using plain language to \nallow for more straightforward public accountability. \n28\nAlgorithmic \nDiscrimination \nProtections \n']","Automated systems should follow standards that include independent evaluation to ensure safety and effectiveness, regular reporting on system performance and data usage, and protections against algorithmic discrimination. 
Assessments should involve algorithmic impact assessments that detail consultation results, equity assessments, and any disparities, with findings made public whenever possible.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 27, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What federal steps are being taken to tackle algorithmic bias in mortgage lending for communities of color?,"["" \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe federal government is working to combat discrimination in mortgage lending. The Depart\xad\nment of Justice has launched a nationwide initiative to combat redlining, which includes reviewing how \nlenders who may be avoiding serving communities of color are conducting targeted marketing and advertising.51 \nThis initiative will draw upon strong partnerships across federal agencies, including the Consumer Financial \nProtection Bureau and prudential regulators. The Action Plan to Advance Property Appraisal and Valuation \nEquity includes a commitment from the agencies that oversee mortgage lending to include a \nnondiscrimination standard in the proposed rules for Automated Valuation Models.52\nThe Equal Employment Opportunity Commission and the Department of Justice have clearly \nlaid out how employers’ use of AI and other automated systems can result in \ndiscrimination against job applicants and employees with disabilities.53 The documents explain \nhow employers’ use of software that relies on algorithmic decision-making may violate existing requirements \nunder Title I of the Americans with Disabilities Act (“ADA”). This technical assistance also provides practical \ntips to employers on how to comply with the ADA, and to job applicants and employees who think that their \nrights may have been violated. \nDisparity assessments identified harms to Black patients' healthcare access. A widely \nused healthcare algorithm relied on the cost of each patient’s past medical care to predict future medical needs, \nrecommending early interventions for the patients deemed most at risk. This process discriminated \nagainst Black patients, who generally have less access to medical care and therefore have generated less cost \nthan white patients with similar illness and need. 
A landmark study documented this pattern and proposed \npractical ways that were shown to reduce this bias, such as focusing specifically on active chronic health \nconditions or avoidable future costs related to emergency visits and hospitalization.54 \nLarge employers have developed best practices to scrutinize the data and models used \nfor hiring. An industry initiative has developed Algorithmic Bias Safeguards for the Workforce, a structured \nquestionnaire that businesses can use proactively when procuring software to evaluate workers. It covers \nspecific technical questions such as the training data used, model training process, biases identified, and \nmitigation steps employed.55 \nStandards organizations have developed guidelines to incorporate accessibility criteria \ninto technology design processes. The most prevalent in the United States is the Access Board’s Section \n508 regulations,56 which are the technical standards for federal information communication technology (software, \nhardware, and web). Other standards include those issued by the International Organization for \nStandardization,57 and the World Wide Web Consortium Web Content Accessibility Guidelines,58 a globally \nrecognized voluntary consensus standard for web content and other information and communications \ntechnology. \nNIST has released Special Publication 1270, Towards a Standard for Identifying and Managing Bias \nin Artificial Intelligence.59 The special publication: describes the stakes and challenges of bias in artificial \nintelligence and provides examples of how and why it can chip away at public trust; identifies three categories \nof bias in AI – systemic, statistical, and human – and describes how and where they contribute to harms; and \ndescribes three broad challenges for mitigating bias – datasets, testing and evaluation, and human factors – and \nintroduces preliminary guidance for addressing them. Throughout, the special publication takes a socio-\ntechnical perspective to identifying and managing AI bias. \n29\nAlgorithmic \nDiscrimination \nProtections \n"", "" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. 
Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n""]","The federal government is working to combat discrimination in mortgage lending through initiatives such as the Department of Justice's nationwide initiative to combat redlining. This includes reviewing how lenders may be avoiding serving communities of color and conducting targeted marketing and advertising. 
Additionally, the Action Plan to Advance Property Appraisal and Valuation Equity includes a commitment from agencies overseeing mortgage lending to include a nondiscrimination standard in proposed rules for Automated Valuation Models.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 28, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What strategies can reduce bias in GAI while maintaining data accuracy?,"["" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. 
\nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n"", ' \n25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. \nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. 
\nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring \n \n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 28, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What strategies help manage IP risks in GAI while ensuring transparency?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. 
\nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. 
\nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]",The context does not provide specific strategies for managing IP risks in GAI while ensuring transparency.,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How do automated customer service systems meet complex needs with human oversight?,"["" \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nHealthcare “navigators” help people find their way through online signup forms to choose \nand obtain healthcare. A Navigator is “an individual or organization that's trained and able to help \nconsumers, small businesses, and their employees as they look for health coverage options through the \nMarketplace (a government web site), including completing eligibility and enrollment forms.”106 For \nthe 2022 plan year, the Biden-Harris Administration increased funding so that grantee organizations could \n“train and certify more than 1,500 Navigators to help uninsured consumers find affordable and comprehensive \nhealth coverage.”107\nThe customer service industry has successfully integrated automated services such as \nchat-bots and AI-driven call response systems with escalation to a human support \nteam.108 Many businesses now use partially automated customer service platforms that help answer customer \nquestions and compile common problems for human agents to review. These integrated human-AI \nsystems allow companies to provide faster customer care while maintaining human agents to answer \ncalls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to \nsuccessful customer service.109\nBallot curing laws in at least 24 states require a fallback system that allows voters to \ncorrect their ballot and have it counted in the case that a voter signature matching \nalgorithm incorrectly flags their ballot as invalid or there is another issue with their \nballot, and review by an election official does not rectify the problem. 
Some federal \ncourts have found that such cure procedures are constitutionally required.110 \nBallot \ncuring processes vary among states, and include direct phone calls, emails, or mail contact by election \nofficials.111 Voters are asked to provide alternative information or a new signature to verify the validity of their \nballot. \n52\n""]","Automated customer service systems meet complex needs with human oversight by integrating automated services such as chat-bots and AI-driven call response systems, which can escalate issues to a human support team when necessary. This allows companies to provide faster customer care while ensuring that human agents are available to handle complicated requests.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 51, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does diverse data help prevent algorithmic bias in automated systems?,"[' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. 
The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. \n26\nAlgorithmic \nDiscrimination \nProtections \n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy. \nReporting. 
Entities responsible for the development or use of automated systems should provide \nreporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who \nperforms the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in \nresponse to the assessment. This algorithmic impact assessment should include at least: the results of any \nconsultation, design stage equity assessments (potentially including qualitative analysis), accessibility \ndesigns and testing, disparity testing, document any remaining disparities, and detail any mitigation \nimplementation and assessments. This algorithmic impact assessment should be made public whenever \npossible. Reporting should be provided in a clear and machine-readable manner using plain language to \nallow for more straightforward public accountability. \n28\nAlgorithmic \nDiscrimination \nProtections \n']","Diverse data helps prevent algorithmic bias in automated systems by ensuring that any data used in system development or assessment is representative of local communities based on the planned deployment setting. This data should be reviewed for bias considering the historical and societal context, and it should be sufficiently robust to identify and mitigate biases and potential harms.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 25, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 27, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What factors should be considered for assessing GAI systems' socio-cultural impacts and data integrity?,"[' \n23 \nMP-1.1-002 \nDetermine and document the expected and acceptable GAI system context of \nuse in collaboration with socio-cultural and other domain experts, by assessing: \nAssumptions and limitations; Direct value to the organization; Intended \noperational environment and observed usage patterns; Potential positive and \nnegative impacts to individuals, public safety, groups, communities, \norganizations, democratic institutions, and the physical environment; Social \nnorms and expectations. \nHarmful Bias and Homogenization \nMP-1.1-003 \nDocument risk measurement plans to address identified risks. 
Plans may \ninclude, as applicable: Individual and group cognitive biases (e.g., confirmation \nbias, funding bias, groupthink) for AI Actors involved in the design, \nimplementation, and use of GAI systems; Known past GAI system incidents and \nfailure modes; In-context use and foreseeable misuse, abuse, and off-label use; \nOver reliance on quantitative metrics and methodologies without sufficient \nawareness of their limitations in the context(s) of use; Standard measurement \nand structured human feedback approaches; Anticipated human-AI \nconfigurations. \nHuman-AI Configuration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent \nMP-1.1-004 \nIdentify and document foreseeable illegal uses or applications of the GAI system \nthat surpass organizational risk tolerances. \nCBRN Information or Capabilities; \nDangerous, Violent, or Hateful \nContent; Obscene, Degrading, \nand/or Abusive Content \nAI Actor Tasks: AI Deployment \n \nMAP 1.2: Interdisciplinary AI Actors, competencies, skills, and capacities for establishing context reflect demographic diversity and \nbroad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary \ncollaboration are prioritized. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.2-001 \nEstablish and empower interdisciplinary teams that reflect a wide range of \ncapabilities, competencies, demographic groups, domain expertise, educational \nbackgrounds, lived experiences, professions, and skills across the enterprise to \ninform and conduct risk measurement and management functions. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMP-1.2-002 \nVerify that data or benchmarks used in risk measurement, and users, \nparticipants, or subjects involved in structured GAI public feedback exercises \nare representative of diverse in-context user populations. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nAI Actor Tasks: AI Deployment \n \n', ' \n29 \nMS-1.1-006 \nImplement continuous monitoring of GAI system impacts to identify whether GAI \noutputs are equitable across various sub-populations. Seek active and direct \nfeedback from affected communities via structured feedback mechanisms or red-\nteaming to monitor and improve outputs. \nHarmful Bias and Homogenization \nMS-1.1-007 \nEvaluate the quality and integrity of data used in training and the provenance of \nAI-generated content, for example by employing techniques like chaos \nengineering and seeking stakeholder feedback. \nInformation Integrity \nMS-1.1-008 \nDefine use cases, contexts of use, capabilities, and negative impacts where \nstructured human feedback exercises, e.g., GAI red-teaming, would be most \nbeneficial for GAI risk measurement and management based on the context of \nuse. \nHarmful Bias and \nHomogenization; CBRN \nInformation or Capabilities \nMS-1.1-009 \nTrack and document risks or opportunities related to all GAI risks that cannot be \nmeasured quantitatively, including explanations as to why some risks cannot be \nmeasured (e.g., due to technological limitations, resource constraints, or \ntrustworthy considerations). Include unmeasured risks in marginal risks. \nInformation Integrity \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are \ninvolved in regular assessments and updates. 
Domain experts, users, AI Actors external to the team that developed or deployed the \nAI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.3-001 \nDefine relevant groups of interest (e.g., demographic groups, subject matter \nexperts, experience with GAI technology) within the context of use as part of \nplans for gathering structured public feedback. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.3-002 \nEngage in internal and external evaluations, GAI red-teaming, impact \nassessments, or other structured human feedback exercises in consultation \nwith representative AI Actors with expertise and familiarity in the context of \nuse, and/or who are representative of the populations associated with the \ncontext of use. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.3-003 \nVerify those conducting structured human feedback exercises are not directly \ninvolved in system development tasks for the same GAI model. \nHuman-AI Configuration; Data \nPrivacy \nAI Actor Tasks: AI Deployment, AI Development, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, \nEnd-Users, Operation and Monitoring, TEVV \n \n']","Factors to consider for assessing GAI systems' socio-cultural impacts include assumptions and limitations, direct value to the organization, intended operational environment, observed usage patterns, potential positive and negative impacts to individuals and communities, and social norms and expectations. For data integrity, factors include evaluating the quality and integrity of data used in training, the provenance of AI-generated content, and ensuring that data or benchmarks used in risk measurement are representative of diverse in-context user populations.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 26, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 32, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What risks come from human use of generative AI, both emotionally and socially?","[' \n3 \nthe abuse, misuse, and unsafe repurposing by humans (adversarial or not), and others result \nfrom interactions between a human and an AI system. \n• \nTime scale: GAI risks may materialize abruptly or across extended periods. 
Examples include \nimmediate (and/or prolonged) emotional harm and potential risks to physical safety due to the \ndistribution of harmful deepfake images, or the long-term effect of disinformation on societal \ntrust in public institutions. \nThe presence of risks and where they fall along the dimensions above will vary depending on the \ncharacteristics of the GAI model, system, or use case at hand. These characteristics include but are not \nlimited to GAI model or system architecture, training mechanisms and libraries, data types used for \ntraining or fine-tuning, levels of model access or availability of model weights, and application or use \ncase context. \nOrganizations may choose to tailor how they measure GAI risks based on these characteristics. They may \nadditionally wish to allocate risk management resources relative to the severity and likelihood of \nnegative impacts, including where and how these risks manifest, and their direct and material impacts \nharms in the context of GAI use. Mitigations for model or system level risks may differ from mitigations \nfor use-case or ecosystem level risks. \nImportantly, some GAI risks are unknown, and are therefore difficult to properly scope or evaluate given \nthe uncertainty about potential GAI scale, complexity, and capabilities. Other risks may be known but \ndifficult to estimate given the wide range of GAI stakeholders, uses, inputs, and outputs. Challenges with \nrisk estimation are aggravated by a lack of visibility into GAI training data, and the generally immature \nstate of the science of AI measurement and safety today. This document focuses on risks for which there \nis an existing empirical evidence base at the time this profile was written; for example, speculative risks \nthat may potentially arise in more advanced, future GAI systems are not considered. Future updates may \nincorporate additional risks or provide further details on the risks identified below. \nTo guide organizations in identifying and managing GAI risks, a set of risks unique to or exacerbated by \nthe development and use of GAI are defined below.5 Each risk is labeled according to the outcome, \nobject, or source of the risk (i.e., some are risks “to” a subject or domain and others are risks “of” or \n“from” an issue or theme). These risks provide a lens through which organizations can frame and execute \nrisk management efforts. To help streamline risk management efforts, each risk is mapped in Section 3 \n(as well as in tables in Appendix B) to relevant Trustworthy AI Characteristics identified in the AI RMF. \n \n \n5 These risks can be further categorized by organizations depending on their unique approaches to risk definition \nand management. One possible way to further categorize these risks, derived in part from the UK’s International \nScientific Report on the Safety of Advanced AI, could be: 1) Technical / Model risks (or risk from malfunction): \nConfabulation; Dangerous or Violent Recommendations; Data Privacy; Value Chain and Component Integration; \nHarmful Bias, and Homogenization; 2) Misuse by humans (or malicious use): CBRN Information or Capabilities; \nData Privacy; Human-AI Configuration; Obscene, Degrading, and/or Abusive Content; Information Integrity; \nInformation Security; 3) Ecosystem / societal risks (or systemic risks): Data Privacy; Environmental; Intellectual \nProperty. We also note that some risks are cross-cutting between these categories. 
\n', ' \n2 \nThis work was informed by public feedback and consultations with diverse stakeholder groups as part of NIST’s \nGenerative AI Public Working Group (GAI PWG). The GAI PWG was an open, transparent, and collaborative \nprocess, facilitated via a virtual workspace, to obtain multistakeholder input on GAI risk management and to \ninform NIST’s approach. \nThe focus of the GAI PWG was limited to four primary considerations relevant to GAI: Governance, Content \nProvenance, Pre-deployment Testing, and Incident Disclosure (further described in Appendix A). As such, the \nsuggested actions in this document primarily address these considerations. \nFuture revisions of this profile will include additional AI RMF subcategories, risks, and suggested actions based \non additional considerations of GAI as the space evolves and empirical evidence indicates additional risks. A \nglossary of terms pertinent to GAI risk management will be developed and hosted on NIST’s Trustworthy & \nResponsible AI Resource Center (AIRC), and added to The Language of Trustworthy AI: An In-Depth Glossary of \nTerms. \nThis document was also informed by public comments and consultations from several Requests for Information. \n \n2. \nOverview of Risks Unique to or Exacerbated by GAI \nIn the context of the AI RMF, risk refers to the composite measure of an event’s probability (or \nlikelihood) of occurring and the magnitude or degree of the consequences of the corresponding event. \nSome risks can be assessed as likely to materialize in a given context, particularly those that have been \nempirically demonstrated in similar contexts. Other risks may be unlikely to materialize in a given \ncontext, or may be more speculative and therefore uncertain. \nAI risks can differ from or intensify traditional software risks. Likewise, GAI can exacerbate existing AI \nrisks, and creates unique risks. GAI risks can vary along many dimensions: \n• \nStage of the AI lifecycle: Risks can arise during design, development, deployment, operation, \nand/or decommissioning. \n• \nScope: Risks may exist at individual model or system levels, at the application or implementation \nlevels (i.e., for a specific use case), or at the ecosystem level – that is, beyond a single system or \norganizational context. Examples of the latter include the expansion of “algorithmic \nmonocultures,3” resulting from repeated use of the same model, or impacts on access to \nopportunity, labor markets, and the creative economies.4 \n• \nSource of risk: Risks may emerge from factors related to the design, training, or operation of the \nGAI model itself, stemming in some cases from GAI model or system inputs, and in other cases, \nfrom GAI system outputs. Many GAI risks, however, originate from human behavior, including \n \n \n3 “Algorithmic monocultures” refers to the phenomenon in which repeated use of the same model or algorithm in \nconsequential decision-making settings like employment and lending can result in increased susceptibility by \nsystems to correlated failures (like unexpected shocks), due to multiple actors relying on the same algorithm. \n4 Many studies have projected the impact of AI on the workforce and labor markets. Fewer studies have examined \nthe impact of GAI on the labor market, though some industry surveys indicate that that both employees and \nemployers are pondering this disruption. 
\n']","The risks that come from human use of generative AI (GAI) include immediate and prolonged emotional harm, potential risks to physical safety due to the distribution of harmful deepfake images, and the long-term effect of disinformation on societal trust in public institutions.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 6, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 5, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What problems does AI nudification tech address, and how do they connect to wider concerns about automated harm?","[' \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \n•\nAI-enabled “nudification” technology that creates images where people appear to be nude—including apps that\nenable non-technical users to create or alter images of individuals without their consent—has proliferated at an\nalarming rate. Such technology is becoming a common form of image-based abuse that disproportionately\nimpacts women. As these tools become more sophisticated, they are producing altered images that are increasing\xad\nly realistic and are difficult for both humans and AI to detect as inauthentic. Regardless of authenticity, the expe\xad\nrience of harm to victims of non-consensual intimate images can be devastatingly real—affecting their personal\nand professional lives, and impacting their mental and physical health.10\n•\nA company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its driv\xad\ners, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond\ntheir control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.11\n17\n', ' \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nWhile technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can \nalso lead to its use in situations where it has not yet been proven to work—either at all or within an acceptable range \nof error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. 
\nAutomated systems sometimes rely on data from other systems, including historical data, allowing irrelevant information from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposefully designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended \nor unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consultation processes that may be applied when considering the use of new automated systems, and existing product development and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Innovators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections \nfrom unsafe outcomes. All can benefit from assurances that automated systems will be designed, tested, and consistently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harmful outcomes. \n•\nA proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model predictions\nunderperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting\nlikelihood of sepsis.6\n•\nOn social media, Black people who quote and criticize racist messages have had their own speech silenced when\na platform’s automated moderation system failed to distinguish this “counter speech” (or other critique\nand journalism) from the original hateful messages to which such speech responded.7\n•\nA device originally developed to help people track and find lost items has been used as a tool by stalkers to track\nvictims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to\nprotect people from unwanted tracking by alerting people on their phones when a device is found to be moving\nwith them over time and also by having the device make an occasional noise, but not all phones are able\nto receive the notification and the devices remain a safety concern due to their misuse.8 \n•\nAn algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit,\neven if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions\nwere the result of a feedback loop generated from the reuse of data from previous arrests and algorithm\npredictions.9\n16\n']","AI nudification technology addresses the problem of creating non-consensual intimate images that can lead to image-based abuse, particularly impacting women. 
This technology raises wider concerns about automated harm as it exemplifies how advanced tools can be misused, leading to devastating effects on victims' personal and professional lives, as well as their mental and physical health. Additionally, the reliance on automated systems can result in unintended consequences, such as incorrect penalization of drivers or biased decision-making based on flawed historical data, highlighting the need for safeguards and ethical reviews in technology deployment.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 16, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 15, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What strategies ensure safe, fair automated systems for underserved communities?","[' BLUEPRINT FOR AN AI BILL OF RIGHTS \nSAFE AND EFFECTIVE SYSTEMS \nYou should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation \nfrom diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the \nsystem. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring \nthat demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including \nthose beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures \nshould include the possibility of not deploying the system or removing a system from use. Automated systems should \nnot be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your \ncommunity. They should be designed to proactively protect you from harms stemming from unintended, yet \nforeseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the \ndesign, development, and deployment of automated systems, and from the compounded harm of its reuse. \nIndependent evaluation and reporting that confirms that the system is safe and effective, including reporting of \nsteps taken to mitigate potential harms, should be performed and the results made public whenever possible. \nALGORITHMIC DISCRIMINATION PROTECTIONS\nYou should not face discrimination by algorithms and systems should be used and designed in \nan equitable way. 
Algorithmic discrimination occurs when automated systems contribute to unjustified \ndifferent treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including \npregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other \nclassification protected by law. Depending on the specific circumstances, such algorithmic discrimination \nmay violate legal protections. Designers, developers, and deployers of automated systems should take \nproactive \nand \ncontinuous \nmeasures \nto \nprotect \nindividuals \nand \ncommunities \nfrom algorithmic \ndiscrimination and to use and design systems in an equitable way. This protection should include proactive \nequity assessments as part of the system design, use of representative data and protection against proxies \nfor demographic features, ensuring accessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent \nevaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5\n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. 
The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. \n26\nAlgorithmic \nDiscrimination \nProtections \n']","Strategies to ensure safe and fair automated systems for underserved communities include conducting proactive equity assessments during the design phase, using representative and robust data, guarding against proxies that may lead to algorithmic discrimination, and implementing ongoing monitoring and evaluation to confirm protections against algorithmic discrimination. 
These strategies aim to identify potential discrimination and effects on equity, ensuring that the systems are designed and deployed in an equitable manner.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 4, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 25, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What's the role of oversight and feedback in managing GAI risks and communicating their societal effects?,"[' \n20 \nGV-4.3-003 \nVerify information sharing and feedback mechanisms among individuals and \norganizations regarding any negative impact from GAI systems. \nInformation Integrity; Data \nPrivacy \nAI Actor Tasks: AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those \nexternal to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI \nrisks. \nAction ID \nSuggested Action \nGAI Risks \nGV-5.1-001 \nAllocate time and resources for outreach, feedback, and recourse processes in GAI \nsystem development. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nGV-5.1-002 \nDocument interactions with GAI systems to users prior to interactive activities, \nparticularly in contexts involving more significant risks. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Design, AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of \ninfringement of a third-party’s intellectual property or other rights. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.1-001 Categorize different types of GAI content with associated third-party rights (e.g., \ncopyright, intellectual property, data privacy). \nData Privacy; Intellectual \nProperty; Value Chain and \nComponent Integration \nGV-6.1-002 Conduct joint educational activities and events in collaboration with third parties \nto promote best practices for managing GAI risks. \nValue Chain and Component \nIntegration \nGV-6.1-003 \nDevelop and validate approaches for measuring the success of content \nprovenance management efforts with third parties (e.g., incidents detected and \nresponse times). 
\nInformation Integrity; Value Chain \nand Component Integration \nGV-6.1-004 \nDraft and maintain well-defined contracts and service level agreements (SLAs) \nthat specify content ownership, usage rights, quality standards, security \nrequirements, and content provenance expectations for GAI systems. \nInformation Integrity; Information \nSecurity; Intellectual Property \n', ' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. \nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. \nInformation Security \n']","Oversight and feedback play a crucial role in managing GAI risks by ensuring that organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from external sources regarding the potential individual and societal impacts related to AI risks. 
This includes establishing oversight functions across the GAI lifecycle and documenting the risks and potential impacts of the AI technology, which facilitates broader communication about these impacts.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 23, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True Which framework aims to boost AI trustworthiness while upholding civil rights and privacy laws?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles—while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and \nrespectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) \nsafe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. \nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. 
\nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate \nrequirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-\nBased Participatory Research Program developed by the National Institutes of Health and the participatory \ntechnology assessments developed by the National Oceanic and Atmospheric Administration.18\nThe National Institute of Standards and Technology (NIST) is developing a risk \nmanagement framework to better manage risks posed to individuals, organizations, and \nsociety by AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended for \nvoluntary use to help incorporate trustworthiness considerations into the design, development, use, and \nevaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-\ndriven, open, transparent, and collaborative process that includes workshops and other opportunities to provide \ninput. The NIST framework aims to foster the development of innovative approaches to address \ncharacteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, \nrobustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of \nharmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. \n21\n', 'SECTION TITLE\n \n \n \n \n \n \nApplying The Blueprint for an AI Bill of Rights \nRELATIONSHIP TO EXISTING LAW AND POLICY\nThere are regulatory safety requirements for medical devices, as well as sector-, population-, or technology-spe\xad\ncific privacy and security protections. Ensuring some of the additional protections proposed in this framework \nwould require new laws to be enacted or new policies and practices to be adopted. In some cases, exceptions to \nthe principles described in the Blueprint for an AI Bill of Rights may be necessary to comply with existing law, \nconform to the practicalities of a specific use case, or balance competing public interests. 
In particular, law \nenforcement, and other regulatory contexts may require government actors to protect civil rights, civil liberties, \nand privacy in a manner consistent with, but using alternate mechanisms to, the specific principles discussed in \nthis framework. The Blueprint for an AI Bill of Rights is meant to assist governments and the private sector in \nmoving principles into practice. \nThe expectations given in the Technical Companion are meant to serve as a blueprint for the development of \nadditional technical standards and practices that should be tailored for particular sectors and contexts. While \nexisting laws informed the development of the Blueprint for an AI Bill of Rights, this framework does not detail \nthose laws beyond providing them as examples, where appropriate, of existing protective measures. This \nframework instead shares a broad, forward-leaning vision of recommended principles for automated system \ndevelopment and use to inform private and public involvement with these systems where they have the poten\xad\ntial to meaningfully impact rights, opportunities, or access. Additionally, this framework does not analyze or \ntake a position on legislative and regulatory proposals in municipal, state, and federal government, or those in \nother countries. \nWe have seen modest progress in recent years, with some state and local governments responding to these prob\xad\nlems with legislation, and some courts extending longstanding statutory protections to new and emerging tech\xad\nnologies. There are companies working to incorporate additional protections in their design and use of auto\xad\nmated systems, and researchers developing innovative guardrails. Advocates, researchers, and government \norganizations have proposed principles for the ethical use of AI and other automated systems. These include \nthe Organization for Economic Co-operation and Development’s (OECD’s) 2019 Recommendation on Artificial \nIntelligence, which includes principles for responsible stewardship of trustworthy AI and which the United \nStates adopted, and Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government, which sets out principles that govern the federal government’s use of AI. The Blueprint \nfor an AI Bill of Rights is fully consistent with these principles and with the direction in Executive Order 13985 \non Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. \nThese principles find kinship in the Fair Information Practice Principles (FIPPs), derived from the 1973 report \nof an advisory committee to the U.S. Department of Health, Education, and Welfare, Records, Computers, \nand the Rights of Citizens.4 While there is no single, universal articulation of the FIPPs, these core \nprinciples for managing information about individuals have been incorporated into data privacy laws and \npolicies across the globe.5 The Blueprint for an AI Bill of Rights embraces elements of the FIPPs that are \nparticularly relevant to automated systems, without articulating a specific set of FIPPs or scoping \napplicability or the interests served to a single particular domain, like privacy, civil rights and civil liberties, \nethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm. 
\n9\n']",The NIST AI Risk Management Framework aims to boost AI trustworthiness while upholding civil rights and privacy laws.,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 20, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 8, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What factors ensure effective oversight in automated systems for critical fields like justice and healthcare?,"[' \nSECTION TITLE\nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nYou should be able to opt out, where appropriate, and have access to a person who can quickly \nconsider and remedy problems you encounter. You should be able to opt out from automated systems in \nfavor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable \nexpectations in a given context and with a focus on ensuring broad accessibility and protecting the public from \nespecially harmful impacts. In some cases, a human or other alternative may be required by law. You should have \naccess to timely human consideration and remedy by a fallback and escalation process if an automated system \nfails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and \nfallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and \nshould not impose an unreasonable burden on the public. Automated systems with an intended use within sensi\xad\ntive domains, including, but not limited to, criminal justice, employment, education, and health, should additional\xad\nly be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting \nwith the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes \na description of these human governance processes and assessment of their timeliness, accessibility, outcomes, \nand effectiveness should be made public whenever possible. \nDefinitions for key terms in The Blueprint for an AI Bill of Rights can be found in Applying the Blueprint for an AI Bill of Rights. \nAccompanying analysis and tools for actualizing each principle can be found in the Technical Companion. \n7\n', 'You should be able to opt out, where appropriate, and \nhave access to a person who can quickly consider and \nremedy problems you encounter. You should be able to opt \nout from automated systems in favor of a human alternative, where \nappropriate. 
Appropriateness should be determined based on rea\xad\nsonable expectations in a given context and with a focus on ensuring \nbroad accessibility and protecting the public from especially harm\xad\nful impacts. In some cases, a human or other alternative may be re\xad\nquired by law. You should have access to timely human consider\xad\nation and remedy by a fallback and escalation process if an automat\xad\ned system fails, it produces an error, or you would like to appeal or \ncontest its impacts on you. Human consideration and fallback \nshould be accessible, equitable, effective, maintained, accompanied \nby appropriate operator training, and should not impose an unrea\xad\nsonable burden on the public. Automated systems with an intended \nuse within sensitive domains, including, but not limited to, criminal \njustice, employment, education, and health, should additionally be \ntailored to the purpose, provide meaningful access for oversight, \ninclude training for any people interacting with the system, and in\xad\ncorporate human consideration for adverse or high-risk decisions. \nReporting that includes a description of these human governance \nprocesses and assessment of their timeliness, accessibility, out\xad\ncomes, and effectiveness should be made public whenever possible. \nHUMAN ALTERNATIVES, CONSIDERATION\nALLBACK\nF\nAND\n, \n46\n']","Effective oversight in automated systems for critical fields like justice and healthcare is ensured by tailoring the systems to their intended purpose, providing meaningful access for oversight, including training for individuals interacting with the system, and incorporating human consideration for adverse or high-risk decisions. Additionally, reporting on human governance processes and assessing their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 6, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 45, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What impact do automated systems have on rights, and how are transparency needs met by current laws?","["" \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nAutomated systems now determine opportunities, from employment to credit, and directly shape the American \npublic’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. 
But this \nexpansive impact is not always visible. An applicant might not know whether a person rejected their resume or a \nhiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge deny\xad\ning their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting \ndecisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. \nNotice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonable\xad\nness of a recommendation before enacting it. \nIn order to guard against potential harms, the American public needs to know if an automated system is being used. \nClear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Like\xad\nwise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a \nparticular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, \nunaccountable, whether by design or by omission. These factors can make explanations both more challenging and \nmore important, and should not be used as a pretext to avoid explaining important decisions to the people impacted \nby those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline \nrequirement. \nProviding notice has long been a standard practice, and in many cases is a legal requirement, when, for example, \nmaking a video recording of someone (outside of a law enforcement or national security context). In some cases, such \nas credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the \nprocess of explaining such systems are under active research and improvement and such explanations can take many \nforms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory \nsystems that can help the public better understand decisions that impact them. \nWhile notice and explanation requirements are already in place in some sectors or situations, the American public \ndeserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights, \nopportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the \nvalidity and reasonable use of automated systems. \n•\nA lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home\nhealth-care assistance couldn't determine why, especially since the decision went against historical access\npractices. 
In a court hearing, the lawyer learned from a witness that the state in which the older client\nlived had recently adopted a new algorithm to determine eligibility.83 The lack of a timely explanation made it\nharder to understand and contest the decision.\n•\nA formal child welfare investigation is opened against a parent based on an algorithm and without the parent\never being notified that data was being collected and used as part of an algorithmic child maltreatment\nrisk assessment.84 The lack of notice or an explanation makes it harder for those performing child\nmaltreatment assessments to validate the risk assessment and denies parents knowledge that could help them\ncontest a decision.\n41\n"", ' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nNOTICE & \nEXPLANATION \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\xad\xad\xad\nPeople in Illinois are given written notice by the private sector if their biometric informa-\ntion is used. The Biometric Information Privacy Act enacted by the state contains a number of provisions \nconcerning the use of individual biometric data and identifiers. Included among them is a provision that no private \nentity may ""collect, capture, purchase, receive through trade, or otherwise obtain"" such information about an \nindividual, unless written notice is provided to that individual or their legally appointed representative. 87\nMajor technology companies are piloting new ways to communicate with the public about \ntheir automated technologies. For example, a collection of non-profit organizations and companies have \nworked together to develop a framework that defines operational approaches to transparency for machine \nlearning systems.88 This framework, and others like it,89 inform the public about the use of these tools, going \nbeyond simple notice to include reporting elements such as safety evaluations, disparity assessments, and \nexplanations of how the systems work. \nLenders are required by federal law to notify consumers about certain decisions made about \nthem. Both the Fair Credit Reporting Act and the Equal Credit Opportunity Act require in certain circumstances \nthat consumers who are denied credit receive ""adverse action"" notices. Anyone who relies on the information in a \ncredit report to deny a consumer credit must, under the Fair Credit Reporting Act, provide an ""adverse action"" \nnotice to the consumer, which includes ""notice of the reasons a creditor took adverse action on the application \nor on an existing credit account.""90 In addition, under the risk-based pricing rule,91 lenders must either inform \nborrowers of their credit score, or else tell consumers when ""they are getting worse terms because of \ninformation in their credit report."" The CFPB has also asserted that ""[t]he law gives every applicant the right to \na specific explanation if their application for credit was denied, and that right is not diminished simply because \na company uses a complex algorithm that it doesn\'t understand.""92 Such explanations illustrate a shared value \nthat certain decisions need to be explained. \nA California law requires that warehouse employees are provided with notice and explana-\ntion about quotas, potentially facilitated by automated systems, that apply to them. 
Warehous-\ning employers in California that use quota systems (often facilitated by algorithmic monitoring systems) are \nrequired to provide employees with a written description of each quota that applies to the employee, including \n“quantified number of tasks to be performed or materials to be produced or handled, within the defined \ntime period, and any potential adverse employment action that could result from failure to meet the quota.”93\nAcross the federal government, agencies are conducting and supporting research on explain-\nable AI systems. The NIST is conducting fundamental research on the explainability of AI systems. A multidis-\nciplinary team of researchers aims to develop measurement methods and best practices to support the \nimplementation of core tenets of explainable AI.94 The Defense Advanced Research Projects Agency has a \nprogram on Explainable Artificial Intelligence that aims to create a suite of machine learning techniques that \nproduce more explainable models, while maintaining a high level of learning performance (prediction \naccuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging \ngeneration of artificially intelligent partners.95 The National Science Foundation’s program on Fairness in \nArtificial Intelligence also includes a specific interest in research foundations for explainable AI.96\n45\n']","Automated systems significantly impact rights by determining opportunities in various areas such as employment and credit, often without individuals being aware of the algorithms influencing decisions. Current laws, such as the Biometric Information Privacy Act in Illinois, require written notice when biometric information is used, and federal laws like the Fair Credit Reporting Act mandate that consumers receive adverse action notices when credit is denied. These laws aim to ensure transparency and provide individuals with the knowledge necessary to contest decisions made by automated systems.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 40, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 44, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What criteria ensure human subject protection in AI evaluations?,"[' \n30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are \nrepresentative of the relevant population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and manage statistical biases related to GAI content provenance through \ntechniques such as re-sampling, re-weighting, or adversarial training. 
\nInformation Integrity; Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument how content provenance data is tracked and how that data interacts \nwith privacy and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects; Leveraging privacy output filters; Removing any personally \nidentifiable information (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration; Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation or revoke their \nconsent for present or future use of their data in GAI applications. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004 \nUse techniques such as anonymization, differential privacy or other privacy-\nenhancing technologies to minimize the risks associated with linking AI-generated \ncontent back to individual human subjects. \nData Privacy; Human-AI \nConfiguration \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for \nconditions similar to deployment setting(s). Measures are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a \nmodel for fine tuning or enhancement with retrieval-augmented generation. \nInformation Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities using empirically validated methods. \nConfabulation; Information \nSecurity \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those \nwith system release approval authority. \nHuman-AI Configuration \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. 
\nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","Human subject protection in AI evaluations is ensured through several criteria, including: 1) evaluations involving human subjects must meet applicable requirements and be representative of the relevant population; 2) options must be provided for human subjects to withdraw participation or revoke consent for the use of their data; 3) techniques such as anonymization and differential privacy should be used to minimize risks associated with linking AI-generated content back to individual human subjects; 4) documentation of how content provenance data is tracked and how it interacts with privacy and security is necessary, including the removal of personally identifiable information (PII).",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 33, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "How does data provenance support ethical AI governance, especially for human protection and bias?","[' \n35 \nMEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as \nidentified in the MAP function – to inform responsible use and governance. 
\nAction ID \nSuggested Action \nGAI Risks \nMS-2.9-001 \nApply and document ML explanation results such as: Analysis of embeddings, \nCounterfactual prompts, Gradient-based attributions, Model \ncompression/surrogate models, Occlusion/term reduction. \nConfabulation \nMS-2.9-002 \nDocument GAI model details including: Proposed use and organizational value; \nAssumptions and limitations, Data collection methodologies; Data provenance; \nData quality; Model architecture (e.g., convolutional neural network, \ntransformers, etc.); Optimization objectives; Training algorithms; RLHF \napproaches; Fine-tuning or retrieval-augmented generation approaches; \nEvaluation data; Ethical considerations; Legal and regulatory requirements. \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.10-001 \nConduct AI red-teaming to assess issues such as: Outputting of training data \nsamples, and subsequent reverse engineering, model extraction, and \nmembership inference risks; Revealing biometric, confidential, copyrighted, \nlicensed, patented, personal, proprietary, sensitive, or trade-marked information; \nTracking or revealing location information of users or members of training \ndatasets. \nHuman-AI Configuration; \nInformation Integrity; Intellectual \nProperty \nMS-2.10-002 \nEngage directly with end-users and other stakeholders to understand their \nexpectations and concerns regarding content provenance. Use this feedback to \nguide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n', ' \n30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are \nrepresentative of the relevant population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and manage statistical biases related to GAI content provenance through \ntechniques such as re-sampling, re-weighting, or adversarial training. \nInformation Integrity; Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument how content provenance data is tracked and how that data interacts \nwith privacy and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects; Leveraging privacy output filters; Removing any personally \nidentifiable information (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration; Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation or revoke their \nconsent for present or future use of their data in GAI applications. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004 \nUse techniques such as anonymization, differential privacy or other privacy-\nenhancing technologies to minimize the risks associated with linking AI-generated \ncontent back to individual human subjects. 
\nData Privacy; Human-AI \nConfiguration \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for \nconditions similar to deployment setting(s). Measures are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a \nmodel for fine tuning or enhancement with retrieval-augmented generation. \nInformation Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities using empirically validated methods. \nConfabulation; Information \nSecurity \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those \nwith system release approval authority. \nHuman-AI Configuration \n']","The context does not explicitly mention how data provenance supports ethical AI governance, particularly regarding human protection and bias.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 38, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 33, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What factors on data privacy and content integrity should be considered for a GAI system, especially regarding user feedback and transparency?","[' \n31 \nMS-2.3-004 \nUtilize a purpose-built testing environment such as NIST Dioptra to empirically \nevaluate GAI trustworthy characteristics. \nCBRN Information or Capabilities; \nData Privacy; Confabulation; \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, TEVV \n \nMEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the \nconditions under which the technology was developed are documented. \nAction ID \nSuggested Action \nRisks \nMS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-\nsystematic, and anecdotal assessments. \nHuman-AI Configuration; \nConfabulation \nMS-2.5-002 \nDocument the extent to which human domain knowledge is employed to \nimprove GAI system performance, via, e.g., RLHF, fine-tuning, retrieval-\naugmented generation, content moderation, business rules. \nHuman-AI Configuration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. 
\nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Configuration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. \nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. 
\nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","Factors on data privacy and content integrity for a GAI system include documenting the extent to which human domain knowledge is employed to improve GAI system performance, reviewing and verifying sources and citations in GAI system outputs, tracking instances of anthropomorphization in GAI system interfaces, verifying GAI system training data and TEVV data provenance, and regularly reviewing security and safety guardrails. Additionally, structured feedback about content provenance should be recorded and integrated from operators, users, and impacted communities, and there should be an emphasis on digital content transparency regarding the societal impacts of AI and the role of diverse and inclusive content generation.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 34, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What challenges did panelists see at the tech-healthcare equity intersection?,"["" \n \n \n \n \nAPPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing scope of technology use in providing for social welfare, including \nin fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. \nHowever, various panelists individually cautioned that these systems may reduce burden for government \nagencies by increasing the burden and agency of people using and interacting with these technologies. \nAdditionally, these systems can produce feedback loops and compounded harm, collecting data from \ncommunities and using it to reinforce inequality. Various panelists suggested that these harms could be \nmitigated by ensuring community input at the beginning of the design process, providing ways to opt out of \nthese systems and use associated human-driven mechanisms instead, ensuring timeliness of benefit payments, \nand providing clear notice about the use of these systems and clear explanations of how and what the \ntechnologies are doing. 
Some panelists suggested that technology should be used to help people receive \nbenefits, e.g., by pushing benefits to those in need and ensuring automated decision-making systems are only \nused to provide a positive outcome; technology shouldn't be used to take supports away from people who need \nthem. \nPanel 6: The Healthcare System. This event explored current and emerging uses of technology in the \nhealthcare system and consumer products related to health. \nWelcome:\n•\nAlondra Nelson, Deputy Director for Science and Society, White House Office of Science and Technology\nPolicy\n•\nPatrick Gaspard, President and CEO, Center for American Progress\nModerator: Micky Tripathi, National Coordinator for Health Information Technology, U.S Department of \nHealth and Human Services. \nPanelists: \n•\nMark Schneider, Health Innovation Advisor, ChristianaCare\n•\nZiad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management,\nUniversity of California, Berkeley School of Public Health\n•\nDorothy Roberts, George A. Weiss University Professor of Law and Sociology and the Raymond Pace and\nSadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania\n•\nDavid Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University\n•\nJamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for\nHealth Equity\xad\nPanelists discussed the impact of new technologies on health disparities; healthcare access, delivery, and \noutcomes; and areas ripe for research and policymaking. Panelists discussed the increasing importance of tech-\nnology as both a vehicle to deliver healthcare and a tool to enhance the quality of care. On the issue of \ndelivery, various panelists pointed to a number of concerns including access to and expense of broadband \nservice, the privacy concerns associated with telehealth systems, the expense associated with health \nmonitoring devices, and how this can exacerbate equity issues. On the issue of technology enhanced care, \nsome panelists spoke extensively about the way in which racial biases and the use of race in medicine \nperpetuate harms and embed prior discrimination, and the importance of ensuring that the technologies used \nin medical care were accountable to the relevant stakeholders. Various panelists emphasized the importance \nof having the voices of those subjected to these technologies be heard.\n59\n""]","Panelists identified several challenges at the tech-healthcare equity intersection, including access to and expense of broadband service, privacy concerns associated with telehealth systems, and the expense of health monitoring devices, which can exacerbate equity issues. 
Additionally, they discussed how racial biases and the use of race in medicine perpetuate harms and embed prior discrimination, emphasizing the need for accountability of the technologies used in medical care and the importance of hearing the voices of those subjected to these technologies.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 58, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What strategies can help reduce IP and privacy risks in AI training data?,"[' \n26 \nMAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or \nsoftware – are in place, followed, and documented, as are risks of infringement of a third-party’s intellectual property or other \nrights. \nAction ID \nSuggested Action \nGAI Risks \nMP-4.1-001 Conduct periodic monitoring of AI-generated content for privacy risks; address any \npossible instances of PII or sensitive data exposure. \nData Privacy \nMP-4.1-002 Implement processes for responding to potential intellectual property infringement \nclaims or other rights. \nIntellectual Property \nMP-4.1-003 \nConnect new GAI policies, procedures, and processes to existing model, data, \nsoftware development, and IT governance and to legal, compliance, and risk \nmanagement activities. \nInformation Security; Data Privacy \nMP-4.1-004 Document training data curation policies, to the extent possible and according to \napplicable laws and policies. \nIntellectual Property; Data Privacy; \nObscene, Degrading, and/or \nAbusive Content \nMP-4.1-005 \nEstablish policies for collection, retention, and minimum quality of data, in \nconsideration of the following risks: Disclosure of inappropriate CBRN information; \nUse of Illegal or dangerous content; Offensive cyber capabilities; Training data \nimbalances that could give rise to harmful biases; Leak of personally identifiable \ninformation, including facial likenesses of individuals. \nCBRN Information or Capabilities; \nIntellectual Property; Information \nSecurity; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-006 Implement policies and practices defining how third-party intellectual property and \ntraining data will be used, stored, and protected. \nIntellectual Property; Value Chain \nand Component Integration \nMP-4.1-007 Re-evaluate models that were fine-tuned or enhanced on top of third-party \nmodels. \nValue Chain and Component \nIntegration \nMP-4.1-008 \nRe-evaluate risks when adapting GAI models to new domains. Additionally, \nestablish warning systems to determine if a GAI system is being used in a new \ndomain where previous assumptions (relating to context of use or mapped risks \nsuch as security, and safety) may no longer hold. \nCBRN Information or Capabilities; \nIntellectual Property; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-009 Leverage approaches to detect the presence of PII or sensitive data in generated \noutput text, image, video, or audio. 
\nData Privacy \n', "" \n27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. \nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. 
\nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\nUsers, Operation and Monitoring \n \n""]","Strategies to reduce IP and privacy risks in AI training data include conducting periodic monitoring of AI-generated content for privacy risks, implementing processes for responding to potential intellectual property infringement claims, documenting training data curation policies, establishing policies for collection and retention of data, and conducting appropriate diligence on training data use to assess intellectual property and privacy risks.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 29, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What goals does the NIST AI Risk Management Framework pursue for safe, equitable AI, especially in transparency and ethics?","[' \nENDNOTES\n12. Expectations about reporting are intended for the entity developing or using the automated system. The\nresulting reports can be provided to the public, regulators, auditors, industry standards groups, or others\nengaged in independent review, and should be made public as much as possible consistent with law,\nregulation, and policy, and noting that intellectual property or law enforcement considerations may prevent\npublic release. These reporting expectations are important for transparency, so the American people can\nhave confidence that their rights, opportunities, and access as well as their expectations around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15. See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional Engineers and NHTSA. Public\nAdministration Review. Vol. 39, No. 4. Jul.-Aug., 1979. https://www.jstor.org/stable/976213?seq=1\n16. The US Department of Transportation has publicly described the health and other benefits of these\n“traffic calming” measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow Vehicle\nSpeeds. Accessed Apr. 17, 2022. https://www.transportation.gov/mission/health/Traffic-Calming-to-Slow\xad\nVehicle-Speeds\n17. Karen Hao. Worried about your firm’s AI ethics? 
These startups are here to help.\nA growing ecosystem of “responsible AI” ventures promise to help organizations monitor and fix their AI\nmodels. MIT Technology Review. Jan 15., 2021.\nhttps://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top Progressive\nCompanies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021. https://\nwww.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for-in-2021/\n18. Office of Management and Budget. Study to Identify Methods to Assess Equity: Report to the President.\nAug. 2021. https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985-Implementation_508-Compliant-Secure-v1.1.pdf\n19. National Institute of Standards and Technology. AI Risk Management Framework. Accessed May 23,\n2022. https://www.nist.gov/itl/ai-risk-management-framework\n20. U.S. Department of Energy. U.S. Department of Energy Establishes Artificial Intelligence Advancement\nCouncil. U.S. Department of Energy Artificial Intelligence and Technology Office. April 18, 2022. https://\nwww.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council\n21. Department of Defense. U.S Department of Defense Responsible Artificial Intelligence Strategy and\nImplementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/\nDepartment-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF\n22. Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence\nCommunity. https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for-the-intelligence-community\n64\n', ' \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. 
\nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n']","The NIST AI Risk Management Framework aims to advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence (AI) to realize its full commercial and societal benefits without harm to people or the planet. It also supports the development of safe, secure, and trustworthy AI, emphasizing transparency and ethical considerations in its implementation.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 63, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How do real-time auditing tools help with AI content authenticity and system monitoring?,"[' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. 
\nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration \n', ' \n45 \nMG-4.1-007 \nVerify that AI Actors responsible for monitoring reported issues can effectively \nevaluate GAI system performance including the application of content \nprovenance data tracking techniques, and promptly escalate issues for response. \nHuman-AI Configuration; \nInformation Integrity \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, Domain Experts, End-Users, Human Factors, Operation and \nMonitoring \n \nMANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular \nengagement with interested parties, including relevant AI Actors. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.2-001 Conduct regular monitoring of GAI systems and publish reports detailing the \nperformance, feedback received, and improvements made. \nHarmful Bias and Homogenization \nMG-4.2-002 \nPractice and follow incident response plans for addressing the generation of \ninappropriate or harmful content and adapt processes based on findings to \nprevent future occurrences. Conduct post-mortem analyses of incidents with \nrelevant AI Actors, to understand the root causes and implement preventive \nmeasures. \nHuman-AI Configuration; \nDangerous, Violent, or Hateful \nContent \nMG-4.2-003 Use visualizations or other methods to represent GAI model behavior to ease \nnon-technical stakeholders understanding of GAI system functionality. 
\nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Affected Individuals and Communities, End-Users, Operation and \nMonitoring, TEVV \n \nMANAGE 4.3: Incidents and errors are communicated to relevant AI Actors, including affected communities. Processes for tracking, \nresponding to, and recovering from incidents and errors are followed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.3-001 \nConduct after-action assessments for GAI system incidents to verify incident \nresponse and recovery processes are followed and effective, including to follow \nprocedures for communicating incidents to relevant AI Actors and where \napplicable, relevant legal and regulatory bodies. \nInformation Security \nMG-4.3-002 Establish and maintain policies and procedures to record and track GAI system \nreported errors, near-misses, and negative impacts. \nConfabulation; Information \nIntegrity \n']","Real-time auditing tools aid in the tracking and validation of the lineage and authenticity of AI-generated data, which is essential for ensuring the integrity and reliability of the content produced by AI systems.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 48, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What key processes and stakeholder interactions ensure automated systems' safety and effectiveness?,"[' \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nOngoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibra\xad\ntion procedures, in place to ensure that their performance does not fall below an acceptable level over time, \nbased on changing real-world conditions or deployment contexts, post-deployment modification, or unexpect\xad\ned conditions. This ongoing monitoring should include continuous evaluation of performance metrics and \nharm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well \nas ensuring that fallback mechanisms are in place to allow reversion to a previously working system. 
Monitoring should take into account the performance of both technical system components (the algorithm as well as \nany hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing \nthe actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s \ndetermination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitoring as a check in the event there are shortcomings in automated monitoring systems. These monitoring procedures should be in place for the lifespan of the deployed automated system. \nClear organizational oversight. Entities responsible for the development or use of automated systems \nshould lay out clear governance structures and procedures. This includes clearly-stated governance procedures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing \nassessment and mitigation. Organizational stakeholders including those with oversight of the business process \nor operation being automated, as well as other organizational divisions that may be affected due to the use of \nthe system, should be involved in establishing governance procedures. Responsibility should rest high enough \nin the organization that decisions about resources, mitigation, incident response, and potential rollback can be \nmade promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those \nholding this responsibility should be made aware of any use cases with the potential for meaningful impact on \npeople’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, \nit may be appropriate for an independent ethics review to be conducted before deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its \nreuse \nRelevant and high-quality data. Data used as part of any automated system’s creation, evaluation, or \ndeployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be \nestablished based on research-backed demonstration of the causal influence of the data to the specific use case \nor justified more generally based on a reasonable expectation of usefulness in the domain and/or for the \nsystem design or ongoing development. Relevance of data should not be established solely by appealing to \nits historical connection to the outcome. High quality and tailored data should be representative of the task at \nhand and errors from data entry or other sources should be measured and limited. Any data used as the target \nof a prediction process should receive particular attention to the quality and validity of the predicted outcome \nor label to ensure the goal of the automated system is appropriately identified and measured. Additionally, \njustification should be documented for each data attribute and source to explain why it is appropriate to use \nthat data to inform the results of the automated system and why such use will not violate any applicable laws. \nIn cases of high-dimensional and/or derived attributes, such justifications can be provided as overall \ndescriptions of the attribute generation process and appropriateness. 
\n19\n', ' \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a \nsystem is introduced or a large change implemented. This consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakeholders may differ depending on the specific automated system and development phase, but should include \nsubject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as \ncivil rights, civil liberties, and privacy experts. For private sector applications, consultations before product \nlaunch may need to be confidential. Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback. \nTesting. Systems should undergo extensive testing before deployment. This testing should follow \ndomain-specific best practices, when available, for ensuring the technology will work in its real-world \ncontext. Such testing should take into account both the specific technology used and the roles of any human \noperators or reviewers who impact system outcomes or effectiveness; testing should include both automated \nsystems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the \nconditions in which the system will be deployed, and new testing may be required for each deployment to \naccount for material differences in conditions from one deployment to another. Following testing, system \nperformance should be compared with the in-place, potentially human-driven, status quo procedures, with \nexisting human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. 
Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, mea\xad\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention \nand mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unin\xad\ntended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessi\xad\ntate rollback or significant modification to a launched automated system. \n18\n']","Key processes and stakeholder interactions that ensure automated systems' safety and effectiveness include ongoing monitoring procedures, clear organizational oversight, consultation with the public during various phases of development, extensive testing before deployment, and proactive risk identification and mitigation. These processes involve continuous evaluation of performance metrics, involvement of organizational stakeholders, engagement with diverse impacted communities, and adherence to domain-specific best practices for testing.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 18, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 17, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the effects of bias and uniformity in GAI on data accuracy and user feedback?,"[' \n25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. \nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. 
\nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. 
\nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","The effects of bias and uniformity in GAI on data accuracy and user feedback are related to harmful bias and homogenization, which can compromise the representativeness and relevance of data used in AI systems. This can lead to inaccuracies in the information generated and may affect the quality of user feedback, as it may not accurately reflect diverse perspectives or experiences.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 28, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True Which NSF projects align with federal ethics for automated systems?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\nSome U.S government agencies have developed specific frameworks for ethical use of AI \nsystems. The Department of Energy (DOE) has activated the AI Advancement Council that oversees coordina-\ntion and advises on implementation of the DOE AI Strategy and addresses issues and/or escalations on the \nethical use and development of AI systems.20 The Department of Defense has adopted Artificial Intelligence \nEthical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its national \nsecurity and defense activities.21 Similarly, the U.S. 
Intelligence Community (IC) has developed the Principles \nof Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to \ndevelop and use AI in furtherance of the IC\'s mission, as well as an AI Ethics Framework to help implement \nthese principles.22\nThe National Science Foundation (NSF) funds extensive research to help foster the \ndevelopment of automated systems that adhere to and advance their safety, security and \neffectiveness. Multiple NSF programs support research that directly addresses many of these principles: \nthe National AI Research Institutes23 support research on all aspects of safe, trustworthy, fair, and explainable \nAI algorithms and systems; the Cyber Physical Systems24 program supports research on developing safe \nautonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace25 \nprogram supports research on cybersecurity and privacy enhancing technologies in automated systems; the \nFormal Methods in the Field26 program supports research on rigorous formal verification and analysis of \nautomated systems and machine learning, and the Designing Accountable Software Systems27 program supports \nresearch on rigorous and reproducible methodologies for developing software systems with legal and regulatory \ncompliance in mind. \nSome state legislatures have placed strong transparency and validity requirements on \nthe use of pretrial risk assessments. The use of algorithmic pretrial risk assessments has been a \ncause of concern for civil rights groups.28 Idaho Code Section 19-1910, enacted in 2019,29 requires that any \npretrial risk assessment, before use in the state, first be ""shown to be free of bias against any class of \nindividuals protected from discrimination by state or federal law"", that any locality using a pretrial risk \nassessment must first formally validate the claim of its being free of bias, that ""all documents, records, and \ninformation used to build or validate the risk assessment shall be open to public inspection,"" and that assertions \nof trade secrets cannot be used ""to quash discovery in a criminal matter by a party to a criminal case."" \n22\n', ' \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\nExecutive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \nFederal Government requires that certain federal agencies adhere to nine principles when \ndesigning, developing, acquiring, or using AI for purposes other than national security or \ndefense. These principles—while taking into account the sensitive law enforcement and other contexts in which \nthe federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and \nrespectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) \nsafe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. 
\nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. \nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate \nrequirements on drivers, such as slowing down near schools or playgrounds.16\nFrom large companies to start-ups, industry is providing innovative solutions that allow \norganizations to mitigate risks to the safety and efficacy of AI systems, both before \ndeployment and through monitoring over time.17 These innovative solutions include risk \nassessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing \nmonitoring, documentation procedures specific to model assessments, and many other strategies that aim to \nmitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety \nand effectiveness concerns. \nThe Office of Management and Budget (OMB) has called for an expansion of opportunities \nfor meaningful stakeholder engagement in the design of programs and services. OMB also \npoints to numerous examples of effective and proactive stakeholder engagement, including the Community-\nBased Participatory Research Program developed by the National Institutes of Health and the participatory \ntechnology assessments developed by the National Oceanic and Atmospheric Administration.18\nThe National Institute of Standards and Technology (NIST) is developing a risk \nmanagement framework to better manage risks posed to individuals, organizations, and \nsociety by AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended for \nvoluntary use to help incorporate trustworthiness considerations into the design, development, use, and \nevaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-\ndriven, open, transparent, and collaborative process that includes workshops and other opportunities to provide \ninput. The NIST framework aims to foster the development of innovative approaches to address \ncharacteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, \nrobustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of \nharmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. \n21\n']","The National Science Foundation (NSF) funds extensive research to help foster the development of automated systems that adhere to and advance their safety, security, and effectiveness. 
Multiple NSF programs support research that directly addresses many of these principles, including the National AI Research Institutes, the Cyber Physical Systems program, the Secure and Trustworthy Cyberspace program, the Formal Methods in the Field program, and the Designing Accountable Software Systems program.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 21, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 20, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What concerns do panelists raise about AI in criminal justice and its effects on communities and democracy?,"["" \n \n \n \nAPPENDIX\nPanelists discussed the benefits of AI-enabled systems and their potential to build better and more \ninnovative infrastructure. They individually noted that while AI technologies may be new, the process of \ntechnological diffusion is not, and that it was critical to have thoughtful and responsible development and \nintegration of technology within communities. Some panelists suggested that the integration of technology \ncould benefit from examining how technological diffusion has worked in the realm of urban planning: \nlessons learned from successes and failures there include the importance of balancing ownership rights, use \nrights, and community health, safety and welfare, as well ensuring better representation of all voices, \nespecially those traditionally marginalized by technological advances. Some panelists also raised the issue of \npower structures – providing examples of how strong transparency requirements in smart city projects \nhelped to reshape power and give more voice to those lacking the financial or political power to effect change. \nIn discussion of technical and governance interventions that that are needed to protect against the harms \nof these technologies, various panelists emphasized the need for transparency, data collection, and \nflexible and reactive policy development, analogous to how software is continuously updated and deployed. \nSome panelists pointed out that companies need clear guidelines to have a consistent environment for \ninnovation, with principles and guardrails being the key to fostering responsible innovation. \nPanel 2: The Criminal Justice System. This event explored current and emergent uses of technology in \nthe criminal justice system and considered how they advance or undermine public safety, justice, and \ndemocratic values. 
\nWelcome: \n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nBen Winters, Counsel, Electronic Privacy Information Center\nModerator: Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity \nPanelists: \n•\nSean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab\n•\nKristian Lum, Researcher\n•\nJumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers\n•\nStanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University\nCollege of Medicine\n•\nMyaisha Hayes, Campaign Strategies Director, MediaJustice\nPanelists discussed uses of technology within the criminal justice system, including the use of predictive \npolicing, pretrial risk assessments, automated license plate readers, and prison communication tools. The \ndiscussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; \nsuch strategies might include data-driven approaches, but the focus on safety should be primary, and \ntechnology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that are needed to protect against the harms of \nthese technologies, various panelists emphasized that transparency is important but is not enough to achieve \naccountability. Some panelists discussed their individual views on additional system needs for validity, and \nagreed upon the importance of advisory boards and compensated community input early in the design process \n(before the technology is built and instituted). Various panelists also emphasized the importance of regulation \nthat includes limits to the type and cost of such technologies. \n56\n"", "" \nAPPENDIX\nPanel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in \nthe design of technology that can help support a democratic vision for AI. It included discussion of the \ntechnical aspects of designing non-discriminatory technology, explainable AI, human-computer interaction with an emphasis on community participation, and privacy-aware design. \nWelcome:\n•\nSorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nJ. Bob Alotta, Vice President for Global Programs, Mozilla Foundation\n•\nNavrina Singh, Board Member, Mozilla Foundation\nModerator: Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S. \nFederal Trade Commission.
\nPanelists: \n•\nLiz O’Sullivan, CEO, Parity AI\n•\nTimnit Gebru, Independent Scholar\n•\nJennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City\n•\nPamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director,\nSocio-technical Interaction Research (STIR) Lab\n•\nSeny Kamara, Associate Professor of Computer Science, Brown University\nEach panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for \nbiased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and \nunderstanding of the algorithmic systems. The interventions and key needs various panelists put forward as \nnecessary to the future design of critical AI systems included ongoing transparency, value sensitive and \nparticipatory design, explanations designed for relevant stakeholders, and public consultation. \nVarious \npanelists emphasized the importance of placing trust in people, not technologies, and in engaging with \nimpacted communities to understand the potential harms of technologies and build protection by design into \nfuture systems. \nPanel 5: Social Welfare and Development. This event explored current and emerging uses of technology to \nimplement or improve social welfare systems, social development programs, and other systems that can impact \nlife chances. \nWelcome:\n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nAnne-Marie Slaughter, CEO, New America\nModerator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance \nModernization, Office of the Secretary, Department of Labor \nPanelists:\n•\nBlake Hall, CEO and Founder, ID.Me\n•\nKarrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign\n•\nChristiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's\nCenter for Human Rights and Global Justice\n58\n""]","Panelists raised concerns about the validity of AI systems used in the criminal justice system, noting that adverse or irrelevant data can lead to a replication of unjust outcomes. They highlighted issues such as confirmation bias and the tendency to defer to potentially inaccurate automated systems. The impact of these systems on individuals and communities is seen as potentially severe, with concerns that they lack individualization, work against the belief that people can change for the better, and can lead to job loss and custody issues. Additionally, surveillance can create chilling effects for communities and send negative signals about how they are viewed. 
Panelists emphasized that while transparency is important, it is not sufficient for achieving accountability, and they discussed the need for regulation that includes limits on the type and cost of such technologies.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 55, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 57, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What must lenders disclose to consumers about adverse actions from automated decisions, and how does this tie into the need for transparency in algorithms affecting rights?","[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nNOTICE & \nEXPLANATION \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\xad\xad\xad\nPeople in Illinois are given written notice by the private sector if their biometric informa-\ntion is used. The Biometric Information Privacy Act enacted by the state contains a number of provisions \nconcerning the use of individual biometric data and identifiers. Included among them is a provision that no private \nentity may ""collect, capture, purchase, receive through trade, or otherwise obtain"" such information about an \nindividual, unless written notice is provided to that individual or their legally appointed representative. 87\nMajor technology companies are piloting new ways to communicate with the public about \ntheir automated technologies. For example, a collection of non-profit organizations and companies have \nworked together to develop a framework that defines operational approaches to transparency for machine \nlearning systems.88 This framework, and others like it,89 inform the public about the use of these tools, going \nbeyond simple notice to include reporting elements such as safety evaluations, disparity assessments, and \nexplanations of how the systems work. \nLenders are required by federal law to notify consumers about certain decisions made about \nthem. Both the Fair Credit Reporting Act and the Equal Credit Opportunity Act require in certain circumstances \nthat consumers who are denied credit receive ""adverse action"" notices. 
Anyone who relies on the information in a \ncredit report to deny a consumer credit must, under the Fair Credit Reporting Act, provide an ""adverse action"" \nnotice to the consumer, which includes ""notice of the reasons a creditor took adverse action on the application \nor on an existing credit account.""90 In addition, under the risk-based pricing rule,91 lenders must either inform \nborrowers of their credit score, or else tell consumers when ""they are getting worse terms because of \ninformation in their credit report."" The CFPB has also asserted that ""[t]he law gives every applicant the right to \na specific explanation if their application for credit was denied, and that right is not diminished simply because \na company uses a complex algorithm that it doesn\'t understand.""92 Such explanations illustrate a shared value \nthat certain decisions need to be explained. \nA California law requires that warehouse employees are provided with notice and explana-\ntion about quotas, potentially facilitated by automated systems, that apply to them. Warehous-\ning employers in California that use quota systems (often facilitated by algorithmic monitoring systems) are \nrequired to provide employees with a written description of each quota that applies to the employee, including \n“quantified number of tasks to be performed or materials to be produced or handled, within the defined \ntime period, and any potential adverse employment action that could result from failure to meet the quota.”93\nAcross the federal government, agencies are conducting and supporting research on explain-\nable AI systems. The NIST is conducting fundamental research on the explainability of AI systems. A multidis-\nciplinary team of researchers aims to develop measurement methods and best practices to support the \nimplementation of core tenets of explainable AI.94 The Defense Advanced Research Projects Agency has a \nprogram on Explainable Artificial Intelligence that aims to create a suite of machine learning techniques that \nproduce more explainable models, while maintaining a high level of learning performance (prediction \naccuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging \ngeneration of artificially intelligent partners.95 The National Science Foundation’s program on Fairness in \nArtificial Intelligence also includes a specific interest in research foundations for explainable AI.96\n45\n', "" \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nAutomated systems now determine opportunities, from employment to credit, and directly shape the American \npublic’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this \nexpansive impact is not always visible. An applicant might not know whether a person rejected their resume or a \nhiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge deny\xad\ning their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting \ndecisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. 
\nNotice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonable\xad\nness of a recommendation before enacting it. \nIn order to guard against potential harms, the American public needs to know if an automated system is being used. \nClear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Like\xad\nwise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a \nparticular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, \nunaccountable, whether by design or by omission. These factors can make explanations both more challenging and \nmore important, and should not be used as a pretext to avoid explaining important decisions to the people impacted \nby those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline \nrequirement. \nProviding notice has long been a standard practice, and in many cases is a legal requirement, when, for example, \nmaking a video recording of someone (outside of a law enforcement or national security context). In some cases, such \nas credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the \nprocess of explaining such systems are under active research and improvement and such explanations can take many \nforms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory \nsystems that can help the public better understand decisions that impact them. \nWhile notice and explanation requirements are already in place in some sectors or situations, the American public \ndeserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights, \nopportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the \nvalidity and reasonable use of automated systems. \n•\nA lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home\nhealth-care assistance couldn't determine why, especially since the decision went against historical access\npractices. In a court hearing, the lawyer learned from a witness that the state in which the older client\nlived had recently adopted a new algorithm to determine eligibility.83 The lack of a timely explanation made it\nharder to understand and contest the decision.\n•\nA formal child welfare investigation is opened against a parent based on an algorithm and without the parent\never being notified that data was being collected and used as part of an algorithmic child maltreatment\nrisk assessment.84 The lack of notice or an explanation makes it harder for those performing child\nmaltreatment assessments to validate the risk assessment and denies parents knowledge that could help them\ncontest a decision.\n41\n""]","Lenders are required by federal law to notify consumers about certain decisions made about them, specifically through ""adverse action"" notices. This includes providing notice of the reasons a creditor took adverse action on a credit application or existing credit account. 
This requirement ties into the need for transparency in algorithms affecting rights, as it ensures that consumers are informed about the automated decisions impacting their credit, allowing them to understand and contest those decisions.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 44, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 40, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What steps are taken to inform the public about automated decision-making and their rights?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nNOTICE & \nEXPLANATION \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\xad\xad\xad\nPeople in Illinois are given written notice by the private sector if their biometric informa-\ntion is used. The Biometric Information Privacy Act enacted by the state contains a number of provisions \nconcerning the use of individual biometric data and identifiers. Included among them is a provision that no private \nentity may ""collect, capture, purchase, receive through trade, or otherwise obtain"" such information about an \nindividual, unless written notice is provided to that individual or their legally appointed representative. 87\nMajor technology companies are piloting new ways to communicate with the public about \ntheir automated technologies. For example, a collection of non-profit organizations and companies have \nworked together to develop a framework that defines operational approaches to transparency for machine \nlearning systems.88 This framework, and others like it,89 inform the public about the use of these tools, going \nbeyond simple notice to include reporting elements such as safety evaluations, disparity assessments, and \nexplanations of how the systems work. \nLenders are required by federal law to notify consumers about certain decisions made about \nthem. Both the Fair Credit Reporting Act and the Equal Credit Opportunity Act require in certain circumstances \nthat consumers who are denied credit receive ""adverse action"" notices. 
Anyone who relies on the information in a \ncredit report to deny a consumer credit must, under the Fair Credit Reporting Act, provide an ""adverse action"" \nnotice to the consumer, which includes ""notice of the reasons a creditor took adverse action on the application \nor on an existing credit account.""90 In addition, under the risk-based pricing rule,91 lenders must either inform \nborrowers of their credit score, or else tell consumers when ""they are getting worse terms because of \ninformation in their credit report."" The CFPB has also asserted that ""[t]he law gives every applicant the right to \na specific explanation if their application for credit was denied, and that right is not diminished simply because \na company uses a complex algorithm that it doesn\'t understand.""92 Such explanations illustrate a shared value \nthat certain decisions need to be explained. \nA California law requires that warehouse employees are provided with notice and explana-\ntion about quotas, potentially facilitated by automated systems, that apply to them. Warehous-\ning employers in California that use quota systems (often facilitated by algorithmic monitoring systems) are \nrequired to provide employees with a written description of each quota that applies to the employee, including \n“quantified number of tasks to be performed or materials to be produced or handled, within the defined \ntime period, and any potential adverse employment action that could result from failure to meet the quota.”93\nAcross the federal government, agencies are conducting and supporting research on explain-\nable AI systems. The NIST is conducting fundamental research on the explainability of AI systems. A multidis-\nciplinary team of researchers aims to develop measurement methods and best practices to support the \nimplementation of core tenets of explainable AI.94 The Defense Advanced Research Projects Agency has a \nprogram on Explainable Artificial Intelligence that aims to create a suite of machine learning techniques that \nproduce more explainable models, while maintaining a high level of learning performance (prediction \naccuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging \ngeneration of artificially intelligent partners.95 The National Science Foundation’s program on Fairness in \nArtificial Intelligence also includes a specific interest in research foundations for explainable AI.96\n45\n', "" \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nAutomated systems now determine opportunities, from employment to credit, and directly shape the American \npublic’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this \nexpansive impact is not always visible. An applicant might not know whether a person rejected their resume or a \nhiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge deny\xad\ning their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting \ndecisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. 
\nNotice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonable\xad\nness of a recommendation before enacting it. \nIn order to guard against potential harms, the American public needs to know if an automated system is being used. \nClear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Like\xad\nwise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a \nparticular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, \nunaccountable, whether by design or by omission. These factors can make explanations both more challenging and \nmore important, and should not be used as a pretext to avoid explaining important decisions to the people impacted \nby those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline \nrequirement. \nProviding notice has long been a standard practice, and in many cases is a legal requirement, when, for example, \nmaking a video recording of someone (outside of a law enforcement or national security context). In some cases, such \nas credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the \nprocess of explaining such systems are under active research and improvement and such explanations can take many \nforms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory \nsystems that can help the public better understand decisions that impact them. \nWhile notice and explanation requirements are already in place in some sectors or situations, the American public \ndeserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights, \nopportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the \nvalidity and reasonable use of automated systems. \n•\nA lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home\nhealth-care assistance couldn't determine why, especially since the decision went against historical access\npractices. In a court hearing, the lawyer learned from a witness that the state in which the older client\nlived had recently adopted a new algorithm to determine eligibility.83 The lack of a timely explanation made it\nharder to understand and contest the decision.\n•\nA formal child welfare investigation is opened against a parent based on an algorithm and without the parent\never being notified that data was being collected and used as part of an algorithmic child maltreatment\nrisk assessment.84 The lack of notice or an explanation makes it harder for those performing child\nmaltreatment assessments to validate the risk assessment and denies parents knowledge that could help them\ncontest a decision.\n41\n""]","Steps taken to inform the public about automated decision-making and their rights include written notice provided by private entities in Illinois regarding the use of biometric information, federal laws requiring lenders to notify consumers about adverse actions related to credit decisions, and California laws mandating that warehouse employees receive written descriptions of quotas. 
Additionally, major technology companies are developing frameworks for transparency in machine learning systems, and federal agencies are conducting research on explainable AI systems to ensure that the public understands how automated systems impact their rights and opportunities.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 44, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 40, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does community assessment help reduce algorithmic bias in the AI Bill of Rights?,"['Applying The Blueprint for an AI Bill of Rights \nDEFINITIONS\nALGORITHMIC DISCRIMINATION: “Algorithmic discrimination” occurs when automated systems \ncontribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, \nsex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other classifica-\ntion protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate \nlegal protections. Throughout this framework the term “algorithmic discrimination” takes this meaning (and \nnot a technical understanding of discrimination as distinguishing between items). \nAUTOMATED SYSTEM: An ""automated system"" is any system, software, or process that uses computation as \nwhole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect \ndata or observations, or otherwise interact with individuals and/or communities. Automated systems \ninclude, but are not limited to, systems derived from machine learning, statistics, or other data processing \nor artificial intelligence techniques, and exclude passive computing infrastructure. “Passive computing \ninfrastructure” is any intermediary technology that does not influence or determine the outcome of decision, \nmake or aid in decisions, inform policy implementation, or collect data or observations, including web \nhosting, domain registration, networking, caching, data storage, or cybersecurity. Throughout this \nframework, automated systems that are considered in scope are only those that have the potential to \nmeaningfully impact individuals’ or communi-ties’ rights, opportunities, or access. \nCOMMUNITIES: “Communities” include: neighborhoods; social network connections (both online and \noffline); families (construed broadly); people connected by affinity, identity, or shared traits; and formal organi-\nzational ties. 
This includes Tribes, Clans, Bands, Rancherias, Villages, and other Indigenous communities. AI \nand other data-driven automated systems most directly collect data on, make inferences about, and may cause \nharm to individuals. But the overall magnitude of their impacts may be most readily visible at the level of com-\nmunities. Accordingly, the concept of community is integral to the scope of the Blueprint for an AI Bill of Rights. \nUnited States law and policy have long employed approaches for protecting the rights of individuals, but exist-\ning frameworks have sometimes struggled to provide protections when effects manifest most clearly at a com-\nmunity level. For these reasons, the Blueprint for an AI Bill of Rights asserts that the harms of automated \nsystems should be evaluated, protected against, and redressed at both the individual and community levels. \nEQUITY: “Equity” means the consistent and systematic fair, just, and impartial treatment of all individuals. \nSystemic, fair, and just treatment must take into account the status of individuals who belong to underserved \ncommunities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American \npersons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; \nwomen, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) \npersons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely \naffected by persistent poverty or inequality. \nRIGHTS, OPPORTUNITIES, OR ACCESS: “Rights, opportunities, or access” is used to indicate the scoping \nof this framework. It describes the set of: civil rights, civil liberties, and privacy, including freedom of speech, \nvoting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of \nprivacy and other freedoms in both public and private sector contexts; equal opportunities, including equitable \naccess to education, housing, credit, employment, and other programs; or, access to critical resources or \nservices, such as healthcare, financial services, safety, social services, non-deceptive information about goods \nand services, and government benefits. \n10\n', ' \xad\xad\xad\xad\xad\xad\xad\nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable \nway. \nAlgorithmic \ndiscrimination \noccurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex \n(including \npregnancy, \nchildbirth, \nand \nrelated \nmedical \nconditions, \ngender \nidentity, \nintersex \nstatus, \nand \nsexual \norientation), religion, age, national origin, disability, veteran status, \ngenetic infor-mation, or any other classification protected by law. \nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. 
This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 9, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role does NIST play in AI safety and risk management?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. 
\n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n']","NIST plays a significant role in AI safety and risk management by developing measurements, technology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence. They are also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy AI and have established the U.S. AI Safety Institute and the AI Safety Institute Consortium to build the necessary science for the safe and trustworthy development and use of AI.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What governance aspects are key for public safety in automated systems?,"[' \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nOngoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibra\xad\ntion procedures, in place to ensure that their performance does not fall below an acceptable level over time, \nbased on changing real-world conditions or deployment contexts, post-deployment modification, or unexpect\xad\ned conditions. This ongoing monitoring should include continuous evaluation of performance metrics and \nharm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well \nas ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitor\xad\ning should take into account the performance of both technical system components (the algorithm as well as \nany hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing \nthe actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s \ndetermination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitor\xad\ning as a check in the event there are shortcomings in automated monitoring systems. These monitoring proce\xad\ndures should be in place for the lifespan of the deployed automated system. \nClear organizational oversight. 
Entities responsible for the development or use of automated systems \nshould lay out clear governance structures and procedures. This includes clearly-stated governance proce\xad\ndures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing \nassessment and mitigation. Organizational stakeholders including those with oversight of the business process \nor operation being automated, as well as other organizational divisions that may be affected due to the use of \nthe system, should be involved in establishing governance procedures. Responsibility should rest high enough \nin the organization that decisions about resources, mitigation, incident response, and potential rollback can be \nmade promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those \nholding this responsibility should be made aware of any use cases with the potential for meaningful impact on \npeople’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, \nit may be appropriate for an independent ethics review to be conducted before deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its \nreuse \nRelevant and high-quality data. Data used as part of any automated system’s creation, evaluation, or \ndeployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be \nestablished based on research-backed demonstration of the causal influence of the data to the specific use case \nor justified more generally based on a reasonable expectation of usefulness in the domain and/or for the \nsystem design or ongoing development. Relevance of data should not be established solely by appealing to \nits historical connection to the outcome. High quality and tailored data should be representative of the task at \nhand and errors from data entry or other sources should be measured and limited. Any data used as the target \nof a prediction process should receive particular attention to the quality and validity of the predicted outcome \nor label to ensure the goal of the automated system is appropriately identified and measured. Additionally, \njustification should be documented for each data attribute and source to explain why it is appropriate to use \nthat data to inform the results of the automated system and why such use will not violate any applicable laws. \nIn cases of high-dimensional and/or derived attributes, such justifications can be provided as overall \ndescriptions of the attribute generation process and appropriateness. \n19\n', ' \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. 
The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a \nsystem is introduced or a large change implemented. This consultation should directly engage diverse impact\xad\ned communities to consider concerns and risks that may be unique to those communities, or disproportionate\xad\nly prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakehold\xad\ners may differ depending on the specific automated system and development phase, but should include \nsubject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as \ncivil rights, civil liberties, and privacy experts. For private sector applications, consultations before product \nlaunch may need to be confidential. Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback. \nTesting. Systems should undergo extensive testing before deployment. This testing should follow \ndomain-specific best practices, when available, for ensuring the technology will work in its real-world \ncontext. Such testing should take into account both the specific technology used and the roles of any human \noperators or reviewers who impact system outcomes or effectiveness; testing should include both automated \nsystems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the \nconditions in which the system will be deployed, and new testing may be required for each deployment to \naccount for material differences in conditions from one deployment to another. Following testing, system \nperformance should be compared with the in-place, potentially human-driven, status quo procedures, with \nexisting human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, poten\xad\ntial risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, mea\xad\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention \nand mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unin\xad\ntended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessi\xad\ntate rollback or significant modification to a launched automated system. 
\n18\n']","Key governance aspects for public safety in automated systems include laying out clear governance structures and procedures, establishing responsibility for oversight, involving organizational stakeholders in governance procedures, and ensuring that those in charge are aware of potential impacts on people's rights and opportunities. Additionally, it may be appropriate to conduct an independent ethics review before deployment.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 18, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 17, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do content provenance standards impact the performance and risks of third-party GAI systems regarding info integrity and IP?,"[' \n21 \nGV-6.1-005 \nImplement a use-cased based supplier risk assessment framework to evaluate and \nmonitor third-party entities’ performance and adherence to content provenance \nstandards and technologies to detect anomalies and unauthorized changes; \nservices acquisition and value chain risk management; and legal compliance. \nData Privacy; Information \nIntegrity; Information Security; \nIntellectual Property; Value Chain \nand Component Integration \nGV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party \nGAI processes and standards. \nInformation Integrity \nGV-6.1-007 Inventory all third-party entities with access to organizational content and \nestablish approved GAI technology and service provider lists. \nValue Chain and Component \nIntegration \nGV-6.1-008 Maintain records of changes to content made by third parties to promote content \nprovenance, including sources, timestamps, metadata. \nInformation Integrity; Value Chain \nand Component Integration; \nIntellectual Property \nGV-6.1-009 \nUpdate and integrate due diligence processes for GAI acquisition and \nprocurement vendor assessments to include intellectual property, data privacy, \nsecurity, and other risks. For example, update processes to: Address solutions that \nmay rely on embedded GAI technologies; Address ongoing monitoring, \nassessments, and alerting, dynamic risk assessments, and real-time reporting \ntools for monitoring third-party GAI risks; Consider policy adjustments across GAI \nmodeling libraries, tools and APIs, fine-tuned models, and embedded tools; \nAssess GAI vendors, open-source or proprietary GAI tools, or GAI service \nproviders against incident or vulnerability databases. 
\nData Privacy; Human-AI \nConfiguration; Information \nSecurity; Intellectual Property; \nValue Chain and Component \nIntegration; Harmful Bias and \nHomogenization \nGV-6.1-010 \nUpdate GAI acceptable use policies to address proprietary and open-source GAI \ntechnologies and data, and contractors, consultants, and other third-party \npersonnel. \nIntellectual Property; Value Chain \nand Component Integration \nAI Actor Tasks: Operation and Monitoring, Procurement, Third-party entities \n \nGOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be \nhigh-risk. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.2-001 \nDocument GAI risks associated with system value chain to identify over-reliance \non third-party data and to identify fallbacks. \nValue Chain and Component \nIntegration \nGV-6.2-002 \nDocument incidents involving third-party GAI data and systems, including open-\ndata and open-source software. \nIntellectual Property; Value Chain \nand Component Integration \n', ' \n20 \nGV-4.3-003 \nVerify information sharing and feedback mechanisms among individuals and \norganizations regarding any negative impact from GAI systems. \nInformation Integrity; Data \nPrivacy \nAI Actor Tasks: AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those \nexternal to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI \nrisks. \nAction ID \nSuggested Action \nGAI Risks \nGV-5.1-001 \nAllocate time and resources for outreach, feedback, and recourse processes in GAI \nsystem development. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nGV-5.1-002 \nDocument interactions with GAI systems to users prior to interactive activities, \nparticularly in contexts involving more significant risks. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Design, AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of \ninfringement of a third-party’s intellectual property or other rights. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.1-001 Categorize different types of GAI content with associated third-party rights (e.g., \ncopyright, intellectual property, data privacy). \nData Privacy; Intellectual \nProperty; Value Chain and \nComponent Integration \nGV-6.1-002 Conduct joint educational activities and events in collaboration with third parties \nto promote best practices for managing GAI risks. \nValue Chain and Component \nIntegration \nGV-6.1-003 \nDevelop and validate approaches for measuring the success of content \nprovenance management efforts with third parties (e.g., incidents detected and \nresponse times). \nInformation Integrity; Value Chain \nand Component Integration \nGV-6.1-004 \nDraft and maintain well-defined contracts and service level agreements (SLAs) \nthat specify content ownership, usage rights, quality standards, security \nrequirements, and content provenance expectations for GAI systems. 
\nInformation Integrity; Information \nSecurity; Intellectual Property \n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 24, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 23, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What goals does the U.S. AI Safety Institute have for NIST's AI risk standards?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. 
Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What org strategies help with AI testing, incident reporting, and risk communication?","[' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. \nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. \nInformation Security \n', ' \n20 \nGV-4.3-003 \nVerify information sharing and feedback mechanisms among individuals and \norganizations regarding any negative impact from GAI systems. \nInformation Integrity; Data \nPrivacy \nAI Actor Tasks: AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those \nexternal to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI \nrisks. 
\nAction ID \nSuggested Action \nGAI Risks \nGV-5.1-001 \nAllocate time and resources for outreach, feedback, and recourse processes in GAI \nsystem development. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nGV-5.1-002 \nDocument interactions with GAI systems to users prior to interactive activities, \nparticularly in contexts involving more significant risks. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Design, AI Impact Assessment, Affected Individuals and Communities, Governance and Oversight \n \nGOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of \ninfringement of a third-party’s intellectual property or other rights. \nAction ID \nSuggested Action \nGAI Risks \nGV-6.1-001 Categorize different types of GAI content with associated third-party rights (e.g., \ncopyright, intellectual property, data privacy). \nData Privacy; Intellectual \nProperty; Value Chain and \nComponent Integration \nGV-6.1-002 Conduct joint educational activities and events in collaboration with third parties \nto promote best practices for managing GAI risks. \nValue Chain and Component \nIntegration \nGV-6.1-003 \nDevelop and validate approaches for measuring the success of content \nprovenance management efforts with third parties (e.g., incidents detected and \nresponse times). \nInformation Integrity; Value Chain \nand Component Integration \nGV-6.1-004 \nDraft and maintain well-defined contracts and service level agreements (SLAs) \nthat specify content ownership, usage rights, quality standards, security \nrequirements, and content provenance expectations for GAI systems. \nInformation Integrity; Information \nSecurity; Intellectual Property \n']","Organizational strategies that help with AI testing, incident reporting, and risk communication include establishing policies for measuring the effectiveness of content provenance methodologies, identifying the minimum set of criteria necessary for GAI system incident reporting, and verifying information sharing and feedback mechanisms regarding any negative impact from GAI systems.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 23, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What insights did OSTP seek from the biometric tech RFI, and who provided feedback?","['APPENDIX\nSummaries of Additional Engagements: \n• OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the 
public on the use of\nartificial intelligence and other data-driven technologies in their lives.\n• OSTP issued a Request For Information (RFI) on the use and governance of biometric technologies.113 The\npurpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or\nplanned use; the domains in which these technologies are being used; the entities making use of them; current\nprinciples, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their\nuse or regulation. The 130 responses to this RFI are available in full online114 and were submitted by the below\nlisted organizations and individuals:\nAccenture \nAccess Now \nACT | The App Association \nAHIP \nAIethicist.org \nAirlines for America \nAlliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium \nat Computing Research Association \nConnected Health Initiative \nConsumer Technology Association \nCourtney Radsch \nCoworker \nCyber Farm Labs \nData & Society Research Institute \nData for Black Lives \nData to Actionable Knowledge Lab \nat Harvard University \nDeloitte \nDev Technology Group \nDigital Therapeutics Alliance \nDigital Welfare State & Human \nRights Project and Center for \nHuman Rights and Global Justice at \nNew York University School of \nLaw, and Temple University \nInstitute for Law, Innovation & \nTechnology \nDignari \nDouglas Goddard \nEdgar Dworsky \nElectronic Frontier Foundation \nElectronic Privacy Information \nCenter, Center for Digital \nDemocracy, and Consumer \nFederation of America \nFaceTec \nFight for the Future \nGanesh Mani \nGeorgia Tech Research Institute \nGoogle \nHealth Information Technology \nResearch and Development \nInteragency Working Group \nHireVue \nHR Policy Association \nID.me \nIdentity and Data Sciences \nLaboratory at Science Applications \nInternational Corporation \nInformation Technology and \nInnovation Foundation \nInformation Technology Industry \nCouncil \nInnocence Project \nInstitute for Human-Centered \nArtificial Intelligence at Stanford \nUniversity \nIntegrated Justice Information \nSystems Institute \nInternational Association of Chiefs \nof Police \nInternational Biometrics + Identity \nAssociation \nInternational Business Machines \nCorporation \nInternational Committee of the Red \nCross \nInventionphysics \niProov \nJacob Boudreau \nJennifer K. 
Wagner, Dan Berger, \nMargaret Hu, and Sara Katsanis \nJonathan Barry-Blocker \nJoseph Turow \nJoy Buolamwini \nJoy Mack \nKaren Bureau \nLamont Gholston \nLawyers’ Committee for Civil \nRights Under Law \n60\n', 'APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation \nStephanie Dinkins and the Future \nHistories Studio at Stony Brook \nUniversity \nTechNet \nThe Alliance for Media Arts and \nCulture, MIT Open Documentary \nLab and Co-Creation Studio, and \nImmerse \nThe International Brotherhood of \nTeamsters \nThe Leadership Conference on \nCivil and Human Rights \nThorn \nU.S. Chamber of Commerce’s \nTechnology Engagement Center \nUber Technologies \nUniversity of Pittsburgh \nUndergraduate Student \nCollaborative \nUpturn \nUS Technology Policy Committee \nof the Association of Computing \nMachinery \nVirginia Puccio \nVisar Berisha and Julie Liss \nXR Association \nXR Safety Initiative \n• As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions\nfor members of the public. The listening sessions together drew upwards of 300 participants. The Science and\nTechnology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61\n']","OSTP sought insights on the extent and variety of biometric technologies in past, current, or planned use; the domains in which these technologies are being used; the entities making use of them; current principles, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their use or regulation. 
Feedback was provided by 130 organizations and individuals, including Accenture, ACLU, Google, Microsoft Corporation, and many others.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 59, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 60, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What consent practices and design principles can help balance user privacy and surveillance risks in automated systems?,"[' \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nProtect the public from unchecked surveillance \nHeightened oversight of surveillance. Surveillance or monitoring systems should be subject to \nheightened oversight that includes at a minimum assessment of potential harms during design (before deploy\xad\nment) and in an ongoing manner, to ensure that the American public’s rights, opportunities, and access are \nprotected. This assessment should be done before deployment and should give special attention to ensure \nthere is not algorithmic discrimination, especially based on community membership, when deployed in a \nspecific real-world context. Such assessment should then be reaffirmed in an ongoing manner as long as the \nsystem is in use. \nLimited and proportionate surveillance. Surveillance should be avoided unless it is strictly necessary \nto achieve a legitimate purpose and it is proportionate to the need. Designers, developers, and deployers of \nsurveillance systems should use the least invasive means of monitoring available and restrict monitoring to the \nminimum number of subjects possible. To the greatest extent possible consistent with law enforcement and \nnational security needs, individuals subject to monitoring should be provided with clear and specific notice \nbefore it occurs and be informed about how the data gathered through surveillance will be used. \nScope limits on surveillance to protect rights and democratic values. Civil liberties and civil \nrights must not be limited by the threat of surveillance or harassment facilitated or aided by an automated \nsystem. Surveillance systems should not be used to monitor the exercise of democratic rights, such as voting, \nprivacy, peaceful assembly, speech, or association, in a way that limits the exercise of civil rights or civil liber\xad\nties. 
Information about or algorithmically-determined assumptions related to identity should be carefully \nlimited if used to target or guide surveillance systems in order to avoid algorithmic discrimination; such iden\xad\ntity-related information includes group characteristics or affiliations, geographic designations, location-based \nand association-based inferences, social networks, and biometrics. Continuous surveillance and monitoring \nsystems should not be used in physical or digital workplaces (regardless of employment status), public educa\xad\ntional institutions, and public accommodations. Continuous surveillance and monitoring systems should not \nbe used in a way that has the effect of limiting access to critical resources or services or suppressing the exer\xad\ncise of rights, even where the organization is not under a particular duty to protect those rights. \nProvide the public with mechanisms for appropriate and meaningful consent, access, and \ncontrol over their data \nUse-specific consent. Consent practices should not allow for abusive surveillance practices. Where data \ncollectors or automated systems seek consent, they should seek it for specific, narrow use contexts, for specif\xad\nic time durations, and for use by specific entities. Consent should not extend if any of these conditions change; \nconsent should be re-acquired before using data if the use case changes, a time limit elapses, or data is trans\xad\nferred to another entity (including being shared or sold). Consent requested should be limited in scope and \nshould not request consent beyond what is required. Refusal to provide consent should be allowed, without \nadverse effects, to the greatest extent possible based on the needs of the use case. \nBrief and direct consent requests. When seeking consent from users short, plain language consent \nrequests should be used so that users understand for what use contexts, time span, and entities they are \nproviding data and metadata consent. User experience research should be performed to ensure these consent \nrequests meet performance standards for readability and comprehension. This includes ensuring that consent \nrequests are accessible to users with disabilities and are available in the language(s) and reading level appro\xad\npriate for the audience. User experience design choices that intentionally obfuscate or manipulate user \nchoice (i.e., “dark patterns”) should be not be used. \n34\n', ' \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTraditional terms of service—the block of text that the public is accustomed to clicking through when using a web\xad\nsite or digital app—are not an adequate mechanism for protecting privacy. The American public should be protect\xad\ned via built-in privacy protections, data minimization, use and collection limitations, and transparency, in addition \nto being entitled to clear mechanisms to control access to and use of their data—including their metadata—in a \nproactive, informed, and ongoing way. Any automated system collecting, using, sharing, or storing personal data \nshould meet these expectations. \nProtect privacy by design and by default \nPrivacy by design and by default. Automated systems should be designed and built with privacy protect\xad\ned by default. 
Privacy risks should be assessed throughout the development life cycle, including privacy risks \nfrom reidentification, and appropriate technical and policy mitigation measures should be implemented. This \nincludes potential harms to those who are not users of the automated system, but who may be harmed by \ninferred data, purposeful privacy violations, or community surveillance or other community harms. Data \ncollection should be minimized and clearly communicated to the people whose data is collected. Data should \nonly be collected or used for the purposes of training or testing machine learning models if such collection and \nuse is legal and consistent with the expectations of the people whose data is collected. User experience \nresearch should be conducted to confirm that people understand what data is being collected about them and \nhow it will be used, and that this collection matches their expectations and desires. \nData collection and use-case scope limits. Data collection should be limited in scope, with specific, \nnarrow identified goals, to avoid ""mission creep."" Anticipated data collection should be determined to be \nstrictly necessary to the identified goals and should be minimized as much as possible. Data collected based on \nthese identified goals and for a specific context should not be used in a different context without assessing for \nnew privacy risks and implementing appropriate mitigation measures, which may include express consent. \nClear timelines for data retention should be established, with data deleted as soon as possible in accordance \nwith legal or policy-based limitations. Determined data retention timelines should be documented and justi\xad\nfied. \nRisk identification and mitigation. Entities that collect, use, share, or store sensitive data should \nattempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropri\xad\nately to identified risks. Appropriate responses include determining not to process data when the privacy risks \noutweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not \ninclude sharing or transferring the privacy risks to users via notice or consent requests where users could not \nreasonably be expected to understand the risks without further support. \nPrivacy-preserving security. Entities creating, using, or governing automated systems should follow \nprivacy and security best practices designed to ensure data and metadata do not leak beyond the specific \nconsented use case. Best practices could include using privacy-enhancing cryptography or other types of \nprivacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with \nconventional system security protocols. \n33\n']","Consent practices that can help balance user privacy and surveillance risks in automated systems include use-specific consent, where consent is sought for specific, narrow use contexts and time durations, and should be re-acquired if conditions change. Additionally, brief and direct consent requests should be used, employing short, plain language to ensure users understand the context and duration of data use. User experience research should be conducted to ensure these requests are accessible and comprehensible, avoiding manipulative design choices. 
Furthermore, privacy should be protected by design and by default, with privacy risks assessed throughout the development life cycle and data collection minimized to only what is necessary for identified goals.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 33, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 32, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the effects of GAI evaluations on fair content and community input?,"[' \n29 \nMS-1.1-006 \nImplement continuous monitoring of GAI system impacts to identify whether GAI \noutputs are equitable across various sub-populations. Seek active and direct \nfeedback from affected communities via structured feedback mechanisms or red-\nteaming to monitor and improve outputs. \nHarmful Bias and Homogenization \nMS-1.1-007 \nEvaluate the quality and integrity of data used in training and the provenance of \nAI-generated content, for example by employing techniques like chaos \nengineering and seeking stakeholder feedback. \nInformation Integrity \nMS-1.1-008 \nDefine use cases, contexts of use, capabilities, and negative impacts where \nstructured human feedback exercises, e.g., GAI red-teaming, would be most \nbeneficial for GAI risk measurement and management based on the context of \nuse. \nHarmful Bias and \nHomogenization; CBRN \nInformation or Capabilities \nMS-1.1-009 \nTrack and document risks or opportunities related to all GAI risks that cannot be \nmeasured quantitatively, including explanations as to why some risks cannot be \nmeasured (e.g., due to technological limitations, resource constraints, or \ntrustworthy considerations). Include unmeasured risks in marginal risks. \nInformation Integrity \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are \ninvolved in regular assessments and updates. Domain experts, users, AI Actors external to the team that developed or deployed the \nAI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.3-001 \nDefine relevant groups of interest (e.g., demographic groups, subject matter \nexperts, experience with GAI technology) within the context of use as part of \nplans for gathering structured public feedback. 
\nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.3-002 \nEngage in internal and external evaluations, GAI red-teaming, impact \nassessments, or other structured human feedback exercises in consultation \nwith representative AI Actors with expertise and familiarity in the context of \nuse, and/or who are representative of the populations associated with the \ncontext of use. \nHuman-AI Configuration; Harmful \nBias and Homogenization; CBRN \nInformation or Capabilities \nMS-1.3-003 \nVerify those conducting structured human feedback exercises are not directly \ninvolved in system development tasks for the same GAI model. \nHuman-AI Configuration; Data \nPrivacy \nAI Actor Tasks: AI Deployment, AI Development, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, \nEnd-Users, Operation and Monitoring, TEVV \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. 
\nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 32, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What risks come from easier access to violent content, especially regarding CBRN knowledge and misinformation?","[' \n4 \n1. CBRN Information or Capabilities: Eased access to or synthesis of materially nefarious \ninformation or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) \nweapons or other dangerous materials or agents. \n2. Confabulation: The production of confidently stated but erroneous or false content (known \ncolloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.6 \n3. Dangerous, Violent, or Hateful Content: Eased production of and access to violent, inciting, \nradicalizing, or threatening content as well as recommendations to carry out self-harm or \nconduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging \nor stereotyping content. \n4. Data Privacy: Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of \nbiometric, health, location, or other personally identifiable information or sensitive data.7 \n5. Environmental Impacts: Impacts due to high compute resource utilization in training or \noperating GAI models, and related outcomes that may adversely impact ecosystems. \n6. Harmful Bias or Homogenization: Amplification and exacerbation of historical, societal, and \nsystemic biases; performance disparities8 between sub-groups or languages, possibly due to \nnon-representative training data, that result in discrimination, amplification of biases, or \nincorrect presumptions about performance; undesired homogeneity that skews system or model \noutputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful \nbiases. \n7. 
Human-AI Configuration: Arrangements of or interactions between a human and an AI system \nwhich can result in the human inappropriately anthropomorphizing GAI systems or experiencing \nalgorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI \nsystems. \n8. Information Integrity: Lowered barrier to entry to generate and support the exchange and \nconsumption of content which may not distinguish fact from opinion or fiction or acknowledge \nuncertainties, or could be leveraged for large-scale dis- and mis-information campaigns. \n9. Information Security: Lowered barriers for offensive cyber capabilities, including via automated \ndiscovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber \n \n \n6 Some commenters have noted that the terms “hallucination” and “fabrication” anthropomorphize GAI, which \nitself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human \nentities. \n7 What is categorized as sensitive data or sensitive PII can be highly contextual based on the nature of the \ninformation, but examples of sensitive information include information that relates to an information subject’s \nmost intimate sphere, including political opinions, sex life, or criminal convictions. \n8 The notion of harm presumes some baseline scenario that the harmful factor (e.g., a GAI model) makes worse. \nWhen the mechanism for potential harm is a disparity between groups, it can be difficult to establish what the \nmost appropriate baseline is to compare against, which can result in divergent views on when a disparity between \nAI behaviors for different subgroups constitutes a harm. In discussing harms from disparities such as biased \nbehavior, this document highlights examples where someone’s situation is worsened relative to what it would have \nbeen in the absence of any AI system, making the outcome unambiguously a harm of the system. \n', ' \n5 \noperations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may \ncompromise a system’s availability or the confidentiality or integrity of training data, code, or \nmodel weights. \n10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or \nlicensed content without authorization (possibly in situations which do not fall under fair use); \neased exposure of trade secrets; or plagiarism or illegal replication. \n11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, \ndegrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse \nmaterial (CSAM), and nonconsensual intimate images (NCII) of adults. \n12. Value Chain and Component Integration: Non-transparent or untraceable integration of \nupstream third-party components, including data that has been improperly obtained or not \nprocessed and cleaned due to increased automation from GAI; improper supplier vetting across \nthe AI lifecycle; or other issues that diminish transparency or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities \nIn the future, GAI may enable malicious actors to more easily access CBRN weapons and/or relevant \nknowledge, information, materials, tools, or technologies that could be misused to assist in the design, \ndevelopment, production, or use of CBRN weapons or other dangerous materials or agents. 
While \nrelevant biological and chemical threat knowledge and information is often publicly accessible, LLMs \ncould facilitate its analysis or synthesis, particularly by individuals without formal scientific training or \nexpertise. \nRecent research on this topic found that LLM outputs regarding biological threat creation and attack \nplanning provided minimal assistance beyond traditional search engine queries, suggesting that state-of-\nthe-art LLMs at the time these studies were conducted do not substantially increase the operational \nlikelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. \nFurthermore, chemical and biological design tools (BDTs) – highly specialized AI systems trained on \nscientific data that aid in chemical and biological design – may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents. \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable \n']","Eased access to violent content can lead to the production of and access to violent, inciting, radicalizing, or threatening content, as well as recommendations to carry out self-harm or conduct illegal activities. This includes difficulty controlling public exposure to hateful and disparaging or stereotyping content. 
Additionally, the lowered barrier to generate and support the exchange of content may not distinguish fact from opinion or acknowledge uncertainties, which could be leveraged for large-scale dis- and mis-information campaigns, potentially impacting the operational likelihood of attacks involving CBRN knowledge.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 7, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 8, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "What factors on data privacy and content integrity should be considered for a GAI system, especially regarding user feedback and transparency?","[' \n31 \nMS-2.3-004 \nUtilize a purpose-built testing environment such as NIST Dioptra to empirically \nevaluate GAI trustworthy characteristics. \nCBRN Information or Capabilities; \nData Privacy; Confabulation; \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, TEVV \n \nMEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the \nconditions under which the technology was developed are documented. \nAction ID \nSuggested Action \nRisks \nMS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-\nsystematic, and anecdotal assessments. \nHuman-AI Configuration; \nConfabulation \nMS-2.5-002 \nDocument the extent to which human domain knowledge is employed to \nimprove GAI system performance, via, e.g., RLHF, fine-tuning, retrieval-\naugmented generation, content moderation, business rules. \nHuman-AI Configuration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. \nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Configuration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. 
\nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV \n \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. \nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","Factors on data privacy and content integrity for a GAI system include documenting the extent to which human domain knowledge is employed to improve GAI system performance, reviewing and verifying sources and citations in GAI system outputs, tracking instances of anthropomorphization in GAI system interfaces, verifying GAI system training data and TEVV data provenance, and regularly reviewing security and safety guardrails. 
Additionally, structured feedback about content provenance should be recorded and integrated from operators, users, and impacted communities, and there should be an emphasis on digital content transparency regarding the societal impacts of AI and the role of diverse and inclusive content generation.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 34, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What goals does PAVE have for racial equity and valuing marginalized communities?,"["" \n \n \n \nENDNOTES\n47. Darshali A. Vyas et al., Hidden in Plain Sight – Reconsidering the Use of Race Correction in Clinical\nAlgorithms, 383 N. Engl. J. Med.874, 876-78 (Aug. 27, 2020), https://www.nejm.org/doi/full/10.1056/\nNEJMms2004740.\n48. The definitions of 'equity' and 'underserved communities' can be found in the Definitions section of\nthis framework as well as in Section 2 of The Executive Order On Advancing Racial Equity and Support\nfor Underserved Communities Through the Federal Government. https://www.whitehouse.gov/\nbriefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support\xad\nfor-underserved-communities-through-the-federal-government/\n49. Id.\n50. Various organizations have offered proposals for how such assessments might be designed. See, e.g.,\nEmanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf.\nAssembling Accountability: Algorithmic Impact Assessment for the Public Interest. Data & Society\nResearch Institute Report. June 29, 2021. https://datasociety.net/library/assembling-accountability\xad\nalgorithmic-impact-assessment-for-the-public-interest/; Nicol Turner Lee, Paul Resnick, and Genie\nBarton. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms.\nBrookings Report. May 22, 2019.\nhttps://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and\xad\npolicies-to-reduce-consumer-harms/; Andrew D. Selbst. An Institutional View Of Algorithmic Impact\nAssessments. Harvard Journal of Law & Technology. June 15, 2021. https://ssrn.com/abstract=3867634;\nDillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker. Algorithmic Impact\nAssessments: A Practical Framework for Public Agency Accountability. AI Now Institute Report. April\n2018. https://ainowinstitute.org/aiareport2018.pdf\n51. Department of Justice. Justice Department Announces New Initiative to Combat Redlining. 
Oct. 22,\n2021. https://www.justice.gov/opa/pr/justice-department-announces-new-initiative-combat-redlining\n52. PAVE Interagency Task Force on Property Appraisal and Valuation Equity. Action Plan to Advance\nProperty Appraisal and Valuation Equity: Closing the Racial Wealth Gap by Addressing Mis-valuations for\nFamilies and Communities of Color. March 2022. https://pave.hud.gov/sites/pave.hud.gov/files/\ndocuments/PAVEActionPlan.pdf\n53. U.S. Equal Employment Opportunity Commission. The Americans with Disabilities Act and the Use of\nSoftware, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. EEOC\xad\nNVTA-2022-2. May 12, 2022. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use\xad\nsoftware-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial\nIntelligence, and Disability Discrimination in Hiring. May 12, 2022. https://beta.ada.gov/resources/ai\xad\nguidance/\n54. Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in\nan algorithm used to manage the health of populations. Science. Vol. 366, No. 6464. Oct. 25, 2019. https://\nwww.science.org/doi/10.1126/science.aax2342\n55. Data & Trust Alliance. Algorithmic Bias Safeguards for Workforce: Overview. Jan. 2022. https://\ndataandtrustalliance.org/Algorithmic_Bias_Safeguards_for_Workforce_Overview.pdf\n56. Section 508.gov. IT Accessibility Laws and Policies. Access Board. https://www.section508.gov/\nmanage/laws-and-policies/\n67\n"", "" \n \n \n \nENDNOTES\n1.The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the\nFederal\xa0Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive\norder-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/\n2. The White House. Remarks by President Biden on the Supreme Court Decision to Overturn Roe v. Wade. Jun.\n24, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/06/24/remarks-by-president\xad\nbiden-on-the-supreme-court-decision-to-overturn-roe-v-wade/\n3. The White House. Join the Effort to Create A Bill of Rights for an Automated Society. Nov. 10, 2021. https://\nwww.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an\xad\nautomated-society/\n4. U.S. Dept. of Health, Educ. & Welfare, Report of the Sec’y’s Advisory Comm. on Automated Pers. Data Sys.,\nRecords, Computers, and the Rights of Citizens (July 1973). https://www.justice.gov/opcl/docs/rec-com\xad\nrights.pdf.\n5. See, e.g., Office of Mgmt. & Budget, Exec. Office of the President, Circular A-130, Managing Information as a\nStrategic Resource, app. II §\xa03 (July 28, 2016); Org. of Econ. Co-Operation & Dev., Revision of the\nRecommendation of the Council Concerning Guidelines Governing the Protection of Privacy and Transborder\nFlows of Personal Data, Annex Part Two (June 20, 2013). https://one.oecd.org/document/C(2013)79/en/pdf.\n6. Andrew Wong et al. External validation of a widely implemented proprietary sepsis prediction model in\nhospitalized patients. JAMA Intern Med. 2021; 181(8):1065-1070. doi:10.1001/jamainternmed.2021.2626\n7. Jessica Guynn. Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate\nspeech. USA Today. Apr. 24, 2019. https://www.usatoday.com/story/news/2019/04/24/facebook-while-black\xad\nzucked-users-say-they-get-blocked-racism-discussion/2859593002/\n8. 
See, e.g., Michael Levitt. AirTags are being used to track people and cars. Here's what is being done about it.\nNPR. Feb. 18, 2022. https://www.npr.org/2022/02/18/1080944193/apple-airtags-theft-stalking-privacy-tech;\nSamantha Cole. Police Records Show Women Are Being Stalked With Apple AirTags Across the Country.\nMotherboard. Apr. 6, 2022. https://www.vice.com/en/article/y3vj3y/apple-airtags-police-reports-stalking\xad\nharassment\n9. Kristian Lum and William Isaac. To Predict and Serve? Significance. Vol. 13, No. 5, p. 14-19. Oct. 7, 2016.\nhttps://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x; Aaron Sankin, Dhruv Mehrotra,\nSurya Mattu, and Annie Gilbertson. Crime Prediction Software Promised to Be Free of Biases. New Data Shows\nIt Perpetuates Them. The Markup and Gizmodo. Dec. 2, 2021. https://themarkup.org/prediction\xad\nbias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates\xad\nthem\n10. Samantha Cole. This Horrifying App Undresses a Photo of Any Woman With a Single Click. Motherboard.\nJune 26, 2019. https://www.vice.com/en/article/kzm59x/deepn""]",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 66, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 62, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What steps ensure automated systems reduce bias and promote equity?,"[' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. 
\nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. \n26\nAlgorithmic \nDiscrimination \nProtections \n', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDemonstrate that the system protects against algorithmic discrimination \nIndependent evaluation. 
As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of potential algorithmic discrimination caused by automated systems they use or \noversee. In the case of public sector uses, these independent evaluations should be made public unless law \nenforcement or national security restrictions prevent doing so. Care should be taken to balance individual \nprivacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and \ncontrols allow access to such data without compromising privacy. \nReporting. Entities responsible for the development or use of automated systems should provide \nreporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who \nperforms the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in \nresponse to the assessment. This algorithmic impact assessment should include at least: the results of any \nconsultation, design stage equity assessments (potentially including qualitative analysis), accessibility \ndesigns and testing, disparity testing, document any remaining disparities, and detail any mitigation \nimplementation and assessments. This algorithmic impact assessment should be made public whenever \npossible. Reporting should be provided in a clear and machine-readable manner using plain language to \nallow for more straightforward public accountability. \n28\nAlgorithmic \nDiscrimination \nProtections \n']","To ensure automated systems reduce bias and promote equity, several steps should be taken: 1) Conduct proactive equity assessments during the design phase to identify potential discrimination and effects on equity; 2) Use representative and robust data that reflects local communities and is reviewed for bias; 3) Guard against proxies by avoiding the direct use of demographic information in system design and testing for correlations; 4) Allow independent evaluations of potential algorithmic discrimination; 5) Provide reporting of algorithmic impact assessments that detail consultations, equity assessments, and any disparities found, ensuring transparency and public accountability.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 25, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 27, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does threat modeling help with GAI risk and org policies on transparency?,"[' \n18 \nGOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations \nand oversight of AI 
systems. \nAction ID \nSuggested Action \nGAI Risks \nGV-3.2-001 \nPolicies are in place to bolster oversight of GAI systems with independent \nevaluations or assessments of GAI models or systems where the type and \nrobustness of evaluations are proportional to the identified risks. \nCBRN Information or Capabilities; \nHarmful Bias and Homogenization \nGV-3.2-002 \nConsider adjustment of organizational roles and components across lifecycle \nstages of large or complex GAI systems, including: Test and evaluation, validation, \nand red-teaming of GAI systems; GAI content moderation; GAI system \ndevelopment and engineering; Increased accessibility of GAI tools, interfaces, and \nsystems, Incident response and containment. \nHuman-AI Configuration; \nInformation Security; Harmful Bias \nand Homogenization \nGV-3.2-003 \nDefine acceptable use policies for GAI interfaces, modalities, and human-AI \nconfigurations (i.e., for chatbots and decision-making tasks), including criteria for \nthe kinds of queries GAI applications should refuse to respond to. \nHuman-AI Configuration \nGV-3.2-004 \nEstablish policies for user feedback mechanisms for GAI systems which include \nthorough instructions and any mechanisms for recourse. \nHuman-AI Configuration \nGV-3.2-005 \nEngage in threat modeling to anticipate potential risks from GAI systems. \nCBRN Information or Capabilities; \nInformation Security \nAI Actors: AI Design \n \nGOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, \ndevelopment, deployment, and uses of AI systems to minimize potential negative impacts. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.1-001 \nEstablish policies and procedures that address continual improvement processes \nfor GAI risk measurement. Address general risks associated with a lack of \nexplainability and transparency in GAI systems by using ample documentation and \ntechniques such as: application of gradient-based attributions, occlusion/term \nreduction, counterfactual prompts and prompt engineering, and analysis of \nembeddings; Assess and update risk measurement approaches at regular \ncadences. \nConfabulation \nGV-4.1-002 \nEstablish policies, procedures, and processes detailing risk measurement in \ncontext of use with standardized measurement protocols and structured public \nfeedback exercises such as AI red-teaming or independent external evaluations. \nCBRN Information and Capability; \nValue Chain and Component \nIntegration \n', ' \n14 \nGOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.2-001 \nEstablish transparency policies and processes for documenting the origin and \nhistory of training data and generated data for GAI applications to advance digital \ncontent transparency, while balancing the proprietary nature of training \napproaches. \nData Privacy; Information \nIntegrity; Intellectual Property \nGV-1.2-002 \nEstablish policies to evaluate risk-relevant capabilities of GAI and robustness of \nsafety measures, both prior to deployment and on an ongoing basis, through \ninternal and external evaluations. \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based \non the organization’s risk tolerance. 
\nAction ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. \nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002 \nEstablish minimum thresholds for performance or assurance criteria and review as \npart of deployment approval (“go/”no-go”) policies, procedures, and processes, \nwith reviewed processes and approval thresholds reflecting measurement of GAI \ncapabilities and risks. \nCBRN Information or Capabilities; \nConfabulation; Dangerous, \nViolent, or Hateful Content \nGV-1.3-003 \nEstablish a test plan and response policy, before developing highly capable models, \nto periodically evaluate whether the model may misuse CBRN information or \ncapabilities and/or offensive cyber capabilities. \nCBRN Information or Capabilities; \nInformation Security \n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 21, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 17, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How does the AI Incident Database help with AI challenges in cybersecurity and mental health?,"[' \n54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611 \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. 
\nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \nBoyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416 \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant. \nhttps://www.mandiant.com/resources/blog/securing-ai-pipeline \nBurgess, M. (2024) Generative AI’s Biggest Security Flaw Is Not Easy to Fix. WIRED. \nhttps://www.wired.com/story/generative-ai-prompt-injection-hacking/ \nBurtell, M. et al. (2024) The Surprising Power of Next Word Prediction: Large Language Models \nExplained, Part 1. Georgetown Center for Security and Emerging Technology. \nhttps://cset.georgetown.edu/article/the-surprising-power-of-next-word-prediction-large-language-\nmodels-explained-part-1/ \nCanadian Centre for Cyber Security (2023) Generative artificial intelligence (AI) - ITSAP.00.041. \nhttps://www.cyber.gc.ca/en/guidance/generative-artificial-intelligence-ai-itsap00041 \nCarlini, N., et al. (2021) Extracting Training Data from Large Language Models. Usenix. \nhttps://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting \nCarlini, N. et al. (2023) Quantifying Memorization Across Neural Language Models. ICLR 2023. \nhttps://arxiv.org/pdf/2202.07646 \nCarlini, N. et al. (2024) Stealing Part of a Production Language Model. arXiv. \nhttps://arxiv.org/abs/2403.06634 \nChandra, B. et al. (2023) Dismantling the Disinformation Business of Chinese Influence Operations. \nRAND. https://www.rand.org/pubs/commentary/2023/10/dismantling-the-disinformation-business-of-\nchinese.html \nCiriello, R. et al. (2024) Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika. \nResearchGate. https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-\nAI_Companionship_A_Dialectical_Inquiry_into_Replika \nDahl, M. et al. (2024) Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. arXiv. \nhttps://arxiv.org/abs/2401.01301 \n', "" \n55 \nDe Angelo, D. (2024) Short, Mid and Long-Term Impacts of AI in Cybersecurity. Palo Alto Networks. \nhttps://www.paloaltonetworks.com/blog/2024/02/impacts-of-ai-in-cybersecurity/ \nDe Freitas, J. et al. (2023) Chatbots and Mental Health: Insights into the Safety of Generative AI. Harvard \nBusiness School. https://www.hbs.edu/ris/Publication%20Files/23-011_c1bdd417-f717-47b6-bccb-\n5438c6e65c1a_f6fd9798-3c2d-4932-b222-056231fe69d7.pdf \nDietvorst, B. et al. (2014) Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them \nErr. Journal of Experimental Psychology. https://marketing.wharton.upenn.edu/wp-\ncontent/uploads/2016/10/Dietvorst-Simmons-Massey-2014.pdf \nDuhigg, C. (2012) How Companies Learn Your Secrets. New York Times. \nhttps://www.nytimes.com/2012/02/19/magazine/shopping-habits.html \nElsayed, G. et al. (2024) Images altered to trick machine vision can influence humans too. Google \nDeepMind. https://deepmind.google/discover/blog/images-altered-to-trick-machine-vision-can-\ninfluence-humans-too/ \nEpstein, Z. et al. (2023). Art and the science of generative AI. Science. \nhttps://www.science.org/doi/10.1126/science.adh4451 \nFeffer, M. et al. (2024) Red-Teaming for Generative AI: Silver Bullet or Security Theater? arXiv. \nhttps://arxiv.org/pdf/2401.15897 \nGlazunov, S. et al. 
(2024) Project Naptime: Evaluating Offensive Security Capabilities of Large Language \nModels. Project Zero. https://googleprojectzero.blogspot.com/2024/06/project-naptime.html \nGreshake, K. et al. (2023) Not what you've signed up for: Compromising Real-World LLM-Integrated \nApplications with Indirect Prompt Injection. arXiv. https://arxiv.org/abs/2302.12173 \nHagan, M. (2024) Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to \npeople’s legal problem stories. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4696936 \nHaran, R. (2023) Securing LLM Systems Against Prompt Injection. NVIDIA. \nhttps://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/ \nInformation Technology Industry Council (2024) Authenticating AI-Generated Content. \nhttps://www.itic.org/policy/ITI_AIContentAuthorizationPolicy_122123.pdf \nJain, S. et al. (2023) Algorithmic Pluralism: A Structural Approach To Equal Opportunity. arXiv. \nhttps://arxiv.org/pdf/2305.08157 \nJi, Z. et al (2023) Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 55, 12, \nArticle 248. https://doi.org/10.1145/3571730 \nJones-Jang, S. et al. (2022) How do people react to AI failure? Automation bias, algorithmic aversion, and \nperceived controllability. Oxford. https://academic.oup.com/jcmc/article/28/1/zmac029/6827859] \nJussupow, E. et al. (2020) Why Are We Averse Towards Algorithms? A Comprehensive Literature Review \non Algorithm Aversion. ECIS 2020. https://aisel.aisnet.org/ecis2020_rp/168/ \nKalai, A., et al. (2024) Cal""]",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 57, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 58, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What steps ensure automated systems avoid bias and maintain safety?,"[' \xad\xad\xad\xad\xad\xad\xad\nALGORITHMIC DISCRIMINATION Protections\nYou should not face discrimination by algorithms \nand systems should be used and designed in an \nequitable \nway. \nAlgorithmic \ndiscrimination \noccurs when \nautomated systems contribute to unjustified different treatment or \nimpacts disfavoring people based on their race, color, ethnicity, \nsex \n(including \npregnancy, \nchildbirth, \nand \nrelated \nmedical \nconditions, \ngender \nidentity, \nintersex \nstatus, \nand \nsexual \norientation), religion, age, national origin, disability, veteran status, \ngenetic infor-mation, or any other classification protected by law. 
\nDepending on the specific circumstances, such algorithmic \ndiscrimination may violate legal protections. Designers, developers, \nand deployers of automated systems should take proactive and \ncontinuous measures to protect individuals and communities \nfrom algorithmic discrimination and to use and design systems in \nan equitable way. This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23\n', ' AI BILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. Automated systems should be \ncommunities, stakeholders, and domain experts to identify \nSystems should undergo pre-deployment testing, risk \nthat demonstrate they are safe and effective based on \nincluding those beyond the intended use, and adherence to \nprotective measures should include the possibility of not \nAutomated systems should not be designed with an intent \nreasonably foreseeable possibility of endangering your safety or the safety of your community. They should \nstemming from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION TITLE\nBLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped with consultation from diverse \nconcerns, risks, and potential impacts of the system. \nidentification and mitigation, and ongoing monitoring \ntheir intended use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of these \ndeploying the system or removing a system from use. \nor \nbe designed to proactively protect you from harms \nimpacts of automated systems. You should be protected from inappropriate or irrelevant data use in the \ndesign, development, and deployment of automated systems, and from the compounded harm of its reuse. \nIndependent evaluation and reporting that confirms that the system is safe and effective, including reporting of \nsteps taken to mitigate potential harms, should be performed and the results made public whenever possible. \nALGORITHMIC DISCRIMINATION PROTECTIONS\nYou should not face discrimination by algorithms and systems should be used and designed in \nan equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified \ndifferent treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including \npregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other \nclassification protected by law. Depending on the specific circumstances, such algorithmic discrimination \nmay violate legal protections. Designers, developers, and deployers of automated systems should take \nproactive \nand \ncontinuous \nmeasures \nto \nprotect \nindividuals \nand \ncommunities \nfrom algorithmic \ndiscrimination and to use and design systems in an equitable way. 
This protection should include proactive \nequity assessments as part of the system design, use of representative data and protection against proxies \nfor demographic features, ensuring accessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent \nevaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5\n']","To ensure automated systems avoid bias and maintain safety, designers, developers, and deployers should take proactive and continuous measures, including conducting proactive equity assessments as part of system design, using representative data, ensuring accessibility for people with disabilities, performing pre-deployment and ongoing disparity testing and mitigation, and maintaining clear organizational oversight. Additionally, independent evaluation and reporting should confirm that the system is safe and effective, including steps taken to mitigate potential harms.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 22, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 4, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What methods work for evaluating biases in AI content with diverse user feedback?,"[' \n38 \nMEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.13-001 \nCreate measurement error models for pre-deployment metrics to demonstrate \nconstruct validity for each metric (i.e., does the metric effectively operationalize \nthe desired concept): Measure or estimate, and document, biases or statistical \nvariance in applied metrics or structured human feedback processes; Leverage \ndomain expertise when modeling complex societal constructs such as hateful \ncontent. \nConfabulation; Information \nIntegrity; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV \n \nMEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available \nmeasurement techniques or where metrics are not yet available. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.2-001 \nEstablish processes for identifying emergent GAI system risks including \nconsulting with external AI Actors. 
\nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are \nestablished and integrated into AI system evaluation metrics. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.3-001 \nConduct impact assessments on how AI-generated content might affect \ndifferent social, economic, and cultural groups. \nHarmful Bias and Homogenization \nMS-3.3-002 \nConduct studies to understand how end users perceive and interact with GAI \ncontent and accompanying content provenance within context of use. Assess \nwhether the content aligns with their expectations and how they may act upon \nthe information presented. \nHuman-AI Configuration; \nInformation Integrity \nMS-3.3-003 \nEvaluate potential biases and stereotypes that could emerge from the AI-\ngenerated content using appropriate methodologies including computational \ntesting methods as well as evaluating structured feedback input. \nHarmful Bias and Homogenization \n', "" \n39 \nMS-3.3-004 \nProvide input for training materials about the capabilities and limitations of GAI \nsystems related to digital content transparency for AI Actors, other \nprofessionals, and the public about the societal impacts of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-3.3-005 \nRecord and integrate structured feedback about content provenance from \noperators, users, and potentially impacted communities through the use of \nmethods such as user research studies, focus groups, or community forums. \nActively seek feedback on generated content quality and potential biases. \nAssess the general awareness among end users and impacted communities \nabout the availability of these feedback channels. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Configuration; \nConfabulation; Information \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods to evaluate GAI system \ndecisions and verify alignment with intended purpose. \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor and document instances where human operators or other systems \noverride the GAI's decisions. Evaluate these cases to understand if the overrides \nare linked to issues related to content provenance. 
\nInformation Integrity \nMS-4.2-005 \nVerify and document the incorporation of results of structured public feedback \nexercises into design, implementation, deployment approval (“go”/“no-go” \ndecisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \nInformation Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n""]","The context mentions evaluating potential biases and stereotypes that could emerge from AI-generated content using appropriate methodologies, including computational testing methods as well as evaluating structured feedback input. Additionally, it suggests recording and integrating structured feedback about content provenance from operators, users, and potentially impacted communities through methods such as user research studies, focus groups, or community forums.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 41, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 42, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the U.S. AI Safety Institute's goals for NIST's AI risk standards?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. 
\nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "How might AI tech reinforce inequities in education, housing, and jobs, and add burdens on those using social welfare?","["" \n \nAPPENDIX\nPanel 3: Equal Opportunities and Civil Justice. This event explored current and emerging uses of \ntechnology that impact equity of opportunity in employment, education, and housing. \nWelcome: \n•\nRashida Richardson, Senior Policy Advisor for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nDominique Harrison, Director for Technology Policy, The Joint Center for Political and Economic\nStudies\nModerator: Jenny Yang, Director, Office of Federal Contract Compliance Programs, Department of Labor \nPanelists: \n•\nChristo Wilson, Associate Professor of Computer Science, Northeastern University\n•\nFrida Polli, CEO, Pymetrics\n•\nKaren Levy, Assistant Professor, Department of Information Science, Cornell University\n•\nNatasha Duarte, Project Director, Upturn\n•\nElana Zeide, Assistant Professor, University of Nebraska College of Law\n•\nFabian Rogers, Constituent Advocate, Office of NY State Senator Jabari Brisport and Community\nAdvocate and Floor Captain, Atlantic Plaza Towers Tenants Association\nThe individual panelists described the ways in which AI systems and other technologies are increasingly being \nused to limit access to equal opportunities in education, housing, and employment. Education-related \nconcerning uses included the increased use of remote proctoring systems, student location and facial \nrecognition tracking, teacher evaluation systems, robot teachers, and more. 
Housing-related concerning uses \nincluding automated tenant background screening and facial recognition-based controls to enter or exit \nhousing complexes. Employment-related concerning uses included discrimination in automated hiring \nscreening and workplace surveillance. Various panelists raised the limitations of existing privacy law as a key \nconcern, pointing out that students should be able to reinvent themselves and require privacy of their student \nrecords and education-related data in order to do so. The overarching concerns of surveillance in these \ndomains included concerns about the chilling effects of surveillance on student expression, inappropriate \ncontrol of tenants via surveillance, and the way that surveillance of workers blurs the boundary between work \nand life and exerts extreme and potentially damaging control over workers' lives. Additionally, some panelists \npointed out ways that data from one situation was misapplied in another in a way that limited people's \nopportunities, for example data from criminal justice settings or previous evictions being used to block further \naccess to housing. Throughout, various panelists emphasized that these technologies are being used to shift the \nburden of oversight and efficiency from employers to workers, schools to students, and landlords to tenants, in \nways that diminish and encroach on equality of opportunity; assessment of these technologies should include \nwhether they are genuinely helpful in solving an identified problem. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, panelists individually described the importance of: receiving community input into the \ndesign and use of technologies, public reporting on crucial elements of these systems, better notice and consent \nprocedures that ensure privacy based on context and use case, ability to opt-out of using these systems and \nreceive a fallback to a human process, providing explanations of decisions and how these systems work, the \nneed for governance including training in using these systems, ensuring the technological use cases are \ngenuinely related to the goal task and are locally validated to work, and the need for institution and protection \nof third party audits to ensure systems continue to be accountable and valid. \n57\n"", "" \n \n \n \n \nAPPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing scope of technology use in providing for social welfare, including \nin fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. \nHowever, various panelists individually cautioned that these systems may reduce burden for government \nagencies by increasing the burden and agency of people using and interacting with these technologies. \nAdditionally, these systems can produce feedback loops and compounded harm, collecting data from \ncommunities and using it to reinforce inequality. 
Various panelists suggested that these harms could be \nmitigated by ensuring community input at the beginning of the design process, providing ways to opt out of \nthese systems and use associated human-driven mechanisms instead, ensuring timeliness of benefit payments, \nand providing clear notice about the use of these systems and clear explanations of how and what the \ntechnologies are doing. Some panelists suggested that technology should be used to help people receive \nbenefits, e.g., by pushing benefits to those in need and ensuring automated decision-making systems are only \nused to provide a positive outcome; technology shouldn't be used to take supports away from people who need \nthem. \nPanel 6: The Healthcare System. This event explored current and emerging uses of technology in the \nhealthcare system and consumer products related to health. \nWelcome:\n•\nAlondra Nelson, Deputy Director for Science and Society, White House Office of Science and Technology\nPolicy\n•\nPatrick Gaspard, President and CEO, Center for American Progress\nModerator: Micky Tripathi, National Coordinator for Health Information Technology, U.S Department of \nHealth and Human Services. \nPanelists: \n•\nMark Schneider, Health Innovation Advisor, ChristianaCare\n•\nZiad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management,\nUniversity of California, Berkeley School of Public Health\n•\nDorothy Roberts, George A. Weiss University Professor of Law and Sociology and the Raymond Pace and\nSadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania\n•\nDavid Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University\n•\nJamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for\nHealth Equity\xad\nPanelists discussed the impact of new technologies on health disparities; healthcare access, delivery, and \noutcomes; and areas ripe for research and policymaking. Panelists discussed the increasing importance of tech-\nnology as both a vehicle to deliver healthcare and a tool to enhance the quality of care. On the issue of \ndelivery, various panelists pointed to a number of concerns including access to and expense of broadband \nservice, the privacy concerns associated with telehealth systems, the expense associated with health \nmonitoring devices, and how this can exacerbate equity issues. On the issue of technology enhanced care, \nsome panelists spoke extensively about the way in which racial biases and the use of race in medicine \nperpetuate harms and embed prior discrimination, and the importance of ensuring that the technologies used \nin medical care were accountable to the relevant stakeholders. Various panelists emphasized the importance \nof having the voices of those subjected to these technologies be heard.\n59\n""]","AI technology can reinforce inequities in education, housing, and jobs by being used to limit access to equal opportunities, such as through automated tenant background screening, discrimination in automated hiring screening, and remote proctoring systems. Additionally, these technologies can shift the burden of oversight from employers to workers, schools to students, and landlords to tenants, which diminishes equality of opportunity. 
In the context of social welfare, AI systems may reduce the burden for government agencies but increase the burden on individuals interacting with these technologies, potentially creating feedback loops that reinforce inequality.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 56, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 58, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What role do algorithmic impact assessments play in clarifying accountability for automated systems?,"["" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and \nexplanations as to how and why a decision was made or an action was taken by the system. These expectations are \nexplained below. \nProvide clear, timely, understandable, and accessible notice of use and explanations \xad\nGenerally accessible plain language documentation. The entity responsible for using the automated \nsystem should ensure that documentation describing the overall system (including any human components) is \npublic and easy to find. The documentation should describe, in plain language, how the system works and how \nany automated component is used to determine an action or decision. It should also include expectations about \nreporting described throughout this framework, such as the algorithmic impact assessments described as \npart of Algorithmic Discrimination Protections. \nAccountable. Notices should clearly identify the entity responsible for designing each component of the \nsystem and the entity using it. \nTimely and up-to-date. Users should receive notice of the use of automated systems in advance of using or \nwhile being impacted by the technology. An explanation should be available with the decision itself, or soon \nthereafter. Notice should be kept up-to-date and people impacted by the system should be notified of use case \nor key functionality changes. \nBrief and clear. Notices and explanations should be assessed, such as by research on users’ experiences, \nincluding user testing, to ensure that the people using or impacted by the automated system are able to easily \nfind notices and explanations, read them quickly, and understand and act on them. 
This includes ensuring that \nnotices and explanations are accessible to users with disabilities and are available in the language(s) and read-\ning level appropriate for the audience. Notices and explanations may need to be available in multiple forms, \n(e.g., on paper, on a physical sign, or online), in order to meet these expectations and to be accessible to the \nAmerican public. \nProvide explanations as to how and why a decision was made or an action was taken by an \nautomated system \nTailored to the purpose. Explanations should be tailored to the specific purpose for which the user is \nexpected to use the explanation, and should clearly state that purpose. An informational explanation might \ndiffer from an explanation provided to allow for the possibility of recourse, an appeal, or one provided in the \ncontext of a dispute or contestation process. For the purposes of this framework, 'explanation' should be \nconstrued broadly. An explanation need not be a plain-language statement about causality but could consist of \nany mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the \nstated purpose. Tailoring should be assessed (e.g., via user experience research). \nTailored to the target of the explanation. Explanations should be targeted to specific audiences and \nclearly state that audience. An explanation provided to the subject of a decision might differ from one provided \nto an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience \nresearch). \n43\n"", "" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTailored to the level of risk. An assessment should be done to determine the level of risk of the auto\xad\nmated system. In settings where the consequences are high as determined by a risk assessment, or extensive \noversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should \nbe built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully \ntransparent models should be used), rather than as an after-the-decision interpretation. In other settings, the \nextent of explanation provided should be tailored to the risk level. \nValid. The explanation provided by a system should accurately reflect the factors and the influences that led \nto a particular decision, and should be meaningful for the particular customization based on purpose, target, \nand level of risk. While approximation and simplification may be necessary for the system to succeed based on \nthe explanatory purpose and target of the explanation, or to account for the risk of fraud or other concerns \nrelated to revealing decision-making information, such simplifications should be done in a scientifically \nsupportable way. Where appropriate based on the explanatory system, error ranges for the explanation should \nbe calculated and included in the explanation, with the choice of presentation of such information balanced \nwith usability and overall interface complexity concerns. \nDemonstrate protections for notice and explanation \nReporting. 
Summary reporting should document the determinations made based on the above consider\xad\nations, including: the responsible entities for accountability purposes; the goal and use cases for the system, \nidentified users, and impacted populations; the assessment of notice clarity and timeliness; the assessment of \nthe explanation's validity and accessibility; the assessment of the level of risk; and the account and assessment \nof how explanations are tailored, including to the purpose, the recipient of the explanation, and the level of \nrisk. Individualized profile information should be made readily available to the greatest extent possible that \nincludes explanations for any system impacts or inferences. Reporting should be provided in a clear plain \nlanguage and machine-readable manner. \n44\n""]",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 42, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 43, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does human input affect fairness and fallback in automated systems?,"[' \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably effective mechanisms to opt out in favor of a human alterna\xad\ntive, where appropriate, as well as timely human consideration and remedy by a fallback system, with additional \nhuman oversight and safeguards for systems used in sensitive domains, and with training and assessment for any \nhuman-based portions of the system to ensure effectiveness. \nProvide a mechanism to conveniently opt out from automated systems in favor of a human \nalternative, where appropriate \nBrief, clear, accessible notice and instructions. Those impacted by an automated system should be \ngiven a brief, clear notice that they are entitled to opt-out, along with clear instructions for how to opt-out. \nInstructions should be provided in an accessible form and should be easily findable by those impacted by the \nautomated system. The brevity, clarity, and accessibility of the notice and instructions should be assessed (e.g., \nvia user experience research). \nHuman alternatives provided when appropriate. In many scenarios, there is a reasonable expectation \nof human involvement in attaining rights, opportunities, or access. 
When automated systems make up part of \nthe attainment process, alternative timely human-driven processes should be provided. The use of a human \nalternative should be triggered by an opt-out process. \nTimely and not burdensome human alternative. Opting out should be timely and not unreasonably \nburdensome in both the process of requesting to opt-out and the human-driven alternative provided. \nProvide timely human consideration and remedy by a fallback and escalation system in the \nevent that an automated system fails, produces error, or you would like to appeal or con\xad\ntest its impacts on you \nProportionate. The availability of human consideration and fallback, along with associated training and \nsafeguards against human bias, should be proportionate to the potential of the automated system to meaning\xad\nfully impact rights, opportunities, or access. Automated systems that have greater control over outcomes, \nprovide input to high-stakes decisions, relate to sensitive domains, or otherwise have a greater potential to \nmeaningfully impact rights, opportunities, or access should have greater availability (e.g., staffing) and over\xad\nsight of human consideration and fallback mechanisms. \nAccessible. Mechanisms for human consideration and fallback, whether in-person, on paper, by phone, or \notherwise provided, should be easy to find and use. These mechanisms should be tested to ensure that users \nwho have trouble with the automated system are able to use human consideration and fallback, with the under\xad\nstanding that it may be these users who are most likely to need the human assistance. Similarly, it should be \ntested to ensure that users with disabilities are able to find and use human consideration and fallback and also \nrequest reasonable accommodations or modifications. \nConvenient. Mechanisms for human consideration and fallback should not be unreasonably burdensome as \ncompared to the automated system’s equivalent. \n49\n', "" \n \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nEquitable. Consideration should be given to ensuring outcomes of the fallback and escalation system are \nequitable when compared to those of the automated system and such that the fallback and escalation \nsystem provides equitable access to underserved communities.105 \nTimely. Human consideration and fallback are only useful if they are conducted and concluded in a \ntimely manner. The determination of what is timely should be made relative to the specific automated \nsystem, and the review system should be staffed and regularly assessed to ensure it is providing timely \nconsideration and fallback. In time-critical systems, this mechanism should be immediately available or, \nwhere possible, available before the harm occurs. Time-critical systems include, but are not limited to, \nvoting-related systems, automated building access and other access systems, systems that form a critical \ncomponent of healthcare, and systems that have the ability to withhold wages or otherwise cause \nimmediate financial penalties. \nEffective. 
The organizational structure surrounding processes for consideration and fallback should \nbe designed so that if the human decision-maker charged with reassessing a decision determines that it \nshould be overruled, the new decision will be effectively enacted. This includes ensuring that the new \ndecision is entered into the automated system throughout its components, any previous repercussions from \nthe old decision are also overturned, and safeguards are put in place to help ensure that future decisions do \nnot result in the same errors. \nMaintained. The human consideration and fallback process and any associated automated processes \nshould be maintained and supported as long as the relevant automated system continues to be in use. \nInstitute training, assessment, and oversight to combat automation bias and ensure any \nhuman-based components of a system are effective. \nTraining and assessment. Anyone administering, interacting with, or interpreting the outputs of an auto\xad\nmated system should receive training in that system, including how to properly interpret outputs of a system \nin light of its intended purpose and in how to mitigate the effects of automation bias. The training should reoc\xad\ncur regularly to ensure it is up to date with the system and to ensure the system is used appropriately. Assess\xad\nment should be ongoing to ensure that the use of the system with human involvement provides for appropri\xad\nate results, i.e., that the involvement of people does not invalidate the system's assessment as safe and effective \nor lead to algorithmic discrimination. \nOversight. Human-based systems have the potential for bias, including automation bias, as well as other \nconcerns that may limit their effectiveness. The results of assessments of the efficacy and potential bias of \nsuch human-based systems should be overseen by governance structures that have the potential to update the \noperation of the human-based system in order to mitigate these effects. \n50\n""]","The context does not explicitly address how human input affects fairness and fallback in automated systems. However, it mentions that human consideration and fallback mechanisms should be proportionate, accessible, equitable, timely, and effective, which implies that human input is crucial in ensuring these aspects are upheld. 
The effectiveness of human involvement is emphasized through training, assessment, and oversight to combat automation bias and ensure appropriate results.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 48, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 49, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What factors to consider for human options in automated systems per the AI Bill of Rights?,"['TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \nSAFE AND EFFECTIVE SYSTEMS\n \nALGORITHMIC DISCRIMINATION PROTECTIONS\n \nDATA PRIVACY\n \nNOTICE AND EXPLANATION\n \nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nAPPENDIX\n \nEXAMPLES OF AUTOMATED SYSTEMS\n \nLISTENING TO THE AMERICAN PEOPLE\nENDNOTES \n12\n14\n15\n23\n30\n40\n46\n53\n53\n55\n63\n13\n']",The answer to given question is not present in context,multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 12, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does a document retention policy support GAI system integrity?,"[' \n16 \nGOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and \norganizational roles and responsibilities are clearly defined, including determining the frequency of periodic review. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.5-001 Define organizational responsibilities for periodic review of content provenance \nand incident monitoring for GAI systems. \nInformation Integrity \nGV-1.5-002 \nEstablish organizational policies and procedures for after action reviews of GAI \nsystem incident response and incident disclosures, to identify gaps; Update \nincident response and incident disclosure processes as required. \nHuman-AI Configuration; \nInformation Security \nGV-1.5-003 \nMaintain a document retention policy to keep history for test, evaluation, \nvalidation, and verification (TEVV), and digital content transparency methods for \nGAI. 
\nInformation Integrity; Intellectual \nProperty \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring \n \nGOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory \nand adjust AI system inventory requirements to account for GAI risks. \nInformation Security \nGV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems \nembedded into application software. \nValue Chain and Component \nIntegration \nGV-1.6-003 \nIn addition to general model, governance, and risk information, consider the \nfollowing items in GAI system inventory entries: Data provenance information \n(e.g., source, signatures, versioning, watermarks); Known issues reported from \ninternal bug tracking or external information sharing resources (e.g., AI incident \ndatabase, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles \nand responsibilities; Special rights and considerations for intellectual property, \nlicensed works, or personal, privileged, proprietary or sensitive data; Underlying \nfoundation models, versions of underlying models, and access modes. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity; Intellectual Property; \nValue Chain and Component \nIntegration \nAI Actor Tasks: Governance and Oversight \n \n', ' \n19 \nGV-4.1-003 \nEstablish policies, procedures, and processes for oversight functions (e.g., senior \nleadership, legal, compliance, including internal evaluation) across the GAI \nlifecycle, from problem formulation and supply chains to system decommission. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, \nevaluate, and use, and they communicate about the impacts more broadly. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.2-001 \nEstablish terms of use and terms of service for GAI systems. \nIntellectual Property; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nGV-4.2-002 \nInclude relevant AI Actors in the GAI system risk identification process. \nHuman-AI Configuration \nGV-4.2-003 \nVerify that downstream GAI system impacts (such as the use of third-party \nplugins) are included in the impact documentation process. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. \nAction ID \nSuggested Action \nGAI Risks \nGV4.3--001 \nEstablish policies for measuring the effectiveness of employed content \nprovenance methodologies (e.g., cryptography, watermarking, steganography, \netc.) \nInformation Integrity \nGV-4.3-002 \nEstablish organizational practices to identify the minimum set of criteria \nnecessary for GAI system incident reporting such as: System ID (auto-generated \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident, \nDescription, Impact(s), Stakeholder(s) Impacted. 
\nInformation Security \n']",The context does not provide specific information on how a document retention policy supports GAI system integrity.,multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 19, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 22, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What challenges did panelists see at the tech-health equity intersection?,"["" \n \n \n \n \nAPPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing scope of technology use in providing for social welfare, including \nin fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. \nHowever, various panelists individually cautioned that these systems may reduce burden for government \nagencies by increasing the burden and agency of people using and interacting with these technologies. \nAdditionally, these systems can produce feedback loops and compounded harm, collecting data from \ncommunities and using it to reinforce inequality. Various panelists suggested that these harms could be \nmitigated by ensuring community input at the beginning of the design process, providing ways to opt out of \nthese systems and use associated human-driven mechanisms instead, ensuring timeliness of benefit payments, \nand providing clear notice about the use of these systems and clear explanations of how and what the \ntechnologies are doing. Some panelists suggested that technology should be used to help people receive \nbenefits, e.g., by pushing benefits to those in need and ensuring automated decision-making systems are only \nused to provide a positive outcome; technology shouldn't be used to take supports away from people who need \nthem. \nPanel 6: The Healthcare System. This event explored current and emerging uses of technology in the \nhealthcare system and consumer products related to health. \nWelcome:\n•\nAlondra Nelson, Deputy Director for Science and Society, White House Office of Science and Technology\nPolicy\n•\nPatrick Gaspard, President and CEO, Center for American Progress\nModerator: Micky Tripathi, National Coordinator for Health Information Technology, U.S Department of \nHealth and Human Services. 
\nPanelists: \n•\nMark Schneider, Health Innovation Advisor, ChristianaCare\n•\nZiad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management,\nUniversity of California, Berkeley School of Public Health\n•\nDorothy Roberts, George A. Weiss University Professor of Law and Sociology and the Raymond Pace and\nSadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania\n•\nDavid Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University\n•\nJamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for\nHealth Equity\xad\nPanelists discussed the impact of new technologies on health disparities; healthcare access, delivery, and \noutcomes; and areas ripe for research and policymaking. Panelists discussed the increasing importance of tech-\nnology as both a vehicle to deliver healthcare and a tool to enhance the quality of care. On the issue of \ndelivery, various panelists pointed to a number of concerns including access to and expense of broadband \nservice, the privacy concerns associated with telehealth systems, the expense associated with health \nmonitoring devices, and how this can exacerbate equity issues. On the issue of technology enhanced care, \nsome panelists spoke extensively about the way in which racial biases and the use of race in medicine \nperpetuate harms and embed prior discrimination, and the importance of ensuring that the technologies used \nin medical care were accountable to the relevant stakeholders. Various panelists emphasized the importance \nof having the voices of those subjected to these technologies be heard.\n59\n"", "" \n \n \n \nAPPENDIX\nPanelists discussed the benefits of AI-enabled systems and their potential to build better and more \ninnovative infrastructure. They individually noted that while AI technologies may be new, the process of \ntechnological diffusion is not, and that it was critical to have thoughtful and responsible development and \nintegration of technology within communities. Some panelists suggested that the integration of technology \ncould benefit from examining how technological diffusion has worked in the realm of urban planning: \nlessons learned from successes and failures there include the importance of balancing ownership rights, use \nrights, and community health, safety and welfare, as well ensuring better representation of all voices, \nespecially those traditionally marginalized by technological advances. Some panelists also raised the issue of \npower structures – providing examples of how strong transparency requirements in smart city projects \nhelped to reshape power and give more voice to those lacking the financial or political power to effect change. \nIn discussion of technical and governance interventions that that are needed to protect against the harms \nof these technologies, various panelists emphasized the need for transparency, data collection, and \nflexible and reactive policy development, analogous to how software is continuously updated and deployed. \nSome panelists pointed out that companies need clear guidelines to have a consistent environment for \ninnovation, with principles and guardrails being the key to fostering responsible innovation. \nPanel 2: The Criminal Justice System. This event explored current and emergent uses of technology in \nthe criminal justice system and considered how they advance or undermine public safety, justice, and \ndemocratic values. 
\nWelcome: \n•\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n•\nBen Winters, Counsel, Electronic Privacy Information Center\nModerator: Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity \nPanelists: \n•\nSean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab\n•\nKristian Lum, Researcher\n•\nJumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers\n•\nStanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University\nCollege of Medicine\n•\nMyaisha Hayes, Campaign Strategies Director, MediaJustice\nPanelists discussed uses of technology within the criminal justice system, including the use of predictive \npolicing, pretrial risk assessments, automated license plate readers, and prison communication tools. The \ndiscussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; \nsuch strategies might include data-driven approaches, but the focus on safety should be primary, and \ntechnology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, various panelists emphasized that transparency is important but is not enough to achieve \naccountability. Some panelists discussed their individual views on additional system needs for validity, and \nagreed upon the importance of advisory boards and compensated community input early in the design process \n(before the technology is built and instituted). Various panelists also emphasized the importance of regulation \nthat includes limits to the type and cost of such technologies. \n56\n""]","Panelists discussed several challenges at the tech-health equity intersection, including access to and expense of broadband service, privacy concerns associated with telehealth systems, and the expense associated with health monitoring devices, which can exacerbate equity issues. 
They also highlighted the need for accountability in the technologies used in medical care, particularly regarding racial biases and the use of race in medicine, which perpetuate harms and embed prior discrimination.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 58, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 55, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do transparency policies help manage GAI risks and ensure compliance?,"[' \n14 \nGOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.2-001 \nEstablish transparency policies and processes for documenting the origin and \nhistory of training data and generated data for GAI applications to advance digital \ncontent transparency, while balancing the proprietary nature of training \napproaches. \nData Privacy; Information \nIntegrity; Intellectual Property \nGV-1.2-002 \nEstablish policies to evaluate risk-relevant capabilities of GAI and robustness of \nsafety measures, both prior to deployment and on an ongoing basis, through \ninternal and external evaluations. \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based \non the organization’s risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. 
\nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002 \nEstablish minimum thresholds for performance or assurance criteria and review as \npart of deployment approval (“go/”no-go”) policies, procedures, and processes, \nwith reviewed processes and approval thresholds reflecting measurement of GAI \ncapabilities and risks. \nCBRN Information or Capabilities; \nConfabulation; Dangerous, \nViolent, or Hateful Content \nGV-1.3-003 \nEstablish a test plan and response policy, before developing highly capable models, \nto periodically evaluate whether the model may misuse CBRN information or \ncapabilities and/or offensive cyber capabilities. \nCBRN Information or Capabilities; \nInformation Security \n', ' \n15 \nGV-1.3-004 Obtain input from stakeholder communities to identify unacceptable use, in \naccordance with activities in the AI RMF Map function. \nCBRN Information or Capabilities; \nObscene, Degrading, and/or \nAbusive Content; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nGV-1.3-005 \nMaintain an updated hierarchy of identified and expected GAI risks connected to \ncontexts of GAI model advancement and use, potentially including specialized risk \nlevels for GAI systems that address issues such as model collapse and algorithmic \nmonoculture. \nHarmful Bias and Homogenization \nGV-1.3-006 \nReevaluate organizational risk tolerances to account for unacceptable negative risk \n(such as where significant negative impacts are imminent, severe harms are \nactually occurring, or large-scale risks could occur); and broad GAI negative risks, \nincluding: Immature safety or risk cultures related to AI and GAI design, \ndevelopment and deployment, public information integrity risks, including impacts \non democratic processes, unknown long-term performance characteristics of GAI. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; CBRN \nInformation or Capabilities \nGV-1.3-007 Devise a plan to halt development or deployment of a GAI system that poses \nunacceptable negative risk. \nCBRN Information and Capability; \nInformation Security; Information \nIntegrity \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other \ncontrols based on organizational risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.4-001 \nEstablish policies and mechanisms to prevent GAI systems from generating \nCSAM, NCII or content that violates the law. \nObscene, Degrading, and/or \nAbusive Content; Harmful Bias \nand Homogenization; \nDangerous, Violent, or Hateful \nContent \nGV-1.4-002 \nEstablish transparent acceptable use policies for GAI that address illegal use or \napplications of GAI. \nCBRN Information or \nCapabilities; Obscene, \nDegrading, and/or Abusive \nContent; Data Privacy; Civil \nRights violations \nAI Actor Tasks: AI Development, AI Deployment, Governance and Oversight \n \n']","Transparency policies help manage GAI risks by establishing processes for documenting the origin and history of training data and generated data for GAI applications. 
This promotes digital content transparency while balancing the proprietary nature of training approaches, thereby ensuring compliance with data privacy, information integrity, and intellectual property standards.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 17, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 18, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How important are clear decision-making explanations in automated systems for risk assessment and user understanding?,"["" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nTailored to the level of risk. An assessment should be done to determine the level of risk of the auto\xad\nmated system. In settings where the consequences are high as determined by a risk assessment, or extensive \noversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should \nbe built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully \ntransparent models should be used), rather than as an after-the-decision interpretation. In other settings, the \nextent of explanation provided should be tailored to the risk level. \nValid. The explanation provided by a system should accurately reflect the factors and the influences that led \nto a particular decision, and should be meaningful for the particular customization based on purpose, target, \nand level of risk. While approximation and simplification may be necessary for the system to succeed based on \nthe explanatory purpose and target of the explanation, or to account for the risk of fraud or other concerns \nrelated to revealing decision-making information, such simplifications should be done in a scientifically \nsupportable way. Where appropriate based on the explanatory system, error ranges for the explanation should \nbe calculated and included in the explanation, with the choice of presentation of such information balanced \nwith usability and overall interface complexity concerns. \nDemonstrate protections for notice and explanation \nReporting. 
Summary reporting should document the determinations made based on the above consider\xad\nations, including: the responsible entities for accountability purposes; the goal and use cases for the system, \nidentified users, and impacted populations; the assessment of notice clarity and timeliness; the assessment of \nthe explanation's validity and accessibility; the assessment of the level of risk; and the account and assessment \nof how explanations are tailored, including to the purpose, the recipient of the explanation, and the level of \nrisk. Individualized profile information should be made readily available to the greatest extent possible that \nincludes explanations for any system impacts or inferences. Reporting should be provided in a clear plain \nlanguage and machine-readable manner. \n44\n"", "" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and \nexplanations as to how and why a decision was made or an action was taken by the system. These expectations are \nexplained below. \nProvide clear, timely, understandable, and accessible notice of use and explanations \xad\nGenerally accessible plain language documentation. The entity responsible for using the automated \nsystem should ensure that documentation describing the overall system (including any human components) is \npublic and easy to find. The documentation should describe, in plain language, how the system works and how \nany automated component is used to determine an action or decision. It should also include expectations about \nreporting described throughout this framework, such as the algorithmic impact assessments described as \npart of Algorithmic Discrimination Protections. \nAccountable. Notices should clearly identify the entity responsible for designing each component of the \nsystem and the entity using it. \nTimely and up-to-date. Users should receive notice of the use of automated systems in advance of using or \nwhile being impacted by the technology. An explanation should be available with the decision itself, or soon \nthereafter. Notice should be kept up-to-date and people impacted by the system should be notified of use case \nor key functionality changes. \nBrief and clear. Notices and explanations should be assessed, such as by research on users’ experiences, \nincluding user testing, to ensure that the people using or impacted by the automated system are able to easily \nfind notices and explanations, read them quickly, and understand and act on them. This includes ensuring that \nnotices and explanations are accessible to users with disabilities and are available in the language(s) and read-\ning level appropriate for the audience. Notices and explanations may need to be available in multiple forms, \n(e.g., on paper, on a physical sign, or online), in order to meet these expectations and to be accessible to the \nAmerican public. \nProvide explanations as to how and why a decision was made or an action was taken by an \nautomated system \nTailored to the purpose. Explanations should be tailored to the specific purpose for which the user is \nexpected to use the explanation, and should clearly state that purpose. 
An informational explanation might \ndiffer from an explanation provided to allow for the possibility of recourse, an appeal, or one provided in the \ncontext of a dispute or contestation process. For the purposes of this framework, 'explanation' should be \nconstrued broadly. An explanation need not be a plain-language statement about causality but could consist of \nany mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the \nstated purpose. Tailoring should be assessed (e.g., via user experience research). \nTailored to the target of the explanation. Explanations should be targeted to specific audiences and \nclearly state that audience. An explanation provided to the subject of a decision might differ from one provided \nto an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience \nresearch). \n43\n""]","The context emphasizes the importance of clear decision-making explanations in automated systems, particularly for risk assessment and user understanding. It states that explanations should accurately reflect the factors influencing decisions and be tailored to the purpose and audience, ensuring they are understandable and accessible. This is crucial in settings with high consequences, where transparency and clarity are necessary for accountability and user trust.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 43, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 42, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What risks do prompt injection attacks pose to GAI, especially regarding misinformation and data poisoning?","[' \n11 \nvalue chain (e.g., data inputs, processing, GAI training, or deployment environments), conventional \ncybersecurity practices may need to adapt or evolve. \nFor instance, prompt injection involves modifying what input is provided to a GAI system so that it \nbehaves in unintended ways. In direct prompt injections, attackers might craft malicious prompts and \ninput them directly to a GAI system, with a variety of downstream negative consequences to \ninterconnected systems. Indirect prompt injection attacks occur when adversaries remotely (i.e., without \na direct interface) exploit LLM-integrated applications by injecting prompts into data likely to be \nretrieved. Security researchers have already demonstrated how indirect prompt injections can exploit \nvulnerabilities by stealing proprietary data or running malicious code remotely on a machine. 
Merely \nquerying a closed production model can elicit previously undisclosed information about that model. \nAnother cybersecurity risk to GAI is data poisoning, in which an adversary compromises a training \ndataset used by a model to manipulate its outputs or operation. Malicious tampering with data or parts \nof the model could exacerbate risks associated with GAI system outputs. \nTrustworthy AI Characteristics: Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n2.10. \nIntellectual Property \nIntellectual property risks from GAI systems may arise where the use of copyrighted works is not a fair \nuse under the fair use doctrine. If a GAI system’s training data included copyrighted material, GAI \noutputs displaying instances of training data memorization (see Data Privacy above) could infringe on \ncopyright. \nHow GAI relates to copyright, including the status of generated content that is similar to but does not \nstrictly copy work protected by copyright, is currently being debated in legal fora. Similar discussions are \ntaking place regarding the use or emulation of personal identity, likeness, or voice without permission. \nTrustworthy AI Characteristics: Accountable and Transparent, Fair with Harmful Bias Managed, Privacy \nEnhanced \n2.11. \nObscene, Degrading, and/or Abusive Content \nGAI can ease the production of and access to illegal non-consensual intimate imagery (NCII) of adults, \nand/or child sexual abuse material (CSAM). GAI-generated obscene, abusive or degrading content can \ncreate privacy, psychological and emotional, and even physical harms, and in some cases may be illegal. \nGenerated explicit or obscene AI content may include highly realistic “deepfakes” of real individuals, \nincluding children. The spread of this kind of material can have downstream negative consequences: in \nthe context of CSAM, even if the generated images do not resemble specific individuals, the prevalence \nof such images can divert time and resources from efforts to find real-world victims. Outside of CSAM, \nthe creation and spread of NCII disproportionately impacts women and sexual minorities, and can have \nsubsequent negative consequences including decline in overall mental health, substance abuse, and \neven suicidal thoughts. \nData used for training GAI models may unintentionally include CSAM and NCII. A recent report noted \nthat several commonly used GAI training datasets were found to contain hundreds of known images of \n', ' \n10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations. \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. 
\nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published. \n']","Prompt injection attacks pose significant risks to GAI by enabling attackers to modify inputs to the system, leading to unintended behaviors and potential misinformation. Direct prompt injections can result in malicious prompts being inputted, causing negative consequences for interconnected systems. Indirect prompt injection attacks exploit vulnerabilities in LLM-integrated applications, potentially leading to the theft of proprietary data or the execution of malicious code. 
Additionally, data poisoning is a risk where adversaries compromise training datasets, manipulating the outputs or operations of GAI systems, which can exacerbate misinformation and undermine the reliability of generated content.",multi_context,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 14, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 13, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What key processes and stakeholder interactions ensure automated systems' safety and effectiveness?,"[' \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nOngoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibra\xad\ntion procedures, in place to ensure that their performance does not fall below an acceptable level over time, \nbased on changing real-world conditions or deployment contexts, post-deployment modification, or unexpect\xad\ned conditions. This ongoing monitoring should include continuous evaluation of performance metrics and \nharm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well \nas ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitor\xad\ning should take into account the performance of both technical system components (the algorithm as well as \nany hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing \nthe actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s \ndetermination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitor\xad\ning as a check in the event there are shortcomings in automated monitoring systems. These monitoring proce\xad\ndures should be in place for the lifespan of the deployed automated system. \nClear organizational oversight. Entities responsible for the development or use of automated systems \nshould lay out clear governance structures and procedures. This includes clearly-stated governance proce\xad\ndures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing \nassessment and mitigation.
Organizational stakeholders including those with oversight of the business process \nor operation being automated, as well as other organizational divisions that may be affected due to the use of \nthe system, should be involved in establishing governance procedures. Responsibility should rest high enough \nin the organization that decisions about resources, mitigation, incident response, and potential rollback can be \nmade promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those \nholding this responsibility should be made aware of any use cases with the potential for meaningful impact on \npeople’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, \nit may be appropriate for an independent ethics review to be conducted before deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its \nreuse \nRelevant and high-quality data. Data used as part of any automated system’s creation, evaluation, or \ndeployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be \nestablished based on research-backed demonstration of the causal influence of the data to the specific use case \nor justified more generally based on a reasonable expectation of usefulness in the domain and/or for the \nsystem design or ongoing development. Relevance of data should not be established solely by appealing to \nits historical connection to the outcome. High quality and tailored data should be representative of the task at \nhand and errors from data entry or other sources should be measured and limited. Any data used as the target \nof a prediction process should receive particular attention to the quality and validity of the predicted outcome \nor label to ensure the goal of the automated system is appropriately identified and measured. Additionally, \njustification should be documented for each data attribute and source to explain why it is appropriate to use \nthat data to inform the results of the automated system and why such use will not violate any applicable laws. \nIn cases of high-dimensional and/or derived attributes, such justifications can be provided as overall \ndescriptions of the attribute generation process and appropriateness. \n19\n', ' \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nIn order to ensure that an automated system is safe and effective, it should include safeguards to protect the \npublic from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task \nat hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of \nthe system. These expectations are explained below. \nProtect the public from harm in a proactive and ongoing manner \nConsultation. The public should be consulted in the design, implementation, deployment, acquisition, and \nmaintenance phases of automated system development, with emphasis on early-stage consultation before a \nsystem is introduced or a large change implemented. 
This consultation should directly engage diverse impact\xad\ned communities to consider concerns and risks that may be unique to those communities, or disproportionate\xad\nly prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakehold\xad\ners may differ depending on the specific automated system and development phase, but should include \nsubject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as \ncivil rights, civil liberties, and privacy experts. For private sector applications, consultations before product \nlaunch may need to be confidential. Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback. \nTesting. Systems should undergo extensive testing before deployment. This testing should follow \ndomain-specific best practices, when available, for ensuring the technology will work in its real-world \ncontext. Such testing should take into account both the specific technology used and the roles of any human \noperators or reviewers who impact system outcomes or effectiveness; testing should include both automated \nsystems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the \nconditions in which the system will be deployed, and new testing may be required for each deployment to \naccount for material differences in conditions from one deployment to another. Following testing, system \nperformance should be compared with the in-place, potentially human-driven, status quo procedures, with \nexisting human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, poten\xad\ntial risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, mea\xad\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention \nand mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unin\xad\ntended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessi\xad\ntate rollback or significant modification to a launched automated system. 
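The testing and monitoring expectations above treat existing human performance as both a pre-deployment baseline and a lifecycle minimum standard, with reversion to a fallback process when that standard is not met. A minimal sketch of such a gate in Python follows; the accuracy metric, the optional margin, and the function and class names are illustrative assumptions rather than anything prescribed by the source text.

```python
# Minimal sketch: gate deployment (and continued operation) of an automated system
# on whether it meets the existing human-driven baseline described above.
# Metric choice, margin, and names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class GateResult:
    deploy: bool
    reason: str

def performance_gate(system_accuracy: float,
                     human_baseline_accuracy: float,
                     margin: float = 0.0) -> GateResult:
    """Approve only if the system meets or exceeds the status-quo baseline plus margin."""
    if system_accuracy >= human_baseline_accuracy + margin:
        return GateResult(True, "meets or exceeds the human-performance baseline")
    return GateResult(False, "below baseline; do not deploy, or revert to the fallback process")

# Example: accuracy figures should come from tests mirroring real deployment conditions.
print(performance_gate(0.91, human_baseline_accuracy=0.88))
print(performance_gate(0.84, human_baseline_accuracy=0.88))
```

The same check can be rerun on live performance metrics during ongoing monitoring, so that a drop below the baseline triggers rollback rather than silent degradation.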
\n18\n']","Key processes and stakeholder interactions that ensure automated systems' safety and effectiveness include ongoing monitoring procedures, clear organizational oversight, consultation with the public during various phases of development, extensive testing before deployment, and proactive risk identification and mitigation. These processes involve continuous evaluation of performance metrics, involvement of organizational stakeholders, engagement with diverse impacted communities, and adherence to domain-specific best practices for testing.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 18, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 17, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What strategies can help prevent algorithmic bias in automated systems for underserved communities?,"["" \n \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nEnsuring accessibility during design, development, and deployment. Systems should be \ndesigned, developed, and deployed by organizations in ways that ensure accessibility to people with disabili\xad\nties. This should include consideration of a wide variety of disabilities, adherence to relevant accessibility \nstandards, and user experience research both before and after deployment to identify and address any accessi\xad\nbility barriers to the use or effectiveness of the automated system. \nDisparity assessment. Automated systems should be tested using a broad set of measures to assess wheth\xad\ner the system components, both in pre-deployment testing and in-context deployment, produce disparities. \nThe demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex \n(including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other classifi\xad\ncation protected by law. The broad set of measures assessed should include demographic performance mea\xad\nsures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity \nassessment should be separated from data used for the automated system and privacy protections should be \ninstituted; in some cases it may make sense to perform such assessment using a data sample. 
For every \ninstance where the deployed automated system leads to different treatment or impacts disfavoring the identi\xad\nfied groups, the entity governing, implementing, or using the system should document the disparity and a \njustification for any continued use of the system. \nDisparity mitigation. When a disparity assessment identifies a disparity against an assessed group, it may \nbe appropriate to take steps to mitigate or eliminate the disparity. In some cases, mitigation or elimination of \nthe disparity may be required by law. \nDisparities that have the potential to lead to algorithmic \ndiscrimination, cause meaningful harm, or violate equity49 goals should be mitigated. When designing and \nevaluating an automated system, steps should be taken to evaluate multiple models and select the one that \nhas the least adverse impact, modify data input choices, or otherwise identify a system with fewer \ndisparities. If adequate mitigation of the disparity is not possible, then the use of the automated system \nshould be reconsidered. One of the considerations in whether to use the system should be the validity of any \ntarget measure; unobservable targets may result in the inappropriate use of proxies. Meeting these \nstandards may require instituting mitigation procedures and other protective measures to address \nalgorithmic discrimination, avoid meaningful harm, and achieve equity goals. \nOngoing monitoring and mitigation. Automated systems should be regularly monitored to assess algo\xad\nrithmic discrimination that might arise from unforeseen interactions of the system with inequities not \naccounted for during the pre-deployment testing, changes to the system after deployment, or changes to the \ncontext of use or associated data. Monitoring and disparity assessment should be performed by the entity \ndeploying or using the automated system to examine whether the system has led to algorithmic discrimina\xad\ntion when deployed. This assessment should be performed regularly and whenever a pattern of unusual \nresults is occurring. It can be performed using a variety of approaches, taking into account whether and how \ndemographic information of impacted people is available, for example via testing with a sample of users or via \nqualitative user experience research. Riskier and higher-impact systems should be monitored and assessed \nmore frequently. Outcomes of this assessment should include additional disparity mitigation, if needed, or \nfallback to earlier procedures in the case that equity standards are no longer met and can't be mitigated, and \nprior mechanisms provide better adherence to equity standards. \n27\nAlgorithmic \nDiscrimination \nProtections \n"", ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. 
The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. 
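The disparity assessment and proxy guidance above name two concrete checks: comparing outcomes across demographic subgroups and testing candidate features for correlation with demographic attributes. A minimal Python sketch of both checks follows, assuming binary decisions and numerically encoded attributes; the 0.8 parity ratio and the 0.5 correlation cutoff are illustrative thresholds, not values taken from the source text.

```python
# Minimal sketch: (1) per-group selection rates with a simple parity ratio, and
# (2) a correlation test to flag candidate proxy features. Thresholds are illustrative.
from collections import defaultdict
from statistics import correlation  # Pearson correlation; requires Python 3.10+

def selection_rates(decisions, groups):
    """decisions: iterable of 0/1 outcomes; groups: demographic label per decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

def parity_check(decisions, groups, min_ratio=0.8):
    """Flag when the lowest group selection rate falls below min_ratio of the highest."""
    rates = selection_rates(decisions, groups)
    lowest, highest = min(rates.values()), max(rates.values())
    ratio = lowest / highest if highest else 1.0
    return {"rates": rates, "ratio": ratio, "flagged": ratio < min_ratio}

def looks_like_proxy(feature_values, encoded_demographic, cutoff=0.5):
    """Flag a numeric feature that correlates strongly with a demographic attribute."""
    return abs(correlation(feature_values, encoded_demographic)) > cutoff

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_check(decisions, groups))
print(looks_like_proxy([1.0, 2.0, 3.0, 4.0, 1.1, 2.1, 2.9, 4.2],
                       [0, 0, 1, 1, 0, 0, 1, 1]))
```

Calibration and the other demographic performance measures mentioned above would be layered on top of these basic checks, and the demographic data used here should be kept separate from the data the automated system itself consumes.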
\n26\nAlgorithmic \nDiscrimination \nProtections \n']","Strategies to prevent algorithmic bias in automated systems for underserved communities include conducting proactive equity assessments during the design phase, ensuring the use of representative and robust data, and guarding against the use of proxies that may lead to algorithmic discrimination. These strategies involve reviewing potential input data, historical context, and accessibility for people with disabilities, as well as testing for correlation between demographic information and attributes to identify and remove any proxies.",multi_context,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 26, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 25, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What drives the choice of humans over automation in sensitive areas?,"[' \nSECTION TITLE\nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nYou should be able to opt out, where appropriate, and have access to a person who can quickly \nconsider and remedy problems you encounter. You should be able to opt out from automated systems in \nfavor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable \nexpectations in a given context and with a focus on ensuring broad accessibility and protecting the public from \nespecially harmful impacts. In some cases, a human or other alternative may be required by law. You should have \naccess to timely human consideration and remedy by a fallback and escalation process if an automated system \nfails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and \nfallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and \nshould not impose an unreasonable burden on the public. Automated systems with an intended use within sensi\xad\ntive domains, including, but not limited to, criminal justice, employment, education, and health, should additional\xad\nly be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting \nwith the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes \na description of these human governance processes and assessment of their timeliness, accessibility, outcomes, \nand effectiveness should be made public whenever possible. \nDefinitions for key terms in The Blueprint for an AI Bill of Rights can be found in Applying the Blueprint for an AI Bill of Rights. 
\nAccompanying analysis and tools for actualizing each principle can be found in the Technical Companion. \n7\n']","The choice of humans over automation in sensitive areas is driven by the need for human consideration and remedy, particularly in contexts where automated systems may fail, produce errors, or where individuals wish to appeal or contest the impacts of these systems. This choice is also influenced by the requirement for appropriateness based on reasonable expectations, ensuring broad accessibility, and protecting the public from especially harmful impacts.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 6, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What ensures good governance in automated systems?,"[' \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nOngoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibra\xad\ntion procedures, in place to ensure that their performance does not fall below an acceptable level over time, \nbased on changing real-world conditions or deployment contexts, post-deployment modification, or unexpect\xad\ned conditions. This ongoing monitoring should include continuous evaluation of performance metrics and \nharm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well \nas ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitor\xad\ning should take into account the performance of both technical system components (the algorithm as well as \nany hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing \nthe actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s \ndetermination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitor\xad\ning as a check in the event there are shortcomings in automated monitoring systems. These monitoring proce\xad\ndures should be in place for the lifespan of the deployed automated system. \nClear organizational oversight. Entities responsible for the development or use of automated systems \nshould lay out clear governance structures and procedures. This includes clearly-stated governance proce\xad\ndures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing \nassessment and mitigation. Organizational stakeholders including those with oversight of the business process \nor operation being automated, as well as other organizational divisions that may be affected due to the use of \nthe system, should be involved in establishing governance procedures. 
Responsibility should rest high enough \nin the organization that decisions about resources, mitigation, incident response, and potential rollback can be \nmade promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those \nholding this responsibility should be made aware of any use cases with the potential for meaningful impact on \npeople’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, \nit may be appropriate for an independent ethics review to be conducted before deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its \nreuse \nRelevant and high-quality data. Data used as part of any automated system’s creation, evaluation, or \ndeployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be \nestablished based on research-backed demonstration of the causal influence of the data to the specific use case \nor justified more generally based on a reasonable expectation of usefulness in the domain and/or for the \nsystem design or ongoing development. Relevance of data should not be established solely by appealing to \nits historical connection to the outcome. High quality and tailored data should be representative of the task at \nhand and errors from data entry or other sources should be measured and limited. Any data used as the target \nof a prediction process should receive particular attention to the quality and validity of the predicted outcome \nor label to ensure the goal of the automated system is appropriately identified and measured. Additionally, \njustification should be documented for each data attribute and source to explain why it is appropriate to use \nthat data to inform the results of the automated system and why such use will not violate any applicable laws. \nIn cases of high-dimensional and/or derived attributes, such justifications can be provided as overall \ndescriptions of the attribute generation process and appropriateness. \n19\n']","Good governance in automated systems is ensured by laying out clear governance structures and procedures, which include clearly-stated governance procedures before deploying the system, as well as the responsibility of specific individuals or entities to oversee ongoing assessment and mitigation. Organizational stakeholders should be involved in establishing these governance procedures, and responsibility should rest high enough in the organization to allow for prompt decision-making regarding resources, mitigation, incident response, and potential rollback. 
Additionally, those in charge should be aware of any use cases with the potential for meaningful impact on people's rights, opportunities, or access, and it may be appropriate for an independent ethics review to be conducted before deployment.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 18, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What links do harmful AI biases have to data privacy or misinformation risks?,"[' \n26 \nMAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or \nsoftware – are in place, followed, and documented, as are risks of infringement of a third-party’s intellectual property or other \nrights. \nAction ID \nSuggested Action \nGAI Risks \nMP-4.1-001 Conduct periodic monitoring of AI-generated content for privacy risks; address any \npossible instances of PII or sensitive data exposure. \nData Privacy \nMP-4.1-002 Implement processes for responding to potential intellectual property infringement \nclaims or other rights. \nIntellectual Property \nMP-4.1-003 \nConnect new GAI policies, procedures, and processes to existing model, data, \nsoftware development, and IT governance and to legal, compliance, and risk \nmanagement activities. \nInformation Security; Data Privacy \nMP-4.1-004 Document training data curation policies, to the extent possible and according to \napplicable laws and policies. \nIntellectual Property; Data Privacy; \nObscene, Degrading, and/or \nAbusive Content \nMP-4.1-005 \nEstablish policies for collection, retention, and minimum quality of data, in \nconsideration of the following risks: Disclosure of inappropriate CBRN information; \nUse of Illegal or dangerous content; Offensive cyber capabilities; Training data \nimbalances that could give rise to harmful biases; Leak of personally identifiable \ninformation, including facial likenesses of individuals. \nCBRN Information or Capabilities; \nIntellectual Property; Information \nSecurity; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-006 Implement policies and practices defining how third-party intellectual property and \ntraining data will be used, stored, and protected. \nIntellectual Property; Value Chain \nand Component Integration \nMP-4.1-007 Re-evaluate models that were fine-tuned or enhanced on top of third-party \nmodels. \nValue Chain and Component \nIntegration \nMP-4.1-008 \nRe-evaluate risks when adapting GAI models to new domains. Additionally, \nestablish warning systems to determine if a GAI system is being used in a new \ndomain where previous assumptions (relating to context of use or mapped risks \nsuch as security, and safety) may no longer hold. \nCBRN Information or Capabilities; \nIntellectual Property; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content; Data \nPrivacy \nMP-4.1-009 Leverage approaches to detect the presence of PII or sensitive data in generated \noutput text, image, video, or audio. 
\nData Privacy \n', "" \n27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. \nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\nUsers, Operation and Monitoring \n \n""]","The context does not explicitly link harmful AI biases to data privacy or misinformation risks. 
However, it mentions risks such as harmful biases, data privacy, and misinformation in separate sections, indicating that these issues are recognized but not directly connected in the provided text.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 29, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What to review for ethical use of sensitive data?,"[' \n \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \xad\xad\xad\xad\xad\xad\nIn addition to the privacy expectations above for general non-sensitive data, any system collecting, using, shar-\ning, or storing sensitive data should meet the expectations below. Depending on the technological use case and \nbased on an ethical assessment, consent for sensitive data may need to be acquired from a guardian and/or child. \nProvide enhanced protections for data related to sensitive domains \nNecessary functions only. Sensitive data should only be used for functions strictly necessary for that \ndomain or for functions that are required for administrative reasons (e.g., school attendance records), unless \nconsent is acquired, if appropriate, and the additional expectations in this section are met. Consent for non-\nnecessary functions should be optional, i.e., should not be required, incentivized, or coerced in order to \nreceive opportunities or access to services. In cases where data is provided to an entity (e.g., health insurance \ncompany) in order to facilitate payment for such a need, that data should only be used for that purpose. \nEthical review and use prohibitions. Any use of sensitive data or decision process based in part on sensi-\ntive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go \nthrough a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an indepen-\ndent ethics committee or similarly robust process). In some cases, this ethical review may determine that data \nshould not be used or shared for specific uses even with consent. 
Some novel uses of automated systems in this \ncontext, where the algorithm is dynamically developing and where the science behind the use case is not well \nestablished, may also count as human subject experimentation, and require special review under organizational \ncompliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and \ngovernance procedures. \nData quality. In sensitive domains, entities should be especially careful to maintain the quality of data to \navoid adverse consequences arising from decision-making based on flawed or inaccurate data. Such care is \nnecessary in a fragmented, complex data ecosystem and for datasets that have limited access such as for fraud \nprevention and law enforcement. It should be not left solely to individuals to carry the burden of reviewing and \ncorrecting data. Entities should conduct regular, independent audits and take prompt corrective measures to \nmaintain accurate, timely, and complete data. \nLimit access to sensitive data and derived data. Sensitive data and derived data should not be sold, \nshared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be \nused to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies \nare expected to keep sensitive data private. Access to such data should be limited based on necessity and based \non a principle of local control, such that those individuals closest to the data subject have more access while \nthose who are less proximate do not (e.g., a teacher has access to their students’ daily progress data while a \nsuperintendent does not). \nReporting. In addition to the reporting on data privacy (as listed above for non-sensitive data), entities devel-\noping technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data \nshould, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches \nthat resulted in sensitive data leaks; the number, type, and outcomes of ethical pre-reviews undertaken; a \ndescription of any data sold, shared, or made public, and how that data was assessed to determine it did not pres-\nent a sensitive data risk; and ongoing risk identification and management procedures, and any mitigation added \nbased on these procedures. Reporting should be provided in a clear and machine-readable manner. \n38\n']","Any use of sensitive data or decision processes based in part on sensitive data that might limit rights, opportunities, or access should go through a thorough ethical review and monitoring, both in advance and by periodic review. This may involve an independent ethics committee or a similarly robust process. The ethical review may determine that data should not be used or shared for specific uses even with consent.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 37, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the concerns with surveillance tech in education and healthcare?,"["" \n65. 
See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media Profiles in Data Lead: Info\nAppears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020. https://\nwww.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles\xad\nin-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a Single Server. WIRED,\nNov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/\n66. Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash. New York Times.\nSept. 24, 2019.\nhttps://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html\n67. Jo Constantz. ‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance Technology to Bust\nUnions. Newsweek. Dec. 13, 2021.\nhttps://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust\xad\nunions-1658603\n68. See, e.g., enforcement actions by the FTC against the photo storage app Everalbaum\n(https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and\nagainst Weight Watchers and their subsidiary Kurbo\n(https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)\n69. See, e.g., HIPAA, Pub. L 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109\n(1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g), Children's Online\nPrivacy Protection Act of 1998, 15 U.S.C. 6501–6505, and Confidential Information Protection and\nStatistical Efficiency Act (CIPSEA) (116 Stat. 2899)\n70. Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True. ProPublica. Nov.\n21, 2018.\nhttps://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true\n71. Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012.\nhttps://www.nytimes.com/2012/02/19/magazine/shopping-habits.html\n72. Jack Gillum and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology\nSchools are Using to Monitor Students. ProPublica. Jun. 25, 2019.\nhttps://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology\xad\nschools-are-using-to-monitor-students/\n73. Drew Harwell. Cheating-detection companies made millions during the pandemic. Now students are\nfighting back. Washington Post. Nov. 12, 2020.\nhttps://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/\n74. See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage. Government\nTechnology. May 24, 2022.\nhttps://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage;\nLydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And Disability\nDiscrimination In New Surveillance Technologies: How new surveillance technologies in education,\npolicing, health care, and the workplace disproportionately harm disabled people. Center for Democracy\nand Technology Report. May 24, 2022.\nhttps://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how\xad\nnew-surveillance-technologies-in-education-policing-health-care-and-the-workplace\xad\ndisproportionately-harm-disabled-people/\n69\n"", ' \n \n \nENDNOTES\n75. See., e.g., Sam Sabin. Digital surveillance in a post-Roe world. Politico. May 5, 2022. 
https://\nwww.politico.com/newsletters/digital-future-daily/2022/05/05/digital-surveillance-in-a-post-roe\xad\nworld-00030459; Federal Trade Commission. FTC Sues Kochava for Selling Data that Tracks People at\nReproductive Health Clinics, Places of Worship, and Other Sensitive Locations. Aug. 29, 2022. https://\nwww.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people\xad\nreproductive-health-clinics-places-worship-other\n76. Todd Feathers. This Private Equity Firm Is Amassing Companies That Collect Data on America’s\nChildren. The Markup. Jan. 11, 2022.\nhttps://themarkup.org/machine-learning/2022/01/11/this-private-equity-firm-is-amassing-companies\xad\nthat-collect-data-on-americas-children\n77. Reed Albergotti. Every employee who leaves Apple becomes an ‘associate’: In job databases used by\nemployers to verify resume information, every former Apple employee’s title gets erased and replaced with\na generic title. The Washington Post. Feb. 10, 2022.\nhttps://www.washingtonpost.com/technology/2022/02/10/apple-associate/\n78. National Institute of Standards and Technology. Privacy Framework Perspectives and Success\nStories. Accessed May 2, 2022.\nhttps://www.nist.gov/privacy-framework/getting-started-0/perspectives-and-success-stories\n79. ACLU of New York. What You Need to Know About New York’s Temporary Ban on Facial\nRecognition in Schools. Accessed May 2, 2022.\nhttps://www.nyclu.org/en/publications/what-you-need-know-about-new-yorks-temporary-ban-facial\xad\nrecognition-schools\n80. New York State Assembly. Amendment to Education Law. Enacted Dec. 22, 2020.\nhttps://nyassembly.gov/leg/?default_fld=&leg_video=&bn=S05140&term=2019&Summary=Y&Text=Y\n81. U.S Department of Labor. Labor-Management Reporting and Disclosure Act of 1959, As Amended.\nhttps://www.dol.gov/agencies/olms/laws/labor-management-reporting-and-disclosure-act (Section\n203). See also: U.S Department of Labor. Form LM-10. OLMS Fact Sheet, Accessed May 2, 2022. https://\nwww.dol.gov/sites/dolgov/files/OLMS/regs/compliance/LM-10_factsheet.pdf\n82. See, e.g., Apple. Protecting the User’s Privacy. Accessed May 2, 2022.\nhttps://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy; Google Developers.\nDesign for Safety: Android is secure by default and private by design. Accessed May 3, 2022.\nhttps://developer.android.com/design-for-safety\n83. Karen Hao. The coming war on the hidden algorithms that trap people in poverty. MIT Tech Review.\nDec. 4, 2020.\nhttps://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers\xad\nfight-back/\n84. Anjana Samant, Aaron Horowitz, Kath Xu, and Sophie Beiers. Family Surveillance by Algorithm.\nACLU. Accessed May 2, 2022.\nhttps://www.aclu.org/fact-sheet/family-surveillance-algorithm\n70\n']","The concerns with surveillance technology in education and healthcare include its invasive nature, potential for discrimination, and the disproportionate harm it may cause to disabled individuals. 
Specifically, new surveillance technologies can monitor students in ways that may violate their privacy and exacerbate existing inequalities, particularly for those with disabilities.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 68, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}, {'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 69, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the options for high-priority AI risks and their link to org tolerance?,"[' \n40 \nMANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and \ndocumented. Risk response options can include mitigating, transferring, avoiding, or accepting. \nAction ID \nSuggested Action \nGAI Risks \nMG-1.3-001 \nDocument trade-offs, decision processes, and relevant measurement and \nfeedback results for risks that do not surpass organizational risk tolerance, for \nexample, in the context of model release: Consider different approaches for \nmodel release, for example, leveraging a staged release approach. Consider \nrelease approaches in the context of the model and its projected use cases. \nMitigate, transfer, or avoid risks that surpass organizational risk tolerances. \nInformation Security \nMG-1.3-002 \nMonitor the robustness and effectiveness of risk controls and mitigation plans \n(e.g., via red-teaming, field testing, participatory engagements, performance \nassessments, user feedback mechanisms). \nHuman-AI Configuration \nAI Actor Tasks: AI Development, AI Deployment, AI Impact Assessment, Operation and Monitoring \n \nMANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.2-001 \nCompare GAI system outputs against pre-defined organization risk tolerance, \nguidelines, and principles, and review and test AI-generated content against \nthese guidelines. \nCBRN Information or Capabilities; \nObscene, Degrading, and/or \nAbusive Content; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content \nMG-2.2-002 \nDocument training data sources to trace the origin and provenance of AI-\ngenerated content. \nInformation Integrity \nMG-2.2-003 \nEvaluate feedback loops between GAI system content provenance and human \nreviewers, and update where needed. Implement real-time monitoring systems \nto affirm that content provenance protocols remain effective. \nInformation Integrity \nMG-2.2-004 \nEvaluate GAI content and data for representational biases and employ \ntechniques such as re-sampling, re-ranking, or adversarial training to mitigate \nbiases in the generated content. 
\nInformation Security; Harmful Bias \nand Homogenization \nMG-2.2-005 \nEngage in due diligence to analyze GAI output for harmful content, potential \nmisinformation, and CBRN-related or NCII content. \nCBRN Information or Capabilities; \nObscene, Degrading, and/or \nAbusive Content; Harmful Bias and \nHomogenization; Dangerous, \nViolent, or Hateful Content \n']","The options for high-priority AI risks include mitigating, transferring, avoiding, or accepting these risks. Specifically, for risks that do not surpass organizational risk tolerance, it is suggested to document trade-offs, decision processes, and relevant measurement and feedback results. For risks that surpass organizational risk tolerances, the recommended actions are to mitigate, transfer, or avoid those risks.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 43, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How does Navigator training relate to health coverage access?,"["" \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nHealthcare “navigators” help people find their way through online signup forms to choose \nand obtain healthcare. A Navigator is “an individual or organization that's trained and able to help \nconsumers, small businesses, and their employees as they look for health coverage options through the \nMarketplace (a government web site), including completing eligibility and enrollment forms.”106 For \nthe 2022 plan year, the Biden-Harris Administration increased funding so that grantee organizations could \n“train and certify more than 1,500 Navigators to help uninsured consumers find affordable and comprehensive \nhealth coverage.”107\nThe customer service industry has successfully integrated automated services such as \nchat-bots and AI-driven call response systems with escalation to a human support \nteam.108 Many businesses now use partially automated customer service platforms that help answer customer \nquestions and compile common problems for human agents to review. These integrated human-AI \nsystems allow companies to provide faster customer care while maintaining human agents to answer \ncalls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to \nsuccessful customer service.109\nBallot curing laws in at least 24 states require a fallback system that allows voters to \ncorrect their ballot and have it counted in the case that a voter signature matching \nalgorithm incorrectly flags their ballot as invalid or there is another issue with their \nballot, and review by an election official does not rectify the problem. 
Some federal \ncourts have found that such cure procedures are constitutionally required.110 \nBallot \ncuring processes vary among states, and include direct phone calls, emails, or mail contact by election \nofficials.111 Voters are asked to provide alternative information or a new signature to verify the validity of their \nballot. \n52\n""]","Navigator training is related to health coverage access as it equips individuals or organizations to help consumers, small businesses, and their employees navigate the process of finding and obtaining health coverage options through the Marketplace. This training enables Navigators to assist with completing eligibility and enrollment forms, thereby facilitating access to affordable and comprehensive health coverage for uninsured consumers.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 51, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What connects NIST's AI Safety Institute to AI bias mgmt?,"[' \n \n \nAbout AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \nai-inquiries@nist.gov \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. 
Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency. \n \n', ' \n57 \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix B: \nHow AI Risks Differ from Traditional Software Risks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B \nNational Institute of Standards and Technology (2023) AI RMF Playbook. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook \nNational Institue of Standards and Technology (2023) Framing Risk \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/1-sec-risk \nNational Institute of Standards and Technology (2023) The Language of Trustworthy AI: An In-Depth \nGlossary of Terms https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary \nNational Institue of Standards and Technology (2022) Towards a Standard for Identifying and Managing \nBias in Artificial Intelligence https://www.nist.gov/publications/towards-standard-identifying-and-\nmanaging-bias-artificial-intelligence \nNorthcutt, C. et al. (2021) Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. \narXiv. https://arxiv.org/pdf/2103.14749 \nOECD (2023) ""Advancing accountability in AI: Governing and managing risks throughout the lifecycle for \ntrustworthy AI"", OECD Digital Economy Papers, No. 349, OECD Publishing, Paris. \nhttps://doi.org/10.1787/2448f04b-en \nOECD (2024) ""Defining AI incidents and related terms"" OECD Artificial Intelligence Papers, No. 16, OECD \nPublishing, Paris. https://doi.org/10.1787/d1a8d965-en \nOpenAI (2023) GPT-4 System Card. https://cdn.openai.com/papers/gpt-4-system-card.pdf \nOpenAI (2024) GPT-4 Technical Report. https://arxiv.org/pdf/2303.08774 \nPadmakumar, V. et al. (2024) Does writing with language models reduce content diversity? ICLR. \nhttps://arxiv.org/pdf/2309.05196 \nPark, P. et. al. (2024) AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5). \narXiv. https://arxiv.org/pdf/2308.14752 \nPartnership on AI (2023) Building a Glossary for Synthetic Media Transparency Methods, Part 1: Indirect \nDisclosure. https://partnershiponai.org/glossary-for-synthetic-media-transparency-methods-part-1-\nindirect-disclosure/ \nQu, Y. et al. (2023) Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-\nTo-Image Models. arXiv. https://arxiv.org/pdf/2305.13873 \nRafat, K. et al. (2023) Mitigating carbon footprint for knowledge distillation based deep learning model \ncompression. PLOS One. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285668 \nSaid, I. et al. (2022) Nonconsensual Distribution of Intimate Images: Exploring the Role of Legal Attitudes \nin Victimization and Perpetration. Sage. \nhttps://journals.sagepub.com/doi/full/10.1177/08862605221122834#bibr47-08862605221122834 \nSandbrink, J. (2023) Artificial intelligence and biological misuse: Differentiating risks of language models \nand biological design tools. arXiv. 
https://arxiv.org/pdf/2306.13952 \n']",The answer to given question is not present in context,reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 2, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 60, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What ensures automated systems are safe and fair?,"[' AI BILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. Automated systems should be \ncommunities, stakeholders, and domain experts to identify \nSystems should undergo pre-deployment testing, risk \nthat demonstrate they are safe and effective based on \nincluding those beyond the intended use, and adherence to \nprotective measures should include the possibility of not \nAutomated systems should not be designed with an intent \nreasonably foreseeable possibility of endangering your safety or the safety of your community. They should \nstemming from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION TITLE\nBLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped with consultation from diverse \nconcerns, risks, and potential impacts of the system. \nidentification and mitigation, and ongoing monitoring \ntheir intended use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of these \ndeploying the system or removing a system from use. \nor \nbe designed to proactively protect you from harms \nimpacts of automated systems. You should be protected from inappropriate or irrelevant data use in the \ndesign, development, and deployment of automated systems, and from the compounded harm of its reuse. \nIndependent evaluation and reporting that confirms that the system is safe and effective, including reporting of \nsteps taken to mitigate potential harms, should be performed and the results made public whenever possible. \nALGORITHMIC DISCRIMINATION PROTECTIONS\nYou should not face discrimination by algorithms and systems should be used and designed in \nan equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified \ndifferent treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including \npregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other \nclassification protected by law. Depending on the specific circumstances, such algorithmic discrimination \nmay violate legal protections. 
Designers, developers, and deployers of automated systems should take \nproactive \nand \ncontinuous \nmeasures \nto \nprotect \nindividuals \nand \ncommunities \nfrom algorithmic \ndiscrimination and to use and design systems in an equitable way. This protection should include proactive \nequity assessments as part of the system design, use of representative data and protection against proxies \nfor demographic features, ensuring accessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent \nevaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5\n']","Automated systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to ensure they are safe and effective. They should be developed with consultation from diverse communities, stakeholders, and domain experts, and should include protective measures to prevent endangering safety. Additionally, independent evaluation and reporting that confirms the system's safety and effectiveness should be performed, with results made public whenever possible.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 4, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What issues come from biased automated systems in hiring and justice?,"["" \n \n \n \n \n \n \n \nAlgorithmic \nDiscrimination \nProtections \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere is extensive evidence showing that automated systems can produce inequitable outcomes and amplify \nexisting inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of \nconsequences. For example, facial recognition technology that can contribute to wrongful and discriminatory \narrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount \nthe severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. 
Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design \nautomated systems in an equitable way. The guardrails protecting the public from discrimination in their daily \nlives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to \nensure that all people are treated fairly when automated systems are used. This includes all dimensions of their \nlives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal \njustice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that \nautomated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was\nused by more than 500 universities across the country. The model was found to use race directly as a predictor,\nand also shown to have large disparities by race; Black students were as many as four times as likely as their\notherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors \nto guide students towards or away from majors, and some worry that they are being used to guide\nBlack students away from math and science subjects.34\n•\nA risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed\nevidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the\ngeneral recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the\nviolent recidivism tools. The Department of Justice is working to reduce these disparities and has\npublicly released a report detailing its review of the tool.35 \n24\n""]","Biased automated systems in hiring can lead to discriminatory decisions, such as hiring tools that reject women applicants for spurious reasons, penalizing resumes with the word 'women’s'. 
In the justice system, predictive models can disproportionately label Black students as high risk of dropping out, and risk assessment tools can overpredict recidivism for some groups of color, leading to unfair treatment and outcomes.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 23, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What ensures independent eval & reporting for system safety?,"[' \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through \nthe use of algorithms, such as data derived or inferred from prior model outputs, should be identified and \ntracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk \ninputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be care\xad\nfully validated against the risk of collateral consequences. \nData reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result \nin the spreading and scaling of harms. Data from some domains, including criminal justice data and data indi\xad\ncating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in \nsome cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure \nsafety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal \nmatters or private sector use) should only occur where use of such data is legally authorized and, after examina\xad\ntion, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reason\xad\nable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to \nidentify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for \nreplacing individual-level sensitive data. \nDemonstrate the safety and effectiveness of the system \nIndependent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., \nvia application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. 
Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the \norganization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency, \nresults, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20\n']","Independent evaluation for system safety is ensured by designing automated systems to allow for independent evaluation through mechanisms such as application programming interfaces. Independent evaluators, including researchers, journalists, ethics review boards, inspectors general, and third-party auditors, should have access to the system and samples of associated data, consistent with privacy, security, law, or regulation. Additionally, entities responsible for automated systems should provide regularly-updated reports that include an overview of the system, data used, risk assessments, performance testing results, and independent evaluation outcomes, all presented in plain language and a machine-readable format.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How does public input influence the AI Bill of Rights?,"[' \n \n \nABOUT THIS FRAMEWORK\xad\xad\xad\xad\xad\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the \ndesign, use, and deployment of automated systems to protect the rights of the American public in the age of \nartificial intel-ligence. 
Developed through extensive consultation with the American public, these principles are \na blueprint for building and deploying automated systems that are aligned with democratic values and protect \ncivil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights includes this Foreword, the five \nprinciples, notes on Applying the The Blueprint for an AI Bill of Rights, and a Technical Companion that gives \nconcrete steps that can be taken by many kinds of organizations—from governments at all levels to companies of \nall sizes—to uphold these values. Experts from across the private sector, governments, and international \nconsortia have published principles and frameworks to guide the responsible use of automated systems; this \nframework provides a national values statement and toolkit that is sector-agnostic to inform building these \nprotections into policy, practice, or the technological design process. Where existing law or policy—such as \nsector-specific privacy laws and oversight requirements—do not already provide guidance, the Blueprint for an \nAI Bill of Rights should be used to inform policy decisions.\nLISTENING TO THE AMERICAN PUBLIC\nThe White House Office of Science and Technology Policy has led a year-long process to seek and distill input \nfrom people across the country—from impacted communities and industry stakeholders to technology develop-\ners and other experts across fields and sectors, as well as policymakers throughout the Federal government—on \nthe issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listen-\ning sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized \nemail address, people throughout the United States, public servants across Federal agencies, and members of the \ninternational community spoke up about both the promises and potential harms of these technologies, and \nplayed a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these \ndiscussions include that AI has transformative potential to improve Americans’ lives, and that preventing the \nharms of these technologies is both necessary and achievable. The Appendix includes a full list of public engage-\nments. \n4\n']","Public input influences the AI Bill of Rights by providing insights and feedback from impacted communities, industry stakeholders, technology developers, and experts. 
The White House Office of Science and Technology Policy conducted a year-long process to gather this input through various means, including panel discussions and public listening sessions, which helped shape the principles and practices outlined in the Blueprint for an AI Bill of Rights.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 3, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What issues arise from hidden criteria changes in benefit allocation?,"[' \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \n•\nA predictive policing system claimed to identify individuals at greatest risk to commit or become the victim of\ngun violence (based on automated analysis of social ties to gang members, criminal histories, previous experi\xad\nences of gun violence, and other factors) and led to individuals being placed on a watch list with no\nexplanation or public transparency regarding how the system came to its conclusions.85 Both police and\nthe public deserve to understand why and how such a system is making these determinations.\n•\nA system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry\nerrors and other system flaws. These flaws were only revealed when an explanation of the system\nwas demanded and produced.86 The lack of an explanation made it harder for errors to be corrected in a\ntimely manner.\n42\n']","Issues arising from hidden criteria changes in benefit allocation include individuals being denied benefits due to data entry errors and other system flaws, which were only revealed when an explanation of the system was demanded. The lack of transparency made it harder for errors to be corrected in a timely manner.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 41, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What IP risks come from GAI using copyrighted works and data poisoning?,"[' \n11 \nvalue chain (e.g., data inputs, processing, GAI training, or deployment environments), conventional \ncybersecurity practices may need to adapt or evolve. \nFor instance, prompt injection involves modifying what input is provided to a GAI system so that it \nbehaves in unintended ways. In direct prompt injections, attackers might craft malicious prompts and \ninput them directly to a GAI system, with a variety of downstream negative consequences to \ninterconnected systems. 
Indirect prompt injection attacks occur when adversaries remotely (i.e., without \na direct interface) exploit LLM-integrated applications by injecting prompts into data likely to be \nretrieved. Security researchers have already demonstrated how indirect prompt injections can exploit \nvulnerabilities by stealing proprietary data or running malicious code remotely on a machine. Merely \nquerying a closed production model can elicit previously undisclosed information about that model. \nAnother cybersecurity risk to GAI is data poisoning, in which an adversary compromises a training \ndataset used by a model to manipulate its outputs or operation. Malicious tampering with data or parts \nof the model could exacerbate risks associated with GAI system outputs. \nTrustworthy AI Characteristics: Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n2.10. \nIntellectual Property \nIntellectual property risks from GAI systems may arise where the use of copyrighted works is not a fair \nuse under the fair use doctrine. If a GAI system’s training data included copyrighted material, GAI \noutputs displaying instances of training data memorization (see Data Privacy above) could infringe on \ncopyright. \nHow GAI relates to copyright, including the status of generated content that is similar to but does not \nstrictly copy work protected by copyright, is currently being debated in legal fora. Similar discussions are \ntaking place regarding the use or emulation of personal identity, likeness, or voice without permission. \nTrustworthy AI Characteristics: Accountable and Transparent, Fair with Harmful Bias Managed, Privacy \nEnhanced \n2.11. \nObscene, Degrading, and/or Abusive Content \nGAI can ease the production of and access to illegal non-consensual intimate imagery (NCII) of adults, \nand/or child sexual abuse material (CSAM). GAI-generated obscene, abusive or degrading content can \ncreate privacy, psychological and emotional, and even physical harms, and in some cases may be illegal. \nGenerated explicit or obscene AI content may include highly realistic “deepfakes” of real individuals, \nincluding children. The spread of this kind of material can have downstream negative consequences: in \nthe context of CSAM, even if the generated images do not resemble specific individuals, the prevalence \nof such images can divert time and resources from efforts to find real-world victims. Outside of CSAM, \nthe creation and spread of NCII disproportionately impacts women and sexual minorities, and can have \nsubsequent negative consequences including decline in overall mental health, substance abuse, and \neven suicidal thoughts. \nData used for training GAI models may unintentionally include CSAM and NCII. A recent report noted \nthat several commonly used GAI training datasets were found to contain hundreds of known images of \n']","Intellectual property risks from GAI systems may arise where the use of copyrighted works is not a fair use under the fair use doctrine. If a GAI system’s training data included copyrighted material, GAI outputs displaying instances of training data memorization could infringe on copyright. 
Additionally, data poisoning poses a risk where an adversary compromises a training dataset used by a model to manipulate its outputs or operation, potentially leading to malicious tampering with data or parts of the model.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 14, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What ensures human oversight in automated voting signatures?,"[' \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nThere are many reasons people may prefer not to use an automated system: the system can be flawed and can lead to \nunintended outcomes; it may reinforce bias or be inaccessible; it may simply be inconvenient or unavailable; or it may \nreplace a paper or manual process to which people had grown accustomed. Yet members of the public are often \npresented with no alternative, or are forced to endure a cumbersome process to reach a human decision-maker once \nthey decide they no longer want to deal exclusively with the automated system or be impacted by its results. As a result \nof this lack of human reconsideration, many receive delayed access, or lose access, to rights, opportunities, benefits, \nand critical services. The American public deserves the assurance that, when rights, opportunities, or access are \nmeaningfully at stake and there is a reasonable expectation of an alternative to an automated system, they can conve\xad\nniently opt out of an automated system and will not be disadvantaged for that choice. In some cases, such a human or \nother alternative may be required by law, for example it could be required as “reasonable accommodations” for people \nwith disabilities. \nIn addition to being able to opt out and use a human alternative, the American public deserves a human fallback \nsystem in the event that an automated system fails or causes harm. No matter how rigorously an automated system is \ntested, there will always be situations for which the system fails. The American public deserves protection via human \nreview against these outlying or unexpected scenarios. In the case of time-critical systems, the public should not have \nto wait—immediate human consideration and fallback should be available. In many time-critical systems, such a \nremedy is already immediately available, such as a building manager who can open a door in the case an automated \ncard access system fails. \nIn the criminal justice system, employment, education, healthcare, and other sensitive domains, automated systems \nare used for many purposes, from pre-trial risk assessments and parole decisions to technologies that help doctors \ndiagnose disease. Absent appropriate safeguards, these technologies can lead to unfair, inaccurate, or dangerous \noutcomes. These sensitive domains require extra protections. It is critically important that there is extensive human \noversight in such settings. 
\nThese critical protections have been adopted in some scenarios. Where automated systems have been introduced to \nprovide the public access to government benefits, existing human paper and phone-based processes are generally still \nin place, providing an important alternative to ensure access. Companies that have introduced automated call centers \noften retain the option of dialing zero to reach an operator. When automated identity controls are in place to board an \nairplane or enter the country, there is a person supervising the systems who can be turned to for help or to appeal a \nmisidentification. \nThe American people deserve the reassurance that such procedures are in place to protect their rights, opportunities, \nand access. People make mistakes, and a human alternative or fallback mechanism will not always have the right \nanswer, but they serve as an important check on the power and validity of automated systems. \n• An automated signature matching system is used as part of the voting process in many parts of the country to\ndetermine whether the signature on a mail-in ballot matches the signature on file. These signature matching\nsystems are less likely to work correctly for some voters, including voters with mental or physical\ndisabilities, voters with shorter or hyphenated names, and voters who have changed their name.97 A human\ncuring process,98 which helps voters to confirm their signatures and correct other voting mistakes, is\nimportant to ensure all votes are counted,99 and it is already standard practice in much of the country for\nboth an election official and the voter to have the opportunity to review and correct any such issues.100 \n47\n']","A human curing process helps voters confirm their signatures and correct other voting mistakes, ensuring that all votes are counted. This process is already standard practice in much of the country, allowing both an election official and the voter to review and correct any issues.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 46, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do algorithmic impact assessments relate to automated system transparency?,"["" \n \n \n \n \n \nNOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and \nexplanations as to how and why a decision was made or an action was taken by the system. These expectations are \nexplained below. \nProvide clear, timely, understandable, and accessible notice of use and explanations \xad\nGenerally accessible plain language documentation. The entity responsible for using the automated \nsystem should ensure that documentation describing the overall system (including any human components) is \npublic and easy to find. 
The documentation should describe, in plain language, how the system works and how \nany automated component is used to determine an action or decision. It should also include expectations about \nreporting described throughout this framework, such as the algorithmic impact assessments described as \npart of Algorithmic Discrimination Protections. \nAccountable. Notices should clearly identify the entity responsible for designing each component of the \nsystem and the entity using it. \nTimely and up-to-date. Users should receive notice of the use of automated systems in advance of using or \nwhile being impacted by the technology. An explanation should be available with the decision itself, or soon \nthereafter. Notice should be kept up-to-date and people impacted by the system should be notified of use case \nor key functionality changes. \nBrief and clear. Notices and explanations should be assessed, such as by research on users’ experiences, \nincluding user testing, to ensure that the people using or impacted by the automated system are able to easily \nfind notices and explanations, read them quickly, and understand and act on them. This includes ensuring that \nnotices and explanations are accessible to users with disabilities and are available in the language(s) and read-\ning level appropriate for the audience. Notices and explanations may need to be available in multiple forms, \n(e.g., on paper, on a physical sign, or online), in order to meet these expectations and to be accessible to the \nAmerican public. \nProvide explanations as to how and why a decision was made or an action was taken by an \nautomated system \nTailored to the purpose. Explanations should be tailored to the specific purpose for which the user is \nexpected to use the explanation, and should clearly state that purpose. An informational explanation might \ndiffer from an explanation provided to allow for the possibility of recourse, an appeal, or one provided in the \ncontext of a dispute or contestation process. For the purposes of this framework, 'explanation' should be \nconstrued broadly. An explanation need not be a plain-language statement about causality but could consist of \nany mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the \nstated purpose. Tailoring should be assessed (e.g., via user experience research). \nTailored to the target of the explanation. Explanations should be targeted to specific audiences and \nclearly state that audience. An explanation provided to the subject of a decision might differ from one provided \nto an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience \nresearch). 
\n43\n""]",The answer to given question is not present in context,reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 42, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What knowledge and security factors should be noted for GAI deployment?,"[' \n31 \nMS-2.3-004 \nUtilize a purpose-built testing environment such as NIST Dioptra to empirically \nevaluate GAI trustworthy characteristics. \nCBRN Information or Capabilities; \nData Privacy; Confabulation; \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, TEVV \n \nMEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the \nconditions under which the technology was developed are documented. \nAction ID \nSuggested Action \nRisks \nMS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-\nsystematic, and anecdotal assessments. \nHuman-AI Configuration; \nConfabulation \nMS-2.5-002 \nDocument the extent to which human domain knowledge is employed to \nimprove GAI system performance, via, e.g., RLHF, fine-tuning, retrieval-\naugmented generation, content moderation, business rules. \nHuman-AI Configuration \nMS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-\ndeployment risk measurement and ongoing monitoring activities. \nConfabulation \nMS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, \nmentions of human feelings, cyborg imagery or motifs) in GAI system interfaces. Human-AI Configuration \nMS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning \nor retrieval-augmented generation data is grounded. \nInformation Integrity \nMS-2.5-006 \nRegularly review security and safety guardrails, especially if the GAI system is \nbeing operated in novel circumstances. This includes reviewing reasons why the \nGAI system was initially assessed as being safe to deploy. 
\nInformation Security; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: Domain Experts, TEVV \n \n']","The context mentions several knowledge and security factors for GAI deployment, including the need to document the extent of human domain knowledge employed to improve GAI system performance, verify sources and citations in GAI system outputs, track instances of anthropomorphization in GAI system interfaces, verify GAI system training data and TEVV data provenance, and regularly review security and safety guardrails, especially in novel circumstances.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 34, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How do security measures relate to info integrity?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. 
\nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']",The answer to given question is not present in context,reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What links are there between tech protections and the AI Bill of Rights?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n- \nUSING THIS TECHNICAL COMPANION\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, \nuse, and deployment of automated systems to protect the rights of the American public in the age of artificial \nintelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and \nprovides examples and concrete steps for communities, industry, governments, and others to take in order to \nbuild these protections into policy, practice, or the technological design process. \nTaken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help \nguard the American public against many of the potential and actual harms identified by researchers, technolo\xad\ngists, advocates, journalists, policymakers, and communities in the United States and around the world. This \ntechnical companion is intended to be used as a reference by people across many circumstances – anyone \nimpacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to \ngovern the use of an automated system. \nEach principle is accompanied by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \nThis section provides a brief summary of the problems that the principle seeks to address and protect against, including \nillustrative examples. \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems are meant to serve as a blueprint for the development of additional technical\nstandards and practices that should be tailored for particular sectors and contexts.\n• This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The \nexpectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing \nmonitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer \nconcrete directions for how those changes can be made. \n• Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can \nbe provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should \nbe made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law \nenforcement, or national security considerations may prevent public release. 
Where public reports are not possible, the \ninformation should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguard \ning individuals’ rights. These reporting expectations are important for transparency, so the American people can have\nconfidence that their rights, opportunities, and access as well as their expectations about technologies are respected. \n3\nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: \nThis section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. \nIt describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help \nprovide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these \nprocesses require the cooperation of and collaboration among industry, civil society, researchers, policymakers, \ntechnologists, and the public. \n14\n']",The context does not provide specific links between tech protections and the AI Bill of Rights.,reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 13, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True Which NSF programs ensure automated system safety and compliance?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\nSome U.S government agencies have developed specific frameworks for ethical use of AI \nsystems. The Department of Energy (DOE) has activated the AI Advancement Council that oversees coordina-\ntion and advises on implementation of the DOE AI Strategy and addresses issues and/or escalations on the \nethical use and development of AI systems.20 The Department of Defense has adopted Artificial Intelligence \nEthical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its national \nsecurity and defense activities.21 Similarly, the U.S. Intelligence Community (IC) has developed the Principles \nof Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to \ndevelop and use AI in furtherance of the IC\'s mission, as well as an AI Ethics Framework to help implement \nthese principles.22\nThe National Science Foundation (NSF) funds extensive research to help foster the \ndevelopment of automated systems that adhere to and advance their safety, security and \neffectiveness. 
Multiple NSF programs support research that directly addresses many of these principles: \nthe National AI Research Institutes23 support research on all aspects of safe, trustworthy, fair, and explainable \nAI algorithms and systems; the Cyber Physical Systems24 program supports research on developing safe \nautonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace25 \nprogram supports research on cybersecurity and privacy enhancing technologies in automated systems; the \nFormal Methods in the Field26 program supports research on rigorous formal verification and analysis of \nautomated systems and machine learning, and the Designing Accountable Software Systems27 program supports \nresearch on rigorous and reproducible methodologies for developing software systems with legal and regulatory \ncompliance in mind. \nSome state legislatures have placed strong transparency and validity requirements on \nthe use of pretrial risk assessments. The use of algorithmic pretrial risk assessments has been a \ncause of concern for civil rights groups.28 Idaho Code Section 19-1910, enacted in 2019,29 requires that any \npretrial risk assessment, before use in the state, first be ""shown to be free of bias against any class of \nindividuals protected from discrimination by state or federal law"", that any locality using a pretrial risk \nassessment must first formally validate the claim of its being free of bias, that ""all documents, records, and \ninformation used to build or validate the risk assessment shall be open to public inspection,"" and that assertions \nof trade secrets cannot be used ""to quash discovery in a criminal matter by a party to a criminal case."" \n22\n']","The NSF programs that ensure automated system safety and compliance include the National AI Research Institutes, which support research on safe, trustworthy, fair, and explainable AI algorithms and systems; the Cyber Physical Systems program, which supports research on developing safe autonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace program, which supports research on cybersecurity and privacy enhancing technologies in automated systems; the Formal Methods in the Field program, which supports research on rigorous formal verification and analysis of automated systems and machine learning; and the Designing Accountable Software Systems program, which supports research on rigorous and reproducible methodologies for developing software systems with legal and regulatory compliance in mind.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 21, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What drives the need for human input in sensitive automated systems?,"['You should be able to opt out, where appropriate, and \nhave access to a person who can quickly consider and \nremedy problems you encounter. You should be able to opt \nout from automated systems in favor of a human alternative, where \nappropriate. 
Appropriateness should be determined based on rea\xad\nsonable expectations in a given context and with a focus on ensuring \nbroad accessibility and protecting the public from especially harm\xad\nful impacts. In some cases, a human or other alternative may be re\xad\nquired by law. You should have access to timely human consider\xad\nation and remedy by a fallback and escalation process if an automat\xad\ned system fails, it produces an error, or you would like to appeal or \ncontest its impacts on you. Human consideration and fallback \nshould be accessible, equitable, effective, maintained, accompanied \nby appropriate operator training, and should not impose an unrea\xad\nsonable burden on the public. Automated systems with an intended \nuse within sensitive domains, including, but not limited to, criminal \njustice, employment, education, and health, should additionally be \ntailored to the purpose, provide meaningful access for oversight, \ninclude training for any people interacting with the system, and in\xad\ncorporate human consideration for adverse or high-risk decisions. \nReporting that includes a description of these human governance \nprocesses and assessment of their timeliness, accessibility, out\xad\ncomes, and effectiveness should be made public whenever possible. \nHUMAN ALTERNATIVES, CONSIDERATION\nALLBACK\nF\nAND\n, \n46\n']","The need for human input in sensitive automated systems is driven by the requirement for timely human consideration and remedy when automated systems fail, produce errors, or when individuals wish to appeal or contest the impacts of these systems. Additionally, human input is necessary to ensure that automated systems are tailored to their intended purpose, provide meaningful access for oversight, and incorporate human consideration for adverse or high-risk decisions.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 45, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What links field testing, user feedback, and GAI eval?","[' \n50 \nParticipatory Engagement Methods \nOn an ad hoc or more structured basis, organizations can design and use a variety of channels to engage \nexternal stakeholders in product development or review. Focus groups with select experts can provide \nfeedback on a range of issues. Small user studies can provide feedback from representative groups or \npopulations. Anonymous surveys can be used to poll or gauge reactions to specific features. Participatory \nengagement methods are often less structured than field testing or red teaming, and are more \ncommonly used in early stages of AI or product development. \nField Testing \nField testing involves structured settings to evaluate risks and impacts and to simulate the conditions \nunder which the GAI system will be deployed. Field style tests can be adapted from a focus on user \npreferences and experiences towards AI risks and impacts – both negative and positive. When carried \nout with large groups of users, these tests can provide estimations of the likelihood of risks and impacts \nin real world interactions. 
\nOrganizations may also collect feedback on outcomes, harms, and user experience directly from users in \nthe production environment after a model has been released, in accordance with human subject \nstandards such as informed consent and compensation. Organizations should follow applicable human \nsubjects research requirements, and best practices such as informed consent and subject compensation, \nwhen implementing feedback activities. \nAI Red-teaming \nAI red-teaming is an evolving practice that references exercises often conducted in a controlled \nenvironment and in collaboration with AI developers building AI models to identify potential adverse \nbehavior or outcomes of a GAI model or system, how they could occur, and stress test safeguards”. AI \nred-teaming can be performed before or after AI models or systems are made available to the broader \npublic; this section focuses on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming outputs is related to the background and expertise of the AI red team \nitself. Demographically and interdisciplinarily diverse AI red teams can be used to identify flaws in the \nvarying contexts where GAI will be used. For best results, AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural aspects within the deployment context. AI red-teaming results \nshould be given additional analysis before they are incorporated into organizational governance and \ndecision making, policy and procedural updates, and AI risk management efforts. \nVarious types of AI red-teaming may be appropriate, depending on the use case: \n• \nGeneral Public: Performed by general users (not necessarily AI or technical experts) who are \nexpected to use the model or interact with its outputs, and who bring their own lived \nexperiences and perspectives to the task of AI red-teaming. These individuals may have been \nprovided instructions and material to complete tasks which may elicit harmful model behaviors. \nThis type of exercise can be more effective with large groups of AI red-teamers. \n• \nExpert: Performed by specialists with expertise in the domain or specific AI red-teaming context \nof use (e.g., medicine, biotech, cybersecurity). \n• \nCombination: In scenarios when it is difficult to identify and recruit specialists with sufficient \ndomain and contextual expertise, AI red-teaming exercises may leverage both expert and \n', ' \n49 \nearly lifecycle TEVV approaches are developed and matured for GAI, organizations may use \nrecommended “pre-deployment testing” practices to measure performance, capabilities, limits, risks, \nand impacts. This section describes risk measurement and estimation as part of pre-deployment TEVV, \nand examines the state of play for pre-deployment testing methodologies. \nLimitations of Current Pre-deployment Test Approaches \nCurrently available pre-deployment TEVV processes used for GAI applications may be inadequate, non-\nsystematically applied, or fail to reflect or mismatched to deployment contexts. For example, the \nanecdotal testing of GAI system capabilities through video games or standardized tests designed for \nhumans (e.g., intelligence tests, professional licensing exams) does not guarantee GAI system validity or \nreliability in those domains. Similarly, jailbreaking or prompt engineering tests may not systematically \nassess validity or reliability risks. \nMeasurement gaps can arise from mismatches between laboratory and real-world settings. 
Current \ntesting approaches often remain focused on laboratory conditions or restricted to benchmark test \ndatasets and in silico techniques that may not extrapolate well to—or directly assess GAI impacts in real-\nworld conditions. For example, current measurement gaps for GAI make it difficult to precisely estimate \nits potential ecosystem-level or longitudinal risks and related political, social, and economic impacts. \nGaps between benchmarks and real-world use of GAI systems may likely be exacerbated due to prompt \nsensitivity and broad heterogeneity of contexts of use. \nA.1.5. Structured Public Feedback \nStructured public feedback can be used to evaluate whether GAI systems are performing as intended \nand to calibrate and verify traditional measurement methods. Examples of structured feedback include, \nbut are not limited to: \n• \nParticipatory Engagement Methods: Methods used to solicit feedback from civil society groups, \naffected communities, and users, including focus groups, small user studies, and surveys. \n• \nField Testing: Methods used to determine how people interact with, consume, use, and make \nsense of AI-generated information, and subsequent actions and effects, including UX, usability, \nand other structured, randomized experiments. \n• \nAI Red-teaming: A structured testing exercise used to probe an AI system to find flaws and \nvulnerabilities such as inaccurate, harmful, or discriminatory outputs, often in a controlled \nenvironment and in collaboration with system developers. \nInformation gathered from structured public feedback can inform design, implementation, deployment \napproval, maintenance, or decommissioning decisions. Results and insights gleaned from these exercises \ncan serve multiple purposes, including improving data quality and preprocessing, bolstering governance \ndecision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation. \n']","Field testing, user feedback, and GAI evaluation are linked through structured public feedback mechanisms that assess how GAI systems perform in real-world conditions. Field testing evaluates risks and impacts in controlled settings, while user feedback, gathered through participatory engagement methods, helps organizations understand user interactions and experiences with AI-generated information. 
Together, these approaches inform the design, implementation, and governance of GAI systems.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 53, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}, {'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 52, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What risk controls for third-party GAI in compliance?,"[' \n48 \n• Data protection \n• Data retention \n• Consistency in use of defining key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n• Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV practices \n• Secure software development practices \n• Stakeholder engagement \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \nas well as different levels of human-AI configurations can help to decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment between systems and users. These practices are just \none example of adapting existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations \nOrganizations may seek to acquire, embed, incorporate, or use open-source or proprietary third-party \nGAI models, systems, or generated data for various applications across an enterprise. Use of these GAI \ntools and inputs has implications for all functions of the organization – including but not limited to \nacquisition, human resources, legal, compliance, and IT services – regardless of whether they are carried \nout by employees or third parties. Many of the actions cited above are relevant and options for \naddressing third-party considerations. \nThird party GAI integrations may give rise to increased intellectual property, data privacy, or information \nsecurity risks, pointing to the need for clear guidelines for transparency and risk management regarding \nthe collection and use of third-party data for model inputs. Organizations may consider varying risk \ncontrols for foundation models, fine-tuned models, and embedded tools, enhanced processes for \ninteracting with external GAI technologies or service providers. 
Organizations can apply standard or \nexisting risk controls and processes to proprietary or open-source GAI technologies, data, and third-party \nservice providers, including acquisition and procurement due diligence, requests for software bills of \nmaterials (SBOMs), application of service level agreements (SLAs), and statement on standards for \nattestation engagement (SSAE) reports to help with third-party transparency and risk management for \nGAI systems. \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in which GAI systems may be developed, used, and repurposed \ncomplicates risk mapping and pre-deployment measurement efforts. Robust test, evaluation, validation, \nand verification (TEVV) processes can be iteratively applied – and documented – in early stages of the AI \nlifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous \n']","Organizations can apply standard or existing risk controls and processes to proprietary or open-source GAI technologies, data, and third-party service providers, including acquisition and procurement due diligence, requests for software bills of materials (SBOMs), application of service level agreements (SLAs), and statement on standards for attestation engagement (SSAE) reports to help with third-party transparency and risk management for GAI systems.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 51, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What ensures effective incident response for third-party GAI?,"[' \n22 \nGV-6.2-003 \nEstablish incident response plans for third-party GAI technologies: Align incident \nresponse plans with impacts enumerated in MAP 5.1; Communicate third-party \nGAI incident response plans to all relevant AI Actors; Define ownership of GAI \nincident response functions; Rehearse third-party GAI incident response plans at \na regular cadence; Improve incident response plans based on retrospective \nlearning; Review incident response plans for alignment with relevant breach \nreporting, data protection, data privacy, or other laws. \nData Privacy; Human-AI \nConfiguration; Information \nSecurity; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization \nGV-6.2-004 \nEstablish policies and procedures for continuous monitoring of third-party GAI \nsystems in deployment. \nValue Chain and Component \nIntegration \nGV-6.2-005 \nEstablish policies and procedures that address GAI data redundancy, including \nmodel weights and other system artifacts. \nHarmful Bias and Homogenization \nGV-6.2-006 \nEstablish policies and procedures to test and manage risks related to rollover and \nfallback technologies for GAI systems, acknowledging that rollover and fallback \nmay include manual processing. 
\nInformation Integrity \nGV-6.2-007 \nReview vendor contracts and avoid arbitrary or capricious termination of critical \nGAI technologies or vendor services and non-standard terms that may amplify or \ndefer liability in unexpected ways and/or contribute to unauthorized data \ncollection by vendors or third-parties (e.g., secondary data use). Consider: Clear \nassignment of liability and responsibility for incidents, GAI system changes over \ntime (e.g., fine-tuning, drift, decay); Request: Notification and disclosure for \nserious incidents arising from third-party data and systems; Service Level \nAgreements (SLAs) in vendor contracts that address incident response, response \ntimes, and availability of critical support. \nHuman-AI Configuration; \nInformation Security; Value Chain \nand Component Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV, Third-party entities \n \nMAP 1.1: Intended purposes, potentially beneficial uses, context specific laws, norms and expectations, and prospective settings in \nwhich the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users \nalong with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, \nsociety, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or \nproduct AI lifecycle; and related TEVV and system metrics. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.1-001 \nWhen identifying intended purposes, consider factors such as internal vs. \nexternal use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty \n']","Effective incident response for third-party GAI is ensured by establishing incident response plans that align with impacts, communicating these plans to relevant AI actors, defining ownership of incident response functions, rehearsing the plans regularly, improving them based on retrospective learning, and reviewing for alignment with relevant laws.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 25, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What data leaks cause privacy issues?,"[' \n4 \n1. CBRN Information or Capabilities: Eased access to or synthesis of materially nefarious \ninformation or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) \nweapons or other dangerous materials or agents. \n2. Confabulation: The production of confidently stated but erroneous or false content (known \ncolloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.6 \n3. Dangerous, Violent, or Hateful Content: Eased production of and access to violent, inciting, \nradicalizing, or threatening content as well as recommendations to carry out self-harm or \nconduct illegal activities. 
Includes difficulty controlling public exposure to hateful and disparaging \nor stereotyping content. \n4. Data Privacy: Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of \nbiometric, health, location, or other personally identifiable information or sensitive data.7 \n5. Environmental Impacts: Impacts due to high compute resource utilization in training or \noperating GAI models, and related outcomes that may adversely impact ecosystems. \n6. Harmful Bias or Homogenization: Amplification and exacerbation of historical, societal, and \nsystemic biases; performance disparities8 between sub-groups or languages, possibly due to \nnon-representative training data, that result in discrimination, amplification of biases, or \nincorrect presumptions about performance; undesired homogeneity that skews system or model \noutputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful \nbiases. \n7. Human-AI Configuration: Arrangements of or interactions between a human and an AI system \nwhich can result in the human inappropriately anthropomorphizing GAI systems or experiencing \nalgorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI \nsystems. \n8. Information Integrity: Lowered barrier to entry to generate and support the exchange and \nconsumption of content which may not distinguish fact from opinion or fiction or acknowledge \nuncertainties, or could be leveraged for large-scale dis- and mis-information campaigns. \n9. Information Security: Lowered barriers for offensive cyber capabilities, including via automated \ndiscovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber \n \n \n6 Some commenters have noted that the terms “hallucination” and “fabrication” anthropomorphize GAI, which \nitself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human \nentities. \n7 What is categorized as sensitive data or sensitive PII can be highly contextual based on the nature of the \ninformation, but examples of sensitive information include information that relates to an information subject’s \nmost intimate sphere, including political opinions, sex life, or criminal convictions. \n8 The notion of harm presumes some baseline scenario that the harmful factor (e.g., a GAI model) makes worse. \nWhen the mechanism for potential harm is a disparity between groups, it can be difficult to establish what the \nmost appropriate baseline is to compare against, which can result in divergent views on when a disparity between \nAI behaviors for different subgroups constitutes a harm. In discussing harms from disparities such as biased \nbehavior, this document highlights examples where someone’s situation is worsened relative to what it would have \nbeen in the absence of any AI system, making the outcome unambiguously a harm of the system. 
\n']","The context mentions impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or sensitive data as causes of privacy issues.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 7, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What are the risks of collecting sensitive student data?,"[' \n \n \n \n \n \n \nDATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\n•\nContinuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep\napnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the\ndevice based on usage data. Patients were not aware that the data would be used in this way or monitored\nby anyone other than their doctor.70 \n•\nA department store company used predictive analytics applied to collected consumer data to determine that a\nteenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her\nhouse, revealing to her father that she was pregnant.71\n•\nSchool audio surveillance systems monitor student conversations to detect potential ""stress indicators"" as\na warning of potential violence.72 Online proctoring systems claim to detect if a student is cheating on an\nexam using biometric markers.73 These systems have the potential to limit student freedom to express a range\nof emotions at school and may inappropriately flag students with disabilities who need accommodations or\nuse screen readers or dictation software as cheating.74\n•\nLocation data, acquired from a data broker, can be used to identify people who visit abortion clinics.75\n•\nCompanies collect student data such as demographic information, free or reduced lunch status, whether\nthey\'ve used drugs, or whether they\'ve expressed interest in LGBTQI+ groups, and then use that data to \nforecast student success.76 Parents and education experts have expressed concern about collection of such\nsensitive data without express parental consent, the lack of transparency in how such data is being used, and\nthe potential for resulting discriminatory impacts.\n• Many employers transfer employee data to third party job verification services. This information is then used\nby potential future employers, banks, or landlords. In one case, a former employee alleged that a\ncompany supplied false data about her job title which resulted in a job offer being revoked.77\n37\n']","The risks of collecting sensitive student data include concerns about the lack of express parental consent, the lack of transparency in how the data is being used, and the potential for resulting discriminatory impacts. 
Additionally, the data collected can include sensitive information such as demographic details, drug use, and interest in LGBTQI+ groups, which may lead to inappropriate forecasting of student success and flagging of students with disabilities as cheating.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 36, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do AI red-teaming and stakeholder engagement connect in privacy risk assessment?,"[' \n35 \nMEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as \nidentified in the MAP function – to inform responsible use and governance. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.9-001 \nApply and document ML explanation results such as: Analysis of embeddings, \nCounterfactual prompts, Gradient-based attributions, Model \ncompression/surrogate models, Occlusion/term reduction. \nConfabulation \nMS-2.9-002 \nDocument GAI model details including: Proposed use and organizational value; \nAssumptions and limitations, Data collection methodologies; Data provenance; \nData quality; Model architecture (e.g., convolutional neural network, \ntransformers, etc.); Optimization objectives; Training algorithms; RLHF \napproaches; Fine-tuning or retrieval-augmented generation approaches; \nEvaluation data; Ethical considerations; Legal and regulatory requirements. \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.10-001 \nConduct AI red-teaming to assess issues such as: Outputting of training data \nsamples, and subsequent reverse engineering, model extraction, and \nmembership inference risks; Revealing biometric, confidential, copyrighted, \nlicensed, patented, personal, proprietary, sensitive, or trade-marked information; \nTracking or revealing location information of users or members of training \ndatasets. \nHuman-AI Configuration; \nInformation Integrity; Intellectual \nProperty \nMS-2.10-002 \nEngage directly with end-users and other stakeholders to understand their \nexpectations and concerns regarding content provenance. Use this feedback to \nguide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \n']","AI red-teaming and stakeholder engagement connect in privacy risk assessment by engaging directly with end-users and other stakeholders to understand their expectations and concerns regarding content provenance. 
This feedback is then used to guide the design of provenance data-tracking techniques, which is essential for addressing privacy risks identified during AI red-teaming assessments.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 38, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What connects attack surfaces to system and data risks?,"[' \n5 \noperations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may \ncompromise a system’s availability or the confidentiality or integrity of training data, code, or \nmodel weights. \n10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or \nlicensed content without authorization (possibly in situations which do not fall under fair use); \neased exposure of trade secrets; or plagiarism or illegal replication. \n11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, \ndegrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse \nmaterial (CSAM), and nonconsensual intimate images (NCII) of adults. \n12. Value Chain and Component Integration: Non-transparent or untraceable integration of \nupstream third-party components, including data that has been improperly obtained or not \nprocessed and cleaned due to increased automation from GAI; improper supplier vetting across \nthe AI lifecycle; or other issues that diminish transparency or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities \nIn the future, GAI may enable malicious actors to more easily access CBRN weapons and/or relevant \nknowledge, information, materials, tools, or technologies that could be misused to assist in the design, \ndevelopment, production, or use of CBRN weapons or other dangerous materials or agents. While \nrelevant biological and chemical threat knowledge and information is often publicly accessible, LLMs \ncould facilitate its analysis or synthesis, particularly by individuals without formal scientific training or \nexpertise. \nRecent research on this topic found that LLM outputs regarding biological threat creation and attack \nplanning provided minimal assistance beyond traditional search engine queries, suggesting that state-of-\nthe-art LLMs at the time these studies were conducted do not substantially increase the operational \nlikelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. 
\nFurthermore, chemical and biological design tools (BDTs) – highly specialized AI systems trained on \nscientific data that aid in chemical and biological design – may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents. \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable \n']","The context discusses increased attack surfaces for targeted cyberattacks, which may compromise a system's availability or the confidentiality or integrity of training data, code, or model weights. This connection indicates that as attack surfaces increase, the risks to systems and data also escalate.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 8, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What laws show data privacy principles in action?,"[' \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \nDATA PRIVACY \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe Privacy Act of 1974 requires privacy protections for personal information in federal \nrecords systems, including limits on data retention, and also provides individuals a general \nright to access and correct their data. Among other things, the Privacy Act limits the storage of individual \ninformation in federal systems of records, illustrating the principle of limiting the scope of data retention. Under \nthe Privacy Act, federal agencies may only retain data about an individual that is “relevant and necessary” to \naccomplish an agency’s statutory purpose or to comply with an Executive Order of the President. The law allows \nfor individuals to be able to access any of their individual information stored in a federal system of records, if not \nincluded under one of the systems of records exempted pursuant to the Privacy Act. In these cases, federal agen\xad\ncies must provide a method for an individual to determine if their personal information is stored in a particular \nsystem of records, and must provide procedures for an individual to contest the contents of a record about them. \nFurther, the Privacy Act allows for a cause of action for an individual to seek legal relief if a federal agency does not \ncomply with the Privacy Act’s requirements. 
Among other things, a court may order a federal agency to amend or \ncorrect an individual’s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, \nor incomplete record results in an adverse determination about an individual’s “qualifications, character, rights, … \nopportunities…, or benefits.” \nNIST’s Privacy Framework provides a comprehensive, detailed and actionable approach for \norganizations to manage privacy risks. The NIST Framework gives organizations ways to identify and \ncommunicate their privacy risks and goals to support ethical decision-making in system, product, and service \ndesign or deployment, as well as the measures they are taking to demonstrate compliance with applicable laws \nor regulations. It has been voluntarily adopted by organizations across many different sectors around the world.78\nA school board’s attempt to surveil public school students—undertaken without \nadequate community input—sparked a state-wide biometrics moratorium.79 Reacting to a plan in \nthe city of Lockport, New York, the state’s legislature banned the use of facial recognition systems and other \n“biometric identifying technology” in schools until July 1, 2022.80 The law additionally requires that a report on \nthe privacy, civil rights, and civil liberties implications of the use of such technologies be issued before \nbiometric identification technologies can be used in New York schools. \nFederal law requires employers, and any consultants they may retain, to report the costs \nof surveilling employees in the context of a labor dispute, providing a transparency \nmechanism to help protect worker organizing. Employers engaging in workplace surveillance ""where \nan object there-of, directly or indirectly, is […] to obtain information concerning the activities of employees or a \nlabor organization in connection with a labor dispute"" must report expenditures relating to this surveillance to \nthe Department of Labor Office of Labor-Management Standards, and consultants who employers retain for \nthese purposes must also file reports regarding their activities.81\nPrivacy choices on smartphones show that when technologies are well designed, privacy \nand data agency can be meaningful and not overwhelming. These choices—such as contextual, timely \nalerts about location tracking—are brief, direct, and use-specific. Many of the expectations listed here for \nprivacy by design and use-specific consent mirror those distributed to developers as best practices when \ndeveloping for smart phone devices,82 such as being transparent about how user data will be used, asking for app \npermissions during their use so that the use-context will be clear to users, and ensuring that the app will still \nwork if users deny (or later revoke) some permissions. \n39\n']","The Privacy Act of 1974 exemplifies data privacy principles in action by requiring privacy protections for personal information in federal records systems, including limits on data retention and providing individuals a general right to access and correct their data. 
Additionally, federal law mandates employers to report the costs of surveilling employees during labor disputes, which serves as a transparency mechanism to protect worker organizing.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 38, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What ensures AI transparency per NIST?,"[' \nENDNOTES\n12. Expectations about reporting are intended for the entity developing or using the automated system. The\nresulting reports can be provided to the public, regulators, auditors, industry standards groups, or others\nengaged in independent review, and should be made public as much as possible consistent with law,\nregulation, and policy, and noting that intellectual property or law enforcement considerations may prevent\npublic release. These reporting expectations are important for transparency, so the American people can\nhave confidence that their rights, opportunities, and access as well as their expectations around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15. See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional Engineers and NHTSA. Public\nAdministration Review. Vol. 39, No. 4. Jul.-Aug., 1979. https://www.jstor.org/stable/976213?seq=1\n16. The US Department of Transportation has publicly described the health and other benefits of these\n“traffic calming” measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow Vehicle\nSpeeds. Accessed Apr. 17, 2022. https://www.transportation.gov/mission/health/Traffic-Calming-to-Slow\xad\nVehicle-Speeds\n17. Karen Hao. Worried about your firm’s AI ethics? These startups are here to help.\nA growing ecosystem of “responsible AI” ventures promise to help organizations monitor and fix their AI\nmodels. MIT Technology Review. Jan 15., 2021.\nhttps://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top Progressive\nCompanies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021. https://\nwww.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for\xad\nin-2021/ https://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top\nProgressive Companies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021.\n18. Office of Management and Budget. Study to Identify Methods to Assess Equity: Report to the President.\nAug. 2021. https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985\xad\nImplementation_508-Compliant-Secure-v1.1.pdf\n19. National Institute of Standards and Technology. AI Risk Management Framework. Accessed May 23,\n2022. https://www.nist.gov/itl/ai-risk-management-framework\n20. U.S. Department of Energy. U.S. Department of Energy Establishes Artificial Intelligence Advancement\nCouncil. U.S. 
Department of Energy Artificial Intelligence and Technology Office. April 18, 2022. https://\nwww.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council\n21. Department of Defense. U.S Department of Defense Responsible Artificial Intelligence Strategy and\nImplementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/\nDepartment-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation\xad\nPathway.PDF\n22. Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence\nCommunity. https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for\xad\nthe-intelligence-community\n64\n']",The answer to given question is not present in context,reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 63, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What safeguards do ethics reviews provide for automated systems?,"[' \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. \nWhile technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can \nalso lead to its use in situations where it has not yet been proven to work—either at all or within an acceptable range \nof error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. \nAutomated systems sometimes rely on data from other systems, including historical data, allowing irrelevant informa\xad\ntion from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposeful\xad\nly designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended \nor unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consulta\xad\ntion processes that may be applied when considering the use of new automated systems, and existing product develop\xad\nment and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Inno\xad\nvators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections \nfrom unsafe outcomes. 
All can benefit from assurances that automated systems will be designed, tested, and consis\xad\ntently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harm\xad\nful outcomes. \n•\nA proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was imple\xad\nmented at hundreds of hospitals around the country. An independent study showed that the model predictions\nunderperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting\nlikelihood of sepsis.6\n•\nOn social media, Black people who quote and criticize racist messages have had their own speech silenced when\na platform’s automated moderation system failed to distinguish this “counter speech” (or other critique\nand journalism) from the original hateful messages to which such speech responded.7\n•\nA device originally developed to help people track and find lost items has been used as a tool by stalkers to track\nvictims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to\nprotect people from unwanted tracking by alerting people on their phones when a device is found to be moving\nwith them over time and also by having the device make an occasional noise, but not all phones are able\nto receive the notification and the devices remain a safety concern due to their misuse.8 \n•\nAn algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit,\neven if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions\nwere the result of a feedback loop generated from the reuse of data from previous arrests and algorithm\npredictions.9\n16\n']",Ethics reviews provide safeguards for automated systems by vetting key development decisions to prevent harm from occurring. They help identify and mitigate potential harms through pre-deployment testing and ongoing monitoring processes.,reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 15, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What ensures fair design in automated systems?,"[' AI BILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. Automated systems should be \ncommunities, stakeholders, and domain experts to identify \nSystems should undergo pre-deployment testing, risk \nthat demonstrate they are safe and effective based on \nincluding those beyond the intended use, and adherence to \nprotective measures should include the possibility of not \nAutomated systems should not be designed with an intent \nreasonably foreseeable possibility of endangering your safety or the safety of your community. They should \nstemming from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION TITLE\nBLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped with consultation from diverse \nconcerns, risks, and potential impacts of the system. 
\nidentification and mitigation, and ongoing monitoring \ntheir intended use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of these \ndeploying the system or removing a system from use. \nor \nbe designed to proactively protect you from harms \nimpacts of automated systems. You should be protected from inappropriate or irrelevant data use in the \ndesign, development, and deployment of automated systems, and from the compounded harm of its reuse. \nIndependent evaluation and reporting that confirms that the system is safe and effective, including reporting of \nsteps taken to mitigate potential harms, should be performed and the results made public whenever possible. \nALGORITHMIC DISCRIMINATION PROTECTIONS\nYou should not face discrimination by algorithms and systems should be used and designed in \nan equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified \ndifferent treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including \npregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other \nclassification protected by law. Depending on the specific circumstances, such algorithmic discrimination \nmay violate legal protections. Designers, developers, and deployers of automated systems should take \nproactive \nand \ncontinuous \nmeasures \nto \nprotect \nindividuals \nand \ncommunities \nfrom algorithmic \ndiscrimination and to use and design systems in an equitable way. This protection should include proactive \nequity assessments as part of the system design, use of representative data and protection against proxies \nfor demographic features, ensuring accessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent \nevaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5\n']","Fair design in automated systems is ensured through proactive and continuous measures to protect individuals and communities from algorithmic discrimination. This includes conducting equity assessments as part of the system design, using representative data, ensuring accessibility for people with disabilities, performing pre-deployment and ongoing disparity testing and mitigation, and maintaining clear organizational oversight. 
Additionally, independent evaluation and reporting, including algorithmic impact assessments and disparity testing results, should be made public whenever possible to confirm these protections.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 4, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What GAI activities contribute most to carbon emissions?,"[' \n8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification). \nMethods for creating smaller versions of trained models, such as model distillation or compression, \ncould reduce environmental impacts at inference time, but training and tuning such models may still \ncontribute to their environmental impacts. Currently there is no agreed upon method to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \nBias exists in many forms and can become ingrained in automated systems. AI systems, including GAI \nsystems, can increase the speed and scale at which harmful biases manifest and are acted upon, \npotentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and \nsociety. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current \ntext-to-image models underrepresent women and/or racial minorities, and people with disabilities. \nImage generator models have also produced biased or stereotyped output for various demographic \ngroups and have difficulty producing non-stereotyped content even when the prompt specifically \nrequests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem from their training data, can also cause representational harms or perpetuate or exacerbate \nbias based on race, gender, disability, or other protected classes. \nHarmful bias in GAI systems can also lead to harms via disparities between how a model performs for \ndifferent subgroups or languages (e.g., an LLM may perform less well for non-English languages or \ncertain dialects). Such disparities can contribute to discriminatory decision-making or amplification of \nexisting societal biases. 
In addition, GAI systems may be inappropriately trusted to perform similarly \nacross all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages. \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles \n']","The GAI activities that contribute most to carbon emissions include training, maintaining, and operating GAI systems, particularly during the pre-training, fine-tuning, and inference stages. Current estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-trip flights between San Francisco and New York.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 11, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What AI systems improve design in chem & bio?,"[' \n5 \noperations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may \ncompromise a system’s availability or the confidentiality or integrity of training data, code, or \nmodel weights. \n10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or \nlicensed content without authorization (possibly in situations which do not fall under fair use); \neased exposure of trade secrets; or plagiarism or illegal replication. \n11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, \ndegrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse \nmaterial (CSAM), and nonconsensual intimate images (NCII) of adults. \n12. Value Chain and Component Integration: Non-transparent or untraceable integration of \nupstream third-party components, including data that has been improperly obtained or not \nprocessed and cleaned due to increased automation from GAI; improper supplier vetting across \nthe AI lifecycle; or other issues that diminish transparency or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities \nIn the future, GAI may enable malicious actors to more easily access CBRN weapons and/or relevant \nknowledge, information, materials, tools, or technologies that could be misused to assist in the design, \ndevelopment, production, or use of CBRN weapons or other dangerous materials or agents. While \nrelevant biological and chemical threat knowledge and information is often publicly accessible, LLMs \ncould facilitate its analysis or synthesis, particularly by individuals without formal scientific training or \nexpertise. 
\nRecent research on this topic found that LLM outputs regarding biological threat creation and attack \nplanning provided minimal assistance beyond traditional search engine queries, suggesting that state-of-\nthe-art LLMs at the time these studies were conducted do not substantially increase the operational \nlikelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. \nFurthermore, chemical and biological design tools (BDTs) – highly specialized AI systems trained on \nscientific data that aid in chemical and biological design – may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents. \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable \n']","Chemical and biological design tools (BDTs) are highly specialized AI systems trained on scientific data that aid in chemical and biological design, potentially improving design capabilities beyond what text-based LLMs can provide.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 8, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How to align synthetic data with real stats while ensuring privacy?,"[' \n41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Configuration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidentifiable information or contributing to homogenization. 
\nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, notification format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring \n \nMANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or \ndeactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration \n']","Consider opportunities to responsibly use synthetic data and other privacy enhancing techniques in GAI development, where appropriate and applicable, to match the statistical properties of real-world data without disclosing personally identifiable information or contributing to homogenization.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 44, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What ensures AI transparency per NIST?,"[' \nENDNOTES\n12. Expectations about reporting are intended for the entity developing or using the automated system. The\nresulting reports can be provided to the public, regulators, auditors, industry standards groups, or others\nengaged in independent review, and should be made public as much as possible consistent with law,\nregulation, and policy, and noting that intellectual property or law enforcement considerations may prevent\npublic release. These reporting expectations are important for transparency, so the American people can\nhave confidence that their rights, opportunities, and access as well as their expectations around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15. 
See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional Engineers and NHTSA. Public\nAdministration Review. Vol. 39, No. 4. Jul.-Aug., 1979. https://www.jstor.org/stable/976213?seq=1\n16. The US Department of Transportation has publicly described the health and other benefits of these\n“traffic calming” measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow Vehicle\nSpeeds. Accessed Apr. 17, 2022. https://www.transportation.gov/mission/health/Traffic-Calming-to-Slow\xad\nVehicle-Speeds\n17. Karen Hao. Worried about your firm’s AI ethics? These startups are here to help.\nA growing ecosystem of “responsible AI” ventures promise to help organizations monitor and fix their AI\nmodels. MIT Technology Review. Jan 15., 2021.\nhttps://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top Progressive\nCompanies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021. https://\nwww.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for\xad\nin-2021/ https://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top\nProgressive Companies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021.\n18. Office of Management and Budget. Study to Identify Methods to Assess Equity: Report to the President.\nAug. 2021. https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985\xad\nImplementation_508-Compliant-Secure-v1.1.pdf\n19. National Institute of Standards and Technology. AI Risk Management Framework. Accessed May 23,\n2022. https://www.nist.gov/itl/ai-risk-management-framework\n20. U.S. Department of Energy. U.S. Department of Energy Establishes Artificial Intelligence Advancement\nCouncil. U.S. Department of Energy Artificial Intelligence and Technology Office. April 18, 2022. https://\nwww.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council\n21. Department of Defense. U.S Department of Defense Responsible Artificial Intelligence Strategy and\nImplementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/\nDepartment-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation\xad\nPathway.PDF\n22. Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence\nCommunity. https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for\xad\nthe-intelligence-community\n64\n']",The answer to given question is not present in context,reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 63, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What policies ensure GAI risk assessment with transparency and safety?,"[' \n14 \nGOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. 
\nAction ID \nSuggested Action \nGAI Risks \nGV-1.2-001 \nEstablish transparency policies and processes for documenting the origin and \nhistory of training data and generated data for GAI applications to advance digital \ncontent transparency, while balancing the proprietary nature of training \napproaches. \nData Privacy; Information \nIntegrity; Intellectual Property \nGV-1.2-002 \nEstablish policies to evaluate risk-relevant capabilities of GAI and robustness of \nsafety measures, both prior to deployment and on an ongoing basis, through \ninternal and external evaluations. \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based \non the organization’s risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. \nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002 \nEstablish minimum thresholds for performance or assurance criteria and review as \npart of deployment approval (“go/”no-go”) policies, procedures, and processes, \nwith reviewed processes and approval thresholds reflecting measurement of GAI \ncapabilities and risks. \nCBRN Information or Capabilities; \nConfabulation; Dangerous, \nViolent, or Hateful Content \nGV-1.3-003 \nEstablish a test plan and response policy, before developing highly capable models, \nto periodically evaluate whether the model may misuse CBRN information or \ncapabilities and/or offensive cyber capabilities. 
\nCBRN Information or Capabilities; \nInformation Security \n']","The policies that ensure GAI risk assessment with transparency and safety include establishing transparency policies and processes for documenting the origin and history of training data and generated data for GAI applications, as well as establishing policies to evaluate risk-relevant capabilities of GAI and the robustness of safety measures prior to deployment and on an ongoing basis.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 17, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What dual aspects should automated systems cover for effective oversight?,"[' \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nOngoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibra\xad\ntion procedures, in place to ensure that their performance does not fall below an acceptable level over time, \nbased on changing real-world conditions or deployment contexts, post-deployment modification, or unexpect\xad\ned conditions. This ongoing monitoring should include continuous evaluation of performance metrics and \nharm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well \nas ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitor\xad\ning should take into account the performance of both technical system components (the algorithm as well as \nany hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing \nthe actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s \ndetermination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitor\xad\ning as a check in the event there are shortcomings in automated monitoring systems. These monitoring proce\xad\ndures should be in place for the lifespan of the deployed automated system. \nClear organizational oversight. Entities responsible for the development or use of automated systems \nshould lay out clear governance structures and procedures. This includes clearly-stated governance proce\xad\ndures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing \nassessment and mitigation. Organizational stakeholders including those with oversight of the business process \nor operation being automated, as well as other organizational divisions that may be affected due to the use of \nthe system, should be involved in establishing governance procedures. 
Responsibility should rest high enough \nin the organization that decisions about resources, mitigation, incident response, and potential rollback can be \nmade promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those \nholding this responsibility should be made aware of any use cases with the potential for meaningful impact on \npeople’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, \nit may be appropriate for an independent ethics review to be conducted before deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its \nreuse \nRelevant and high-quality data. Data used as part of any automated system’s creation, evaluation, or \ndeployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be \nestablished based on research-backed demonstration of the causal influence of the data to the specific use case \nor justified more generally based on a reasonable expectation of usefulness in the domain and/or for the \nsystem design or ongoing development. Relevance of data should not be established solely by appealing to \nits historical connection to the outcome. High quality and tailored data should be representative of the task at \nhand and errors from data entry or other sources should be measured and limited. Any data used as the target \nof a prediction process should receive particular attention to the quality and validity of the predicted outcome \nor label to ensure the goal of the automated system is appropriately identified and measured. Additionally, \njustification should be documented for each data attribute and source to explain why it is appropriate to use \nthat data to inform the results of the automated system and why such use will not violate any applicable laws. \nIn cases of high-dimensional and/or derived attributes, such justifications can be provided as overall \ndescriptions of the attribute generation process and appropriateness. \n19\n']",Automated systems should cover ongoing monitoring procedures and clear organizational oversight for effective oversight.,reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 18, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What term refers to GAI's misleading false content?,"[' \n6 \n2.2. Confabulation \n“Confabulation” refers to a phenomenon in which GAI systems generate and confidently present \nerroneous or false content in response to prompts. Confabulations also include generated outputs that \ndiverge from the prompts or other input or that contradict previously generated statements in the same \ncontext. These phenomena are colloquially also referred to as “hallucinations” or “fabrications.” \nConfabulations can occur across GAI outputs and contexts.9,10 Confabulations are a natural result of the \nway generative models are designed: they generate outputs that approximate the statistical distribution \nof their training data; for example, LLMs predict the next token or word in a sentence or phrase. 
While \nsuch statistical prediction can produce factually accurate and consistent outputs, it can also produce \noutputs that are factually inaccurate or internally inconsistent. This dynamic is particularly relevant when \nit comes to open-ended prompts for long-form responses and in domains which require highly \ncontextual and/or domain expertise. \nRisks from confabulations may arise when users believe false content – often due to the confident nature \nof the response – leading users to act upon or promote the false information. This poses a challenge for \nmany real-world applications, such as in healthcare, where a confabulated summary of patient \ninformation reports could cause doctors to make incorrect diagnoses and/or recommend the wrong \ntreatments. Risks of confabulated content may be especially important to monitor when integrating GAI \ninto applications involving consequential decision making. \nGAI outputs may also include confabulated logic or citations that purport to justify or explain the \nsystem’s answer, which may further mislead humans into inappropriately trusting the system’s output. \nFor instance, LLMs sometimes provide logical steps for how they arrived at an answer even when the \nanswer itself is incorrect. Similarly, an LLM could falsely assert that it is human or has human traits, \npotentially deceiving humans into believing they are speaking with another human. \nThe extent to which humans can be deceived by LLMs, the mechanisms by which this may occur, and the \npotential risks from adversarial prompting of such behavior are emerging areas of study. Given the wide \nrange of downstream impacts of GAI, it is difficult to estimate the downstream scale and impact of \nconfabulations. \nTrustworthy AI Characteristics: Fair with Harmful Bias Managed, Safe, Valid and Reliable, Explainable \nand Interpretable \n2.3. Dangerous, Violent, or Hateful Content \nGAI systems can produce content that is inciting, radicalizing, or threatening, or that glorifies violence, \nwith greater ease and scale than other technologies. LLMs have been reported to generate dangerous or \nviolent recommendations, and some models have generated actionable instructions for dangerous or \n \n \n9 Confabulations of falsehoods are most commonly a problem for text-based outputs; for audio, image, or video \ncontent, creative generation of non-factual content can be a desired behavior. \n10 For example, legal confabulations have been shown to be pervasive in current state-of-the-art LLMs. 
See also, \ne.g., \n']","The term that refers to GAI's misleading false content is ""confabulation.""",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 9, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What's the role of interdisciplinary teams & human-AI config in GAI risk mgmt?,"[' \n23 \nMP-1.1-002 \nDetermine and document the expected and acceptable GAI system context of \nuse in collaboration with socio-cultural and other domain experts, by assessing: \nAssumptions and limitations; Direct value to the organization; Intended \noperational environment and observed usage patterns; Potential positive and \nnegative impacts to individuals, public safety, groups, communities, \norganizations, democratic institutions, and the physical environment; Social \nnorms and expectations. \nHarmful Bias and Homogenization \nMP-1.1-003 \nDocument risk measurement plans to address identified risks. Plans may \ninclude, as applicable: Individual and group cognitive biases (e.g., confirmation \nbias, funding bias, groupthink) for AI Actors involved in the design, \nimplementation, and use of GAI systems; Known past GAI system incidents and \nfailure modes; In-context use and foreseeable misuse, abuse, and off-label use; \nOver reliance on quantitative metrics and methodologies without sufficient \nawareness of their limitations in the context(s) of use; Standard measurement \nand structured human feedback approaches; Anticipated human-AI \nconfigurations. \nHuman-AI Configuration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent \nMP-1.1-004 \nIdentify and document foreseeable illegal uses or applications of the GAI system \nthat surpass organizational risk tolerances. \nCBRN Information or Capabilities; \nDangerous, Violent, or Hateful \nContent; Obscene, Degrading, \nand/or Abusive Content \nAI Actor Tasks: AI Deployment \n \nMAP 1.2: Interdisciplinary AI Actors, competencies, skills, and capacities for establishing context reflect demographic diversity and \nbroad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary \ncollaboration are prioritized. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.2-001 \nEstablish and empower interdisciplinary teams that reflect a wide range of \ncapabilities, competencies, demographic groups, domain expertise, educational \nbackgrounds, lived experiences, professions, and skills across the enterprise to \ninform and conduct risk measurement and management functions. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMP-1.2-002 \nVerify that data or benchmarks used in risk measurement, and users, \nparticipants, or subjects involved in structured GAI public feedback exercises \nare representative of diverse in-context user populations. 
\nHuman-AI Configuration; Harmful \nBias and Homogenization \nAI Actor Tasks: AI Deployment \n \n']","Interdisciplinary teams play a crucial role in GAI risk management by reflecting a wide range of capabilities, competencies, demographic groups, domain expertise, educational backgrounds, lived experiences, professions, and skills. Their participation is documented, and opportunities for interdisciplinary collaboration are prioritized. Additionally, human-AI configuration is important as it addresses harmful bias and homogenization, ensuring that data or benchmarks used in risk measurement are representative of diverse in-context user populations.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 26, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How do digital content transparency tools ensure AI traceability and integrity?,"[' \n34 \nMS-2.7-009 Regularly assess and verify that security measures remain effective and have not \nbeen compromised. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.8-001 \nCompile statistics on actual policy violations, take-down requests, and intellectual \nproperty infringement for organizational GAI systems: Analyze transparency \nreports across demographic groups, languages groups. \nIntellectual Property; Harmful Bias \nand Homogenization \nMS-2.8-002 Document the instructions given to data annotators or AI red-teamers. \nHuman-AI Configuration \nMS-2.8-003 \nUse digital content transparency solutions to enable the documentation of each \ninstance where content is generated, modified, or shared to provide a tamper-\nproof history of the content, promote transparency, and enable traceability. \nRobust version control systems can also be applied to track changes across the AI \nlifecycle over time. \nInformation Integrity \nMS-2.8-004 Verify adequacy of GAI system user instructions through user testing. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \n']","Digital content transparency solutions ensure AI traceability and integrity by enabling the documentation of each instance where content is generated, modified, or shared, providing a tamper-proof history of the content. 
Additionally, robust version control systems can be applied to track changes across the AI lifecycle over time.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 37, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What can be done to prevent algorithmic bias in automated systems?,"[' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. 
Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. \n26\nAlgorithmic \nDiscrimination \nProtections \n']","To prevent algorithmic bias in automated systems, proactive equity assessments should be conducted during the design phase to identify potential discrimination and effects on equity. Data used in system development should be representative and reviewed for bias, and the use of demographic information should be avoided to prevent algorithmic discrimination. Proactive testing should be performed to identify and remove proxies that may lead to discrimination, and organizations should monitor systems closely for any resulting algorithmic discrimination.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 25, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How do you ensure ethical data collection and privacy?,"[' \n \n \n \n \nSECTION TITLE\nDATA PRIVACY\nYou should be protected from abusive data practices via built-in protections and you \nshould have agency over how data about you is used. You should be protected from violations of \nprivacy through design choices that ensure such protections are included by default, including ensuring that \ndata collection conforms to reasonable expectations and that only data strictly necessary for the specific \ncontext is collected. Designers, developers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \nways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \nused. Systems should not employ user experience and design decisions that obfuscate user choice or burden \nusers with defaults that are privacy invasive. 
Consent should only be used to justify collection of data in cases \nwhere it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable \nin plain language, and give you agency over data collection and the specific context of use; current hard-to\xad\nunderstand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \nrestrictions for data and inferences related to sensitive domains, including health, work, education, criminal \njustice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \nrelated inferences should only be used for necessary functions, and you should be protected by ethical review \nand use prohibitions. You and your communities should be free from unchecked surveillance; surveillance \ntechnologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \npotential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \nshould not be used in education, work, housing, or in other contexts where the use of such surveillance \ntechnologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to \nreporting that confirms your data decisions have been respected and provides an assessment of the \npotential impact of surveillance technologies on your rights, opportunities, or access. \nNOTICE AND EXPLANATION\nYou should know that an automated system is being used and understand how and why it \ncontributes to outcomes that impact you. Designers, developers, and deployers of automated systems \nshould provide generally accessible plain language documentation including clear descriptions of the overall \nsystem functioning and the role automation plays, notice that such systems are in use, the individual or organiza\xad\ntion responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice \nshould be kept up-to-date and people impacted by the system should be notified of significant use case or key \nfunctionality changes. You should know how and why an outcome impacting you was determined by an \nautomated system, including when the automated system is not the sole input determining the outcome. \nAutomated systems should provide explanations that are technically valid, meaningful and useful to you and to \nany operators or others who need to understand the system, and calibrated to the level of risk based on the \ncontext. Reporting that includes summary information about these automated systems in plain language and \nassessments of the clarity and quality of the notice and explanations should be made public whenever possible. \n6\n']","To ensure ethical data collection and privacy, designers, developers, and deployers of automated systems should seek user permission and respect their decisions regarding data collection, use, access, transfer, and deletion. They should implement built-in protections, ensure data collection conforms to reasonable expectations, and only collect data that is strictly necessary. Consent should be meaningful and understandable, and enhanced protections should be in place for sensitive domains. 
Additionally, there should be oversight of surveillance technologies to protect privacy and civil liberties.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 5, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What are the perks of logging GAI incidents for AI risk mgmt?,"[' \n53 \nDocumenting, reporting, and sharing information about GAI incidents can help mitigate and prevent \nharmful outcomes by assisting relevant AI Actors in tracing impacts to their source. Greater awareness \nand standardization of GAI incident reporting could promote this transparency and improve GAI risk \nmanagement across the AI ecosystem. \nDocumentation and Involvement of AI Actors \nAI Actors should be aware of their roles in reporting AI incidents. To better understand previous incidents \nand implement measures to prevent similar ones in the future, organizations could consider developing \nguidelines for publicly available incident reporting which include information about AI actor \nresponsibilities. These guidelines would help AI system operators identify GAI incidents across the AI \nlifecycle and with AI Actors regardless of role. Documentation and review of third-party inputs and \nplugins for GAI systems is especially important for AI Actors in the context of incident disclosure; LLM \ninputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents. \n \n']","Logging GAI incidents can facilitate smoother sharing of information with relevant AI Actors, empower them in responding to and managing AI incidents, and improve GAI risk management across the AI ecosystem. It also aids in documenting and reviewing third-party inputs and plugins, which is crucial for incident disclosure.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 56, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What makes it hard for people to challenge algorithmic decisions?,"["" \n \n \n \n \nNOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief summary of the problems which the principle seeks to address and protect \nagainst, including illustrative examples. 
\nAutomated systems now determine opportunities, from employment to credit, and directly shape the American \npublic’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this \nexpansive impact is not always visible. An applicant might not know whether a person rejected their resume or a \nhiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge deny\xad\ning their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting \ndecisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. \nNotice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonable\xad\nness of a recommendation before enacting it. \nIn order to guard against potential harms, the American public needs to know if an automated system is being used. \nClear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Like\xad\nwise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a \nparticular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, \nunaccountable, whether by design or by omission. These factors can make explanations both more challenging and \nmore important, and should not be used as a pretext to avoid explaining important decisions to the people impacted \nby those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline \nrequirement. \nProviding notice has long been a standard practice, and in many cases is a legal requirement, when, for example, \nmaking a video recording of someone (outside of a law enforcement or national security context). In some cases, such \nas credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the \nprocess of explaining such systems are under active research and improvement and such explanations can take many \nforms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory \nsystems that can help the public better understand decisions that impact them. \nWhile notice and explanation requirements are already in place in some sectors or situations, the American public \ndeserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights, \nopportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the \nvalidity and reasonable use of automated systems. \n•\nA lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home\nhealth-care assistance couldn't determine why, especially since the decision went against historical access\npractices. 
In a court hearing, the lawyer learned from a witness that the state in which the older client\nlived had recently adopted a new algorithm to determine eligibility.83 The lack of a timely explanation made it\nharder to understand and contest the decision.\n•\nA formal child welfare investigation is opened against a parent based on an algorithm and without the parent\never being notified that data was being collected and used as part of an algorithmic child maltreatment\nrisk assessment.84 The lack of notice or an explanation makes it harder for those performing child\nmaltreatment assessments to validate the risk assessment and denies parents knowledge that could help them\ncontest a decision.\n41\n""]","People find it hard to challenge algorithmic decisions because they are often denied the knowledge needed to address the impact of automated systems on their lives. The decision-making processes of these systems tend to be opaque and complex, making it difficult for individuals to ascertain how or why a decision was made. Additionally, the lack of clear and timely explanations can hinder their ability to contest decisions effectively.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 40, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True How is AI performance evaluated with human safety and privacy in mind?,"[' \n30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are \nrepresentative of the relevant population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and manage statistical biases related to GAI content provenance through \ntechniques such as re-sampling, re-weighting, or adversarial training. \nInformation Integrity; Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument how content provenance data is tracked and how that data interacts \nwith privacy and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects; Leveraging privacy output filters; Removing any personally \nidentifiable information (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration; Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation or revoke their \nconsent for present or future use of their data in GAI applications. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004 \nUse techniques such as anonymization, differential privacy or other privacy-\nenhancing technologies to minimize the risks associated with linking AI-generated \ncontent back to individual human subjects. \nData Privacy; Human-AI \nConfiguration \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for \nconditions similar to deployment setting(s). Measures are documented. 
\nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a \nmodel for fine tuning or enhancement with retrieval-augmented generation. \nInformation Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities using empirically validated methods. \nConfabulation; Information \nSecurity \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those \nwith system release approval authority. \nHuman-AI Configuration \n']","AI performance is evaluated with human safety and privacy in mind by implementing measures such as assessing and managing statistical biases related to GAI content provenance, documenting how content provenance data is tracked, providing human subjects with options to withdraw participation or revoke consent, and using techniques like anonymization and differential privacy to minimize risks associated with linking AI-generated content back to individual human subjects.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 33, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What issues come from easy access to obscene content?,"[' \n5 \noperations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may \ncompromise a system’s availability or the confidentiality or integrity of training data, code, or \nmodel weights. \n10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or \nlicensed content without authorization (possibly in situations which do not fall under fair use); \neased exposure of trade secrets; or plagiarism or illegal replication. \n11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, \ndegrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse \nmaterial (CSAM), and nonconsensual intimate images (NCII) of adults. \n12. Value Chain and Component Integration: Non-transparent or untraceable integration of \nupstream third-party components, including data that has been improperly obtained or not \nprocessed and cleaned due to increased automation from GAI; improper supplier vetting across \nthe AI lifecycle; or other issues that diminish transparency or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities \nIn the future, GAI may enable malicious actors to more easily access CBRN weapons and/or relevant \nknowledge, information, materials, tools, or technologies that could be misused to assist in the design, \ndevelopment, production, or use of CBRN weapons or other dangerous materials or agents. While \nrelevant biological and chemical threat knowledge and information is often publicly accessible, LLMs \ncould facilitate its analysis or synthesis, particularly by individuals without formal scientific training or \nexpertise. 
\nRecent research on this topic found that LLM outputs regarding biological threat creation and attack \nplanning provided minimal assistance beyond traditional search engine queries, suggesting that state-of-\nthe-art LLMs at the time these studies were conducted do not substantially increase the operational \nlikelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. \nFurthermore, chemical and biological design tools (BDTs) – highly specialized AI systems trained on \nscientific data that aid in chemical and biological design – may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents. \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable \n']","Easy access to obscene content can lead to the production of and access to obscene, degrading, and/or abusive imagery, which can cause harm, including synthetic child sexual abuse material (CSAM) and nonconsensual intimate images (NCII) of adults.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 8, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True How do user feedback and community input assess AI risks?,"[' \n38 \nMEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.13-001 \nCreate measurement error models for pre-deployment metrics to demonstrate \nconstruct validity for each metric (i.e., does the metric effectively operationalize \nthe desired concept): Measure or estimate, and document, biases or statistical \nvariance in applied metrics or structured human feedback processes; Leverage \ndomain expertise when modeling complex societal constructs such as hateful \ncontent. \nConfabulation; Information \nIntegrity; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV \n \nMEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available \nmeasurement techniques or where metrics are not yet available. 
\nAction ID \nSuggested Action \nGAI Risks \nMS-3.2-001 \nEstablish processes for identifying emergent GAI system risks including \nconsulting with external AI Actors. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are \nestablished and integrated into AI system evaluation metrics. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.3-001 \nConduct impact assessments on how AI-generated content might affect \ndifferent social, economic, and cultural groups. \nHarmful Bias and Homogenization \nMS-3.3-002 \nConduct studies to understand how end users perceive and interact with GAI \ncontent and accompanying content provenance within context of use. Assess \nwhether the content aligns with their expectations and how they may act upon \nthe information presented. \nHuman-AI Configuration; \nInformation Integrity \nMS-3.3-003 \nEvaluate potential biases and stereotypes that could emerge from the AI-\ngenerated content using appropriate methodologies including computational \ntesting methods as well as evaluating structured feedback input. \nHarmful Bias and Homogenization \n']","User feedback and community input assess AI risks through established feedback processes that allow end users and impacted communities to report problems and appeal system outcomes. These processes are integrated into AI system evaluation metrics, which include conducting impact assessments on how AI-generated content might affect different social, economic, and cultural groups, as well as understanding user perceptions and interactions with GAI content.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 41, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True What should automated systems consider for consent and ethics in sensitive data?,"[' \n \n \n \n \n \n \nDATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \xad\xad\xad\xad\xad\xad\nIn addition to the privacy expectations above for general non-sensitive data, any system collecting, using, shar-\ning, or storing sensitive data should meet the expectations below. Depending on the technological use case and \nbased on an ethical assessment, consent for sensitive data may need to be acquired from a guardian and/or child. \nProvide enhanced protections for data related to sensitive domains \nNecessary functions only. Sensitive data should only be used for functions strictly necessary for that \ndomain or for functions that are required for administrative reasons (e.g., school attendance records), unless \nconsent is acquired, if appropriate, and the additional expectations in this section are met. 
Consent for non-\nnecessary functions should be optional, i.e., should not be required, incentivized, or coerced in order to \nreceive opportunities or access to services. In cases where data is provided to an entity (e.g., health insurance \ncompany) in order to facilitate payment for such a need, that data should only be used for that purpose. \nEthical review and use prohibitions. Any use of sensitive data or decision process based in part on sensi-\ntive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go \nthrough a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an indepen-\ndent ethics committee or similarly robust process). In some cases, this ethical review may determine that data \nshould not be used or shared for specific uses even with consent. Some novel uses of automated systems in this \ncontext, where the algorithm is dynamically developing and where the science behind the use case is not well \nestablished, may also count as human subject experimentation, and require special review under organizational \ncompliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and \ngovernance procedures. \nData quality. In sensitive domains, entities should be especially careful to maintain the quality of data to \navoid adverse consequences arising from decision-making based on flawed or inaccurate data. Such care is \nnecessary in a fragmented, complex data ecosystem and for datasets that have limited access such as for fraud \nprevention and law enforcement. It should be not left solely to individuals to carry the burden of reviewing and \ncorrecting data. Entities should conduct regular, independent audits and take prompt corrective measures to \nmaintain accurate, timely, and complete data. \nLimit access to sensitive data and derived data. Sensitive data and derived data should not be sold, \nshared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be \nused to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies \nare expected to keep sensitive data private. Access to such data should be limited based on necessity and based \non a principle of local control, such that those individuals closest to the data subject have more access while \nthose who are less proximate do not (e.g., a teacher has access to their students’ daily progress data while a \nsuperintendent does not). \nReporting. In addition to the reporting on data privacy (as listed above for non-sensitive data), entities devel-\noping technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data \nshould, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches \nthat resulted in sensitive data leaks; the number, type, and outcomes of ethical pre-reviews undertaken; a \ndescription of any data sold, shared, or made public, and how that data was assessed to determine it did not pres-\nent a sensitive data risk; and ongoing risk identification and management procedures, and any mitigation added \nbased on these procedures. Reporting should be provided in a clear and machine-readable manner. \n38\n']","Automated systems should consider that consent for sensitive data may need to be acquired from a guardian and/or child, and that consent for non-necessary functions should be optional. 
Additionally, any use of sensitive data or decision processes based on sensitive data that might limit rights, opportunities, or access should undergo a thorough ethical review and monitoring. This includes ensuring that data quality is maintained to avoid adverse consequences from flawed data, limiting access to sensitive data based on necessity, and providing regular public reports on data security lapses and ethical pre-reviews.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 37, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True "What links are there between digital IDs, welfare efficiency, and community impacts?","["" \n \n \n \n \nAPPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing scope of technology use in providing for social welfare, including \nin fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. \nHowever, various panelists individually cautioned that these systems may reduce burden for government \nagencies by increasing the burden and agency of people using and interacting with these technologies. \nAdditionally, these systems can produce feedback loops and compounded harm, collecting data from \ncommunities and using it to reinforce inequality. Various panelists suggested that these harms could be \nmitigated by ensuring community input at the beginning of the design process, providing ways to opt out of \nthese systems and use associated human-driven mechanisms instead, ensuring timeliness of benefit payments, \nand providing clear notice about the use of these systems and clear explanations of how and what the \ntechnologies are doing. Some panelists suggested that technology should be used to help people receive \nbenefits, e.g., by pushing benefits to those in need and ensuring automated decision-making systems are only \nused to provide a positive outcome; technology shouldn't be used to take supports away from people who need \nthem. \nPanel 6: The Healthcare System. This event explored current and emerging uses of technology in the \nhealthcare system and consumer products related to health. \nWelcome:\n•\nAlondra Nelson, Deputy Director for Science and Society, White House Office of Science and Technology\nPolicy\n•\nPatrick Gaspard, President and CEO, Center for American Progress\nModerator: Micky Tripathi, National Coordinator for Health Information Technology, U.S Department of \nHealth and Human Services. \nPanelists: \n•\nMark Schneider, Health Innovation Advisor, ChristianaCare\n•\nZiad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management,\nUniversity of California, Berkeley School of Public Health\n•\nDorothy Roberts, George A. 
Weiss University Professor of Law and Sociology and the Raymond Pace and\nSadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania\n•\nDavid Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University\n•\nJamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for\nHealth Equity\xad\nPanelists discussed the impact of new technologies on health disparities; healthcare access, delivery, and \noutcomes; and areas ripe for research and policymaking. Panelists discussed the increasing importance of tech-\nnology as both a vehicle to deliver healthcare and a tool to enhance the quality of care. On the issue of \ndelivery, various panelists pointed to a number of concerns including access to and expense of broadband \nservice, the privacy concerns associated with telehealth systems, the expense associated with health \nmonitoring devices, and how this can exacerbate equity issues. On the issue of technology enhanced care, \nsome panelists spoke extensively about the way in which racial biases and the use of race in medicine \nperpetuate harms and embed prior discrimination, and the importance of ensuring that the technologies used \nin medical care were accountable to the relevant stakeholders. Various panelists emphasized the importance \nof having the voices of those subjected to these technologies be heard.\n59\n""]","Panelists described the increasing scope of technology use in providing for social welfare, including digital ID systems, which are focused on improving efficiency and reducing cost. However, they cautioned that these systems may reduce the burden for government agencies by increasing the burden and agency of people using and interacting with these technologies. Additionally, these systems can produce feedback loops and compounded harm, collecting data from communities and using it to reinforce inequality. To mitigate these harms, it was suggested that community input should be ensured at the beginning of the design process, and there should be ways to opt out of these systems and use associated human-driven mechanisms instead.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 58, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What drives extra data protections in health and finance?,"[' \n \n \n \nDATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\nSome domains, including health, employment, education, criminal justice, and personal finance, have long been \nsingled out as sensitive domains deserving of enhanced data protections. This is due to the intimate nature of these \ndomains as well as the inability of individuals to opt out of these domains in any meaningful way, and the \nhistorical discrimination that has often accompanied data knowledge.69 Domains understood by the public to be \nsensitive also change over time, including because of technological developments. 
Tracking and monitoring \ntechnologies, personal tracking devices, and our extensive data footprints are used and misused more than ever \nbefore; as such, the protections afforded by current legal guidelines may be inadequate. The American public \ndeserves assurances that data related to such sensitive domains is protected and used appropriately and only in \nnarrowly defined contexts with clear benefits to the individual and/or society. \nTo this end, automated systems that collect, use, share, or store data related to these sensitive domains should meet \nadditional expectations. Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined \nbelow); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or \nsensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, \ngeolocation data, data related to interaction with the criminal justice system, relationship history and legal status such \nas custody and divorce information, and home, work, or school environmental data); or have the reasonable potential \nto be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm \ndue to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even \nif not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video \ndata. “Sensitive domains” are those in which activities being conducted can cause material harms, including signifi\xad\ncant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains \nthat have historically been singled out as deserving of enhanced data protections or where such enhanced protections \nare reasonably expected by the public include, but are not limited to, health, family planning and care, employment, \neducation, criminal justice, and personal finance. In the context of this framework, such domains are considered \nsensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains \nand data that are considered sensitive are understood to change over time based on societal norms and context. \n36\n']","Extra data protections in health and finance are driven by the intimate nature of these domains, the inability of individuals to opt out in a meaningful way, and the historical discrimination that has often accompanied data knowledge. 
Additionally, the potential for material harms, including significant adverse effects on human rights such as autonomy and dignity, civil liberties, and civil rights, necessitates enhanced protections.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 35, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What insights did OSTP seek from experts in AI Bill of Rights panels?,"[' \n \n \n \n \nSECTION TITLE\nAPPENDIX\nListening to the American People \nThe White House Office of Science and Technology Policy (OSTP) led a yearlong process to seek and distill \ninput from people across the country – from impacted communities to industry stakeholders to \ntechnology developers to other experts across fields and sectors, as well as policymakers across the Federal \ngovernment – on the issue of algorithmic and data-driven harms and potential remedies. Through panel \ndiscussions, public listening sessions, private meetings, a formal request for information, and input to a \npublicly accessible and widely-publicized email address, people across the United States spoke up about \nboth the promises and potential harms of these technologies, and played a central role in shaping the \nBlueprint for an AI Bill of Rights. \nPanel Discussions to Inform the Blueprint for An AI Bill of Rights \nOSTP co-hosted a series of six panel discussions in collaboration with the Center for American Progress, \nthe Joint Center for Political and Economic Studies, New America, the German Marshall Fund, the Electronic \nPrivacy Information Center, and the Mozilla Foundation. The purpose of these convenings – recordings of \nwhich are publicly available online112 – was to bring together a variety of experts, practitioners, advocates \nand federal government officials to offer insights and analysis on the risks, harms, benefits, and \npolicy opportunities of automated systems. Each panel discussion was organized around a wide-ranging \ntheme, exploring current challenges and concerns and considering what an automated society that \nrespects democratic values should look like. These discussions focused on the topics of consumer \nrights and protections, the criminal justice system, equal opportunities and civil justice, artificial \nintelligence and democratic values, social welfare and development, and the healthcare system. \nSummaries of Panel Discussions: \nPanel 1: Consumer Rights and Protections. This event explored the opportunities and challenges for \nindividual consumers and communities in the context of a growing ecosystem of AI-enabled consumer \nproducts, advanced platforms and services, “Internet of Things” (IoT) devices, and smart city products and \nservices. \nWelcome:\n•\nRashida Richardson, Senior Policy Advisor for Data and Democracy, White House Office of Science and\nTechnology Policy\n•\nKaren Kornbluh, Senior Fellow and Director of the Digital Innovation and Democracy Initiative, German\nMarshall Fund\nModerator: \nDevin E. 
Willis, Attorney, Division of Privacy and Identity Protection, Bureau of Consumer Protection, Federal \nTrade Commission \nPanelists: \n•\nTamika L. Butler, Principal, Tamika L. Butler Consulting\n•\nJennifer Clark, Professor and Head of City and Regional Planning, Knowlton School of Engineering, Ohio\nState University\n•\nCarl Holshouser, Senior Vice President for Operations and Strategic Initiatives, TechNet\n•\nSurya Mattu, Senior Data Engineer and Investigative Data Journalist, The Markup\n•\nMariah Montgomery, National Campaign Director, Partnership for Working Families\n55\n']","OSTP sought insights and analysis on the risks, harms, benefits, and policy opportunities of automated systems from a variety of experts, practitioners, advocates, and federal government officials during the AI Bill of Rights panels. The discussions focused on consumer rights and protections, the criminal justice system, equal opportunities and civil justice, artificial intelligence and democratic values, social welfare and development, and the healthcare system.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 54, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What key elements ensure clarity in docs about an automated system's impact?,"[' \nYou should know that an automated system is being used, \nand understand how and why it contributes to outcomes \nthat impact you. Designers, developers, and deployers of automat\xad\ned systems should provide generally accessible plain language docu\xad\nmentation including clear descriptions of the overall system func\xad\ntioning and the role automation plays, notice that such systems are in \nuse, the individual or organization responsible for the system, and ex\xad\nplanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality chang\xad\nes. You should know how and why an outcome impacting you was de\xad\ntermined by an automated system, including when the automated \nsystem is not the sole input determining the outcome. Automated \nsystems should provide explanations that are technically valid, \nmeaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible. 
\nNOTICE AND EXPLANATION\n40\n']","Key elements that ensure clarity in documentation about an automated system's impact include providing generally accessible plain language documentation, clear descriptions of the overall system functioning and the role of automation, timely updates about significant use case or key functionality changes, and explanations of outcomes that are clear, timely, and accessible.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 39, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True What biases to note for pre-deployment measurement error models?,"[' \n38 \nMEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and \ndocumented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.13-001 \nCreate measurement error models for pre-deployment metrics to demonstrate \nconstruct validity for each metric (i.e., does the metric effectively operationalize \nthe desired concept): Measure or estimate, and document, biases or statistical \nvariance in applied metrics or structured human feedback processes; Leverage \ndomain expertise when modeling complex societal constructs such as hateful \ncontent. \nConfabulation; Information \nIntegrity; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV \n \nMEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available \nmeasurement techniques or where metrics are not yet available. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.2-001 \nEstablish processes for identifying emergent GAI system risks including \nconsulting with external AI Actors. \nHuman-AI Configuration; \nConfabulation \nAI Actor Tasks: AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV \n \nMEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are \nestablished and integrated into AI system evaluation metrics. \nAction ID \nSuggested Action \nGAI Risks \nMS-3.3-001 \nConduct impact assessments on how AI-generated content might affect \ndifferent social, economic, and cultural groups. \nHarmful Bias and Homogenization \nMS-3.3-002 \nConduct studies to understand how end users perceive and interact with GAI \ncontent and accompanying content provenance within context of use. Assess \nwhether the content aligns with their expectations and how they may act upon \nthe information presented. \nHuman-AI Configuration; \nInformation Integrity \nMS-3.3-003 \nEvaluate potential biases and stereotypes that could emerge from the AI-\ngenerated content using appropriate methodologies including computational \ntesting methods as well as evaluating structured feedback input. \nHarmful Bias and Homogenization \n']","The context mentions documenting biases or statistical variance in applied metrics or structured human feedback processes, particularly when modeling complex societal constructs such as hateful content. 
However, it does not specify particular biases to note for pre-deployment measurement error models.",reasoning,"[{'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 41, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': ""D:20240805141702-04'00'"", 'modDate': ""D:20240805143048-04'00'"", 'trapped': ''}]",True "Which automated systems affect equal opportunities in edu, housing, & jobs?","[' \n \n \n \n \n \n \n \n \nAPPENDIX\nExamples of Automated Systems \nThe below examples are meant to illustrate the breadth of automated systems that, insofar as they have the \npotential to meaningfully impact rights, opportunities, or access to critical resources or services, should \nbe covered by the Blueprint for an AI Bill of Rights. These examples should not be construed to limit that \nscope, which includes automated systems that may not yet exist, but which fall under these criteria. \nExamples of automated systems for which the Blueprint for an AI Bill of Rights should be considered include \nthose that have the potential to meaningfully impact: \n• Civil rights, civil liberties, or privacy, including but not limited to:\nSpeech-related systems such as automated content moderation tools; \nSurveillance and criminal justice system algorithms such as risk assessments, predictive \n policing, automated license plate readers, real-time facial recognition systems (especially \n those used in public places or during protected activities like peaceful protests), social media \n monitoring, and ankle monitoring devices; \nVoting-related systems such as signature matching tools; \nSystems with a potential privacy impact such as smart home systems and associated data, \n systems that use or collect health-related data, systems that use or collect education-related \n data, criminal justice system data, ad-targeting systems, and systems that perform big data \n analytics in order to build profiles or infer personal information about individuals; and \nAny system that has the meaningful potential to lead to algorithmic discrimination. \n• Equal opportunities, including but not limited to:\nEducation-related systems such as algorithms that purport to detect student cheating or \n plagiarism, admissions algorithms, online or virtual reality student monitoring systems, \nprojections of student progress or outcomes, algorithms that determine access to resources or \n rograms, and surveillance of classes (whether online or in-person); \nHousing-related systems such as tenant screening algorithms, automated valuation systems that \n estimate the value of homes used in mortgage underwriting or home insurance, and automated \n valuations from online aggregator websites; and \nEmployment-related systems such as workplace algorithms that inform all aspects of the terms \n and conditions of employment including, but not limited to, pay or promotion, hiring or termina- \n tion algorithms, virtual or augmented reality workplace training programs, and electronic work \nplace surveillance and management systems. 
\n• Access to critical resources and services, including but not limited to:\nHealth and health insurance technologies such as medical AI systems and devices, AI-assisted \n diagnostic tools, algorithms or predictive models used to support clinical decision making, medical \n or insurance health risk assessments, drug addiction risk assessments and associated access alg \n-orithms, wearable technologies, wellness apps, insurance care allocation algorithms, and health\ninsurance cost and underwriting algorithms;\nFinancial system algorithms such as loan allocation algorithms, financial system access determi-\nnation algorithms, credit scoring systems, insurance algorithms including risk assessments, auto\n-mated interest rate determinations, and financial algorithms that apply penalties (e.g., that can\ngarnish wages or withhold tax returns);\n53\n']","Automated systems that affect equal opportunities in education include algorithms that detect student cheating or plagiarism, admissions algorithms, online or virtual reality student monitoring systems, projections of student progress or outcomes, algorithms that determine access to resources or programs, and surveillance of classes. In housing, tenant screening algorithms, automated valuation systems for mortgage underwriting or home insurance, and automated valuations from online aggregator websites are relevant. For employment, workplace algorithms that inform terms and conditions of employment, hiring or termination algorithms, virtual or augmented reality workplace training programs, and electronic workplace surveillance and management systems are included.",reasoning,"[{'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 52, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': ''}]",True