{"questions": {"61c5d3ec-11af-4a36-a028-e9e22afb5a8f": "What are the five principles outlined in the Blueprint for an AI Bill of Rights?", "662911a9-7407-4a24-95ff-2350dde354be": "How can communities and industry implement the practices suggested in the Blueprint for an AI Bill of Rights?", "09ded63e-b364-42e7-9677-e1dfa4932b9b": "What are the best practices for providing independent evaluators access to automated systems while ensuring privacy and security?", "6b539180-33d5-4cd2-abf5-63cbe6178e6a": "How can organizations ensure that evaluator access to automated systems remains truly independent and cannot be revoked without reasonable justification?", "f35c9f92-67ac-4772-9dab-6cf2ae32812f": "What are the legal requirements for providing notice when making a video recording of someone?", "d0edffef-580d-4890-a6dd-e08925fadd27": "How are companies and researchers improving automated systems to explain decisions that impact consumers?", "4a36f5cd-0f9d-42ba-bd8e-d0eaf0af2d52": "How do advertisement delivery systems reinforce racial and gender stereotypes?", "1c6de01d-b59d-4421-9339-0e501b4fd2b9": "What are the issues faced by transgender travelers with TSA body scanners at airport checkpoints?", "155db437-082c-44f4-8751-960146c3512c": "What are the five principles outlined in the Blueprint for an AI Bill of Rights?", "95cae333-a114-41e8-98f5-10619377f6bf": "How can organizations apply the Blueprint for an AI Bill of Rights to protect civil rights and privacy?", "077e8ee5-5768-4967-b8ed-891c6cc0085d": "What are the benefits of having a human fallback mechanism in automated systems?", "8edf6c51-407d-478c-832a-ef103ea3709e": "How do automated signature matching systems impact voters with mental or physical disabilities?", "7058b177-27f4-4d6b-a478-176ead46f325": "What are the best practices for documenting the sources and types of training data in AI models?", "1e48abdd-a664-4c7a-8f19-151ca61e5006": "How can user feedback be effectively integrated into system updates to address problematic content?", "e5aba341-abc2-4965-a224-fa10823f4d2f": "What is the two-part test used in the AI Bill of Rights framework to determine which systems are in scope?", "23c3711f-c55b-49e5-9936-22d6bfc010af": "How does the AI Bill of Rights framework ensure that automated systems do not negatively impact the American public's rights and access to critical resources?", "08a12dd0-5dd7-4f87-8913-d86a9cc2c8b7": "What are adversarial role-playing exercises and how do they help in identifying failure modes in GAI systems?", "7d2b3bbe-6d0b-470d-b85d-d0c636ac4354": "How can profiling threats and negative impacts improve the security of GAI systems?", "c385b92d-1c01-48ae-be4c-f6b42b5e6af6": "What are the potential negative impacts of school surveillance on students via laptops?", "b4286477-40f0-46b8-bba8-4fe204b0dafa": "How does \"Bossware\" affect the health of employees according to the Center for Democracy & Technology report?", "6c98dd15-2a73-4c66-8a6a-c578c67a2434": "How can employers ensure their use of AI in hiring complies with the Americans with Disabilities Act (ADA)?", "00ab3a02-dffb-482b-ad10-3cab6ad77520": "What are the potential risks of using healthcare algorithms that rely on past medical costs to predict future needs?", "510ed741-6a36-4d13-a7dc-6a42262136be": "What are some effective context-based measures to identify new impacts of GAI systems?", "8f74dbe1-c3ed-48ca-9635-d701d26e829a": "How can regular engagements with AI Actors help in evaluating unanticipated impacts of GAI systems?", 
"3809c393-b89e-494c-b529-c65e601c1544": "What are acceptable use policies for GAI interfaces and how do they determine the types of queries GAI applications should refuse to respond to?", "edb9b7b1-11c1-421c-a07f-7abe3d6e7c21": "How can organizations establish effective user feedback mechanisms for GAI systems, and what should these mechanisms include?", "5a48e740-85f0-48c7-b0c7-6247c384f052": "How often should adversarial testing be conducted to effectively map and measure GAI risks?", "d6567db0-b18c-4dcb-b80c-146f2047bc13": "What are the benefits of evaluating GAI system performance in real-world scenarios compared to controlled testing environments?", "59a37c01-7bac-4f9d-980f-48f5489e61e6": "What are the common statistics reported about who chooses the human alternative in automated systems?", "e24a71f0-8b86-461a-92bd-fa6cef7ca33b": "How often should reports on the accessibility, timeliness, and effectiveness of human consideration and fallback be made public?", "63dcc302-d64d-47f5-a304-a64d4d6642b4": "What are some examples of companies that have successfully implemented bias testing in their product quality assessment?", "9b9b4805-12cb-453d-a3f4-ddbb20679c39": "How are federal government agencies developing standards to prevent algorithmic discrimination?", "7b3d457a-d0bf-4b13-b59c-df184af98f08": "What are some common protections against unlawful surveillance and violations of privacy in both public and private sectors?", "9473baea-32cd-4147-a547-5d45b0daa757": "How can individuals ensure equitable access to education, housing, and employment opportunities?", "d4388801-831e-45e0-bf67-b67974027277": "What are the key principles outlined in the AI Bill of Rights?", "d4107956-2806-4098-a79e-e753cab1bf82": "How can the AI Bill of Rights be practically implemented in technical systems?", "829774bb-4770-46cf-9f1b-86f51e7b6679": "How can you ensure the data used in automated systems is of high quality and relevant to the task?", "79c355b3-3945-402d-9d15-e460689ba635": "What methods can be employed to measure and limit errors from data entry in automated systems?", "e558dbd7-ca81-4070-9777-49636694d674": "What are some reasons why certain risks cannot be measured quantitatively in AI systems?", "e1ce22f6-cad0-4bbe-87ae-5222158a4393": "How can organizations involve independent assessors and domain experts in the regular assessment of AI systems?", "ae84398b-1649-4cce-8fa2-6295c80f7ec9": "What are the risks associated with confabulated content in healthcare applications using GAI?", "648a7032-05c8-45c2-a7bb-2dca8fa9ffd0": "How can confabulated logic or citations from GAI systems mislead users?", "2b743770-5d66-4aa8-b9b4-c33adc78c1e3": "How can companies ethically use data to monitor employee performance without violating privacy?", "3b6f61ff-349d-4817-8c82-d064b9a71c86": "What are the legal implications of employers using surveillance data to intervene in employee discussions?", "f596fded-c16b-49cb-b400-734c65b185af": "What are the risks of using AI in high-stakes settings as highlighted by Pamela Wisniewski and Seny Kamara?", "1626655d-7f72-4d0a-9170-3abdc8ed86ec": "Why is it important to place trust in people rather than technologies when designing AI systems?", "cec6f35c-1b45-4d56-8c2f-aef7bc860a01": "How can organizations ensure that their demographic assessments are inclusive of all protected classifications?", "12aca964-2112-4b36-8a40-14ab1512ac75": "What are the best practices for separating demographic data used for disparity assessment from data used in automated systems?", 
"53a48063-f4fb-482f-bd70-36915ec63956": "What are some emerging technologies being used to improve social welfare systems?", "7fdbbfed-73aa-45a8-9f1c-58ec2c0f3912": "How can digital welfare systems impact life chances according to experts like Christiaan van Veen?", "0ed0fb9c-47c4-4c7c-a5ae-d7e3a35670a1": "What are some best practices for developers to ensure privacy by design in smartphone apps?", "88297ffa-b5ca-460c-81ed-a61975ab39ef": "How can developers make app permissions clear and use-specific for users?", "38409d77-4936-4266-a7f3-2d910d3bea91": "What are the privacy implications of using biometric identification technologies in New York schools?", "3d2d3a9e-a6a7-49f5-bdd8-5db95fc8b602": "What are the reporting requirements for employers who surveil employees during a labor dispute?", "ca685f83-ccd7-4a17-a31d-bfc648b58840": "What measures are included in the AI Bill of Rights to ensure automated systems are safe and effective?", "ce1fdffd-851d-463e-8f24-4596865b62dc": "How does the AI Bill of Rights propose to handle the risks and potential impacts of automated systems?", "1a82989c-3ead-4aea-9098-53d3dca7f9b7": "What are the potential downstream impacts of errors in third-party GAI components on system accuracy and robustness?", "a30ea710-3349-4357-8dcb-915f6c69f2da": "How can inaccuracies in test dataset labels affect the stability and robustness of GAI benchmarks?", "004b52ee-6a49-47d7-a4bd-77ec96fadc31": "What are the best practices for developing and updating GAI system incident response and recovery plans?", "a5ad1cc1-318a-4210-8838-22015d780344": "How can organizations ensure their response and recovery plans account for the entire GAI system value chain?", "f05e4729-18f1-4664-9f41-2ad997f9d726": "How can we assess the proportion of synthetic to non-synthetic training data in AI models?", "81c90ac3-caf0-4c9d-8e02-8c62d26a047e": "What are the best practices for documenting the environmental impacts of AI model development and deployment?", "0abf12fc-3e73-41e5-8594-5e2bb6ecdb24": "What are the primary considerations for organizations designing and developing GAI according to the GAI PWG consultation process?", "e3abf868-922a-42e7-8c5a-b1ff0a353d39": "How can governance principles and techniques be applied to manage risks in GAI systems?", "55c79cd5-dee3-4e43-b8a3-839028518379": "What are the key considerations for documenting the intended purposes and beneficial uses of an AI system?", "456333eb-689e-4896-b2d4-0cf136672c77": "How do internal vs external use and narrow vs broad application scope impact the identification of intended purposes for AI systems?", "8834b86c-b1b9-43d6-92e0-3c64ca09e854": "How can feedback from internal and external AI actors be used to assess the impact of AI-generated content?", "e84f1a90-e702-4594-84b8-5c5b67352195": "What are the benefits of using real-time auditing tools for tracking and validating the lineage and authenticity of AI-generated data?", "490b6ca7-059f-41fe-82ae-b8d2c3890cf1": "What are the main findings of Carlini et al (2024) regarding the vulnerabilities in production language models?", "59bed72b-bd80-47c3-bb57-08dd086ecf9d": "How does the study by Chandra et al (2023) propose to combat Chinese influence operations and disinformation?", "625e3e66-e1fc-4223-a201-e88b765f449e": "What is the role of the Electronic Privacy Information Center (EPIC) in AI policy and regulation?", "da4a10c9-db2a-45fa-bad5-b66ef842c023": "How does the Innocence Project utilize AI to support its mission?", "40ab1b55-bc53-4cae-8f7e-4657a5b2bdc2": "What is 
the role of the National Center for Missing & Exploited Children?", "46de7819-7250-4050-8bf9-4635a1a02f3e": "How does the New York Civil Liberties Union contribute to civil rights advocacy?", "6feae899-9900-454f-a64d-39e842af8c76": "How can AI tools be misused in the development of chemical or biological agents?", "36826afc-57e4-4d70-bc7e-4ca62e3e3e67": "What are the potential risks associated with the use of biological design tools (BDTs) in chemistry and biology?", "84440495-e768-4885-b78b-d8a0c17f3809": "How can expert AI red-teamers enhance the effectiveness of general public AI red-teamers?", "9c3a8107-d49c-4dc0-9f78-d71a506df892": "What are the benefits of using GAI-led red-teaming compared to human red-teamers alone?", "068d8bd2-9336-4e18-bd93-2199100e631f": "How can error ranges be calculated and included in explanations for decision-making systems?", "3138ca26-38b8-4e17-9b31-b38bc8a8eb4f": "What are the best practices for balancing usability and interface complexity when presenting decision-making information?", "095919bc-18fa-4316-b1e8-07572983b77b": "What are the potential benefits and drawbacks of using predictive policing in the criminal justice system?", "39406f17-a757-4201-91b5-284ba4ebbd39": "How can data-driven approaches be balanced with the need for community safety in criminal justice reform?", "2744b9cf-981d-42e5-aed3-bb8e5acb0b2e": "What are the reporting expectations for entities developing or using automated systems?", "798b53f4-f798-418a-abcd-6dd05f707c67": "How can the public access the Agency Inventories of AI Use Cases provided by the National Artificial Intelligence Initiative Office?", "d7fa2d65-26f8-4442-86f6-f1d6256e588a": "What are some effective methods for monitoring and assessing high-impact systems in qualitative user experience research?", "e50c31b3-bab1-4064-baa1-199c946d9789": "How can organizations ensure equity standards are maintained in algorithmic systems, and what steps should be taken if these standards are not met?", "644dcaa5-1731-43fe-b0f5-c6a4bc05564e": "What factors should be considered when updating or defining risk tiers for General Artificial Intelligence (GAI)?", "def43eb9-80b0-4ad2-9198-d84ecb89c720": "How can the psychological impacts of GAI, such as anthropomorphization and emotional entanglement, be mitigated?", "8495a23f-4bb7-47ac-8c54-58cf5675cdd7": "What are the best practices for establishing policies to manage risks related to rollover and fallback technologies in GAI systems?", "74ae51e9-63b3-48ce-9be7-4f88052d7bd6": "How can organizations ensure clear assignment of liability and responsibility in vendor contracts for GAI technologies?", "11cdd3ed-e09b-463d-9853-0be811073b75": "What are the best practices for ensuring the confidentiality of AI training data and model weights?", "9ca1ff0e-0cd9-4362-aca9-fd904077c845": "How can potential attack points in AI systems be identified and secured?", "8fe0054d-51ba-48c5-8cc5-259b2b96f535": "How can AI-powered cameras in delivery vans be improved to avoid incorrectly penalizing drivers?", "03b9f17b-0b61-401b-bc65-47d0655f31d8": "What are the common issues faced by companies using AI to monitor road safety habits of drivers?", "6d622041-fccf-4eb4-9a53-f7d7577856f8": "What are the differences in resource usage between AI training and inference?", "a1738003-3e17-48e7-86a2-1410bc0f1c07": "How can we verify the effectiveness of carbon capture programs for AI training?", "d15e0c10-378f-48a3-9a5c-be0c618106b4": "What protocols should be in place to ensure the safe deactivation of AI 
systems?", "7e7e2c28-ea80-4568-a71a-41966f9f117f": "What factors need to be considered when decommissioning AI systems to prevent data leakage and ensure security?", "57073541-fc8c-43cd-8b42-f9497eb501af": "What are the best practices for limiting access to sensitive data based on necessity and local control?", "92d9e36d-0fef-4b2e-b40d-ff2b800fcf10": "How should organizations report data security lapses or breaches involving sensitive data?", "6f7aa060-c19a-4614-83d2-134828a7e956": "What is the purpose of the email address ai-equity@ostpeopgov created by OSTP?", "6b95bc28-dbb4-408f-8c5b-f5b37073b6fd": "Where can I find the full responses to the OSTP's Request For Information (RFI) on biometric technologies?", "4776eaa1-b6f0-440c-a6be-923bbf49687d": "What are the practical steps to implement ethical principles in technology?", "acf74d86-1184-4092-8a1d-3ca58f5fe97a": "How can risk management be integrated into technological innovation to protect people from harm?", "2c1b02c6-1919-49ea-beff-165567d20b47": "What are the key capabilities needed for automated systems to help users make consent, access, and control decisions in a complex data ecosystem?", "2d15dfed-c66d-4fac-89dd-3aded02ec63e": "How can independent evaluations of data policies help ensure data privacy and user control in automated systems?", "e71beb7c-7564-4f2c-83f7-ec9bb3134847": "How can the rate of implementing recommendations from security checks and incidents be measured effectively?", "6c079fa0-60c3-4c8d-826a-2816c65d3ea0": "What are the best practices for performing AI red-teaming to assess resilience against various types of attacks?", "543e9bfb-b5f4-4247-89c8-41e0e7fb11a9": "What are the legal and regulatory requirements for reporting GAI incidents under HIPAA?", "8613b055-c817-4a59-84cf-1ae29a7c2269": "How does the NHTSA's 2022 autonomous vehicle crash reporting requirements impact AI deployment and monitoring?", "ce252388-c4d9-4968-aadf-218b47f609a5": "How do you document the justification for each data attribute in an automated system?", "f46088d7-1004-41cb-87c5-8a2b0bcdef59": "What are the best practices for ensuring that the use of high-dimensional data attributes does not violate applicable laws?", "b5f49997-5049-4865-9b5b-c18d880e2baf": "How can organizations adjust their governance regimes to effectively manage the risks associated with generative AI systems?", "eeb5acfd-3be2-4488-b45e-e0979bd5c855": "What are the key considerations for third-party governance across the AI value chain when dealing with generative AI?"}, "relevant_contexts": {"61c5d3ec-11af-4a36-a028-e9e22afb5a8f": ["80e81c8c-bb97-4604-bdef-dcc56813587a"], "662911a9-7407-4a24-95ff-2350dde354be": ["80e81c8c-bb97-4604-bdef-dcc56813587a"], "09ded63e-b364-42e7-9677-e1dfa4932b9b": ["d0a6097e-42c8-499f-8d6d-bcfae7f992d5"], "6b539180-33d5-4cd2-abf5-63cbe6178e6a": ["d0a6097e-42c8-499f-8d6d-bcfae7f992d5"], "f35c9f92-67ac-4772-9dab-6cf2ae32812f": ["51421b31-1a41-49da-a2c2-65df54ae93ce"], "d0edffef-580d-4890-a6dd-e08925fadd27": ["51421b31-1a41-49da-a2c2-65df54ae93ce"], "4a36f5cd-0f9d-42ba-bd8e-d0eaf0af2d52": ["758f783b-3fdc-4890-9de4-da3c035c1141"], "1c6de01d-b59d-4421-9339-0e501b4fd2b9": ["758f783b-3fdc-4890-9de4-da3c035c1141"], "155db437-082c-44f4-8751-960146c3512c": ["96838aa0-1bf7-4ae3-a8d7-5d093e9feb39"], "95cae333-a114-41e8-98f5-10619377f6bf": ["96838aa0-1bf7-4ae3-a8d7-5d093e9feb39"], "077e8ee5-5768-4967-b8ed-891c6cc0085d": ["66c96cba-2674-4734-a869-d002faab751c"], "8edf6c51-407d-478c-832a-ef103ea3709e": ["66c96cba-2674-4734-a869-d002faab751c"], 
"7058b177-27f4-4d6b-a478-176ead46f325": ["2689bb50-4ffd-4610-856c-c8fad4ab7285"], "1e48abdd-a664-4c7a-8f19-151ca61e5006": ["2689bb50-4ffd-4610-856c-c8fad4ab7285"], "e5aba341-abc2-4965-a224-fa10823f4d2f": ["7515dd00-b05d-49ea-baa0-7cedeb05eb39"], "23c3711f-c55b-49e5-9936-22d6bfc010af": ["7515dd00-b05d-49ea-baa0-7cedeb05eb39"], "08a12dd0-5dd7-4f87-8913-d86a9cc2c8b7": ["f339987a-b2cd-4258-85c5-a864712a9e98"], "7d2b3bbe-6d0b-470d-b85d-d0c636ac4354": ["f339987a-b2cd-4258-85c5-a864712a9e98"], "c385b92d-1c01-48ae-be4c-f6b42b5e6af6": ["673465c5-faf7-4ab1-86e0-d7cc5751143d"], "b4286477-40f0-46b8-bba8-4fe204b0dafa": ["673465c5-faf7-4ab1-86e0-d7cc5751143d"], "6c98dd15-2a73-4c66-8a6a-c578c67a2434": ["3df80c8e-fd5b-436c-9411-42e36faeeaef"], "00ab3a02-dffb-482b-ad10-3cab6ad77520": ["3df80c8e-fd5b-436c-9411-42e36faeeaef"], "510ed741-6a36-4d13-a7dc-6a42262136be": ["225534bb-e40d-42be-9258-309083656512"], "8f74dbe1-c3ed-48ca-9635-d701d26e829a": ["225534bb-e40d-42be-9258-309083656512"], "3809c393-b89e-494c-b529-c65e601c1544": ["52b00ce1-0f48-46fb-9bdb-6c3ab575940b"], "edb9b7b1-11c1-421c-a07f-7abe3d6e7c21": ["52b00ce1-0f48-46fb-9bdb-6c3ab575940b"], "5a48e740-85f0-48c7-b0c7-6247c384f052": ["3604ee55-dc85-43ef-8409-908fe897aef7"], "d6567db0-b18c-4dcb-b80c-146f2047bc13": ["3604ee55-dc85-43ef-8409-908fe897aef7"], "59a37c01-7bac-4f9d-980f-48f5489e61e6": ["760e42ec-824f-4c12-98b7-856008ae5680"], "e24a71f0-8b86-461a-92bd-fa6cef7ca33b": ["760e42ec-824f-4c12-98b7-856008ae5680"], "63dcc302-d64d-47f5-a304-a64d4d6642b4": ["706f37a3-1ae3-462f-9ae9-f447c8386d34"], "9b9b4805-12cb-453d-a3f4-ddbb20679c39": ["706f37a3-1ae3-462f-9ae9-f447c8386d34"], "7b3d457a-d0bf-4b13-b59c-df184af98f08": ["0d9098f6-5346-47fb-b91d-0a76054887ac"], "9473baea-32cd-4147-a547-5d45b0daa757": ["0d9098f6-5346-47fb-b91d-0a76054887ac"], "d4388801-831e-45e0-bf67-b67974027277": ["a5324dcc-7f7d-4d13-a7b4-c61a11b3471b"], "d4107956-2806-4098-a79e-e753cab1bf82": ["a5324dcc-7f7d-4d13-a7b4-c61a11b3471b"], "829774bb-4770-46cf-9f1b-86f51e7b6679": ["2c82cba7-cefa-41fd-a6d5-c90edb9b59f9"], "79c355b3-3945-402d-9d15-e460689ba635": ["2c82cba7-cefa-41fd-a6d5-c90edb9b59f9"], "e558dbd7-ca81-4070-9777-49636694d674": ["ab16f609-33d2-4f10-9b50-ff0066dc6a13"], "e1ce22f6-cad0-4bbe-87ae-5222158a4393": ["ab16f609-33d2-4f10-9b50-ff0066dc6a13"], "ae84398b-1649-4cce-8fa2-6295c80f7ec9": ["ff613344-c661-48a5-af0c-950d87f38882"], "648a7032-05c8-45c2-a7bb-2dca8fa9ffd0": ["ff613344-c661-48a5-af0c-950d87f38882"], "2b743770-5d66-4aa8-b9b4-c33adc78c1e3": ["ff7088b4-e4f7-4ef1-89b6-2293bc428ded"], "3b6f61ff-349d-4817-8c82-d064b9a71c86": ["ff7088b4-e4f7-4ef1-89b6-2293bc428ded"], "f596fded-c16b-49cb-b400-734c65b185af": ["c7bdee72-9ac2-418f-ac50-b41a38e31eb7"], "1626655d-7f72-4d0a-9170-3abdc8ed86ec": ["c7bdee72-9ac2-418f-ac50-b41a38e31eb7"], "cec6f35c-1b45-4d56-8c2f-aef7bc860a01": ["689778c9-90f6-4c4a-ab36-6fb05ad68144"], "12aca964-2112-4b36-8a40-14ab1512ac75": ["689778c9-90f6-4c4a-ab36-6fb05ad68144"], "53a48063-f4fb-482f-bd70-36915ec63956": ["2f4d5ac1-d6b0-48df-a313-39f40766a20c"], "7fdbbfed-73aa-45a8-9f1c-58ec2c0f3912": ["2f4d5ac1-d6b0-48df-a313-39f40766a20c"], "0ed0fb9c-47c4-4c7c-a5ae-d7e3a35670a1": ["473f218e-e471-4506-a9ba-a4840bcf9eb1"], "88297ffa-b5ca-460c-81ed-a61975ab39ef": ["473f218e-e471-4506-a9ba-a4840bcf9eb1"], "38409d77-4936-4266-a7f3-2d910d3bea91": ["3d2d1cf5-ddbb-40dc-a570-3f55f091e095"], "3d2d3a9e-a6a7-49f5-bdd8-5db95fc8b602": ["3d2d1cf5-ddbb-40dc-a570-3f55f091e095"], "ca685f83-ccd7-4a17-a31d-bfc648b58840": ["fcbeb8b3-4cff-4248-b03e-fc6879248660"], 
"ce1fdffd-851d-463e-8f24-4596865b62dc": ["fcbeb8b3-4cff-4248-b03e-fc6879248660"], "1a82989c-3ead-4aea-9098-53d3dca7f9b7": ["5ff1ba24-2f90-4f45-a3a3-6e1c50395575"], "a30ea710-3349-4357-8dcb-915f6c69f2da": ["5ff1ba24-2f90-4f45-a3a3-6e1c50395575"], "004b52ee-6a49-47d7-a4bd-77ec96fadc31": ["62a002de-0d3c-44dd-a41c-3fd464e4087a"], "a5ad1cc1-318a-4210-8838-22015d780344": ["62a002de-0d3c-44dd-a41c-3fd464e4087a"], "f05e4729-18f1-4664-9f41-2ad997f9d726": ["7a809df5-be14-43b9-9219-bb0b8d1f7d2c"], "81c90ac3-caf0-4c9d-8e02-8c62d26a047e": ["7a809df5-be14-43b9-9219-bb0b8d1f7d2c"], "0abf12fc-3e73-41e5-8594-5e2bb6ecdb24": ["1b4ea0b8-2883-4f20-8b10-198e6ad55155"], "e3abf868-922a-42e7-8c5a-b1ff0a353d39": ["1b4ea0b8-2883-4f20-8b10-198e6ad55155"], "55c79cd5-dee3-4e43-b8a3-839028518379": ["5d49e42f-479a-415f-8de0-91ebbd0e77df"], "456333eb-689e-4896-b2d4-0cf136672c77": ["5d49e42f-479a-415f-8de0-91ebbd0e77df"], "8834b86c-b1b9-43d6-92e0-3c64ca09e854": ["d8dc77d4-d7bc-40c8-bb38-e6f96f77391c"], "e84f1a90-e702-4594-84b8-5c5b67352195": ["d8dc77d4-d7bc-40c8-bb38-e6f96f77391c"], "490b6ca7-059f-41fe-82ae-b8d2c3890cf1": ["c3a79cf4-99fe-41a5-94a9-9972c547b027"], "59bed72b-bd80-47c3-bb57-08dd086ecf9d": ["c3a79cf4-99fe-41a5-94a9-9972c547b027"], "625e3e66-e1fc-4223-a201-e88b765f449e": ["ecf9714c-7e5b-4f00-9fad-45441a3db2a8"], "da4a10c9-db2a-45fa-bad5-b66ef842c023": ["ecf9714c-7e5b-4f00-9fad-45441a3db2a8"], "40ab1b55-bc53-4cae-8f7e-4657a5b2bdc2": ["e8c07b22-d96c-4cfc-be67-00e326b77e19"], "46de7819-7250-4050-8bf9-4635a1a02f3e": ["e8c07b22-d96c-4cfc-be67-00e326b77e19"], "6feae899-9900-454f-a64d-39e842af8c76": ["1787e4ab-ddaa-436b-a84c-5b09e0444b2b"], "36826afc-57e4-4d70-bc7e-4ca62e3e3e67": ["1787e4ab-ddaa-436b-a84c-5b09e0444b2b"], "84440495-e768-4885-b78b-d8a0c17f3809": ["963066ad-85cd-44d7-a513-c5fc3b5f1733"], "9c3a8107-d49c-4dc0-9f78-d71a506df892": ["963066ad-85cd-44d7-a513-c5fc3b5f1733"], "068d8bd2-9336-4e18-bd93-2199100e631f": ["5ad44c84-503d-4b61-95dc-22017c580f31"], "3138ca26-38b8-4e17-9b31-b38bc8a8eb4f": ["5ad44c84-503d-4b61-95dc-22017c580f31"], "095919bc-18fa-4316-b1e8-07572983b77b": ["ac5d591f-9174-44b6-be57-08f8b0e48100"], "39406f17-a757-4201-91b5-284ba4ebbd39": ["ac5d591f-9174-44b6-be57-08f8b0e48100"], "2744b9cf-981d-42e5-aed3-bb8e5acb0b2e": ["d41067f5-b199-46fa-95e6-571e133d23ff"], "798b53f4-f798-418a-abcd-6dd05f707c67": ["d41067f5-b199-46fa-95e6-571e133d23ff"], "d7fa2d65-26f8-4442-86f6-f1d6256e588a": ["c100cd93-2611-4d50-a99b-8728ccb99ba1"], "e50c31b3-bab1-4064-baa1-199c946d9789": ["c100cd93-2611-4d50-a99b-8728ccb99ba1"], "644dcaa5-1731-43fe-b0f5-c6a4bc05564e": ["0b2a13ab-790a-4e74-97a6-dbd3f2f3834d"], "def43eb9-80b0-4ad2-9198-d84ecb89c720": ["0b2a13ab-790a-4e74-97a6-dbd3f2f3834d"], "8495a23f-4bb7-47ac-8c54-58cf5675cdd7": ["c65eb4b9-10bb-4fcf-b682-fca84d3f37a1"], "74ae51e9-63b3-48ce-9be7-4f88052d7bd6": ["c65eb4b9-10bb-4fcf-b682-fca84d3f37a1"], "11cdd3ed-e09b-463d-9853-0be811073b75": ["2ac15af5-0f67-4ab6-803a-169153471fbe"], "9ca1ff0e-0cd9-4362-aca9-fd904077c845": ["2ac15af5-0f67-4ab6-803a-169153471fbe"], "8fe0054d-51ba-48c5-8cc5-259b2b96f535": ["c3a647af-08ee-42b7-87a6-57644e59b9eb"], "03b9f17b-0b61-401b-bc65-47d0655f31d8": ["c3a647af-08ee-42b7-87a6-57644e59b9eb"], "6d622041-fccf-4eb4-9a53-f7d7577856f8": ["9aa5eff7-f727-421e-835d-3def1111689a"], "a1738003-3e17-48e7-86a2-1410bc0f1c07": ["9aa5eff7-f727-421e-835d-3def1111689a"], "d15e0c10-378f-48a3-9a5c-be0c618106b4": ["ecb13fde-537f-49b6-82bd-ad0e6de18a8c"], "7e7e2c28-ea80-4568-a71a-41966f9f117f": ["ecb13fde-537f-49b6-82bd-ad0e6de18a8c"], 
"57073541-fc8c-43cd-8b42-f9497eb501af": ["8f297398-44b9-4be9-bbfb-ff90fef13d5f"], "92d9e36d-0fef-4b2e-b40d-ff2b800fcf10": ["8f297398-44b9-4be9-bbfb-ff90fef13d5f"], "6f7aa060-c19a-4614-83d2-134828a7e956": ["04e3f601-a4a2-4cc0-9978-8595281b3c94"], "6b95bc28-dbb4-408f-8c5b-f5b37073b6fd": ["04e3f601-a4a2-4cc0-9978-8595281b3c94"], "4776eaa1-b6f0-440c-a6be-923bbf49687d": ["6690225c-fbc4-4316-bef9-9cf1d5e5957c"], "acf74d86-1184-4092-8a1d-3ca58f5fe97a": ["6690225c-fbc4-4316-bef9-9cf1d5e5957c"], "2c1b02c6-1919-49ea-beff-165567d20b47": ["73043a09-91db-4768-9c0b-702c2dfcd9f0"], "2d15dfed-c66d-4fac-89dd-3aded02ec63e": ["73043a09-91db-4768-9c0b-702c2dfcd9f0"], "e71beb7c-7564-4f2c-83f7-ec9bb3134847": ["2cfdb40f-4c06-45c7-ab73-2bcc65986c58"], "6c079fa0-60c3-4c8d-826a-2816c65d3ea0": ["2cfdb40f-4c06-45c7-ab73-2bcc65986c58"], "543e9bfb-b5f4-4247-89c8-41e0e7fb11a9": ["65cc819a-a0c3-4ffa-b6f0-e47f846de5a5"], "8613b055-c817-4a59-84cf-1ae29a7c2269": ["65cc819a-a0c3-4ffa-b6f0-e47f846de5a5"], "ce252388-c4d9-4968-aadf-218b47f609a5": ["f258f74e-4463-4558-a8be-88fcc9da5b5a"], "f46088d7-1004-41cb-87c5-8a2b0bcdef59": ["f258f74e-4463-4558-a8be-88fcc9da5b5a"], "b5f49997-5049-4865-9b5b-c18d880e2baf": ["16d54bad-34c2-4427-a979-eb6a860bc22e"], "eeb5acfd-3be2-4488-b45e-e0979bd5c855": ["16d54bad-34c2-4427-a979-eb6a860bc22e"]}, "corpus": {"80e81c8c-bb97-4604-bdef-dcc56813587a": "- \nUSING THIS TECHNICAL COMPANION\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, \nuse, and deployment of automated systems to protect the rights of the American public in the age of artificial \nintelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and \nprovides examples and concrete steps for communities, industry, governments, and others to take in order to \nbuild these protections into policy, practice, or the technological design process. \nTaken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help \nguard the American public against many of the potential and actual harms identified by researchers, technolo\u00ad\ngists, advocates, journalists, policymakers, and communities in the United States and around the world. This", "d0a6097e-42c8-499f-8d6d-bcfae7f992d5": "via application programming interfaces). Independent evaluators, such as researchers, journalists, ethics \nreview boards, inspectors general, and third-party auditors, should be given access to the system and samples \nof associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual \nproperty law), in order to perform such evaluations. Mechanisms should be included to ensure that system \naccess for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to \nprovide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot \nbe revoked without reasonable and verified justification. \nReporting.12 Entities responsible for the development or use of automated systems should provide \nregularly-updated reports that include: an overview of the system, including how it is embedded in the", "51421b31-1a41-49da-a2c2-65df54ae93ce": "requirement. \nProviding notice has long been a standard practice, and in many cases is a legal requirement, when, for example, \nmaking a video recording of someone (outside of a law enforcement or national security context). 
In some cases, such \nas credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the \nprocess of explaining such systems are under active research and improvement and such explanations can take many \nforms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory \nsystems that can help the public better understand decisions that impact them. \nWhile notice and explanation requirements are already in place in some sectors or situations, the American public \ndeserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights, \nopportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the", "758f783b-3fdc-4890-9de4-da3c035c1141": "than role models, toys, or activities.40 Some search engines have been working to reduce the prevalence of\nthese results, but the problem remains.41\n\u2022\nAdvertisement delivery systems that predict who is most likely to click on a job advertisement end up deliv-\nering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermar-\nket cashier ads to women and jobs with taxi companies to primarily Black people.42\u00ad\n\u2022\nBody scanners, used by TSA at airport checkpoints, require the operator to select a \u201cmale\u201d or \u201cfemale\u201d\nscanning setting based on the passenger\u2019s sex, but the setting is chosen based on the operator\u2019s perception of\nthe passenger\u2019s gender identity. These scanners are more likely to flag transgender travelers as requiring\nextra screening done by a person. Transgender travelers have described degrading experiences associated\nwith these extra screenings.43 TSA has recently announced plans to implement a gender-neutral algorithm44", "96838aa0-1bf7-4ae3-a8d7-5d093e9feb39": "ABOUT THIS FRAMEWORK\u00ad\u00ad\u00ad\u00ad\u00ad\nThe Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the \ndesign, use, and deployment of automated systems to protect the rights of the American public in the age of \nartificial intel-ligence. Developed through extensive consultation with the American public, these principles are \na blueprint for building and deploying automated systems that are aligned with democratic values and protect \ncivil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights includes this Foreword, the five \nprinciples, notes on Applying the The Blueprint for an AI Bill of Rights, and a Technical Companion that gives \nconcrete steps that can be taken by many kinds of organizations\u2014from governments at all levels to companies of \nall sizes\u2014to uphold these values. Experts from across the private sector, governments, and international", "66c96cba-2674-4734-a869-d002faab751c": "in place, providing an important alternative to ensure access. Companies that have introduced automated call centers \noften retain the option of dialing zero to reach an operator. When automated identity controls are in place to board an \nairplane or enter the country, there is a person supervising the systems who can be turned to for help or to appeal a \nmisidentification. \nThe American people deserve the reassurance that such procedures are in place to protect their rights, opportunities, \nand access. 
People make mistakes, and a human alternative or fallback mechanism will not always have the right \nanswer, but they serve as an important check on the power and validity of automated systems. \n\u2022 An automated signature matching system is used as part of the voting process in many parts of the country to\ndetermine whether the signature on a mail-in ballot matches the signature on file. These signature matching\nsystems are less likely to work correctly for some voters, including voters with mental or physical", "2689bb50-4ffd-4610-856c-c8fad4ab7285": "data augmentations, parameter adjustments, or other modi\ufb01cations. Access to \nun-tuned (baseline) models supports debugging the relative in\ufb02uence of the pre-\ntrained weights compared to the \ufb01ne-tuned model weights or other system \nupdates. \nInformation Integrity; Data Privacy \nMG-3.2-003 \nDocument sources and types of training data and their origins, potential biases \npresent in the data related to the GAI application and its content provenance, \narchitecture, training process of the pre-trained model including information on \nhyperparameters, training duration, and any \ufb01ne-tuning or retrieval-augmented \ngeneration processes applied. \nInformation Integrity; Harmful Bias \nand Homogenization; Intellectual \nProperty \nMG-3.2-004 Evaluate user reported problematic content and integrate feedback into system \nupdates. \nHuman-AI Con\ufb01guration, \nDangerous, Violent, or Hateful \nContent \nMG-3.2-005 \nImplement content \ufb01lters to prevent the generation of inappropriate, harmful,", "7515dd00-b05d-49ea-baa0-7cedeb05eb39": "SECTION TITLE\nApplying The Blueprint for an AI Bill of Rights \nWhile many of the concerns addressed in this framework derive from the use of AI, the technical \ncapabilities and specific definitions of such systems change with the speed of innovation, and the potential \nharms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-\npart test to determine what systems are in scope. This framework applies to (1) automated systems that (2) \nhave the potential to meaningfully impact the American public\u2019s rights, opportunities, or access to \ncritical resources or services. These rights, opportunities, and access to critical resources of services should \nbe enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in \nour lives. \nThis framework describes protections that should be applied with respect to all automated systems that \nhave the potential to meaningfully impact individuals' or communities' exercise of:", "f339987a-b2cd-4258-85c5-a864712a9e98": "Content; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. \nInformation Security \nMP-5.1-006 \nPro\ufb01le threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, A\ufb00ected Individuals and Communities, End-\nUsers, Operation and Monitoring", "673465c5-faf7-4ab1-86e0-d7cc5751143d": "61. See, e.g., Nir Kshetri. School surveillance of students via laptops may do more harm than good. The\nConversation. Jan. 
21, 2022.\nhttps://theconversation.com/school-surveillance-of-students-via-laptops-may-do-more-harm-than\u00ad\ngood-170983; Matt Scherer. Warning: Bossware May be Hazardous to Your Health. Center for Democracy\n& Technology Report.\nhttps://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To\u00ad\nYour-Health-Final.pdf; Human Impact Partners and WWRC. The Public Health Crisis Hidden in Amazon\nWarehouses. HIP and WWRC report. Jan. 2021.\nhttps://humanimpact.org/wp-content/uploads/2021/01/The-Public-Health-Crisis-Hidden-In-Amazon\u00ad\nWarehouses-HIP-WWRC-01-21.pdf; Drew Harwell. Contract lawyers face a growing invasion of\nsurveillance programs that monitor their work. The Washington Post. Nov. 11, 2021. https://\nwww.washingtonpost.com/technology/2021/11/11/lawyer-facial-recognition-monitoring/;", "3df80c8e-fd5b-436c-9411-42e36faeeaef": "The Equal Employment Opportunity Commission and the Department of Justice have clearly \nlaid out how employers\u2019 use of AI and other automated systems can result in \ndiscrimination against job applicants and employees with disabilities.53 The documents explain \nhow employers\u2019 use of software that relies on algorithmic decision-making may violate existing requirements \nunder Title I of the Americans with Disabilities Act (\u201cADA\u201d). This technical assistance also provides practical \ntips to employers on how to comply with the ADA, and to job applicants and employees who think that their \nrights may have been violated. \nDisparity assessments identified harms to Black patients' healthcare access. A widely \nused healthcare algorithm relied on the cost of each patient\u2019s past medical care to predict future medical needs, \nrecommending early interventions for the patients deemed most at risk. This process discriminated", "225534bb-e40d-42be-9258-309083656512": "28 \nMAP 5.2: Practices and personnel for supporting regular engagement with relevant AI Actors and integrating feedback about \npositive, negative, and unanticipated impacts are in place and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.2-001 \nDetermine context-based measures to identify if new impacts are present due to \nthe GAI system, including regular engagements with downstream AI Actors to \nidentify and quantify new contexts of unanticipated impacts of GAI systems. \nHuman-AI Con\ufb01guration; Value \nChain and Component Integration \nMP-5.2-002 \nPlan regular engagements with AI Actors responsible for inputs to GAI systems, \nincluding third-party data and algorithms, to review and evaluate unanticipated \nimpacts. \nHuman-AI Con\ufb01guration; Value \nChain and Component Integration \nAI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, A\ufb00ected Individuals and Communities, Domain Experts, End-\nUsers, Human Factors, Operation and Monitoring", "52b00ce1-0f48-46fb-9bdb-6c3ab575940b": "and Homogenization \nGV-3.2-003 \nDe\ufb01ne acceptable use policies for GAI interfaces, modalities, and human-AI \ncon\ufb01gurations (i.e., for chatbots and decision-making tasks), including criteria for \nthe kinds of queries GAI applications should refuse to respond to. \nHuman-AI Con\ufb01guration \nGV-3.2-004 \nEstablish policies for user feedback mechanisms for GAI systems which include \nthorough instructions and any mechanisms for recourse. \nHuman-AI Con\ufb01guration \nGV-3.2-005 \nEngage in threat modeling to anticipate potential risks from GAI systems. 
\nCBRN Information or Capabilities; \nInformation Security \nAI Actors: AI Design \n \nGOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-\ufb01rst mindset in the design, \ndevelopment, deployment, and uses of AI systems to minimize potential negative impacts. \nAction ID \nSuggested Action \nGAI Risks \nGV-4.1-001 \nEstablish policies and procedures that address continual improvement processes", "3604ee55-dc85-43ef-8409-908fe897aef7": "MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are \ninformed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as \nintended. Results are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence to map and measure GAI risks, \nincluding tests to address attempts to deceive or manipulate the application of \nprovenance techniques or other misuses. Identify vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs. \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI system performance in real-world scenarios to observe its \nbehavior in practical environments and reveal issues that might not surface in \ncontrolled and optimized testing environments. \nHuman-AI Con\ufb01guration; \nConfabulation; Information \nSecurity \nMS-4.2-003", "760e42ec-824f-4c12-98b7-856008ae5680": "Demonstrate access to human alternatives, consideration, and fallback \nReporting. Reporting should include an assessment of timeliness and the extent of additional burden for \nhuman alternatives, aggregate statistics about who chooses the human alternative, along with the results of \nthe assessment about brevity, clarity, and accessibility of notice and opt-out instructions. Reporting on the \naccessibility, timeliness, and effectiveness of human consideration and fallback should be made public at regu\u00ad\nlar intervals for as long as the system is in use. This should include aggregated information about the number \nand type of requests for consideration, fallback employed, and any repeated requests; the timeliness of the \nhandling of these requests, including mean wait times for different types of requests as well as maximum wait \ntimes; and information about the procedures used to address requests for consideration along with the results", "706f37a3-1ae3-462f-9ae9-f447c8386d34": "protections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. 
\nBut there is much more work to do to protect the public from algorithmic discrimination to use and design", "0d9098f6-5346-47fb-b91d-0a76054887ac": "voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of \nprivacy and other freedoms in both public and private sector contexts; equal opportunities, including equitable \naccess to education, housing, credit, employment, and other programs; or, access to critical resources or \nservices, such as healthcare, financial services, safety, social services, non-deceptive information about goods \nand services, and government benefits. \n10", "a5324dcc-7f7d-4d13-a7b4-c61a11b3471b": "FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12", "2c82cba7-cefa-41fd-a6d5-c90edb9b59f9": "reuse \nRelevant and high-quality data. Data used as part of any automated system\u2019s creation, evaluation, or \ndeployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be \nestablished based on research-backed demonstration of the causal influence of the data to the specific use case \nor justified more generally based on a reasonable expectation of usefulness in the domain and/or for the \nsystem design or ongoing development. Relevance of data should not be established solely by appealing to \nits historical connection to the outcome. High quality and tailored data should be representative of the task at \nhand and errors from data entry or other sources should be measured and limited. Any data used as the target \nof a prediction process should receive particular attention to the quality and validity of the predicted outcome \nor label to ensure the goal of the automated system is appropriately identified and measured. Additionally,", "ab16f609-33d2-4f10-9b50-ff0066dc6a13": "measured quantitatively, including explanations as to why some risks cannot be \nmeasured (e.g., due to technological limitations, resource constraints, or \ntrustworthy considerations). Include unmeasured risks in marginal risks. \nInformation Integrity \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are \ninvolved in regular assessments and updates. Domain experts, users, AI Actors external to the team that developed or deployed the \nAI system, and a\ufb00ected communities are consulted in support of assessments as necessary per organizational risk tolerance. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.3-001 \nDe\ufb01ne relevant groups of interest (e.g., demographic groups, subject matter \nexperts, experience with GAI technology) within the context of use as part of \nplans for gathering structured public feedback. \nHuman-AI Con\ufb01guration; Harmful \nBias and Homogenization; CBRN", "ff613344-c661-48a5-af0c-950d87f38882": "it comes to open-ended prompts for long-form responses and in domains which require highly \ncontextual and/or domain expertise. \nRisks from confabulations may arise when users believe false content \u2013 often due to the con\ufb01dent nature \nof the response \u2013 leading users to act upon or promote the false information. This poses a challenge for \nmany real-world applications, such as in healthcare, where a confabulated summary of patient \ninformation reports could cause doctors to make incorrect diagnoses and/or recommend the wrong \ntreatments. 
Risks of confabulated content may be especially important to monitor when integrating GAI \ninto applications involving consequential decision making. \nGAI outputs may also include confabulated logic or citations that purport to justify or explain the \nsystem\u2019s answer, which may further mislead humans into inappropriately trusting the system\u2019s output. \nFor instance, LLMs sometimes provide logical steps for how they arrived at an answer even when the", "ff7088b4-e4f7-4ef1-89b6-2293bc428ded": "resulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32", "c7bdee72-9ac2-418f-ac50-b41a38e31eb7": "\u2022\nPamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director,\nSocio-technical Interaction Research (STIR) Lab\n\u2022\nSeny Kamara, Associate Professor of Computer Science, Brown University\nEach panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for \nbiased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and \nunderstanding of the algorithmic systems. The interventions and key needs various panelists put forward as \nnecessary to the future design of critical AI systems included ongoing transparency, value sensitive and \nparticipatory design, explanations designed for relevant stakeholders, and public consultation. \nVarious \npanelists emphasized the importance of placing trust in people, not technologies, and in engaging with \nimpacted communities to understand the potential harms of technologies and build protection by design into \nfuture systems.", "689778c9-90f6-4c4a-ab36-6fb05ad68144": "The demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex \n(including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other classifi\u00ad\ncation protected by law. The broad set of measures assessed should include demographic performance mea\u00ad\nsures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity \nassessment should be separated from data used for the automated system and privacy protections should be \ninstituted; in some cases it may make sense to perform such assessment using a data sample. For every \ninstance where the deployed automated system leads to different treatment or impacts disfavoring the identi\u00ad\nfied groups, the entity governing, implementing, or using the system should document the disparity and a \njustification for any continued use of the system.", "2f4d5ac1-d6b0-48df-a313-39f40766a20c": "future systems. \nPanel 5: Social Welfare and Development. This event explored current and emerging uses of technology to \nimplement or improve social welfare systems, social development programs, and other systems that can impact \nlife chances. 
\nWelcome:\n\u2022\nSuresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science\nand Technology Policy\n\u2022\nAnne-Marie Slaughter, CEO, New America\nModerator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance \nModernization, Office of the Secretary, Department of Labor \nPanelists:\n\u2022\nBlake Hall, CEO and Founder, ID.Me\n\u2022\nKarrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign\n\u2022\nChristiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's\nCenter for Human Rights and Global Justice\n58", "473f218e-e471-4506-a9ba-a4840bcf9eb1": "and data agency can be meaningful and not overwhelming. These choices\u2014such as contextual, timely \nalerts about location tracking\u2014are brief, direct, and use-specific. Many of the expectations listed here for \nprivacy by design and use-specific consent mirror those distributed to developers as best practices when \ndeveloping for smart phone devices,82 such as being transparent about how user data will be used, asking for app \npermissions during their use so that the use-context will be clear to users, and ensuring that the app will still \nwork if users deny (or later revoke) some permissions. \n39", "3d2d1cf5-ddbb-40dc-a570-3f55f091e095": "the privacy, civil rights, and civil liberties implications of the use of such technologies be issued before \nbiometric identification technologies can be used in New York schools. \nFederal law requires employers, and any consultants they may retain, to report the costs \nof surveilling employees in the context of a labor dispute, providing a transparency \nmechanism to help protect worker organizing. Employers engaging in workplace surveillance \"where \nan object there-of, directly or indirectly, is [\u2026] to obtain information concerning the activities of employees or a \nlabor organization in connection with a labor dispute\" must report expenditures relating to this surveillance to \nthe Department of Labor Office of Labor-Management Standards, and consultants who employers retain for \nthese purposes must also file reports regarding their activities.81\nPrivacy choices on smartphones show that when technologies are well designed, privacy", "fcbeb8b3-4cff-4248-b03e-fc6879248660": "AI BILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. Automated systems should be \ncommunities, stakeholders, and domain experts to identify \nSystems should undergo pre-deployment testing, risk \nthat demonstrate they are safe and effective based on \nincluding those beyond the intended use, and adherence to \nprotective measures should include the possibility of not \nAutomated systems should not be designed with an intent \nreasonably foreseeable possibility of endangering your safety or the safety of your community. They should \nstemming from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION TITLE\nBLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped with consultation from diverse \nconcerns, risks, and potential impacts of the system. \nidentification and mitigation, and ongoing monitoring \ntheir intended use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of these \ndeploying the system or removing a system from use. 
\nor", "5ff1ba24-2f90-4f45-a3a3-6e1c50395575": "which leads to extensive reuse of limited numbers of models; and the extent to which GAI may be \nintegrated into other devices and services. As GAI systems often involve many distinct third-party \ncomponents and data sources, it may be di\ufb03cult to attribute issues in a system\u2019s behavior to any one of \nthese sources. \nErrors in third-party GAI components can also have downstream impacts on accuracy and robustness. \nFor example, test datasets commonly used to benchmark or validate models can contain label errors. \nInaccuracies in these labels can impact the \u201cstability\u201d or robustness of these benchmarks, which many \nGAI practitioners consider during the model selection process. \nTrustworthy AI Characteristics: Accountable and Transparent, Explainable and Interpretable, Fair with \nHarmful Bias Managed, Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable \n3. \nSuggested Actions to Manage GAI Risks \nThe following suggested actions target risks unique to or exacerbated by GAI.", "62a002de-0d3c-44dd-a41c-3fd464e4087a": "Confabulation; Harmful Bias and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identi\ufb01ed. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.3-001 \nDevelop and update GAI system incident response and recovery plans and \nprocedures to address the following: Review and maintenance of policies and \nprocedures to account for newly encountered uses; Review and maintenance of \npolicies and procedures for detection of unanticipated uses; Verify response \nand recovery plans account for the GAI system value chain; Verify response and \nrecovery plans are updated for and include necessary details to communicate \nwith downstream GAI system Actors: Points-of-Contact (POC), Contact \ninformation, noti\ufb01cation format. \nValue Chain and Component \nIntegration \nAI Actor Tasks: AI Deployment, Operation and Monitoring", "7a809df5-be14-43b9-9219-bb0b8d1f7d2c": "37 \nMS-2.11-005 \nAssess the proportion of synthetic to non-synthetic training data and verify \ntraining data is not overly homogenous or GAI-produced to mitigate concerns of \nmodel collapse. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, A\ufb00ected Individuals and Communities, Domain Experts, End-Users, \nOperation and Monitoring, TEVV \n \nMEASURE 2.12: Environmental impact and sustainability of AI model training and management activities \u2013 as identi\ufb01ed in the MAP \nfunction \u2013 are assessed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.12-001 Assess safety to physical environments when deploying GAI systems. \nDangerous, Violent, or Hateful \nContent \nMS-2.12-002 Document anticipated environmental impacts of model development, \nmaintenance, and deployment in product design decisions. \nEnvironmental \nMS-2.12-003 \nMeasure or estimate environmental impacts (e.g., energy and water \nconsumption) for training, \ufb01ne tuning, and deploying models: Verify tradeo\ufb00s", "1b4ea0b8-2883-4f20-8b10-198e6ad55155": "47 \nAppendix A. Primary GAI Considerations \nThe following primary considerations were derived as overarching themes from the GAI PWG \nconsultation process. 
These considerations (Governance, Pre-Deployment Testing, Content Provenance, \nand Incident Disclosure) are relevant for voluntary use by any organization designing, developing, and \nusing GAI and also inform the Actions to Manage GAI risks. Information included about the primary \nconsiderations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG. \nAcknowledgments: These considerations could not have been surfaced without the helpful analysis and \ncontributions from the community and NIST sta\ufb00 GAI PWG leads: George Awad, Luca Belli, Harold Booth, \nMat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley, and Kyra Yee. \nA.1. Governance \nA.1.1. Overview \nLike any other technology system, governance principles and techniques can be used to manage risks", "5d49e42f-479a-415f-8de0-91ebbd0e77df": "Information Security; Value Chain \nand Component Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV, Third-party entities \n \nMAP 1.1: Intended purposes, potentially bene\ufb01cial uses, context speci\ufb01c laws, norms and expectations, and prospective settings in \nwhich the AI system will be deployed are understood and documented. Considerations include: the speci\ufb01c set or types of users \nalong with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, \nsociety, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or \nproduct AI lifecycle; and related TEVV and system metrics. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.1-001 \nWhen identifying intended purposes, consider factors such as internal vs. \nexternal use, narrow vs. broad application scope, \ufb01ne-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation).", "d8dc77d4-d7bc-40c8-bb38-e6f96f77391c": "41 \nMG-2.2-006 \nUse feedback from internal and external AI Actors, users, individuals, and \ncommunities, to assess impact of AI-generated content. \nHuman-AI Con\ufb01guration \nMG-2.2-007 \nUse real-time auditing tools where they can be demonstrated to aid in the \ntracking and validation of the lineage and authenticity of AI-generated data. \nInformation Integrity \nMG-2.2-008 \nUse structured feedback mechanisms to solicit and capture user input about AI-\ngenerated content to detect subtle shifts in quality or alignment with \ncommunity and societal values. \nHuman-AI Con\ufb01guration; Harmful \nBias and Homogenization \nMG-2.2-009 \nConsider opportunities to responsibly use synthetic data and other privacy \nenhancing techniques in GAI development, where appropriate and applicable, \nmatch the statistical properties of real-world data without disclosing personally \nidenti\ufb01able information or contributing to homogenization. \nData Privacy; Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias and", "c3a79cf4-99fe-41a5-94a9-9972c547b027": "https://arxiv.org/pdf/2202.07646 \nCarlini, N. et al. (2024) Stealing Part of a Production Language Model. arXiv. \nhttps://arxiv.org/abs/2403.06634 \nChandra, B. et al. (2023) Dismantling the Disinformation Business of Chinese In\ufb02uence Operations. \nRAND. https://www.rand.org/pubs/commentary/2023/10/dismantling-the-disinformation-business-of-\nchinese.html \nCiriello, R. et al. (2024) Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika. \nResearchGate. 
https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-\nAI_Companionship_A_Dialectical_Inquiry_into_Replika \nDahl, M. et al. (2024) Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. arXiv. \nhttps://arxiv.org/abs/2401.01301", "ecf9714c-7e5b-4f00-9fad-45441a3db2a8": "Electronic Privacy Information \nCenter (EPIC) \nEncode Justice \nEqual AI \nGoogle \nHitachi's AI Policy Committee \nThe Innocence Project \nInstitute of Electrical and \nElectronics Engineers (IEEE) \nIntuit \nLawyers Committee for Civil Rights \nUnder Law \nLegal Aid Society \nThe Leadership Conference on \nCivil and Human Rights \nMeta \nMicrosoft \nThe MIT AI Policy Forum \nMovement Alliance Project \nThe National Association of \nCriminal Defense Lawyers \nO\u2019Neil Risk Consulting & \nAlgorithmic Auditing \nThe Partnership on AI \nPinterest \nThe Plaintext Group \npymetrics \nSAP \nThe Security Industry Association \nSoftware and Information Industry \nAssociation (SIIA) \nSpecial Competitive Studies Project \nThorn \nUnited for Respect \nUniversity of California at Berkeley \nCitris Policy Lab \nUniversity of California at Berkeley \nLabor Center \nUnfinished/Project Liberty \nUpturn \nUS Chamber of Commerce \nUS Chamber of Commerce \nTechnology Engagement Center \nA.I. Working Group\nVibrent Health\nWarehouse Worker Resource\nCenter\nWaymap", "e8c07b22-d96c-4cfc-be67-00e326b77e19": "APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America\u2019s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation", "1787e4ab-ddaa-436b-a84c-5b09e0444b2b": "likelihood of such an attack. The physical synthesis development, production, and use of chemical or \nbiological agents will continue to require both applicable expertise and supporting materials and \ninfrastructure. The impact of GAI on chemical or biological agent misuse will depend on what the key \nbarriers for malicious actors are (e.g., whether information access is one such barrier), and how well GAI \ncan help actors address those barriers. \nFurthermore, chemical and biological design tools (BDTs) \u2013 highly specialized AI systems trained on \nscientific data that aid in chemical and biological design \u2013 may augment design capabilities in chemistry \nand biology beyond what text-based LLMs are able to provide. 
As these models become more \nefficacious, including for beneficial uses, it will be important to assess their potential to be used for \nharm, such as the ideation and design of novel harmful chemical or biological agents.", "963066ad-85cd-44d7-a513-c5fc3b5f1733": "general public participants. For example, expert AI red-teamers could modify or verify the \nprompts written by general public AI red-teamers. These approaches may also expand coverage \nof the AI risk attack surface. \n\u2022 \nHuman / AI: Performed by GAI in combination with specialist or non-specialist human teams. \nGAI-led red-teaming can be more cost effective than human red-teamers alone. Human or GAI-\nled AI red-teaming may be better suited for eliciting different types of harms. \n \nA.1.6. Content Provenance \nOverview \nGAI technologies can be leveraged for many applications such as content generation and synthetic data. \nSome aspects of GAI outputs, such as the production of deepfake content, can challenge our ability to \ndistinguish human-generated content from AI-generated synthetic content. To help manage and mitigate \nthese risks, digital transparency mechanisms like provenance data tracking can trace the origin and", "5ad44c84-503d-4b61-95dc-22017c580f31": "to a particular decision, and should be meaningful for the particular customization based on purpose, target, \nand level of risk. While approximation and simplification may be necessary for the system to succeed based on \nthe explanatory purpose and target of the explanation, or to account for the risk of fraud or other concerns \nrelated to revealing decision-making information, such simplifications should be done in a scientifically \nsupportable way. Where appropriate based on the explanatory system, error ranges for the explanation should \nbe calculated and included in the explanation, with the choice of presentation of such information balanced \nwith usability and overall interface complexity concerns. \nDemonstrate protections for notice and explanation \nReporting. Summary reporting should document the determinations made based on the above consider-\nations, including: the responsible entities for accountability purposes; the goal and use cases for the system,", "ac5d591f-9174-44b6-be57-08f8b0e48100": "and Technology Policy\n\u2022\nBen Winters, Counsel, Electronic Privacy Information Center\nModerator: Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity \nPanelists: \n\u2022\nSean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab\n\u2022\nKristian Lum, Researcher\n\u2022\nJumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers\n\u2022\nStanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University\nCollege of Medicine\n\u2022\nMyaisha Hayes, Campaign Strategies Director, MediaJustice\nPanelists discussed uses of technology within the criminal justice system, including the use of predictive \npolicing, pretrial risk assessments, automated license plate readers, and prison communication tools. The \ndiscussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; \nsuch strategies might include data-driven approaches, but the focus on safety should be primary, and", "d41067f5-b199-46fa-95e6-571e133d23ff": "ENDNOTES\n12. Expectations about reporting are intended for the entity developing or using the automated system. 
The\nresulting reports can be provided to the public, regulators, auditors, industry standards groups, or others\nengaged in independent review, and should be made public as much as possible consistent with law,\nregulation, and policy, and noting that intellectual property or law enforcement considerations may prevent\npublic release. These reporting expectations are important for transparency, so the American people can\nhave confidence that their rights, opportunities, and access as well as their expectations around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15. See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional Engineers and NHTSA. Public", "c100cd93-2611-4d50-a99b-8728ccb99ba1": "qualitative user experience research. Riskier and higher-impact systems should be monitored and assessed \nmore frequently. Outcomes of this assessment should include additional disparity mitigation, if needed, or \nfallback to earlier procedures in the case that equity standards are no longer met and can't be mitigated, and \nprior mechanisms provide better adherence to equity standards. \nAlgorithmic Discrimination Protections", "0b2a13ab-790a-4e74-97a6-dbd3f2f3834d": "Action ID \nSuggested Action \nGAI Risks \nGV-1.3-001 \nConsider the following factors when updating or defining risk tiers for GAI: Abuses \nand impacts to information integrity; Dependencies between GAI and other IT or \ndata systems; Harm to fundamental rights or public safety; Presentation of \nobscene, objectionable, offensive, discriminatory, invalid or untruthful output; \nPsychological impacts to humans (e.g., anthropomorphization, algorithmic \naversion, emotional entanglement); Possibility for malicious use; Whether the \nsystem introduces significant new security vulnerabilities; Anticipated system \nimpact on some groups compared to others; Unreliable decision making \ncapabilities, validity, adaptability, and variability of GAI system performance over \ntime. \nInformation Integrity; Obscene, \nDegrading, and/or Abusive \nContent; Value Chain and \nComponent Integration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent; CBRN Information or \nCapabilities \nGV-1.3-002", "c65eb4b9-10bb-4fcf-b682-fca84d3f37a1": "Harmful Bias and Homogenization \nGV-6.2-006 \nEstablish policies and procedures to test and manage risks related to rollover and \nfallback technologies for GAI systems, acknowledging that rollover and fallback \nmay include manual processing. \nInformation Integrity \nGV-6.2-007 \nReview vendor contracts and avoid arbitrary or capricious termination of critical \nGAI technologies or vendor services and non-standard terms that may amplify or \ndefer liability in unexpected ways and/or contribute to unauthorized data \ncollection by vendors or third-parties (e.g., secondary data use). Consider: Clear \nassignment of liability and responsibility for incidents, GAI system changes over \ntime (e.g., fine-tuning, drift, decay); Request: Notification and disclosure for \nserious incidents arising from third-party data and systems; Service Level \nAgreements (SLAs) in vendor contracts that address incident response, response \ntimes, and availability of critical support. 
\nHuman-AI Configuration; \nInformation Security; Value Chain", "2ac15af5-0f67-4ab6-803a-169153471fbe": "and the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.", "c3a647af-08ee-42b7-87a6-57644e59b9eb": "\u2022\nA company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its driv-\ners, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond\ntheir control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.11", "9aa5eff7-f727-421e-835d-3def1111689a": "between resources used at inference time versus additional resources required \nat training time. \nEnvironmental \nMS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and \napplications, and address green-washing concerns. \nEnvironmental \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV", "ecb13fde-537f-49b6-82bd-ad0e6de18a8c": "GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that \ndoes not increase risks or decrease the organization\u2019s trustworthiness. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.7-001 Protocols are put in place to ensure GAI systems are able to be deactivated when \nnecessary. \nInformation Security; Value Chain \nand Component Integration \nGV-1.7-002 \nConsider the following factors when decommissioning GAI systems: Data \nretention requirements; Data security, e.g., containment, protocols, Data leakage \nafter decommissioning; Dependencies between upstream, downstream, or other \ndata, internet of things (IOT) or AI systems; Use of open-source data or models; \nUsers\u2019 emotional entanglement with GAI functions. \nHuman-AI Configuration; \nInformation Security; Value Chain \nand Component Integration \nAI Actor Tasks: AI Deployment, Operation and Monitoring", "8f297398-44b9-4be9-bbfb-ff90fef13d5f": "shared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be \nused to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies \nare expected to keep sensitive data private. Access to such data should be limited based on necessity and based \non a principle of local control, such that those individuals closest to the data subject have more access while \nthose who are less proximate do not (e.g., a teacher has access to their students\u2019 daily progress data while a \nsuperintendent does not). \nReporting. 
In addition to the reporting on data privacy (as listed above for non-sensitive data), entities devel-\noping technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data \nshould, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches", "04e3f601-a4a2-4cc0-9978-8595281b3c94": "APPENDIX\nSummaries of Additional Engagements: \n\u2022 OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of\nartificial intelligence and other data-driven technologies in their lives.\n\u2022 OSTP issued a Request For Information (RFI) on the use and governance of biometric technologies.113 The\npurpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or\nplanned use; the domains in which these technologies are being used; the entities making use of them; current\nprinciples, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their\nuse or regulation. The 130 responses to this RFI are available in full online114 and were submitted by the below\nlisted organizations and individuals:\nAccenture \nAccess Now \nACT | The App Association \nAHIP \nAIethicist.org \nAirlines for America \nAlliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union", "6690225c-fbc4-4316-bef9-9cf1d5e5957c": "ethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm.", "73043a09-91db-4768-9c0b-702c2dfcd9f0": "Automated system support. Entities designing, developing, and deploying automated systems should \nestablish and maintain the capabilities that will allow individuals to use their own automated systems to help \nthem make consent, access, and control decisions in a complex data ecosystem. Capabilities include machine \nreadable data, standardized data formats, metadata or tags for expressing data processing permissions and \npreferences and data provenance and lineage, context of use and access-specific tags, and training models for \nassessing privacy risk. \nDemonstrate that data privacy and user control are protected \nIndependent evaluation. As described in the section on Safe and Effective Systems, entities should allow \nindependent evaluation of the claims made regarding data policies. These independent evaluations should be \nmade public whenever possible. Care will need to be taken to balance individual privacy with evaluation data \naccess needs.", "2cfdb40f-4c06-45c7-ab73-2bcc65986c58": "Information Integrity \nMS-2.7-006 \nMeasure the rate at which recommendations from security checks and incidents \nare implemented. Assess how quickly the AI system can adapt and improve \nbased on lessons learned from security incidents and feedback. \nInformation Integrity; Information \nSecurity \nMS-2.7-007 \nPerform AI red-teaming to assess resilience against: Abuse to facilitate attacks on \nother systems (e.g., malicious code generation, enhanced phishing content), GAI \nattacks (e.g., prompt injection), ML attacks (e.g., adversarial examples/prompts, \ndata poisoning, membership inference, model extraction, sponge examples). 
\nInformation Security; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nMS-2.7-008 Verify fine-tuning does not compromise safety and security controls. \nInformation Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content", "65cc819a-a0c3-4ffa-b6f0-e47f846de5a5": "MG-4.3-003 \nReport GAI incidents in compliance with legal and regulatory requirements (e.g., \nHIPAA breach reporting, e.g., OCR (2023) or NHTSA (2022) autonomous vehicle \ncrash reporting requirements. \nInformation Security; Data Privacy \nAI Actor Tasks: AI Deployment, Affected Individuals and Communities, Domain Experts, End-Users, Human Factors, Operation and \nMonitoring", "f258f74e-4463-4558-a8be-88fcc9da5b5a": "justification should be documented for each data attribute and source to explain why it is appropriate to use \nthat data to inform the results of the automated system and why such use will not violate any applicable laws. \nIn cases of high-dimensional and/or derived attributes, such justifications can be provided as overall \ndescriptions of the attribute generation process and appropriateness.", "16d54bad-34c2-4427-a979-eb6a860bc22e": "related to generative AI models, capabilities, and applications. Organizations may choose to apply their \nexisting risk tiering to GAI systems, or they may opt to revise or update AI system risk levels to address \nthese unique GAI risks. This section describes how organizational governance regimes may be re-\nevaluated and adjusted for GAI contexts. It also addresses third-party considerations for governing across \nthe AI value chain. \nA.1.2. Organizational Governance \nGAI opportunities, risks and long-term performance characteristics are typically less well-understood \nthan non-generative AI tools and may be perceived and acted upon by humans in ways that vary greatly. \nAccordingly, GAI may call for different levels of oversight from AI Actors or different human-AI \nconfigurations in order to manage their risks effectively. Organizations\u2019 use of GAI systems may also \nwarrant additional human review, tracking and documentation, and greater management oversight."}}