question,contexts,answer,ground_truth,faithfulness,answer_relevancy,context_recall,context_precision,answer_correctness
What techniques are suggested to assess and manage statistical biases related to GAI content provenance?,"['• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'guide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV'
 'Table of Contents \n1. \nIntroduction ..............................................................................................................................................1 \n2. \nOverview of Risks Unique to or Exacerbated by GAI .....................................................................2 \n3. \nSuggested Actions to Manage GAI Risks ......................................................................................... 12 \nAppendix A. Primary GAI Considerations ............................................................................................... 47 \nAppendix B. References ................................................................................................................................ 54'
 'listed for only some subcategories.  \n \n \n13 As this document was focused on the GAI PWG efforts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'Priorities Related to Information Integrity Research and Development.'
 'MP-2.3-001 \nAssess the accuracy, quality, reliability, and authenticity of GAI output by \ncomparing it to a set of known ground truth data and by using a variety of \nevaluation methods (e.g., human oversight and automated evaluation, proven \ncryptographic techniques, review of content inputs). \nInformation Integrity']",I don't know.,"Techniques such as re-sampling, re-weighting, or adversarial training are suggested to assess and manage statistical biases related to GAI content provenance.",0.0,0.0,0.0,0.249999999975,0.18342742304293966
What actions are suggested to ensure information integrity in the context of AI systems?,"['Priorities Related to Information Integrity Research and Development.'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles']",I don't know.,"The suggested actions to ensure information integrity in the context of AI systems include: employing methods to trace the origin and modifications of digital content (MS-1.1-001), integrating tools designed to analyze content provenance and detect data anomalies, verify the authenticity of digital signatures, and identify patterns associated with misinformation or manipulation (MS-1.1-002), and disaggregating evaluation metrics by demographic factors to identify any discrepancies in how content provenance mechanisms work across diverse populations (MS-1.1-003).",0.0,0.0,0.0,0.26785714284375,0.18303681022057922
What are the requirements for providing accessible notice and instructions for opting out of automated systems?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'meaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible.   \nNOTICE AND EXPLANATION\n40'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'You should know that an automated system is being used, \nand understand how and why it contributes to outcomes \nthat impact you. Designers, developers, and deployers of automat\xad\ned systems should provide generally accessible plain language docu\xad\nmentation including clear descriptions of the overall system func\xad\ntioning and the role automation plays, notice that such systems are in \nuse, the individual or organization responsible for the system, and ex\xad\nplanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality chang\xad\nes. You should know how and why an outcome impacting you was de\xad\ntermined by an automated system, including when the automated \nsystem is not the sole input determining the outcome. Automated \nsystems should provide explanations that are technically valid, \nmeaningful and useful to you and to any operators or others who'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'whether they are genuinely helpful in solving an identified problem. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, panelists individually described the importance of: receiving community input into the \ndesign and use of technologies, public reporting on crucial elements of these systems, better notice and consent \nprocedures that ensure privacy based on context and use case, ability to opt-out of using these systems and \nreceive a fallback to a human process, providing explanations of decisions and how these systems work, the \nneed for governance including training in using these systems, ensuring the technological use cases are \ngenuinely related to the goal task and are locally validated to work, and the need for institution and protection \nof third party audits to ensure systems continue to be accountable and valid. \n57'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022']",I don't know.,"Those impacted by an automated system should be given a brief, clear notice that they are entitled to opt-out, along with clear instructions for how to opt-out. Instructions should be provided in an accessible form and should be easily findable by those impacted by the automated system. The brevity, clarity, and accessibility of the notice and instructions should be assessed (e.g., via user experience research).",0.0,0.0,0.6666666666666666,0.37152777776848955,0.18188111253680794
What issues arose from the system awarding benefits changing its criteria invisibly?,"['results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'errors and other system flaws. These flaws were only revealed when an explanation of the system\nwas demanded and produced.86 The lack of an explanation made it harder for errors to be corrected in a\ntimely manner.\n42'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'Priorities Related to Information Integrity Research and Development.'
 'times; and information about the procedures used to address requests for consideration along with the results \nof the evaluation of their accessibility. For systems used in sensitive domains, reporting should include infor\xad\nmation about training and governance procedures for these technologies. Reporting should also include docu\xad\nmentation of goals and assessment of meeting those goals, consideration of data included, and documentation \nof the governance of reasonable access to the technology. Reporting should be provided in a clear and \nmachine-readable manner. \n51']",I don't know.,Individuals were denied benefits due to data entry errors and other system flaws. These flaws were only revealed when an explanation of the system was demanded and produced. The lack of an explanation made it harder for errors to be corrected in a timely manner.,0.0,0.0,1.0,0.3333333333,0.1864493661106931
How is the federal government working to combat discrimination in mortgage lending?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Priorities Related to Information Integrity Research and Development.'
 'HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \nThe federal government is working to combat discrimination in mortgage lending. The Depart\xad\nment of Justice has launched a nationwide initiative to combat redlining, which includes reviewing how \nlenders who may be avoiding serving communities of color are conducting targeted marketing and advertising.51 \nThis initiative will draw upon strong partnerships across federal agencies, including the Consumer Financial \nProtection Bureau and prudential regulators. The Action Plan to Advance Property Appraisal and Valuation \nEquity includes a commitment from the agencies that oversee mortgage lending to include a \nnondiscrimination standard in the proposed rules for Automated Valuation Models.52'
 'ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'evaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5'
 'APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation'
 '2018. https://ainowinstitute.org/aiareport2018.pdf\n51. Department of Justice. Justice Department Announces New Initiative to Combat Redlining. Oct. 22,\n2021. https://www.justice.gov/opa/pr/justice-department-announces-new-initiative-combat-redlining\n52. PAVE Interagency Task Force on Property Appraisal and Valuation Equity. Action Plan to Advance\nProperty Appraisal and Valuation Equity: Closing the Racial Wealth Gap by Addressing Mis-valuations for\nFamilies and Communities of Color. March 2022. https://pave.hud.gov/sites/pave.hud.gov/files/\ndocuments/PAVEActionPlan.pdf\n53. U.S. Equal Employment Opportunity Commission. The Americans with Disabilities Act and the Use of\nSoftware, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. EEOC\xad\nNVTA-2022-2. May 12, 2022. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use\xad\nsoftware-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'Companies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32']","The federal government is working to combat discrimination in mortgage lending through a nationwide initiative launched by the Department of Justice to combat redlining. This initiative includes reviewing how lenders who may be avoiding serving communities of color are conducting targeted marketing and advertising. It also involves strong partnerships across federal agencies, including the Consumer Financial Protection Bureau and prudential regulators. Additionally, the Action Plan to Advance Property Appraisal and Valuation Equity includes a commitment from the agencies that oversee mortgage lending to include a nondiscrimination standard in the proposed rules for Automated Valuation Models.","The federal government is working to combat discrimination in mortgage lending through a nationwide initiative launched by the Department of Justice to combat redlining. This includes reviewing how lenders who may be avoiding serving communities of color are conducting targeted marketing and advertising. The initiative involves strong partnerships across federal agencies, including the Consumer Financial Protection Bureau and prudential regulators. Additionally, the Action Plan to Advance Property Appraisal and Valuation Equity includes a commitment from the agencies that oversee mortgage lending to include a nondiscrimination standard in the proposed rules for Automated Valuation Models.",1.0,0.9924503623378166,1.0,0.2916666666520833,0.9998948917804034
"What are the characteristics of trustworthy AI that should be integrated into organizational policies, processes, procedures, and practices?","['organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'Priorities Related to Information Integrity Research and Development.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'harm, such as the ideation and design of novel harmful chemical or biological agents.  \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles']",I don't know.,"The characteristics of trustworthy AI that should be integrated into organizational policies, processes, procedures, and practices include establishing transparency policies and processes for documenting the origin and history of training data and generated data, and establishing policies to evaluate risk-relevant capabilities of GAI and robustness of safety measures.",0.0,0.0,1.0,0.9999999999,0.1791991601217512
What are the conditions under which individuals should be able to opt out from automated systems in favor of a human alternative?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36']",I don't know.,"Individuals should be able to opt out from automated systems in favor of a human alternative where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law.",0.0,0.0,0.6666666666666666,0.36666666664833336,0.18517566892878545
What is data poisoning and how can it affect GAI system outputs?,"['Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Table of Contents \n1. \nIntroduction ..............................................................................................................................................1 \n2. \nOverview of Risks Unique to or Exacerbated by GAI .....................................................................2 \n3. \nSuggested Actions to Manage GAI Risks ......................................................................................... 12 \nAppendix A. Primary GAI Considerations ............................................................................................... 47 \nAppendix B. References ................................................................................................................................ 54'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'listed for only some subcategories.  \n \n \n13 As this document was focused on the GAI PWG efforts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later.'
 'guide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'Priorities Related to Information Integrity Research and Development.'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,']",I don't know.,Data poisoning is a cybersecurity risk where an adversary compromises a training dataset used by a model to manipulate its outputs or operation. Malicious tampering with data or parts of the model could exacerbate risks associated with GAI system outputs.,0.0,0.0,0.0,0.0,0.17944988259517342
How do opaque decision-making processes in automated systems impact individuals' ability to contest decisions?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'Priorities Related to Information Integrity Research and Development.'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty']",I don't know.,"Opaque decision-making processes in automated systems impact individuals' ability to contest decisions by denying them the knowledge they need to address the impact of these systems on their lives. Without clear explanations, it becomes harder to understand and contest decisions, as illustrated by examples such as a lawyer unable to determine why a client was cut off from Medicaid-funded assistance and a parent not being notified about data collection for a child maltreatment risk assessment.",0.0,0.0,0.6666666666666666,0.22499999998875,0.18758492202920485
Who participated in the OSTP meetings focused on the development of the Blueprint for an AI Bill of Rights?,"['BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'ing sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized \nemail address, people throughout the United States, public servants across Federal agencies, and members of the \ninternational community spoke up about both the promises and potential harms of these technologies, and \nplayed a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these \ndiscussions include that AI has transformative potential to improve Americans’ lives, and that preventing the \nharms of these technologies is both necessary and achievable. The Appendix includes a full list of public engage-\nments. \n4'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Priorities Related to Information Integrity Research and Development.']",I don't know.,"Participants in the OSTP meetings included Adobe, American Civil Liberties Union (ACLU), The Aspen Commission on Information Disorder, The Awood Center, The Australian Human Rights Commission, Biometrics Institute, The Brookings Institute, BSA | The Software Alliance, Cantellus Group, Center for American Progress, Center for Democracy and Technology, Center on Privacy and Technology at Georgetown Law, Christiana Care, Color of Change, Coworker, Data Robot, Data Trust Alliance, Data and Society Research Institute, Deepmind, EdSAFE AI Alliance, Electronic Privacy Information Center (EPIC), Encode Justice, Equal AI, Google, Hitachi's AI Policy Committee, The Innocence Project, Institute of Electrical and Electronics Engineers (IEEE), Intuit, Lawyers Committee for Civil Rights Under Law, Legal Aid Society, The Leadership Conference on Civil and Human Rights, Meta, Microsoft, The MIT AI Policy Forum, Movement Alliance Project, The National Association of Criminal Defense Lawyers, O’Neil Risk Consulting & Algorithmic Auditing, The Partnership on AI, Pinterest, The Plaintext Group, pymetrics, SAP, The Security Industry Association, Software and Information Industry Association (SIIA), Special Competitive Studies Project, Thorn, United for Respect, University of California at Berkeley Citris Policy Lab, University of California at Berkeley Labor Center, Unfinished/Project Liberty, Upturn, US Chamber of Commerce, US Chamber of Commerce Technology Engagement Center A.I. Working Group, Vibrent Health, Warehouse Worker Resource Center, and Waymap.",0.0,0.0,0.0,0.5656084655990388,0.18263698692094232
What actions are suggested for explaining and validating an AI model to ensure responsible use and governance?,"['lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'and biological design tools. arXiv. https://arxiv.org/pdf/2306.13952'
 'ethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm. \n9'
 'and management. One possible way to further categorize these risks, derived in part from the UK’s International \nScientific Report on the Safety of Advanced AI, could be: 1) Technical / Model risks (or risk from malfunction): \nConfabulation; Dangerous or Violent Recommendations; Data Privacy; Value Chain and Component Integration; \nHarmful Bias, and Homogenization; 2) Misuse by humans (or malicious use): CBRN Information or Capabilities; \nData Privacy; Human-AI Configuration; Obscene, Degrading, and/or Abusive Content; Information Integrity; \nInformation Security; 3) Ecosystem / societal risks (or systemic risks): Data Privacy; Environmental; Intellectual \nProperty. We also note that some risks are cross-cutting between these categories.']",I don't know.,"Suggested actions for explaining and validating an AI model to ensure responsible use and governance include: applying and documenting ML explanation results such as analysis of embeddings, counterfactual prompts, gradient-based attributions, model compression/surrogate models, and occlusion/term reduction. Additionally, documenting GAI model details including proposed use and organizational value, assumptions and limitations, data collection methodologies, data provenance, data quality, model architecture, optimization objectives, training algorithms, RLHF approaches, fine-tuning or retrieval-augmented generation approaches, evaluation data, ethical considerations, and legal and regulatory requirements.",0.0,0.0,0.0,0.249999999975,0.17786667628969524
What provisions are included in the Biometric Information Privacy Act enacted by the state of Illinois?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Priorities Related to Information Integrity Research and Development.'
 'NOTICE & \nEXPLANATION \nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality, through laws, policies, and practical \ntechnical and sociotechnical approaches to protecting rights, opportunities, and access. \xad\xad\xad\xad\xad\nPeople in Illinois are given written notice by the private sector if their biometric informa-\ntion is used. The Biometric Information Privacy Act enacted by the state contains a number of provisions \nconcerning the use of individual biometric data and identifiers. Included among them is a provision that no private \nentity may ""collect, capture, purchase, receive through trade, or otherwise obtain"" such information about an \nindividual, unless written notice is provided to that individual or their legally appointed representative. 87\nMajor technology companies are piloting new ways to communicate with the public about'
 'ENDNOTES\n85. Mick Dumke and Frank Main. A look inside the watch list Chicago police fought to keep secret. The\nChicago Sun Times. May 18, 2017.\nhttps://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought\xad\nto-keep-secret\n86. Jay Stanley. Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case.\nACLU. Jun. 2, 2017.\nhttps://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking\xad\nhighlighted-idaho-aclu-case\n87. Illinois General Assembly. Biometric Information Privacy Act. Effective Oct. 3, 2008.\nhttps://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57\n88. Partnership on AI. ABOUT ML Reference Document. Accessed May 2, 2022.\nhttps://partnershiponai.org/paper/about-ml-reference-document/1/\n89. See, e.g., the model cards framework: Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker\nBarnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'for individuals to be able to access any of their individual information stored in a federal system of records, if not \nincluded under one of the systems of records exempted pursuant to the Privacy Act. In these cases, federal agen\xad\ncies must provide a method for an individual to determine if their personal information is stored in a particular \nsystem of records, and must provide procedures for an individual to contest the contents of a record about them. \nFurther, the Privacy Act allows for a cause of action for an individual to seek legal relief if a federal agency does not \ncomply with the Privacy Act’s requirements. Among other things, a court may order a federal agency to amend or \ncorrect an individual’s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, \nor incomplete record results in an adverse determination about an individual’s “qualifications, character, rights, … \nopportunities…, or benefits.”'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'Public and Private Sector Uses of Biometric Technologies. Issued Oct. 8, 2021.\nhttps://www.federalregister.gov/documents/2021/10/08/2021-21975/notice-of-request-for\xad\ninformation-rfi-on-public-and-private-sector-uses-of-biometric-technologies\n114. National Artificial Intelligence Initiative Office. Public Input on Public and Private Sector Uses of\nBiometric Technologies. Accessed Apr. 19, 2022.\nhttps://www.ai.gov/86-fr-56300-responses/\n115. Thomas D. Olszewski, Lisa M. Van Pay, Javier F. Ortiz, Sarah E. Swiersz, and Laurie A. Dacus.\nSynopsis of Responses to OSTP’s Request for Information on the Use and Governance of Biometric\nTechnologies in the Public and Private Sectors. Science and Technology Policy Institute. Mar. 2022.\nhttps://www.ida.org/-/media/feature/publications/s/sy/synopsis-of-responses-to-request-for\xad\ninformation-on-the-use-and-governance-of-biometric-technologies/ida-document-d-33070.ashx\n73']","The Biometric Information Privacy Act enacted by the state of Illinois includes a provision that no private entity may ""collect, capture, purchase, receive through trade, or otherwise obtain"" biometric information about an individual unless written notice is provided to that individual or their legally appointed representative.","The Biometric Information Privacy Act enacted by the state of Illinois contains a number of provisions concerning the use of individual biometric data and identifiers. Included among them is a provision that no private entity may 'collect, capture, purchase, receive through trade, or otherwise obtain' such information about an individual, unless written notice is provided to that individual or their legally appointed representative.",1.0,0.9524662717484101,1.0,0.3333333333,0.8904414765041544
"How should designers, developers, and deployers of automated systems handle data collection and user consent to ensure data privacy?","['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'You should be protected from abusive data practices via built-in \nprotections and you should have agency over how data about \nyou is used. You should be protected from violations of privacy through \ndesign choices that ensure such protections are included by default, including \nensuring that data collection conforms to reasonable expectations and that \nonly data strictly necessary for the specific context is collected. Designers, de\xad\nvelopers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and de\xad\nletion of your data in appropriate ways and to the greatest extent possible; \nwhere not possible, alternative privacy by design safeguards should be used. \nSystems should not employ user experience and design decisions that obfus\xad\ncate user choice or burden users with defaults that are privacy invasive. Con\xad\nsent should only be used to justify collection of data in cases where it can be'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'to proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, de\xad\nvelopment, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and report\xad\ning that confirms that the system is safe and effective, including re\xad\nporting of steps taken to mitigate potential harms, should be per\xad\nformed and the results made public whenever possible. \n15'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'or unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consulta\xad\ntion processes that may be applied when considering the use of new automated systems, and existing product develop\xad\nment and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Inno\xad'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance']","Designers, developers, and deployers of automated systems should handle data collection and user consent to ensure data privacy by:

1. Seeking user permission and respecting their decisions regarding the collection, use, access, transfer, and deletion of their data in appropriate ways and to the greatest extent possible.
2. Using alternative privacy by design safeguards when seeking user consent is not possible.
3. Ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.
4. Avoiding user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive.
5. Using consent to justify data collection only in cases where it can be appropriately obtained.","Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use.",1.0,0.9620969491154683,0.75,0.4265873015766369,0.8906893876783905
How do systems related to the assignment of penalties assist decision-makers in adjudicating administrative or criminal penalties?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'APPENDIX\nSystems that impact the safety of communities such as automated traffic control systems, elec \n-ctrical grid controls, smart city technologies, and industrial emissions and environmental\nimpact control algorithms; and\nSystems related to access to benefits or services or assignment of penalties such as systems that\nsupport decision-makers who adjudicate benefits such as collating or analyzing information or\nmatching records, systems which similarly assist in the adjudication of administrative or criminal\npenalties, fraud detection algorithms, services or benefits access control algorithms, biometric\nsystems used as access control, and systems which make benefits or services related decisions on a\nfully or partially autonomous basis (such as a determination to revoke benefits).\n54'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'Priorities Related to Information Integrity Research and Development.'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'and biological design tools. arXiv. https://arxiv.org/pdf/2306.13952']","Based on the provided context, systems related to the assignment of penalties assist decision-makers in adjudicating administrative or criminal penalties by collating or analyzing information, matching records, and supporting decision-makers in the adjudication process. These systems may also include fraud detection algorithms and access control algorithms, and they can make decisions on a fully or partially autonomous basis, such as determining to revoke benefits.","Systems related to the assignment of penalties assist decision-makers in adjudicating administrative or criminal penalties by collating or analyzing information or matching records, and by assisting in the adjudication process.",1.0,0.9576964399573923,1.0,0.49999999995,0.6132362982414824
What should be the focus of risk identification and mitigation for automated systems before deployment?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'to proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, de\xad\nvelopment, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and report\xad\ning that confirms that the system is safe and effective, including re\xad\nporting of steps taken to mitigate potential harms, should be per\xad\nformed and the results made public whenever possible. \n15'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'or unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consulta\xad\ntion processes that may be applied when considering the use of new automated systems, and existing product develop\xad\nment and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Inno\xad'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'meaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible.   \nNOTICE AND EXPLANATION\n40'
 'Priorities Related to Information Integrity Research and Development.'
 'and mitigation proportionate with those impacts. Automated systems with the intended purpose of violating \nthe safety of others should not be developed or used; systems with such safety violations as identified unin\xad\ntended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessi\xad\ntate rollback or significant modification to a launched automated system. \n18']","The focus of risk identification and mitigation for automated systems before deployment should include:

- A description of any data used to train machine learning models or for other purposes, including how data sources were processed and interpreted.
- A summary of what data might be missing, incomplete, or erroneous, and data relevancy justifications.
- The results of public consultation such as concerns raised and any decisions made due to these concerns.
- Risk identification and management assessments and any steps taken to mitigate potential harms.
- The results of performance testing including, but not limited to, accuracy, differential demographic impact, resulting error rates (overall and per demographic group), and comparisons to previously deployed systems.
- Ongoing monitoring procedures and regular performance testing reports, including monitoring frequency.","Identified risks should focus on the potential for meaningful impact on people’s rights, opportunities, or access and include those to impacted communities that may not be direct users of the automated system, risks resulting from purposeful misuse of the system, and other concerns identified via the consultation process.",1.0,0.9919162046375328,1.0,0.9571428571291837,0.22400833910263965
What procedures should be established and maintained for escalating GAI system incidents to the organizational risk management authority?,"['Table of Contents \n1. \nIntroduction ..............................................................................................................................................1 \n2. \nOverview of Risks Unique to or Exacerbated by GAI .....................................................................2 \n3. \nSuggested Actions to Manage GAI Risks ......................................................................................... 12 \nAppendix A. Primary GAI Considerations ............................................................................................... 47 \nAppendix B. References ................................................................................................................................ 54'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'Priorities Related to Information Integrity Research and Development.'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'inputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents.'
 'times; and information about the procedures used to address requests for consideration along with the results \nof the evaluation of their accessibility. For systems used in sensitive domains, reporting should include infor\xad\nmation about training and governance procedures for these technologies. Reporting should also include docu\xad\nmentation of goals and assessment of meeting those goals, consideration of data included, and documentation \nof the governance of reasonable access to the technology. Reporting should be provided in a clear and \nmachine-readable manner. \n51']",I don't know.,Establish and maintain procedures for escalating GAI system incidents to the organizational risk management authority when specific criteria for deactivation or disengagement is met for a particular context of use or for the GAI system as a whole.,0.0,0.0,0.0,0.11111111109999999,0.1806177835407171
How do GAI systems contribute to the creation of deepfakes and what are the potential impacts of these deepfakes?,"['and a human who is interacting with it. Humans bring their unique perspectives, experiences, or domain-\nspecific expertise to interactions with AI systems but may not have detailed knowledge of AI systems and \nhow they work. As a result, human experts may be unnecessarily “averse” to GAI systems, and thus \ndeprive themselves or others of GAI’s beneficial uses.  \nConversely, due to the complexity and increasing reliability of GAI technology, over time, humans may \nover-rely on GAI systems or may unjustifiably perceive GAI content to be of higher quality than that \nproduced by other sources. This phenomenon is an example of automation bias, or excessive deference \nto automated systems. Automation bias can exacerbate other risks of GAI, such as risks of confabulation \nor risks of bias or homogenization. \nThere may also be concerns about emotional entanglement between humans and GAI systems, which \ncould lead to negative psychological impacts.'
 '10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations.  \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'inputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents.'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'warrant additional human review, tracking and documentation, and greater management oversight.  \nAI technology can produce varied outputs in multiple modalities and present many classes of user \ninterfaces. This leads to a broader set of AI Actors interacting with GAI systems for widely differing \napplications and contexts of use. These can include data labeling and preparation, development of GAI \nmodels, content moderation, code generation and review, text generation and editing, image and video \ngeneration, summarization, search, and chat. These activities can take place within organizational \nsettings or in the public domain. \nOrganizations can restrict AI applications that cause harm, exceed stated risk tolerances, or that conflict \nwith their tolerances or values. Governance tools and protocols that are applied to other types of AI \nsystems can be applied to GAI systems. These plans and actions include: \n• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content']","GAI systems contribute to the creation of deepfakes by enabling a higher degree of sophistication for malicious actors to produce disinformation that is targeted towards specific demographics. Current and emerging multimodal models make it possible to generate both text-based disinformation and highly realistic ""deepfakes"" – synthetic audiovisual content and photorealistic images. The potential impacts of these deepfakes include manipulating human and machine perception, and enabling additional disinformation threats as future GAI models are trained on new data modalities.","GAI systems contribute to the creation of deepfakes by enabling the generation of highly realistic synthetic audiovisual content and photorealistic images. The potential impacts of these deepfakes include eroding public trust in true or valid evidence and information, causing downstream effects such as a drop in the stock market, and assisting malicious actors in creating compelling imagery and propaganda to support disinformation campaigns.",1.0,0.9712528193454298,0.5,0.99999999995,0.7194400533942749
How can synthetic content detection help manage and mitigate risks associated with AI-generated content?,"['Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'arXiv. https://arxiv.org/pdf/2308.14752 \nPartnership on AI (2023) Building a Glossary for Synthetic Media Transparency Methods, Part 1: Indirect \nDisclosure. https://partnershiponai.org/glossary-for-synthetic-media-transparency-methods-part-1-\nindirect-disclosure/ \nQu, Y. et al. (2023) Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-\nTo-Image Models. arXiv. https://arxiv.org/pdf/2305.13873 \nRafat, K. et al. (2023) Mitigating carbon footprint for knowledge distillation based deep learning model \ncompression. PLOS One. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285668 \nSaid, I. et al. (2022) Nonconsensual Distribution of Intimate Images: Exploring the Role of Legal Attitudes \nin Victimization and Perpetration. Sage. \nhttps://journals.sagepub.com/doi/full/10.1177/08862605221122834#bibr47-08862605221122834 \nSandbrink, J. (2023) Artificial intelligence and biological misuse: Differentiating risks of language models'
 'Priorities Related to Information Integrity Research and Development.'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'and management. One possible way to further categorize these risks, derived in part from the UK’s International \nScientific Report on the Safety of Advanced AI, could be: 1) Technical / Model risks (or risk from malfunction): \nConfabulation; Dangerous or Violent Recommendations; Data Privacy; Value Chain and Component Integration; \nHarmful Bias, and Homogenization; 2) Misuse by humans (or malicious use): CBRN Information or Capabilities; \nData Privacy; Human-AI Configuration; Obscene, Degrading, and/or Abusive Content; Information Integrity; \nInformation Security; 3) Ecosystem / societal risks (or systemic risks): Data Privacy; Environmental; Intellectual \nProperty. We also note that some risks are cross-cutting between these categories.'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022']",I don't know.,"Synthetic content detection can help manage and mitigate risks associated with AI-generated content by facilitating greater information access about both authentic and synthetic content to users, enabling better knowledge of trustworthiness in AI systems. It can also enable processes to trace negative outcomes back to their source, improve information integrity, and uphold public trust.",0.0,0.0,1.0,0.11111111109999999,0.177284727830764
What organizational practices are in place to enable AI testing and incident identification?,"['• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'Priorities Related to Information Integrity Research and Development.'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'inputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents.'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty']","Based on the provided context, organizational practices in place to enable AI testing and incident identification include:

- Documentation practices such as logging, recording, and analyzing GAI (Generative Artificial Intelligence) incidents.
- Regular information sharing.
- Maintaining change management records.
- Keeping version history and metadata.

These practices facilitate smoother sharing of information with relevant AI Actors and empower them to respond to and manage AI incidents.

(Source: Document from https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf, page 56)","Organizational practices are in place to enable AI testing, identification of incidents, and information sharing. Specifically, policies are established for measuring the effectiveness of employed content provenance methodologies (e.g., cryptography, watermarking, steganography, etc.) and identifying the minimum set of criteria necessary for GAI system incident reporting such as System ID, Title, Reporter, System/Source, Data Reported, Date of Incident, Description, Impact(s), and Stakeholder(s) Impacted.",1.0,1.0000000000000002,0.5,0.874999999978125,0.7962767244097781
What techniques are suggested to minimize risks associated with data privacy in evaluations involving human subjects?,"['external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'Priorities Related to Information Integrity Research and Development.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'evaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5']",The context suggests that organizations should follow human subjects research requirements and best practices such as informed consent and subject compensation to minimize risks associated with data privacy in evaluations involving human subjects.,"Techniques such as anonymization, differential privacy, or other privacy-enhancing technologies are suggested to minimize the risks associated with linking AI-generated content back to individual human subjects.",0.75,0.9604483967726186,0.0,0.19999999998,0.4283001890495529
Who were some of the participants from the private sector and civil society in the OSTP meetings?,"['Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency.'
 'whether they are genuinely helpful in solving an identified problem. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, panelists individually described the importance of: receiving community input into the \ndesign and use of technologies, public reporting on crucial elements of these systems, better notice and consent \nprocedures that ensure privacy based on context and use case, ability to opt-out of using these systems and \nreceive a fallback to a human process, providing explanations of decisions and how these systems work, the \nneed for governance including training in using these systems, ensuring the technological use cases are \ngenuinely related to the goal task and are locally validated to work, and the need for institution and protection \nof third party audits to ensure systems continue to be accountable and valid. \n57'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Priorities Related to Information Integrity Research and Development.'
 'APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation \nMITRE Corporation \nNational Association for the \nAdvancement of Colored People \nLegal Defense and Educational \nFund \nNational Association of Criminal \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On Government Oversight \nRecording Industry Association of \nAmerica \nRobert Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware & Information Industry \nAssociation'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance']","Based on the provided context, some of the participants from the private sector and civil society in the OSTP meetings included:

- Microsoft Corporation
- MITRE Corporation
- National Association for the Advancement of Colored People Legal Defense and Educational Fund
- National Association of Criminal Defense Lawyers
- National Center for Missing & Exploited Children
- National Fair Housing Alliance
- National Immigration Law Center
- NEC Corporation of America
- New America’s Open Technology Institute
- New York Civil Liberties Union
- Notre Dame Technology Ethics Center
- Onfido
- Oosto
- Palantir
- Pangiam
- Parity Technologies
- Philadelphia Unemployment Project
- Project On Government Oversight
- Recording Industry Association of America
- Science, Technology, and Public Policy Program at University of Michigan Ann Arbor
- Security Industry Association
- Software & Information Industry Association","Participants in the OSTP meetings from the private sector and civil society included: Adobe, American Civil Liberties Union (ACLU), The Aspen Commission on Information Disorder, The Awood Center, The Australian Human Rights Commission, Biometrics Institute, The Brookings Institute, BSA | The Software Alliance, Cantellus Group, Center for American Progress, Center for Democracy and Technology, Center on Privacy and Technology at Georgetown Law, Christiana Care, Color of Change, Coworker, Data Robot, Data Trust Alliance, Data and Society Research Institute, Deepmind, EdSAFE AI Alliance, Electronic Privacy Information Center (EPIC), Encode Justice, Equal AI, Google, Hitachi's AI Policy Committee, The Innocence Project, Institute of Electrical and Electronics Engineers (IEEE), Intuit, Lawyers Committee for Civil Rights Under Law, Legal Aid Society, The Leadership Conference on Civil and Human Rights, Meta, Microsoft, The MIT AI Policy Forum, Movement Alliance Project, The National Association of Criminal Defense Lawyers, O’Neil Risk Consulting & Algorithmic Auditing, The Partnership on AI, Pinterest, The Plaintext Group, pymetrics, SAP, The Security Industry Association, Software and Information Industry Association (SIIA), Special Competitive Studies Project, Thorn, United for Respect, University of California at Berkeley Citris Policy Lab, University of California at Berkeley Labor Center, Unfinished/Project Liberty, Upturn, US Chamber of Commerce, US Chamber of Commerce Technology Engagement Center A.I. Working Group, Vibrent Health, Warehouse Worker Resource Center, Waymap.",,0.9999999999999994,1.0,0.8124999999796875,0.23404496086632567
What role does the National Institute of Standards and Technology (NIST) play in advancing artificial intelligence (AI)?,"['About AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest'
 '57 \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix B: \nHow AI Risks Differ from Traditional Software Risks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B \nNational Institute of Standards and Technology (2023) AI RMF Playbook. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook \nNational Institue of Standards and Technology (2023) Framing Risk \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/1-sec-risk \nNational Institute of Standards and Technology (2023) The Language of Trustworthy AI: An In-Depth \nGlossary of Terms https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary \nNational Institue of Standards and Technology (2022) Towards a Standard for Identifying and Managing \nBias in Artificial Intelligence https://www.nist.gov/publications/towards-standard-identifying-and-\nmanaging-bias-artificial-intelligence'
 'https://www.bloomberg.com/graphics/2023-generative-ai-bias/. \nNational Institute of Standards and Technology (2024) Adversarial Machine Learning: A Taxonomy and \nTerminology of Attacks and Mitigations https://csrc.nist.gov/pubs/ai/100/2/e2023/final \nNational Institute of Standards and Technology (2023) AI Risk Management Framework. \nhttps://www.nist.gov/itl/ai-risk-management-framework \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Chapter 3: AI \nRisks and Trustworthiness. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Chapter 6: AI \nRMF Profiles. https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Core_And_Profiles/6-sec-profile \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix A: \nDescriptions of AI Actor Tasks.'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'Priorities Related to Information Integrity Research and Development.'
 'accuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging \ngeneration of artificially intelligent partners.95 The National Science Foundation’s program on Fairness in \nArtificial Intelligence also includes a specific interest in research foundations for explainable AI.96\n45'
 'from the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \n[email protected] \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to'
 'Descriptions of AI Actor Tasks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_A#:~:text=AI%20actors%\n20in%20this%20category,data%20providers%2C%20system%20funders%2C%20product'
 'imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency.']","The National Institute of Standards and Technology (NIST) develops measurements, technology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without harm to people or the planet. NIST has conducted both fundamental and applied work on AI for more than a decade and is helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy AI. NIST also established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to build the science necessary for the safe, secure, and trustworthy development and use of AI.","The National Institute of Standards and Technology (NIST) develops measurements, technology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without harm to people or the planet. NIST is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy AI and has established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to build the science necessary for safe, secure, and trustworthy development and use of AI.",1.0,0.978036094470781,1.0,0.8928571428348213,0.9993736562177171
What should entities responsible for the development or use of automated systems include in their governance structures and procedures to ensure clear organizational oversight?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'times; and information about the procedures used to address requests for consideration along with the results \nof the evaluation of their accessibility. For systems used in sensitive domains, reporting should include infor\xad\nmation about training and governance procedures for these technologies. Reporting should also include docu\xad\nmentation of goals and assessment of meeting those goals, consideration of data included, and documentation \nof the governance of reasonable access to the technology. Reporting should be provided in a clear and \nmachine-readable manner. \n51'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'monitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer \nconcrete directions for how those changes can be made. \n• Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can \nbe provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should \nbe made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law \nenforcement, or national security considerations may prevent public release. Where public reports are not possible, the \ninformation should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguard \ning individuals’ rights. These reporting expectations are important for transparency, so the American people can have'
 'Priorities Related to Information Integrity Research and Development.'
 'whether they are genuinely helpful in solving an identified problem. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of \nthese technologies, panelists individually described the importance of: receiving community input into the \ndesign and use of technologies, public reporting on crucial elements of these systems, better notice and consent \nprocedures that ensure privacy based on context and use case, ability to opt-out of using these systems and \nreceive a fallback to a human process, providing explanations of decisions and how these systems work, the \nneed for governance including training in using these systems, ensuring the technological use cases are \ngenuinely related to the goal task and are locally validated to work, and the need for institution and protection \nof third party audits to ensure systems continue to be accountable and valid. \n57'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty']","Entities responsible for the development or use of automated systems should include the following in their governance structures and procedures to ensure clear organizational oversight:

1. **Monitoring**: Regular monitoring of the automated systems.
2. **Governance Structures**: Establishing clear governance structures for the automated systems.
3. **Reporting**: Expectations about reporting, which can be provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review. Reports should be made public as much as possible, consistent with law, regulation, and policy, while considering intellectual property, law enforcement, or national security considerations.
4. **Oversight Bodies**: Where public reports are not possible, information should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguarding individuals’ rights.

These measures are important for transparency so that the public can have confidence in the automated systems being used.","Entities responsible for the development or use of automated systems should include clearly-stated governance procedures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing assessment and mitigation. Organizational stakeholders, including those with oversight of the business process or operation being automated, as well as other organizational divisions that may be affected due to the use of the system, should be involved in establishing governance procedures. Responsibility should rest high enough in the organization that decisions about resources, mitigation, incident response, and potential rollback can be made promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those holding this responsibility should be made aware of any use cases with the potential for meaningful impact on people’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, it may be appropriate for an independent ethics review to be conducted before deployment.",0.9090909090909091,0.978493825176774,1.0,0.7440476190352183,0.7317903632773278
How has the customer service industry successfully integrated automated services with human support teams?,"['BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'ethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm. \n9']",I don't know.,The customer service industry has successfully integrated automated services such as chat-bots and AI-driven call response systems with escalation to a human support team. Many businesses now use partially automated customer service platforms that help answer customer questions and compile common problems for human agents to review. These integrated human-AI systems allow companies to provide faster customer care while maintaining human agents to answer calls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to successful customer service.,0.0,0.0,0.0,0.19999999998,0.18032948270080051
What steps should be taken to inform AI stakeholders during the deactivation or disengagement of AI systems?,"['• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'inputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Priorities Related to Information Integrity Research and Development.']",I don't know.,"Establish and maintain communication plans to inform AI stakeholders as part of the deactivation or disengagement process of a specific GAI system (including for open-source models) or context of use, including reasons, workarounds, user access removal, alternative processes, contact information, etc.",0.0,0.0,0.0,0.2916666666520833,0.17902589024025395
How do the U.S. AI Safety Institute and AI Risk Management Framework support the 2023 AI Exec Order?,"['BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '57 \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix B: \nHow AI Risks Differ from Traditional Software Risks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B \nNational Institute of Standards and Technology (2023) AI RMF Playbook. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook \nNational Institute of Standards and Technology (2023) Framing Risk \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/1-sec-risk \nNational Institute of Standards and Technology (2023) The Language of Trustworthy AI: An In-Depth \nGlossary of Terms https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary \nNational Institute of Standards and Technology (2022) Towards a Standard for Identifying and Managing \nBias in Artificial Intelligence https://www.nist.gov/publications/towards-standard-identifying-and-\nmanaging-bias-artificial-intelligence'
 'https://www.bloomberg.com/graphics/2023-generative-ai-bias/. \nNational Institute of Standards and Technology (2024) Adversarial Machine Learning: A Taxonomy and \nTerminology of Attacks and Mitigations https://csrc.nist.gov/pubs/ai/100/2/e2023/final \nNational Institute of Standards and Technology (2023) AI Risk Management Framework. \nhttps://www.nist.gov/itl/ai-risk-management-framework \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Chapter 3: AI \nRisks and Trustworthiness. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Chapter 6: AI \nRMF Profiles. https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Core_And_Profiles/6-sec-profile \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix A: \nDescriptions of AI Actor Tasks.'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'Priorities Related to Information Integrity Research and Development.'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 '54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611 \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. \nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \nBoyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416 \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content']",I don't know.,"The U.S. AI Safety Institute and the AI Risk Management Framework support the 2023 Executive Order on Safe, Secure, and Trustworthy AI by continuing efforts to build the science necessary for the safe, secure, and trustworthy development and use of AI.",0.0,0.0,0.0,0.4166666666458333,0.17786617457007506
How to balance synthetic vs. non-synthetic data and document model details to avoid bias?,"['and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'and biological design tools. arXiv. https://arxiv.org/pdf/2306.13952'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content']",I don't know.,"To balance synthetic vs. non-synthetic data and document model details to avoid bias, assess the proportion of synthetic to non-synthetic training data and verify that the training data is not overly homogenous or GAI-produced. Additionally, document GAI model details including proposed use and organizational value, assumptions and limitations, data collection methodologies, data provenance, data quality, model architecture, optimization objectives, training algorithms, RLHF approaches, fine-tuning or retrieval-augmented generation approaches, evaluation data, ethical considerations, and legal and regulatory requirements.",0.0,0.0,0.0,0.19999999998,0.18112171206326816
How to address data privacy and ensure AI integrity in human evaluations?,"['FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'Priorities Related to Information Integrity Research and Development.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 '54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611 \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. \nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \nBoyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416 \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant.']",I don't know.,"To address data privacy and ensure AI integrity in human evaluations, the following actions are suggested: 1. Document how content provenance data is tracked and how that data interacts with privacy and security, including anonymizing data to protect privacy, leveraging privacy output filters, and removing any personally identifiable information (PII). 2. Provide human subjects with options to withdraw participation or revoke their consent for present or future use of their data in GAI applications. 3. Use techniques such as anonymization, differential privacy, or other privacy-enhancing technologies to minimize the risks associated with linking AI-generated content back to individual human subjects.",0.0,0.0,0.0,0.5845238095091964,0.18061559566984886
How can public feedback and incident reporting improve GAI traceability and risk management?,"['Table of Contents \n1. \nIntroduction ..............................................................................................................................................1 \n2. \nOverview of Risks Unique to or Exacerbated by GAI .....................................................................2 \n3. \nSuggested Actions to Manage GAI Risks ......................................................................................... 12 \nAppendix A. Primary GAI Considerations ............................................................................................... 47 \nAppendix B. References ................................................................................................................................ 54'
 'inputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents.'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'Priorities Related to Information Integrity Research and Development.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'listed for only some subcategories.  \n \n \n13 As this document was focused on the GAI PWG efforts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later.'
 'meaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible.   \nNOTICE AND EXPLANATION\n40'
 'harm, such as the ideation and design of novel harmful chemical or biological agents.  \nWhile some of these described capabilities lie beyond the reach of existing GAI tools, ongoing \nassessments of this risk would be enhanced by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning and GAI systems’ connection or access to relevant data and tools. \nTrustworthy AI Characteristic: Safe, Explainable and Interpretable']","Public feedback and incident reporting can improve GAI traceability and risk management by facilitating smoother sharing of information with relevant AI Actors. Regular information sharing, change management records, version history, and metadata can empower AI Actors in responding to and managing AI incidents. Documentation practices, including logging, recording, and analyzing GAI incidents, are crucial for this process.","Public feedback and incident reporting can improve GAI traceability and risk management by promoting transparency and enhancing awareness of performance changes. Documentation and sharing of information about GAI incidents help relevant AI Actors trace impacts to their source, while structured public feedback provides context and depth that automated systems may lack. This feedback can be integrated into monitoring processes to mitigate potential risks and harms from outputs.",0.8181818181818182,0.9999999999999999,0.25,0.37301587300343914,0.3857288288642894
"How could automation bias worsen misinformation risks from GAI, given model collapse and trust erosion?","['and a human who is interacting with it. Humans bring their unique perspectives, experiences, or domain-\nspecific expertise to interactions with AI systems but may not have detailed knowledge of AI systems and \nhow they work. As a result, human experts may be unnecessarily “averse” to GAI systems, and thus \ndeprive themselves or others of GAI’s beneficial uses.  \nConversely, due to the complexity and increasing reliability of GAI technology, over time, humans may \nover-rely on GAI systems or may unjustifiably perceive GAI content to be of higher quality than that \nproduced by other sources. This phenomenon is an example of automation bias, or excessive deference \nto automated systems. Automation bias can exacerbate other risks of GAI, such as risks of confabulation \nor risks of bias or homogenization. \nThere may also be concerns about emotional entanglement between humans and GAI systems, which \ncould lead to negative psychological impacts.'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'guide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV'
 'inputs and content delivered through these plugins is often distributed, with inconsistent or insufficient \naccess control. \nDocumentation practices including logging, recording, and analyzing GAI incidents can facilitate \nsmoother sharing of information with relevant AI Actors. Regular information sharing, change \nmanagement records, version history and metadata can also empower AI Actors responding to and \nmanaging AI incidents.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'even if the inferences are not accurate (e.g., confabulations), and especially if they reveal information \nthat the individual considers sensitive or that is used to disadvantage or harm them. \nBeyond harms from information exposure (such as extortion or dignitary harm), wrong or inappropriate \ninferences of PII can contribute to downstream or secondary harmful impacts. For example, predictive \ninferences made by GAI models based on PII or protected attributes can contribute to adverse decisions, \nleading to representational or allocative harms to individuals or groups (see Harmful Bias and \nHomogenization below).']","Automation bias can worsen misinformation risks from Generative Artificial Intelligence (GAI) by leading humans to over-rely on GAI systems or unjustifiably perceive GAI content to be of higher quality than that produced by other sources. This excessive deference to automated systems can exacerbate risks such as confabulation (the generation of false information) and bias or homogenization. As a result, misinformation can spread more easily because people might trust and disseminate incorrect or biased information generated by GAI without sufficient scrutiny. This can lead to model collapse, where the quality and reliability of the AI model degrade over time, and trust erosion, where users lose confidence in the AI system and its outputs.","Automation bias can exacerbate other risks of GAI, such as risks of confabulation or risks of bias or homogenization.",0.46153846153846156,0.929886978768589,1.0,0.59999999997,0.9814270413495314
How do consultations and monitoring ensure automated systems' safety and effectiveness?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'to proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible. \n15'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 ""technology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of""
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'ethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm. \n9'
 'or unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consultation processes that may be applied when considering the use of new automated systems, and existing product development and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Inno-'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance']","Consultations and monitoring ensure the safety and effectiveness of automated systems by involving public consultation to address concerns and make decisions based on those concerns, conducting risk identification and management assessments, and performing independent evaluations and reporting. These processes include steps to mitigate potential harms, performance testing for accuracy and demographic impact, and ongoing monitoring procedures with regular performance testing reports. This comprehensive approach helps to proactively protect against harms from unintended uses or impacts of automated systems.","Consultations ensure automated systems' safety and effectiveness by engaging diverse impacted communities and experts to consider concerns and risks unique to those communities. This feedback is documented and used to reconsider the development of the system. Monitoring ensures ongoing performance by including recalibration procedures, continuous evaluation of performance metrics and harm assessments, updates of systems, retraining of machine learning models, and fallback mechanisms. Both technical system components and human operators are monitored, with manual human-led monitoring as a check for shortcomings in automated monitoring systems.",1.0,0.9888831953529978,0.75,0.6116666666544333,0.6381752792639237
How do EO 13960 and NIST AI RMF ensure AI transparency and accountability for federal agencies?,"['lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 '57 \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix B: \nHow AI Risks Differ from Traditional Software Risks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B \nNational Institute of Standards and Technology (2023) AI RMF Playbook. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook \nNational Institute of Standards and Technology (2023) Framing Risk \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/1-sec-risk \nNational Institute of Standards and Technology (2023) The Language of Trustworthy AI: An In-Depth \nGlossary of Terms https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary \nNational Institute of Standards and Technology (2022) Towards a Standard for Identifying and Managing \nBias in Artificial Intelligence https://www.nist.gov/publications/towards-standard-identifying-and-\nmanaging-bias-artificial-intelligence'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'https://www.bloomberg.com/graphics/2023-generative-ai-bias/. \nNational Institute of Standards and Technology (2024) Adversarial Machine Learning: A Taxonomy and \nTerminology of Attacks and Mitigations https://csrc.nist.gov/pubs/ai/100/2/e2023/final \nNational Institute of Standards and Technology (2023) AI Risk Management Framework. \nhttps://www.nist.gov/itl/ai-risk-management-framework \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Chapter 3: AI \nRisks and Trustworthiness. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Chapter 6: AI \nRMF Profiles. https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Core_And_Profiles/6-sec-profile \nNational Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix A: \nDescriptions of AI Actor Tasks.'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'Priorities Related to Information Integrity Research and Development.'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'listed for only some subcategories.  \n \n \n13 As this document was focused on the GAI PWG efforts and primary considerations (see Appendix A), AI RMF \nsubcategories not addressed here may be added later.'
 'Descriptions of AI Actor Tasks. \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_A#:~:text=AI%20actors%\n20in%20this%20category,data%20providers%2C%20system%20funders%2C%20product']",I don't know.,"Executive Order 13960 requires that AI used by federal agencies is transparent and accountable, among other principles. The NIST AI Risk Management Framework, which is being developed through a consensus-driven, open, transparent, and collaborative process, will also consider and encompass principles such as transparency and accountability during the pre-design, design and development, deployment, use, and testing and evaluation of AI technologies and systems.",0.0,0.0,0.0,0.499999999975,0.17810188855970693
How does surveillance software for monitoring union talks intersect with data privacy and regulatory gaps?,"['Companies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Priorities Related to Information Integrity Research and Development.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61'
 ""records and education-related data in order to do so. The overarching concerns of surveillance in these \ndomains included concerns about the chilling effects of surveillance on student expression, inappropriate \ncontrol of tenants via surveillance, and the way that surveillance of workers blurs the boundary between work \nand life and exerts extreme and potentially damaging control over workers' lives. Additionally, some panelists \npointed out ways that data from one situation was misapplied in another in a way that limited people's \nopportunities, for example data from criminal justice settings or previous evictions being used to block further \naccess to housing. Throughout, various panelists emphasized that these technologies are being used to shift the \nburden of oversight and efficiency from employers to workers, schools to students, and landlords to tenants, in \nways that diminish and encroach on equality of opportunity; assessment of these technologies should include""
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'Alliance for Automotive Innovation \nAmelia Winger-Bearskin \nAmerican Civil Liberties Union \nAmerican Civil Liberties Union of \nMassachusetts \nAmerican Medical Association \nARTICLE19 \nAttorneys General of the District of \nColumbia, Illinois, Maryland, \nMichigan, Minnesota, New York, \nNorth Carolina, Oregon, Vermont, \nand Washington \nAvanade \nAware \nBarbara Evans \nBetter Identity Coalition \nBipartisan Policy Center \nBrandon L. Garrett and Cynthia \nRudin \nBrian Krupp \nBrooklyn Defender Services \nBSA | The Software Alliance \nCarnegie Mellon University \nCenter for Democracy & \nTechnology \nCenter for New Democratic \nProcesses \nCenter for Research and Education \non Accessible Technology and \nExperiences at University of \nWashington, Devva Kasnitz, L Jean \nCamp, Jonathan Lazar, Harry \nHochheiser \nCenter on Privacy & Technology at \nGeorgetown Law \nCisco Systems \nCity of Portland Smart City PDX \nProgram \nCLEAR \nClearview AI \nCognoa \nColor of Change \nCommon Sense Media \nComputing Community Consortium']",I don't know.,"Companies use surveillance software to track employee discussions about union activity and use the resulting data to surveil individual employees and surreptitiously intervene in discussions. The United States lacks a comprehensive statutory or regulatory framework governing the rights of the public when it comes to personal data, which includes the use of surveillance software in workplaces.",0.0,0.0,1.0,0.62499999996875,0.18098271214915762
How can provenance tracking and public feedback improve AI content risk management?,"['FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'and management. One possible way to further categorize these risks, derived in part from the UK’s International \nScientific Report on the Safety of Advanced AI, could be: 1) Technical / Model risks (or risk from malfunction): \nConfabulation; Dangerous or Violent Recommendations; Data Privacy; Value Chain and Component Integration; \nHarmful Bias, and Homogenization; 2) Misuse by humans (or malicious use): CBRN Information or Capabilities; \nData Privacy; Human-AI Configuration; Obscene, Degrading, and/or Abusive Content; Information Integrity; \nInformation Security; 3) Ecosystem / societal risks (or systemic risks): Data Privacy; Environmental; Intellectual \nProperty. We also note that some risks are cross-cutting between these categories.'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'guide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV'
 '2 \nThis work was informed by public feedback and consultations with diverse stakeholder groups as part of NIST’s \nGenerative AI Public Working Group (GAI PWG). The GAI PWG was an open, transparent, and collaborative \nprocess, facilitated via a virtual workspace, to obtain multistakeholder input on GAI risk management and to \ninform NIST’s approach. \nThe focus of the GAI PWG was limited to four primary considerations relevant to GAI: Governance, Content \nProvenance, Pre-deployment Testing, and Incident Disclosure (further described in Appendix A). As such, the \nsuggested actions in this document primarily address these considerations. \nFuture revisions of this profile will include additional AI RMF subcategories, risks, and suggested actions based \non additional considerations of GAI as the space evolves and empirical evidence indicates additional risks. A \nglossary of terms pertinent to GAI risk management will be developed and hosted on NIST’s Trustworthy &'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'Priorities Related to Information Integrity Research and Development.'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content']","Provenance tracking and public feedback can improve AI content risk management by providing detailed information on the data used to train machine learning models, including how data sources were processed and interpreted, and identifying any missing, incomplete, or erroneous data. This transparency allows for better risk identification and management assessments, and helps in mitigating potential harms. Public feedback can highlight concerns and influence decisions, ensuring that the AI system aligns with public expectations and values. Additionally, ongoing monitoring and performance testing, informed by provenance tracking, can ensure the system remains accurate and fair over time. 

Source: Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 19, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': '', '_id': '0dfb25b274794704951bd01224d5c65a', '_collection_name': 'snowflake-arctic-embed-l-recursive-base'}, page_content='organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,').","Provenance tracking and public feedback can improve AI content risk management by tracing the origin and history of content, enabling better knowledge of trustworthiness in AI systems, and facilitating greater information access about both authentic and synthetic content. Integrating pre- and post-deployment external feedback into the monitoring process for GAI models can enhance awareness of performance changes and mitigate potential risks and harms from outputs.",0.8181818181818182,1.0000000000000002,0.8,0.44047619046517855,0.8237425286176446
How can designers ensure user understanding and privacy in automated systems?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'You should know that an automated system is being used, \nand understand how and why it contributes to outcomes \nthat impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in \nuse, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated \nsystem is not the sole input determining the outcome. Automated \nsystems should provide explanations that are technically valid, \nmeaningful and useful to you and to any operators or others who'
 'You should be protected from abusive data practices via built-in \nprotections and you should have agency over how data about \nyou is used. You should be protected from violations of privacy through \ndesign choices that ensure such protections are included by default, including \nensuring that data collection conforms to reasonable expectations and that \nonly data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission \nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; \nwhere not possible, alternative privacy by design safeguards should be used. \nSystems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be'
 'to proactively protect you from harms stemming from unintended, \nyet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible. \n15'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles']","Designers can ensure user understanding and privacy in automated systems by:

1. **Providing Accessible Documentation**: Designers, developers, and deployers should provide generally accessible plain language documentation, including clear descriptions of the overall system functioning and the role automation plays. This documentation should include notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.

2. **Keeping Users Informed**: Users should be notified of significant use case or key functionality changes. They should know how and why an outcome impacting them was determined by an automated system, including when the automated system is not the sole input determining the outcome.

3. **Ensuring Data Privacy**: Users should be protected from abusive data practices via built-in protections and should have agency over how data about them is used. This includes ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers should seek user permission and respect their decisions regarding the collection, use, access, transfer, and deletion of their data in appropriate ways and to the greatest extent possible. Where not possible, alternative privacy by design safeguards should be used.

4. **Avoiding Privacy-Invasive Defaults**: Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify the collection of data in cases where it can be appropriately obtained.

By following these steps, designers can help ensure that users understand how automated systems impact them and that their privacy is protected.","Designers can ensure user understanding and privacy in automated systems by providing plain language documentation, clear descriptions of system functioning, notice of system use, and explanations of outcomes. They should seek user permission and respect decisions regarding data collection, use, access, transfer, and deletion. Consent requests should be brief, understandable, and give users agency over data collection. Enhanced protections and restrictions should be applied to sensitive domains, and surveillance technologies should be subject to heightened oversight.",1.0,0.999308689219987,0.5,0.26666666665333333,0.8371097611849012
How do equity and disparity assessments help prevent algorithmic bias?,"['ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'evaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5'
 'an equitable way.  This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHNICAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'The demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex \n(including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. The broad set of measures assessed should include demographic performance measures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity \nassessment should be separated from data used for the automated system and privacy protections should be \ninstituted; in some cases it may make sense to perform such assessment using a data sample. For every \ninstance where the deployed automated system leads to different treatment or impacts disfavoring the identified groups, the entity governing, implementing, or using the system should document the disparity and a \njustification for any continued use of the system.'
 'protections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'people’s legal problem stories. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4696936 \nHaran, R. (2023) Securing LLM Systems Against Prompt Injection. NVIDIA. \nhttps://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/ \nInformation Technology Industry Council (2024) Authenticating AI-Generated Content. \nhttps://www.itic.org/policy/ITI_AIContentAuthorizationPolicy_122123.pdf \nJain, S. et al. (2023) Algorithmic Pluralism: A Structural Approach To Equal Opportunity. arXiv. \nhttps://arxiv.org/pdf/2305.08157 \nJi, Z. et al (2023) Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 55, 12, \nArticle 248. https://doi.org/10.1145/3571730 \nJones-Jang, S. et al. (2022) How do people react to AI failure? Automation bias, algorithmic aversion, and \nperceived controllability. Oxford. https://academic.oup.com/jcmc/article/28/1/zmac029/6827859] \nJussupow, E. et al. (2020) Why Are We Averse Towards Algorithms? A Comprehensive Literature Review']","Equity and disparity assessments help prevent algorithmic bias by ensuring that protections are built into the design, deployment, and ongoing use of automated systems. These assessments include proactive equity evaluations as part of system design, the use of representative data, protection against proxies for demographic features, ensuring accessibility for people with disabilities, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections. This comprehensive approach helps identify and mitigate potential biases, ensuring fair treatment across different demographic groups.","Equity and disparity assessments help prevent algorithmic bias by conducting proactive equity assessments in the design phase to review potential input data, associated historical context, accessibility for people with disabilities, and societal goals to identify potential discrimination and effects on equity. Disparity assessments test automated systems using a broad set of measures to assess whether the system components produce disparities, and demographic data collected for disparity assessment should be separated from data used for the automated system with privacy protections instituted. When a disparity is identified, steps may be taken to mitigate or eliminate it, and ongoing monitoring and mitigation are performed to assess algorithmic discrimination that might arise from unforeseen interactions or changes.",1.0,1.0000000000000002,1.0,0.5216269841182903,0.991481212382073
"How do school audio surveillance systems for stress indicators intersect with data protections, especially for students with disabilities?","[""records and education-related data in order to do so. The overarching concerns of surveillance in these \ndomains included concerns about the chilling effects of surveillance on student expression, inappropriate \ncontrol of tenants via surveillance, and the way that surveillance of workers blurs the boundary between work \nand life and exerts extreme and potentially damaging control over workers' lives. Additionally, some panelists \npointed out ways that data from one situation was misapplied in another in a way that limited people's \nopportunities, for example data from criminal justice settings or previous evictions being used to block further \naccess to housing. Throughout, various panelists emphasized that these technologies are being used to shift the \nburden of oversight and efficiency from employers to workers, schools to students, and landlords to tenants, in \nways that diminish and encroach on equality of opportunity; assessment of these technologies should include""
 'policing, automated license plate readers, real-time facial recognition systems (especially  \n    those used in public places or during protected activities like peaceful protests), social media  \n    monitoring, and ankle monitoring devices; \nVoting-related systems such as signature matching tools; \nSystems with a potential privacy impact such as smart home systems and associated data,  \n    systems that use or collect health-related data, systems that use or collect education-related  \n    data, criminal justice system data, ad-targeting systems, and systems that perform big data  \n    analytics in order to build profiles or infer personal information about individuals; and \nAny system that has the meaningful potential to lead to algorithmic discrimination. \n• Equal opportunities, including but not limited to:\nEducation-related systems such as algorithms that purport to detect student cheating or  \n    plagiarism, admissions algorithms, online or virtual reality student monitoring systems,'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Companies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'Priorities Related to Information Integrity Research and Development.'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 ""technology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of""]",I don't know.,School audio surveillance systems monitor student conversations to detect potential 'stress indicators' as a warning of potential violence. These systems have the potential to limit student freedom to express a range of emotions at school and may inappropriately flag students with disabilities who need accommodations or use screen readers or dictation software as cheating.,0.0,0.0,0.6666666666666666,0.59999999997,0.185658045881626
How does the tech companion help implement AI Bill of Rights to protect Americans from automated harms?,"['BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \nSAFE AND EFFECTIVE SYSTEMS\n \nALGORITHMIC DISCRIMINATION PROTECTIONS\n \nDATA PRIVACY\n \nNOTICE AND EXPLANATION\n \nHUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK\nAPPENDIX\n \nEXAMPLES OF AUTOMATED SYSTEMS\n \nLISTENING TO THE AMERICAN PEOPLE\nENDNOTES \n12\n14\n15\n23\n30\n40\n46\n53\n53\n55\n63\n13'
 'ing sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized \nemail address, people throughout the United States, public servants across Federal agencies, and members of the \ninternational community spoke up about both the promises and potential harms of these technologies, and \nplayed a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these \ndiscussions include that AI has transformative potential to improve Americans’ lives, and that preventing the \nharms of these technologies is both necessary and achievable. The Appendix includes a full list of public engage-\nments. \n4'
 'ethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm. \n9'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'moving principles into practice. \nThe expectations given in the Technical Companion are meant to serve as a blueprint for the development of \nadditional technical standards and practices that should be tailored for particular sectors and contexts. While \nexisting laws informed the development of the Blueprint for an AI Bill of Rights, this framework does not detail \nthose laws beyond providing them as examples, where appropriate, of existing protective measures. This \nframework instead shares a broad, forward-leaning vision of recommended principles for automated system \ndevelopment and use to inform private and public involvement with these systems where they have the poten\xad\ntial to meaningfully impact rights, opportunities, or access. Additionally, this framework does not analyze or \ntake a position on legislative and regulatory proposals in municipal, state, and federal government, or those in \nother countries.'
 'technical companion is intended to be used as a reference by people across many circumstances – anyone \nimpacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to \ngovern the use of an automated system. \nEach principle is accompanied by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \nThis section provides a brief summary of the problems that the principle seeks to address and protect against, including \nillustrative examples. \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems are meant to serve as a blueprint for the development of additional technical\nstandards and practices that should be tailored for particular sectors and contexts.\n• This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The \nexpectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing'
 'or unintended uses lead to unintended harms. \nMany of the harms resulting from these technologies are preventable, and actions are already being taken to protect \nthe public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \nkey development decisions are vetted by an ethics review; others have identified and mitigated harms found through \npre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consulta\xad\ntion processes that may be applied when considering the use of new automated systems, and existing product develop\xad\nment and testing practices already protect the American public from many potential harms. \nStill, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \nthese existing practices, increase confidence in the use of automated systems, and protect the American public. Inno\xad']","The technical companion to the Blueprint for an AI Bill of Rights helps implement the AI Bill of Rights by providing practical next steps to move the principles into practice. It promotes common approaches that allow technological innovation to flourish while protecting people from harm. The companion serves as a reference for anyone impacted by automated systems or involved in their development, design, deployment, evaluation, or policy-making. It outlines expectations for automated systems, which are meant to serve as a blueprint for developing additional technical standards and practices tailored for particular sectors and contexts. These expectations include pre-deployment testing, ongoing monitoring, and other practices to prevent unintended harms and ensure key development decisions are vetted by an ethics review.","The technical companion provides examples and concrete steps for communities, industry, governments, and others to take in order to build protections into policy, practice, or the technological design process. It outlines practical steps for the development of additional technical standards and practices, including pre-deployment testing, ongoing monitoring, and governance structures for automated systems. It also includes expectations about reporting for transparency, so the American people can have confidence that their rights, opportunities, and access are respected.",1.0,0.9153379955282014,1.0,0.9626543209769582,0.49869533163999014
How to mitigate Human-AI risks in evaluations and pre-deployment?,"['FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 '54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611 \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. \nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \nBoyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416 \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant.'
 'guide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV'
 'evaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5']",I don't know.,"To mitigate Human-AI risks in evaluations and pre-deployment, several actions can be taken: 1. Assess and manage statistical biases related to GAI content provenance through techniques such as re-sampling, re-weighting, or adversarial training. 2. Document how content provenance data is tracked and how that data interacts with privacy and security, including anonymizing data, leveraging privacy output filters, and removing any personally identifiable information (PII). 3. Provide human subjects with options to withdraw participation or revoke their consent for present or future use of their data in GAI applications. 4. Use techniques such as anonymization, differential privacy, or other privacy-enhancing technologies to minimize the risks associated with linking AI-generated content back to individual human subjects. 5. Evaluate claims of model capabilities using empirically validated methods. 6. Share results of pre-deployment testing with relevant GAI Actors. 7. Utilize a purpose-built testing environment such as NIST Dioptra to empirically evaluate GAI trustworthy characteristics. 8. Avoid extrapolating GAI system performance or capabilities from narrow, non-systematic, and anecdotal assessments. 9. Document the extent to which human domain knowledge is employed to improve GAI system performance. 10. Review and verify sources and citations in GAI system outputs during pre-deployment risk measurement and ongoing monitoring activities. 11. Track and document instances of anthropomorphization in GAI system interfaces. 12. Verify GAI system training data and TEVV data provenance, and ensure that fine-tuning or retrieval-augmented generation data is grounded. 13. Regularly review security and safety guardrails, especially if the GAI system is being operated in novel circumstances.",0.0,0.0,0.0,0.5259259259171605,0.2429127996658477
How to design explanatory mechanisms in high-risk automated systems for pre-decision transparency and post-decision accessibility?,"['detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'meaningful and useful to you and to any operators or others who \nneed to understand the system, and calibrated to the level of risk \nbased on the context. Reporting that includes summary information \nabout these automated systems in plain language and assessments of \nthe clarity and quality of the notice and explanations should be made \npublic whenever possible.   \nNOTICE AND EXPLANATION\n40'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \ntherefore does not require compliance with the principles described herein. It also is not determinative of what \nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \nintelligence activities. \nThe appropriate application of the principles set forth in this white paper depends significantly on the \ncontext in which automated systems are being utilized. In some circumstances, application of these principles'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty']",I don't know.,"In high-risk automated systems, explanatory mechanisms should be built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully transparent models should be used), rather than as an after-the-decision interpretation. The explanation provided by a system should accurately reflect the factors and the influences that led to a particular decision, and should be meaningful for the particular customization based on purpose, target, and level of risk.",0.0,0.0,1.0,0.22619047617916663,0.1800906539558884
How do biometrics affect access and decisions for benefits or penalties?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Priorities Related to Information Integrity Research and Development.'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'APPENDIX\nSystems that impact the safety of communities such as automated traffic control systems, elec \n-ctrical grid controls, smart city technologies, and industrial emissions and environmental\nimpact control algorithms; and\nSystems related to access to benefits or services or assignment of penalties such as systems that\nsupport decision-makers who adjudicate benefits such as collating or analyzing information or\nmatching records, systems which similarly assist in the adjudication of administrative or criminal\npenalties, fraud detection algorithms, services or benefits access control algorithms, biometric\nsystems used as access control, and systems which make benefits or services related decisions on a\nfully or partially autonomous basis (such as a determination to revoke benefits).\n54'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'Companies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32']","Biometric systems are used as access control mechanisms and can make decisions related to benefits or services on a fully or partially autonomous basis, such as determining whether to revoke benefits. 

Source: Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 53, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': ""D:20220920133035-04'00'"", 'modDate': ""D:20221003104118-04'00'"", 'trapped': '', '_id': '6f20c4fe7a8a4f27bf9169511d1cd8d5', '_collection_name': 'snowflake-arctic-embed-l-recursive-base'}, page_content='APPENDIX\nSystems that impact the safety of communities such as automated traffic control systems, elec \n-ctrical grid controls, smart city technologies, and industrial emissions and environmental\nimpact control algorithms; and\nSystems related to access to benefits or services or assignment of penalties such as systems that\nsupport decision-makers who adjudicate benefits such as collating or analyzing information or\nmatching records, systems which similarly assist in the adjudication of administrative or criminal\npenalties, fraud detection algorithms, services or benefits access control algorithms, biometric\nsystems used as access control, and systems which make benefits or services related decisions on a\nfully or partially autonomous basis (such as a determination to revoke benefits).\n54')",The answer to given question is not present in context,0.9285714285714286,0.8873551634118172,1.0,0.0,0.9263373617376327
How do equity assessments and disparity tests reduce algorithmic bias?,"['evaluation and plain language reporting in the form of an algorithmic impact assessment, including \ndisparity testing results and mitigation information, should be performed and made public whenever \npossible to confirm these protections. \n5'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'an equitable way.  This protection should include proactive equity \nassessments as part of the system design, use of representative data \nand protection against proxies for demographic features, ensuring \naccessibility for people with disabilities in design and development, \npre-deployment and ongoing disparity testing and mitigation, and \nclear organizational oversight. Independent evaluation and plain \nlanguage reporting in the form of an algorithmic impact assessment, \nincluding disparity testing results and mitigation information, \nshould be performed and made public whenever possible to confirm \nthese protections.\n23'
 'The demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex \n(including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual \norientation), religion, age, national origin, disability, veteran status, genetic information, or any other classifi\xad\ncation protected by law. The broad set of measures assessed should include demographic performance mea\xad\nsures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity \nassessment should be separated from data used for the automated system and privacy protections should be \ninstituted; in some cases it may make sense to perform such assessment using a data sample. For every \ninstance where the deployed automated system leads to different treatment or impacts disfavoring the identi\xad\nfied groups, the entity governing, implementing, or using the system should document the disparity and a \njustification for any continued use of the system.'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'protections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not \nlaunched, preventing harm to the public. Federal government agencies have been developing standards and guidance \nfor the use of automated systems in order to help prevent bias. Non-profits and companies have developed best \npractices for audits and impact assessments to help identify potential algorithmic discrimination and provide \ntransparency to the public in the mitigation of such biases. \nBut there is much more work to do to protect the public from algorithmic discrimination to use and design']","Equity assessments and disparity tests reduce algorithmic bias by ensuring that the design and deployment of automated systems are equitable. This involves:

1. **Proactive Equity Assessments**: Conducting these assessments as part of the system design to identify and address potential biases early in the development process.
2. **Use of Representative Data**: Ensuring that the data used to train the algorithms is representative of the diverse populations that the system will impact.
3. **Protection Against Proxies for Demographic Features**: Implementing measures to prevent the use of proxies that could indirectly introduce bias based on demographic characteristics.
4. **Accessibility for People with Disabilities**: Ensuring that the system is accessible to people with disabilities during both design and development phases.
5. **Pre-deployment and Ongoing Disparity Testing and Mitigation**: Conducting disparity tests before deploying the system and continuously monitoring it to identify and mitigate any biases that may arise.
6. **Clear Organizational Oversight**: Establishing oversight mechanisms to ensure accountability and adherence to equity principles.

These steps help to identify and mitigate biases, ensuring that the automated systems treat all demographic groups fairly and equitably.","Equity assessments and disparity tests reduce algorithmic bias by ensuring that systems are designed and used in an equitable way. This includes proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, ensuring accessibility for people with disabilities, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.",1.0,0.9975734006680298,1.0,0.5799999999884,0.9942069947664742
How does synthetic data proportion help prevent AI model collapse?,"['lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 '54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611 \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. \nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \nBoyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416 \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant.'
 'guide the design of provenance data-tracking techniques. \nHuman-AI Configuration; \nInformation Integrity \nMS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic \ndata. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV'
 'arXiv. https://arxiv.org/pdf/2308.14752 \nPartnership on AI (2023) Building a Glossary for Synthetic Media Transparency Methods, Part 1: Indirect \nDisclosure. https://partnershiponai.org/glossary-for-synthetic-media-transparency-methods-part-1-\nindirect-disclosure/ \nQu, Y. et al. (2023) Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-\nTo-Image Models. arXiv. https://arxiv.org/pdf/2305.13873 \nRafat, K. et al. (2023) Mitigating carbon footprint for knowledge distillation based deep learning model \ncompression. PLOS One. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285668 \nSaid, I. et al. (2022) Nonconsensual Distribution of Intimate Images: Exploring the Role of Legal Attitudes \nin Victimization and Perpetration. Sage. \nhttps://journals.sagepub.com/doi/full/10.1177/08862605221122834#bibr47-08862605221122834 \nSandbrink, J. (2023) Artificial intelligence and biological misuse: Differentiating risks of language models'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'and biological design tools. arXiv. https://arxiv.org/pdf/2306.13952']",I don't know.,Assessing the proportion of synthetic to non-synthetic training data and verifying that the training data is not overly homogenous or GAI-produced helps mitigate concerns of model collapse.,0.0,0.0,0.0,0.36666666664833336,0.18136181947853552
How do hiring algorithms and social media data impact democracy?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 'SECTION TITLE\xad\nFOREWORD\nAmong the great challenges posed to democracy today is the use of technology, data, and automated systems in \nways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and \nprevent our access to critical resources or services. These problems are well documented. In America and around \nthe world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used \nin hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s \nopportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or \nconsent. \nThese outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraor-'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'Companies use surveillance software to track employee discussions about union activity and use the\nresulting data to surveil individual employees and surreptitiously intervene in discussions.67\n32'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'Priorities Related to Information Integrity Research and Development.'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content']","Based on the provided context, hiring algorithms and social media data impact democracy by reflecting and reproducing existing unwanted inequities or embedding new harmful bias and discrimination. Additionally, unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent. These uses of technology, data, and automated systems can limit opportunities and prevent access to critical resources or services, posing significant challenges to democracy.

","Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.",0.9230769230769231,0.9999999999999997,1.0,0.249999999975,0.9821304947746031
How can unproven tech and data misuse cause harm?,"['Priorities Related to Information Integrity Research and Development.'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'ethics, or risk management. The Technical Companion builds on this prior work to provide practical next \nsteps to move these principles into practice and promote common approaches that allow technological \ninnovation to flourish while protecting people from harm. \n9'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 ""technology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised \nconcerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of \nunjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated \nsystems. Throughout, many of the panelists individually emphasized that the impact of these systems on \nindividuals and communities is potentially severe: the systems lack individualization and work against the \nbelief that people can change for the better, system use can lead to the loss of jobs and custody of children, and \nsurveillance can lead to chilling effects for communities and sends negative signals to community members \nabout how they're viewed. \nIn discussion of technical and governance interventions that that are needed to protect against the harms of""]","The provided context mentions that unproven technology and data misuse can lead to several harms, including:

1. **Replication of Unjust Outcomes**: Adverse or irrelevant data can lead to the replication of unjust outcomes.
2. **Confirmation Bias**: There is a tendency for people to defer to potentially inaccurate automated systems, which can reinforce confirmation bias.
3. **Severe Impact on Individuals and Communities**: The systems can lack individualization, work against the belief that people can change for the better, and lead to significant negative consequences such as loss of jobs and custody of children.
4. **Surveillance and Chilling Effects**: Surveillance can lead to chilling effects for communities and send negative signals to community members about how they are viewed.

These points highlight the potential severe impacts of unproven technology and data misuse on individuals and communities.","Unproven technology and data misuse can cause harm in several ways. Technologies that have not been proven to work within an acceptable range of error can lead to substantial and unjustified harm. Automated systems relying on historical data can allow irrelevant information from past decisions to affect decision-making in unrelated situations. Technologies can also be designed to violate safety, such as those facilitating stalking, or lead to unintended harms through their use. Examples include a proprietary model for predicting sepsis that underperformed and caused 'alert fatigue,' social media moderation systems silencing counter speech, tracking devices being misused by stalkers, and police deployment algorithms sending police to incorrect neighborhoods due to feedback loops from reused data.",1.0,0.951317364861573,0.8,0.5666666666477778,0.6805715192213955
Why compare system performance with human methods after extensive tests?,"['and biological design tools. arXiv. https://arxiv.org/pdf/2306.13952'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'results, and actions taken; and the procedures for and results from independent evaluations. Reporting \nshould be provided in a plain language and machine-readable manner. \n20'
 'Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 'lifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous']",I don't know.,"System performance should be compared with the in-place, potentially human-driven, status quo procedures, with existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, and as a lifecycle minimum performance standard.",0.0,0.0,0.0,0.49999999995,0.1832016020730998
How do confident errors mislead users?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content'
 'decision making, and enhancing system documentation and debugging practices. When implementing \nfeedback activities, organizations should follow human subjects research requirements and best \npractices such as informed consent and subject compensation.'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'Priorities Related to Information Integrity Research and Development.'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'purpose specification. For example, most model developers do not disclose specific data sources on \nwhich models were trained, limiting user awareness of whether personally identifiably information (PII) \nwas trained on and, if so, how it was collected.  \nModels may leak, generate, or correctly infer sensitive information about individuals. For example, \nduring adversarial attacks, LLMs have revealed sensitive information (from the public domain) that was \nincluded in their training data. This problem has been referred to as data memorization, and may pose \nexacerbated privacy risks even for data present only in a small number of training samples.  \nIn addition to revealing sensitive information in GAI training data, GAI models may be able to correctly \ninfer PII or sensitive data that was not in their training data nor disclosed by the user by stitching \ntogether information from disparate sources. These inferences can have negative impact on an individual'
 'errors and other system flaws. These flaws were only revealed when an explanation of the system\nwas demanded and produced.86 The lack of an explanation made it harder for errors to be corrected in a\ntimely manner.\n42'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022']",I don't know.,"Confident errors, also known as confabulations or fabrications, mislead users by producing confidently stated but erroneous or false content.",0.0,0.0,0.0,0.0,0.18321323393366046
"How do data biases impact loans, hiring, and education?","['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'Priorities Related to Information Integrity Research and Development.'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'Information Integrity; Information \nSecurity; Dangerous, Violent, or \nHateful Content']",I don't know.,"Data biases impact loans by causing applicants who attended Historically Black Colleges or Universities (HBCUs) to be charged higher loan prices for refinancing student loans compared to those who did not attend HBCUs, even when controlling for other credit-related factors. In hiring, a tool that learned features from a company's predominantly male employees rejected women applicants for discriminatory reasons, penalizing resumes with the word 'women’s.' In education, a predictive model used by universities to predict student dropout rates used race directly as a predictor, resulting in Black students being deemed at higher risk of dropping out compared to their white peers, which could guide them away from certain majors.",0.0,0.0,0.0,0.19642857141875,0.1828868085143257
How to design systems to avoid algorithmic bias and ensure safety?,"['for any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections'
 'FROM \nPRINCIPLES \nTO PRACTICE \nA TECHINCAL COMPANION TO\nTHE Blueprint for an \nAI BILL OF RIGHTS\n12'
 'detailed steps toward actualizing these principles in the technological design process. These principles help \nprovide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, \nor access to critical needs. \n3'
 'ethnicity, which can lead to race-based health inequities.47\n25\nAlgorithmic \nDiscrimination \nProtections'
 'BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022'
 'and biological design tools. arXiv. https://arxiv.org/pdf/2306.13952'
 'external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy; Intellectual \nProperty'
 'and data that are considered sensitive are understood to change over time based on societal norms and context. \n36'
 '• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications  \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls \n• Commercial use \n• Data provenance'
 'organization’s business processes or other activities, system goals, any human-run procedures that form a \npart of the system, and specific performance expectations; a description of any data used to train machine \nlearning models or for other purposes, including how data sources were processed and interpreted, a \nsummary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the \nresults of public consultation such as concerns raised and any decisions made due to these concerns; risk \nidentification and management assessments and any steps taken to mitigate potential harms; the results of \nperformance testing including, but not limited to, accuracy, differential demographic impact, resulting \nerror rates (overall and per demographic group), and comparisons to previously deployed systems; \nongoing monitoring procedures and regular performance testing reports, including monitoring frequency,']",I don't know.,"To design systems to avoid algorithmic bias and ensure safety, systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts. They should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate they are safe and effective based on their intended use. Protective measures should include the possibility of not deploying the system or removing a system from use. Designers, developers, and deployers should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This includes proactive equity assessments, use of representative data, protection against proxies for demographic features, ensuring accessibility for people with disabilities, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment should be performed and made public whenever possible.",0.0,0.0,0.5,0.09999999999,0.17870744270437017