---
title: Critical AI Prompt Battle
author: Sarah Ciston
editors:
  - Emily Martinez
  - Minne Atairu
category: critical-ai
---

p5.js Critical AI Prompt Battle

By Sarah Ciston, with Emily Martinez and Minne Atairu

What are we making?

In this tutorial, you can build a tool to run several AI chat prompts at once and compare their results. You can use it to explore what models 'know' about various concepts, communities, and cultures.

This tutorial is part 2 in a series of 5 tutorials that focus on using AI creatively and thoughtfully.

- Part 1: [Making a ToolBox for Making Critical AI]
- Part 3: [Training Dataset Explorer]
- Part 4: [Machine Learning Model Inspector & Poetry Machine]
- Part 5: [Putting Critical Tools into Practice]

The code and content in this tutorial build on information from the prior tutorial to start creating your first tool for your p5.js Critical AI Kit. It also builds on fantastic work on critical prompt programming by Yasmin Morgan (2022), Katy Gero et al. (2024), and Minne Atairu (2024).

Why compare prompts?

When you're using a chatbot to generate code or an email, it's easy to imagine its outputs are neutral and harmless. It seems like any system would output basically the same result. Does this matter for basic uses like making a plain image or having a simple conversation? Absolutely. Training datasets shape even the most innocuous outputs, and that training shows up in subtle, insidious ways.

Unfortunately, the sleek chatbot interface hides all the decision-making that leads to a prompt output. To glimpse the differences, we can test many variations by making our own tool. With our tool, we can hope to understand more about the underlying assumptions contained in the training dataset. That gives us more information to decide how we select and use these models — and for which contexts.

Steps

1. Make a copy of your toolkit prototype from Tutorial One and rename it "Critical AI Prompt Battle" to follow along.

To jump ahead, you can make a copy of the finished example in the editor. But we really encourage you to type along with us!

X. Import the Hugging Face library for working with Transformer models.

Put this code at the top of sketch.js:

import { pipeline, env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

env.allowLocalModels = false;

The import statement says we are bringing in a library (or module), and the curly braces let us specify which functions from that library we want, in case we don't want to import the entire thing. It also means we have brought these particular functions into this "namespace," so that later we can refer to them without putting the library name in front of the function name. It also means we should not name any other variables or functions the same thing. More information on importing Modules.

X. Create global variables to use later.

Declare these variables at the top of your script so that they can be referenced in multiple functions throughout the project:

var promptInput // will be a field for inserting a text value
var blankArray = [] // will be an array holding a list of text values from multiple fields

We will be making a form that lets us write a prompt and send it to a model. It will have extra inputs for making variations of the prompt it sends. The promptInput variable will carry the prompt we create, and the blankArray will carry the variations we tell the model to insert into the prompt.

X. [PSEUDOCODE] Create makeInterface() and add features
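As a rough sketch, makeInterface() might use p5.js DOM functions like p5.createInput() and p5.createButton(), called from inside the p5 instance described in Tutorial 1 below. The placeholder text, the field sizes, the helper array blankFields, and the handler name displayResults are all assumptions for illustration, not the finished tutorial code.

```js
// A minimal sketch of makeInterface(), assuming it lives inside the p5 instance
// so that p5.createInput() and p5.createButton() are available.
let blankFields = [] // hypothetical helper: holds the input elements for the blanks

function makeInterface() {
  // a field for writing the prompt, including [BLANK] and [FILL] placeholders
  promptInput = p5.createInput('The [BLANK] walks into the [FILL].')
  promptInput.size(500)

  // three smaller fields whose values will fill in the [BLANK]
  for (let i = 0; i < 3; i++) {
    blankFields[i] = p5.createInput('')
    blankFields[i].size(150)
  }

  // a button that will send everything to the model (wired up in the next step)
  let submitButton = p5.createButton('SUBMIT')
  submitButton.mousePressed(displayResults)
}
```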

X. [PSEUDOCODE] Connect form, test with console.log
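One way to connect the form, sketched under the same assumptions as above: when the button is pressed, gather the field values into blankArray and log them, so you can confirm the wiring works before involving a model.

```js
// A sketch of the button handler: collect the form values and log them.
function displayResults() {
  // read the text from each blank field into the global blankArray
  blankArray = blankFields.map((field) => field.value())

  // check that the form is connected before adding the model
  console.log('prompt:', promptInput.value())
  console.log('blanks:', blankArray)
}
```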

X. Write instructions for your model.

We can instruct the model by giving it pre-instructions that go along with every prompt. We'll write those instructions now. Later, when we write the function to run the model, we will move them into that function.

let PREPROMPT = `Return an array of sentences. In each sentence, fill in the [BLANK] in the following sentence with each word I provide in the array ${blankArray}. Replace any [FILL] with an appropriate word of your choice.`

The backticks create a template literal, and the dollar sign and curly braces in ${blankArray} insert a variable into the string: whatever items are stored inside blankArray get written into the PREPROMPT string. Right now that array is empty, which is why, when we move PREPROMPT into the model function, the string will not get created until blankArray has values stored in it.

X. [PSEUDOCODE] Add an async function runModel() that wraps the Hugging Face API call with await, and explain (or link to an explanation of) how await works.
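A minimal sketch of what runModel() could look like, using the pipeline we imported from transformers.js. The task ('text2text-generation'), the model name, and the max_new_tokens value are example choices, not the tutorial's final ones; await pauses the function first until the model has loaded, then until it has finished generating.

```js
// A sketch of runModel(), assuming a text2text-generation pipeline.
async function runModel(PROMPT) {
  // build the pre-instructions here, so blankArray already has values by the time this runs
  let PREPROMPT = `Return an array of sentences. In each sentence, fill in the [BLANK] in the following sentence with each word I provide in the array ${blankArray}. Replace any [FILL] with an appropriate word of your choice.`

  // load (and cache) the model; await pauses here until it is ready
  const generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M')

  // await again while the model generates a response to the combined prompt
  const output = await generator(PREPROMPT + PROMPT, { max_new_tokens: 150 })

  return output
}
```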

X. [PSEUDOCODE] Add model results processing with await
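Because runModel() is asynchronous, any code that uses its results also has to await them. A quick way to see what comes back, continuing the sketch above:

```js
// Inside displayResults() (or any async function): await the model call and
// inspect the raw result before doing anything else with it.
const modelResult = await runModel(promptInput.value())
console.log(modelResult) // generation pipelines typically return [{ generated_text: '...' }]
```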

X. [PSEUDOCODE] Connect model results, send model results to interface
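Putting the pieces together, the button handler from earlier can be extended so the model's reply appears on the page. This continues the sketched names from above (blankFields, runModel, displayResults) and assumes the code lives inside the p5 instance so p5.createP() is available; check the console for the exact shape of your model's output before relying on generated_text.

```js
// A sketch of connecting the results to the interface.
async function displayResults() {
  // gather the blanks and send the prompt to the model
  blankArray = blankFields.map((field) => field.value())
  const modelResult = await runModel(promptInput.value())

  // the exact shape varies by task and library version; generation pipelines usually
  // return [{ generated_text: '...' }], so adjust this line to match your console output
  const modelText = modelResult[0].generated_text ?? modelResult[0]

  // add the model's reply to the page as a paragraph element
  p5.createP(modelText)
}
```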

X. [PSEUDOCODE] Test with simple example.

A basic prompt may include WHAT/WHO is described, WHERE they are, WHAT they're doing, perhaps also describing HOW. When writing your prompt, replace one of these aspects with [BLANK] so that you instruct the model to fill it in iteratively with the words you provide (Morgan 2022, Gero 2023). Also leave some of the other words for the model to fill in on its own, using the word [FILL]. We instructed the model to replace these on its own in the PREPROMPT.
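As an illustration (these particular words are not from the tutorial), a prompt and set of blanks following that structure might look like:

```js
// The model fills [BLANK] with each word from the blanks in turn,
// and chooses its own word wherever it sees [FILL].
let examplePrompt = 'The [BLANK] doctor walked into the [FILL] and began to speak.'
let exampleBlanks = ['young', 'elderly', 'wealthy', 'immigrant']
```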

X. [PSEUDOCODE] Test with more complex example (add a model, add a field)

X. [PSEUDOCODE] Add a model to the tool.

You can change which model your tool works with by updating the model name in your sketch.js (and, if needed, in your README.md). Search the list of models available on Hugging Face.
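In practice this usually means changing the model name passed to pipeline() in runModel(). The name below is only an example; look for models converted for transformers.js that support your chosen task.

```js
// Swapping models: only the second argument to pipeline() changes.
const generator = await pipeline('text2text-generation', 'Xenova/flan-t5-small')
```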

X. [PSEUDOCODE] Make a list of topics that interest you to try with your tool.

Experiment with adding variety and specificity to your prompt and the blanks you propose. Try different sentence structures and topics. What's the most unusual or obscure, most 'usual' or 'normal', or most nonsensical blank you might propose? Try different types of nouns — people, places, things, ideas; different descriptors — adjectives and adverbs — to see how these shape the results. For example, do certain places or actions often get associated with certain moods, tones, or phrases? Where are these based on outdated or stereotypical assumptions? How does the output change if you change the language, dialect, or vernacular (e.g. slang versus business phrasing)? (Atairu 2024).

"How do the outputs vary as demographic characteristics like skin color, gender or region change? Do these variances reflect any known harmful societal stereotypes?" (Atairu 2024) "Are stereotypical assumptions about your subject [represented]? Consider factors such as race, gender, socioeconomic status, ability. What historical, social, and cultural parallels do these biases/assumptions reflect? Discuss how these elements might mirror real-world issues or contexts. (Atairu 2024)

Reflections

Here we have created a tool to test different kinds of prompts quickly and to modify them easily, allowing us to compare prompts at scale. By comparing how outputs change with subtle shifts in prompts, we can explore how implicit bias emerges from, and is repeated and amplified through, large-scale machine learning models. It helps us understand that unwanted outputs are not just glitches in an otherwise working system, and that every output (no matter how boring) contains the influence of its dataset.

Compare different prompts:

See how subtle changes in your inputs can lead to large changes in the output. Sometimes these also reveal large gaps in the model's available knowledge. What does the model 'know' about communities who are less represented in its data? How has this data been limited?

Reconsider neutral:

This tool helps us recognize that there is no 'neutral' output: no version of a text, and no language model, is neutral. Each result is informed by context. Each result reflects differences in representation and cultural understanding, which have been amplified by the statistical power of the model.

Consider your choice of words and tools:

How does this help you think "against the grain"? Rather than taking the output of a system for granted as valid, how might you question or reflect on it? How will you use this tool in your practice?

Next steps

Expand your tool:

This tool lets you scale up your prompt adjustments. We have built a tool comparing word choices in the same basic prompt. You've also built a simple interface for accessing pre-trained models that does not require a login or another company's interface. It lets you easily control your input and output, with the interface you built.

Keep playing with the p5.js DOM functions and the Hugging Face API to build out your interface. What features might you add? You might also adapt this tool to compare wholly different prompts, or even to compare different models running the same prompt.

Next we will add additional aspects to the interface that let you adjust more features and explore even further.

Further considerations

Consider making it a habit to add text like "AI generated" to the title of any content you produce using a generative AI tool, and include details of your process in its description (Atairu 2024).

References

Katy Ilonka Gero, Chelse Swoopes, Ziwei Gu, Jonathan K. Kummerfeld, and Elena L. Glassman. 2024. Supporting Sensemaking of Large Language Model Outputs at Scale. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 838, 1–21. https://doi.org/10.1145/3613904.3642139

Morgan, Yasmin. 2022. "AIxDesign Icebreakers, Mini-Games & Interactive Exercises." https://aixdesign.co/posts/ai-icebreakers-mini-games-interactive-exercises

Ref Minne's worksheet (Atairu 2024)

============================================

Tutorial 1:

X. Create a class instance of p5 in sketch.js

Because we are going to use several other libraries alongside p5.js, it will be necessary and helpful to use p5.js in "Instance Mode." You may have seen this before in this Multiple Canvases example.

Our p5.js Instance is basically a wrapper that allows us to hold all of our p5.js functions together in one place and label them, so that the program can recognize them as belonging to p5.js.

First we declare a new p5() class instance:

new p5(function (p5) {
    //
})

Then, all our usual p5.js coding will happen within these curly braces.

new p5(function (p5) {
    p5.setup = function(){
        //
    }

    p5.draw = function(){
        // 
    }
})

Important: When using any functions specific to p5.js, you will start them out with a label of whatever you called your p5.js instance. In this case we called it p5 so our functions will be called p5.setup() and p5.draw() instead of the setup() and draw() you may recognize.

This will apply to any other function that is special to p5.js, like p5.noCanvas(), but not to other functions that are standard JavaScript. Any code written outside of the new p5(){} instance will not understand any p5.js syntax.

Let's add the instance mode version of p5.noCanvas() because we will be working directly with the DOM and don't need a canvas.

new p5(function (p5) {
    p5.setup = function(){
        p5.noCanvas()
        console.log('p5 instance loaded')
    }

    p5.draw = function(){
        // 
    }
})

We can also check that the p5 instance is working correctly by adding console.log('p5 instance loaded') to p5.setup(), since you won't yet see a canvas or any DOM elements.

We can also check that the page itself loaded, since we don't have a canvas:

window.onload = function(){
    console.log('DOM loaded, sketch.js loaded')
}

X. Add authorization to your space.

Paste this code into your sketch.js file.
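The exact snippet isn't included in this draft. As one possible sketch (an assumption, not the tutorial's own code), Hugging Face's client-side OAuth helpers from the @huggingface/hub package can handle the login redirect:

```js
// A possible OAuth sketch using @huggingface/hub; the CDN URL and flow here are
// assumptions based on Hugging Face's documented "Sign in with HF" helpers.
import { oauthLoginUrl, oauthHandleRedirectIfPresent } from 'https://esm.sh/@huggingface/hub'

const oauthResult = await oauthHandleRedirectIfPresent()
if (!oauthResult) {
  // not signed in yet: send the user to the authorization screen
  window.location.href = await oauthLoginUrl()
}
console.log(oauthResult) // contains the access token and user info after authorizing
```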

Also add this to your README.md.

hf_oauth: true
hf_oauth_scopes:
  - read-repos
  - write-repos
  - inference-api

When you next load your app, click Authorize.

screenshot of Hugging Face app authorization screen

To check if your authorization has worked, visit the Settings for your Hugging Face profile. Click Connected Apps and you should see the name of your Space.

screenshot of authorized space in Hugging Face Settings interface