Is the AI spell-casting metaphor harmful or helpful?
5th October 2022
For a few weeks now I’ve been promoting spell-casting as a metaphor for prompt design against generative AI systems such as GPT-3 and Stable Diffusion.
Here’s an example, in this snippet from my recent Changelog podcast episode.
Relevant section towards the end (transcription assisted by Whisper):
When you’re working with these, you’re not a programmer anymore. You’re a wizard, right? I always wanted to be a wizard. We get to be wizards now. And we’re learning these spells. We don’t know why they work. Why does Neuromancer work? Who knows? Nobody knows. But you add it to your spell book and then you combine it with other spells. And if you’re unlucky and combine them in the wrong way, you might get demons coming out at you.
I had an interesting debate on Twitter this morning about whether or not this metaphor is harmful or helpful. There are some very interesting points to discuss!
The short version: I’m now convinced that the value of this metaphor changes based on the audience.
The key challenge here is to avoid implying that these systems are “magical” in that they are incomprehensible and mysterious. As such, I believe the metaphor is only appropriate when you’re talking to people who are working with these systems from a firm technical perspective.
Expanding the spell-casting metaphor
When I compare prompts to spells and I’m talking to another software engineer, here’s the message I am trying to convey:
Writing prompts is not like writing regular code. There is no API reference or programming language specification that will let you predict exactly what will happen.
Instead, you have to experiment: try different fragments of prompts and see what works. As you get a feel for these fragments you can then start exploring what happens when you combine them together.
Over time you will start to develop an intuition for what works. You’ll build your own collection of fragments and patterns, and exchange those with other people.
The weird thing about this process is that no-one can truly understand exactly how each fragment works—not even the creators of the models. We’ve learned that “Trending on artstation” produces better images with Stable Diffusion—but we can only ever develop a vague intuition for why.
It honestly feels more like fictional spell-casting than programming. Each fragment is a new spell that you have learned and can add to your spell book.
It’s confusing, and surprising, and a great deal of fun.
For me, this captures my experience working with prompts pretty accurately. My hope is that this is a useful way to tempt other programmers into exploring this fascinating new area.
The other thing I like about this metaphor is that, to my mind, it touches on some of the risks of generative AI as well.
Fiction is full of tales of magic gone wrong: of wizards who lost control of forces that they did not fully understand.
When I think about prompt injection attacks I imagine good wizards and evil wizards casting spells and counter-spells at each other! Software vulnerabilities in plain English totally fit my mental model of casting spells.
But in debating this on Twitter I realized that whether this metaphor makes sense to you relies pretty heavily on which specific magic system comes to mind for you.
I was raised on Terry Pratchett’s Discworld, which has a fantastically rich and deeply satirical magic system. Incorrect incantations frequently produce demons! Discworld wizards are mostly academics who spend more time thinking about lunch than practicing magic. The most interesting practitioners are the witches, for who the most useful magic is more like applied psychology (“headalogy” in the books.)
If your mental model of “magic” is unexplained supernatural phenomenon and fairies granting wishes then my analogy doesn’t really fit.
Magic as a harmful metaphor for AI
The argument for this metaphor causing harm is tied to the larger challenge of helping members of the public understand what is happening in this field.
Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’ by Emily M. Bender is a useful summary of some of these challenges.
In Technology Is Magic, Just Ask The Washington Post from 2015 Jon Evans makes the case that treating technology as “magic” runs a risk of people demanding solutions to societal problems that cannot be delivered.
Understanding exactly what these systems are capable of and how they work is a hard enough for people with twenty years of software engineering experience, let alone everyone else.
The last thing people need is to be told that these systems are “magic”—something that is permanently beyond their understanding and control.
These systems are not magic. They’re mathematics. It turns out that if you throw enough matrix multiplication and example data (literally terabytes of it) at a problem, you can get a system that can appear to do impossible things.
But implying that they are magic—or even that they are “intelligent”—does not give people a useful mental model. GPT-3 is not a wizard, and it’s not intelligent: it’s a stochastic parrot, capable of nothing more than predicting which word should come next to form a sentence that best matches the corpus it has been trained on.
This matters to me a great deal. In conversations I have had around AI ethics the only universal answer I’ve found is that it is ethical to help people understand what these systems can do and how they work.
So I plan to be more intentional with my metaphors. I’ll continue to enthuse about spell-casting with fellow nerds who aren’t at risk of assuming these systems are incomprehensible magic, but I’ll keep searching for better ways to help explain these systems to everyone else.
More recent articles
- Notes from Bing Chat—Our First Encounter With Manipulative AI - 19th November 2024
- Project: Civic Band - scraping and searching PDF meeting minutes from hundreds of municipalities - 16th November 2024
- Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac - 12th November 2024