Simon Willison’s Weblog


A conversation about prompt engineering with CBC Day 6

18th March 2023

I’m on Canadian radio this morning! I was interviewed by Peter Armstrong for CBC Day 6 about the developing field of prompt engineering.

You can listen here on the CBC website.

CBC also published this article based on the interview, which includes some of my answers that didn’t make the audio version: These engineers are being hired to get the most out of AI tools without coding.

Here’s my own lightly annotated transcript (generated with the help of Whisper).

Peter: AI whisperers, more properly known as prompt engineers, are part of a growing field of humans who make their living working with AI

Their job is to craft precise phrases to get a desired outcome from an AI

Some experts are skeptical about how much control AI whisperers actually have

But more and more companies are hiring these prompt engineers to work with AI tools

There are even online marketplaces where freelance engineers can sell the prompts they’ve designed

Simon Willison is an independent researcher and developer who has studied AI prompt engineering

Good morning, Simon. Welcome to Day 6

Simon: Hi, it’s really great to be here

Peter: So this is a fascinating and kind of perplexing job

What exactly does a prompt engineer do?

Simon: So we have these new AI models that you can communicate with in plain English

You type them instructions in English and they do the thing that you ask them to do, which feels like it should be the easiest thing in the world

But it turns out that actually getting great results out of these things, using them for the kinds of applications people want, like summarization and extracting facts, requires a lot of quite deep knowledge as to how to use them, what they’re capable of, and how to get the best results out of them

So, prompt engineering is essentially the discipline of becoming an expert in communicating with these things

It’s very similar to being a computer programmer except weird and different in all sorts of new ways that we’re still trying to understand
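As an aside, the “deep knowledge” here often comes down to how the prompt itself is worded and assembled. Here’s a minimal, hypothetical sketch of what programmatic prompt construction for a summarization task can look like; the function name and template wording are illustrative, not from the interview:

```python
def build_summary_prompt(article_text: str, max_bullets: int = 3) -> str:
    """Assemble a summarization prompt.

    Small wording tweaks in this template (the role framing, the
    instruction not to guess) are exactly the kind of thing prompt
    engineers experiment with, because they can noticeably change
    what a model sends back.
    """
    return (
        "You are a careful assistant. Summarize the article below "
        f"in at most {max_bullets} bullet points. "
        "Only use facts stated in the article; if something is not "
        "mentioned, say so rather than guessing.\n\n"
        f"Article:\n{article_text}"
    )

prompt = build_summary_prompt("Example article text goes here.")
```

The resulting string would then be sent to a model; the interesting engineering work is iterating on that template against real examples.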

Peter: You’ve said in some of your writing and talking about this that it’s important for prompt engineers to resist what you call superstitious thinking

What do you mean by that?

My piece In defense of prompt engineering talks about the need to resist superstitious thinking.

Simon: It’s very easy when talking to one of these things to think that it’s an AI out of science fiction, to think that it’s like the Star Trek computer and it can understand and do anything

And that’s very much not the case

These systems are extremely good at pretending to be all powerful, all knowing things, but they have massive, massive flaws in them

So it’s very easy to become superstitious, to think, oh wow, I asked it to read this web page, I gave it a link to an article and it read it

It didn’t read it!

This is a common misconception that comes up when people are using ChatGPT. I wrote about this and provided some illustrative examples in ChatGPT can’t access the internet, even though it really looks like it can.

A lot of the time it will invent things that look like it did what you asked it to, but really it’s sort of imitating what would look like a good answer to the question that you asked it

Peter: Well, and I think that’s what’s so interesting about this, that it’s not, sort of, core computer science programming

There’s a lot of, almost—is it fair to call it intuition?

Like what makes a prompt engineer good at being a prompt engineer?

Simon: I think intuition is exactly right there

The way you get good at this is firstly by using these things a lot

It takes a huge amount of practice and experimentation to understand what these things can do and what they can’t do, and just little tweaks in how you talk to them can have a huge effect on what they say back to you

Peter: You know, you talked a little bit about how we can’t assume this is some all-knowing futuristic AI, and yet we already have people calling these folks AI whisperers, which to my ears sounds a little bit mystical

How much of this is, you know, magic as opposed to science?

Simon: The comparison to magic is really interesting, because when you’re working with these it really can feel like you’re a sort of magician. You cast spells at it, you don’t fully understand what they’re going to do, and it reacts sometimes well and sometimes poorly

And I’ve talked to AI practitioners who kind of talk about collecting spells for their spell book

But it’s also a very dangerous comparison to make because magic is, by its nature, impossible for people to comprehend and can do anything

And these AI models are absolutely not that

See Is the AI spell-casting metaphor harmful or helpful? for more on why magic is a dangerous comparison to make!

Fundamentally, they’re mathematics

And you can understand how they work and what they’re capable of if you put the work in

Peter: I have to admit, when I first heard about this, I thought it was a kind of a made up job or a bit of a scam to just get people involved

But the more I’ve read on it, the more I’ve understood that this is a real skill

But I do think back to, it wasn’t all that long ago that we had Google search specialists that helped you figure out how to search for something on Google

Now we all take it for granted because we can do it ourselves

I wonder if you think, do prompt engineers have a future, or are we all just eventually going to be able to catch up with them and use this AI more effectively?

Simon: I think a lot of prompt engineering will become a skill that people develop

Many people in their professional and personal lives are going to learn to use these tools, but I also think there’s going to be space for expertise

There will always be a level at which it’s worth investing full-time expertise in solving some of these problems, especially for companies that are building entire products around these AI engines under the hood

Peter: You know, this is a really exciting time

I mean, it’s a really exciting week

We’re getting all this new stuff

It’s amazing to watch people use it and see what they can do with it

And I feel like my brain is split

On the one hand, I’m really excited about it

On the other hand, I’m really worried about it

Are you in that same place?

And what are the things you’re excited about versus the things that you’re worried about?

Simon: I’m absolutely in the same place as you there

This is both the most exciting and the most terrifying technology I’ve ever encountered in my career

Something I’m personally really excited about right now is developments in being able to run these AIs on your own personal devices

I have a series of posts about this now, starting with Large language models are having their Stable Diffusion moment where I talk about first running a useful large language model on my own laptop.

Right now, if you want to use these things, you have to use them against cloud services run by these large companies

But there are increasing efforts to get them to scale down to run on your own personal laptops or even on your own personal phone

Just at the weekend, I ran a large language model that Facebook Research released on my laptop for the first time, and it started spitting out useful results

And that felt like a huge moment in terms of the democratization of this technology: putting it into people’s hands, and meaning that applications where you’re concerned about your own privacy suddenly become feasible, because you’re not talking to the cloud, you’re talking to a local model

Peter: You know, if I typed into one of these chat bots, you know, should I be worried about the rise of AI

It would absolutely tell me not to be

If I ask you the same question, should we be worried and should we be spending more time figuring out how this is going to seep its way into various corners of our lives?

Simon: I think we should absolutely be worried because this is going to have a major impact on society in all sorts of ways that we don’t predict and some ways that we can predict

I’m not worried about the sort of science fiction scenario where the AI breaks out of my laptop and takes over the world

But there are many very harmful things you can do with a machine that can imitate human beings and that can produce realistic human text

My thinking on this was deeply affected by Emily M. Bender, who observed that “applications that aim to believably mimic humans bring risk of extreme harms” as highlighted in this fascinating profile in New York Magazine.

The fact that anyone can churn out very convincing but completely made-up text right now will have a major impact on how much you can trust the things that you’re reading online

If you read a review of a restaurant, was it written by a human being or did somebody fire up an AI model and generate 100 positive reviews all in one go?

So there are all sorts of different applications to this

Some are definitely bad, some are definitely good

And seeing how this all plays out is something that I think society will have to come to terms with over the next few months and the next few years

Peter: Simon, really appreciate your insight, and thanks for being with us on the show today

Simon: Thanks very much for having me

For more related content, take a look at the prompt engineering and generative AI tags on my blog.