Simon Willison’s Weblog

Series: Prompt injection

A security vulnerability in software built on top of large AI language models like GPT-3.

Atom feed

Prompt injection attacks against GPT-3

Riley Goodside, yesterday:

[... 1408 words]

I don’t know how to solve prompt injection

Some extended thoughts about prompt injection attacks against software built on top of AI language models such a GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.

[... 581 words]

You can’t solve AI security problems with more AI

One of the most common proposed solutions to prompt injection attacks (where an AI language model backed system is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.

[... 1234 words]