95 posts tagged “prompt-injection”
Prompt Injection is a security attack against applications built on top of Large Language Models, introduced here and further described in this series of posts.
2022
You can’t solve AI security problems with more AI
One of the most common proposed solutions to prompt injection attacks (where an AI language model backed system is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.
[... 1,288 words]The Changelog: Stable Diffusion breaks the internet. I’m on this week’s episode of The Changelog podcast, talking about Stable Diffusion, AI ethics and a little bit about prompt injection attacks too.
Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack. I’m quoted in this Ars Technica article about prompt injection and the Remoteli.io Twitter bot.
I don’t know how to solve prompt injection
Some extended thoughts about prompt injection attacks against software built on top of AI language models such a GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.
[... 581 words]