Simon Willison’s Weblog

Subscribe

Entries tagged security, gpt3

Filters: Type: entry × security × gpt3 × Sorted by date


You can’t solve AI security problems with more AI

One of the most common proposed solutions to prompt injection attacks (where an AI language model backed system is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.

[... 1234 words]

Prompt injection attacks against GPT-3

Riley Goodside, yesterday:

[... 1453 words]