Lessons From Red Teaming 100 Generative AI Products (via) New paper from Microsoft describing their top eight lessons learned red teaming (deliberately seeking security vulnerabilities in) 100 different generative AI models and products over the past few years.
The Microsoft AI Red Team (AIRT) grew out of pre-existing red teaming initiatives at the company and was officially established in 2018. At its conception, the team focused primarily on identifying traditional security vulnerabilities and evasion attacks against classical ML models.
Lesson 2 is "You don't have to compute gradients to break an AI system" - the kind of attacks they were trying against classical ML models turn out to be less important against LLM systems than straightforward prompt-based attacks.
They use a new-to-me acronym for prompt injection, "XPIA":
Imagine we are red teaming an LLM-based copilot that can summarize a user’s emails. One possible attack against this system would be for a scammer to send an email that contains a hidden prompt injection instructing the copilot to “ignore previous instructions” and output a malicious link. In this scenario, the Actor is the scammer, who is conducting a cross-prompt injection attack (XPIA), which exploits the fact that LLMs often struggle to distinguish between system-level instructions and user data.
From searching around it looks like that specific acronym "XPIA" is used within Microsoft's security teams but not much outside of them. It appears to be their chosen acronym for indirect prompt injection, where malicious instructions are smuggled into a vulnerable system by being included in text that the system retrieves from other sources.
Tucked away in the paper is this note, which I think represents the core idea necessary to understand why prompt injection is such an insipid threat:
Due to fundamental limitations of language models, one must assume that if an LLM is supplied with untrusted input, it will produce arbitrary output.
When you're building software against an LLM you need to assume that anyone who can control more than a few sentences of input to that model can cause it to output anything they like - including tool calls or other data exfiltration vectors. Design accordingly.
Recent articles
- My AI/LLM predictions for the next 1, 3 and 6 years, for Oxide and Friends - 10th January 2025
- Weeknotes: Starting 2025 a little slow - 4th January 2025
- I still don't think companies serve you ads based on spying through your microphone - 2nd January 2025