Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (via) NIST—the National Institute of Standards and Technology, a US government agency, released a 106 page report on attacks against modern machine learning models, mostly covering LLMs.
Prompt injection gets two whole sections, one on direct prompt injection (which incorporates jailbreaking as well, which they misclassify as a subset of prompt injection) and one on indirect prompt injection.
They talk a little bit about mitigations, but for both classes of attack conclude: “Unfortunately, there is no comprehensive or foolproof solution for protecting models against adversarial prompting, and future work will need to be dedicated to investigating suggested defenses for their efficacy.”
Recent articles
- Your job is to deliver code you have proven to work - 18th December 2025
- Gemini 3 Flash - 17th December 2025
- I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in 4.5 hours - 15th December 2025