But unlike the phone system, we can’t separate an LLM’s data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it’s the very thing that enables prompt injection.
Recent articles
- Introducing gisthost.github.io - 1st January 2026
- 2025: The year in LLMs - 31st December 2025
- How Rob Pike got spammed with an AI slop "act of kindness" - 26th December 2025