Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.
In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code
Recent articles
- Adding AI-generated descriptions to my tools collection - 13th March 2025
- Notes on Google's Gemma 3 - 12th March 2025
- Here's how I use LLMs to help me write code - 11th March 2025