Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.
In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code
Recent articles
- My Lethal Trifecta talk at the Bay Area AI Security Meetup - 9th August 2025
- The surprise deprecation of GPT-4o for ChatGPT consumers - 8th August 2025
- GPT-5: Key characteristics, pricing and model card - 7th August 2025