Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.
In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code
Recent articles
- Image segmentation using Gemini 2.5 - 18th April 2025
- GPT-4.1: Three new million token input models from OpenAI, including their cheapest model yet - 14th April 2025
- CaMeL offers a promising new direction for mitigating prompt injection attacks - 11th April 2025