Simon Willison’s Weblog

Subscribe

Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.

In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code

Ludwig Stumpp