Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.
In essence, whenever parts of the returned response from GPT-3 are executed directly, e.g. using eval() in Python, a malicious user can execute arbitrary code.
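To make that failure mode concrete, here's a minimal sketch of the vulnerable pattern. This is hypothetical code, not taken from the affected application; the function name and the demo key are invented for illustration:

```python
import os

def run_model_response(model_response: str):
    """Hypothetical app pattern: evaluate the model's response as a
    Python expression. DANGEROUS: the response is attacker-influenced
    via prompt injection, so this is arbitrary code execution."""
    return eval(model_response)

# If the user's prompt tricks the model into returning this string
# instead of the expected expression, the app leaks its own secret:
injected_response = '__import__("os").environ.get("OPENAI_API_KEY")'

os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")  # demo value
print(run_model_response(injected_response))  # prints the app's API key
```

Because `eval()` gives the model's output the same privileges as the application itself, any secret the process can read (environment variables, files, credentials) is exposed to whoever controls the prompt.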