Quotations tagged promptinjection, security in 2023
A whole new paradigm would be needed to solve prompt injections 10/10 times – It may well be that LLMs can never be used for certain purposes. We’re working on some new approaches, and it looks like synthetic data will be a key element in preventing prompt injections.
— Sam Altman, via Marvin von Hagen # 25th May 2023, 11:03 pm
Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.
In essence, whenever parts of the returned response from GPT-3 are executed directly, e.g. using eval() in Python, a malicious user can basically execute arbitrary code
— Ludwig Stumpp # 3rd February 2023, 1:52 am
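The vulnerability Stumpp describes can be sketched in a few lines. This is a hypothetical illustration, not code from the affected application: the `malicious_response` string and `parse_model_output` helper are invented for the example. The point is that `eval()` on untrusted model output executes arbitrary expressions, while `ast.literal_eval()` only accepts plain Python literals and rejects anything else.

```python
import ast

# Hypothetical model response crafted via prompt injection. If the app
# passes this straight to eval(), it reads the server's secret key.
malicious_response = "__import__('os').environ.get('OPENAI_API_KEY')"

# Unsafe pattern from the quote: eval() runs arbitrary code.
# eval(malicious_response)  # would return the secret API key

# Safer: ast.literal_eval() parses only literals (strings, numbers,
# tuples, lists, dicts, sets, booleans, None) and raises on anything
# with function calls, attribute access, or imports.
def parse_model_output(text: str):
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None  # reject non-literal output instead of executing it

print(parse_model_output("[1, 2, 3]"))        # literal: parsed safely
print(parse_model_output(malicious_response))  # injected code: rejected
```

Even this only limits code execution; it does not stop prompt injection itself, which is why the Altman quote above frames the problem as needing a new paradigm rather than a parsing fix.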