Simon Willison’s Weblog

Subscribe

Quotations tagged security in 2023

Filters: Type: quotation × Year: 2023 × security × Sorted by date


A whole new paradigm would be needed to solve prompt injections 10/10 times – It may well be that LLMs can never be used for certain purposes. We’re working on some new approaches, and it looks like synthetic data will be a key element in preventing prompt injections.

Sam Altman, via Marvin von Hagen # 25th May 2023, 11:03 pm

In general my approach to running arbitrary untrusted code is 20% sandboxing and 80% making sure that it’s an extremely low value attack target so it’s not worth trying to break in.

Programs are terminated after 1 second of runtime, they run in a container with no network access, and the machine they’re running on has no sensitive data on it and a very small CPU.

Julia Evans # 25th May 2023, 8:12 pm

Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.

In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code

Ludwig Stumpp # 3rd February 2023, 1:52 am