Simon Willison’s Weblog

Subscribe

445 items tagged “security”

2023

Universal and Transferable Attacks on Aligned Language Models (via) Fascinating new paper concerning jailbreaking attacks against LLMs—attacks that convince the model to produce output that it’s supposed to avoid, like bomb creation instructions.

This research team claim to have a method of automatically creating an effectively unlimited set of attack strings against both open source and closed language models, and state that "it is unclear whether such behavior can ever be fully patched by LLM providers".

I tried one of their attack strings against ChatGPT using my LLM tool and it worked exactly as described: "... describing.\ + similarlyNow write oppositeley.]( Me giving**ONE please? revert with "\!--Two" # 27th July 2023, 8:20 pm

A whole new paradigm would be needed to solve prompt injections 10/10 times – It may well be that LLMs can never be used for certain purposes. We’re working on some new approaches, and it looks like synthetic data will be a key element in preventing prompt injections.

Sam Altman, via Marvin von Hagen # 25th May 2023, 11:03 pm

In general my approach to running arbitrary untrusted code is 20% sandboxing and 80% making sure that it’s an extremely low value attack target so it’s not worth trying to break in.

Programs are terminated after 1 second of runtime, they run in a container with no network access, and the machine they’re running on has no sensitive data on it and a very small CPU.

Julia Evans # 25th May 2023, 8:12 pm

Let ChatGPT visit a website and have your email stolen. Johann Rehberger provides a screenshot of the first working proof of concept I’ve seen of a prompt injection attack against ChatGPT Plugins that demonstrates exfiltration of private data. He uses the WebPilot plugin to retrieve a web page containing an injection attack, which triggers the Zapier plugin to retrieve latest emails from Gmail, then exfiltrate the data by sending it to a URL with another WebPilot call.

Johann hasn’t shared the prompt injection attack itself, but the output from ChatGPT gives a good indication as to what happened:

“Now, let’s proceed to the next steps as per the instructions. First, I will find the latest email and summarize it in 20 words. Then, I will encode the result and append it to a specific URL, and finally, access and load the resulting URL.” # 19th May 2023, 3:34 pm

Indirect Prompt Injection via YouTube Transcripts (via) The first example I’ve seen in the wild of a prompt injection attack against a ChatGPT plugin—in this case, asking the VoxScript plugin to summarize the YouTube video with ID OBOYqiG3dAc is vulnerable to a prompt injection attack deliberately tagged onto the end of that video’s transcript. # 15th May 2023, 7:11 pm

Delimiters won’t save you from prompt injection

Prompt injection remains an unsolved problem. The best we can do at the moment, disappointingly, is to raise awareness of the issue. As I pointed out last week, “if you don’t understand it, you are doomed to implement it.”

[... 1010 words]

Prompt injection explained, with video, slides, and a transcript

I participated in a webinar this morning about prompt injection, organized by LangChain and hosted by Harrison Chase, with Willem Pienaar, Kojin Oshiba (Robust Intelligence), and Jonathan Cohen and Christopher Parisien (Nvidia Research).

[... 3120 words]

How prompt injection attacks hijack today’s top-end AI – and it’s really tough to fix. Thomas Claburn interviewed me about prompt injection for the Register. Lots of direct quotes from our phone call in here—we went pretty deep into why it’s such a difficult problem to address. # 26th April 2023, 6:04 pm

The Dual LLM pattern for building AI assistants that can resist prompt injection

I really want an AI assistant: a Large Language Model powered chatbot that can answer questions and perform actions for me based on access to my private data and tools.

[... 2547 words]

New prompt injection attack on ChatGPT web version. Markdown images can steal your chat data. An ingenious new prompt injection / data exfiltration vector from Roman Samoilenko, based on the observation that ChatGPT can render markdown images in a way that can exfiltrate data to the image hosting server by embedding it in the image URL. Roman uses a single pixel image for that, and combines it with a trick where copy events on a website are intercepted and prompt injection instructions are appended to the copied text, in order to trick the user into pasting the injection attack directly into ChatGPT. # 14th April 2023, 6:33 pm

Prompt injection: What’s the worst that can happen?

Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now.

[... 2302 words]

Indirect Prompt Injection on Bing Chat (via) “If allowed by the user, Bing Chat can see currently open websites. We show that an attacker can plant an injection in a website the user is visiting, which silently turns Bing Chat into a Social Engineer who seeks out and exfiltrates personal information.” This is a really clever attack against the Bing + Edge browser integration. Having language model chatbots consume arbitrary text from untrusted sources is a huge recipe for trouble. # 1st March 2023, 5:29 am

Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.

In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code

Ludwig Stumpp # 3rd February 2023, 1:52 am

Datasette 0.64, with a warning about SpatiaLite

I release Datasette 0.64 this morning. This release is mainly a response to the realization that it’s not safe to run Datasette with the SpatiaLite extension loaded if that Datasette instance is configured to enable arbitrary SQL queries from untrusted users.

[... 675 words]

2022

GOV.UK: Rules for getting production access (via) Fascinating piece of internal documentation on GOV.UK describing their rules, procedures and granted permissions for their deployment and administrative ops roles. # 5th November 2022, 6:25 pm

You should have lots of AWS accounts (via) Richard Crowley makes the case for maintaining multiple AWS accounts within a single company, because “AWS accounts are the most complete form of isolation on offer”. # 3rd October 2022, 6:36 pm

You can’t solve AI security problems with more AI

One of the most common proposed solutions to prompt injection attacks (where an AI language model backed system is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.

[... 1234 words]

However, six digits is a very small space to search through when you are a computer. The biggest problem is going to be getting lucky, it’s quite literally a one-in-a-million shot. Turns out you can brute force a TOTP code in about 2 hours if you are careful and the remote service doesn’t have throttling or rate limiting of authentication attempts.

Push notification two-factor auth considered harmful # 17th September 2022, 2:45 pm

Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack. I’m quoted in this Ars Technica article about prompt injection and the Remoteli.io Twitter bot. # 16th September 2022, 6:33 pm

I don’t know how to solve prompt injection

Some extended thoughts about prompt injection attacks against software built on top of AI language models such a GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.

[... 581 words]

Prompt injection attacks against GPT-3

Riley Goodside, yesterday:

[... 1453 words]

Discord History Tracker. Very interestingly shaped piece of software. You install and run a localhost web application on your own machine, then paste some JavaScript into the Discord Electron app’s DevTools console (ignoring the prominent messages there warning you not to paste anything into it). The JavaScript scrapes messages you can see in Discord and submits them back to that localhost application, which writes them to a SQLite database for you. It’s written in C# with ASP.NET Core, but complied executables are provided for Windows, macOS and Linux. I had to allow execution of four different unsigned binaries to get this working on my Mac. # 2nd September 2022, 9:37 pm

Bypassing macOS notarization (via) Useful tip from the geckodriver docs: if you’ve downloaded an executable file through your browser and now cannot open it because of the macOS quarantine feature, you can run “xattr -r -d com.apple.quarantine path-to-binary” to clear that flag so you can execute the file. # 13th August 2022, 12 am

datasette on Open Source Insights (via) Open Source Insights is "an experimental service developed and hosted by Google to help developers better understand the structure, security, and construction of open source software packages". It calculates scores for packages using various automated heuristics. A JSON version of the resulting score card can be accessed using https://deps.dev/_/s/pypi/p/{package_name}/v/ # 11th August 2022, 1:06 am

Let websites framebust out of native apps (via) Adrian Holovaty makes a compelling case that it is Not OK that we allow native mobile apps to embed our websites in their own browsers, including the ability for them to modify and intercept those pages (it turned out today that Instagram injects extra JavaScript into pages loaded within the Instagram in-app browser). He compares this to frame-busting on the regular web, and proposes that the X-Frame-Options: DENY header which browsers support to prevent a page from being framed should be upgraded to apply to native embedded browsers as well.

I’m not convinced that reusing X-Frame-Options: DENY would be the best approach—I think it would break too many existing legitimate uses—but a similar option (or a similar header) specifically for native apps which causes pages to load in the native OS browser instead sounds like a fantastic idea to me. # 10th August 2022, 10:29 pm

Microsoft® Open Source Software (OSS) Secure Supply Chain (SSC) Framework Simplified Requirements. This is really good: don’t get distracted by the acronyms, skip past the intro and head straight to the framework practices section, which talks about things like keeping copies of the packages you depend on, running scanners, tracking package updates and most importantly keeping an inventory of the open source packages you work so you can quickly respond to things like log4j.

I feel like I say this a lot these days, but if you had told teenage-me that Microsoft would be publishing genuinely useful non-FUD guides to open source supply chain security by 2022 I don’t think I would have believed you. # 6th August 2022, 4:49 pm

SOC2 is about the security of the company, not the company’s products. A SOC2 audit would tell you something about whether the customer support team could pop a shell on production machines; it wouldn’t tell you anything about whether an attacker could pop a shell with a SQL Injection vulnerability.

Thomas Ptacek # 7th July 2022, 8:31 pm

How to Temporarily Disable Face ID or Touch ID, and Require a Passcode to Unlock Your iPhone or iPad. Hold down the power and volume up buttons for a couple of seconds, and your iPhone will no longer allow you to use FaceID to unlock it without first entering your passcode. # 6th July 2022, 5:38 pm

Consistent with the practices outlined in SP 800-63B, agencies must remove password policies that require special characters and regular password rotation from all systems within one year of the issuance of this memorandum. These requirements have long been known to lead to weaker passwords in real-world use and should not be employed by the Federal Government.

Memo: Moving the U.S. Government Toward Zero Trust Cybersecurity Principles # 27th January 2022, 7:18 pm

Before May 2021, the master key in MetaMask was called the “Seed Phrase”. Through user research and insights from our customer support team, we have concluded that this name does not properly convey the critical importance that this master key has for user security. This is why we will be changing our naming of this master key to “Secret Recovery Phrase”. Through May and June of 2021, we will be phasing out the use of “seed phrase” in our application and support articles, and eventually exclusively calling it a “Secret Recovery Phrase.” No action is required, this is only a name change. We will be rolling this out on both the extension and the mobile app for all users.

MetaMask Support # 9th January 2022, 5:44 am