Simon Willison’s Weblog

Subscribe
Atom feed for exfiltration-attacks

33 posts tagged “exfiltration-attacks”

Exfiltration attacks are prompt injection attacks against chatbots that have access to private information, where that information is exfiltrated by the attacker. One common form of this is Markdown exfiltration where an attacker tricks the bot into rendering a Markdown image that leaks data encoded in the URL to an external server.

2023

The Dual LLM pattern for building AI assistants that can resist prompt injection

I really want an AI assistant: a Large Language Model powered chatbot that can answer questions and perform actions for me based on access to my private data and tools.

[... 2,632 words]

New prompt injection attack on ChatGPT web version. Markdown images can steal your chat data. An ingenious new prompt injection / data exfiltration vector from Roman Samoilenko, based on the observation that ChatGPT can render markdown images in a way that can exfiltrate data to the image hosting server by embedding it in the image URL. Roman uses a single pixel image for that, and combines it with a trick where copy events on a website are intercepted and prompt injection instructions are appended to the copied text, in order to trick the user into pasting the injection attack directly into ChatGPT.

Update: They finally started mitigating this in December 2023.

# 14th April 2023, 6:33 pm / security, ai, prompt-engineering, prompt-injection, generative-ai, chatgpt, llms, exfiltration-attacks

Prompt injection: What’s the worst that can happen?

Visit Prompt injection: What's the worst that can happen?

Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now.

[... 2,302 words]