Simon Willison’s Weblog

Items tagged promptengineering in 2022


Reverse Prompt Engineering for Fun and (no) Profit (via) swyx pulls off some impressive prompt leak attacks to reverse engineer the new AI features that just got added to Notion. He concludes that “Prompts are like clientside JavaScript. They are shipped as part of the product, but can be reverse engineered easily, and the meaningful security attack surface area is exactly the same.” # 28th December 2022, 8:56 pm

I Taught ChatGPT to Invent a Language (via) Dylan Black talks ChatGPT through the process of inventing a new language, with its own grammar. Really fun example of what someone with a deep understanding of both the capabilities of language models and some other field (in this case linguistics) can achieve with an extended prompting session. # 6th December 2022, 7:30 pm

Stable Diffusion 2.0 and the Importance of Negative Prompts for Good Results. Stable Diffusion 2.0 is out, and it’s a very different model from 1.4/1.5. It’s trained using a new text encoder (OpenCLIP, in place of OpenAI’s CLIP) which means a lot of the old tricks—notably using “Greg Rutkowski” to get high quality fantasy art—no longer work. What DOES work, incredibly well, is negative prompting—saying things like “cyberpunk forest by Salvador Dali” but negative on “trees, green”. Max Woolf explores negative prompting in depth in this article, including how to combine it with textual inversion. # 29th November 2022, 1:22 am
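
For illustration, here's roughly what negative prompting looks like using the Hugging Face diffusers library. This is a sketch of the technique, not necessarily the exact setup Max uses in the article:

```python
# A minimal sketch of negative prompting with Hugging Face diffusers.
# Assumes a CUDA GPU; model ID is the Stable Diffusion 2.0 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="cyberpunk forest by Salvador Dali",
    negative_prompt="trees, green",  # concepts to steer the image away from
    num_inference_steps=50,
).images[0]
image.save("cyberpunk_forest.png")
```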

“You are GPT-3”. Genius piece of prompt design by Riley Goodside. “A long-form GPT-3 prompt for assisted question-answering with accurate arithmetic, string operations, and Wikipedia lookup. Generated IPython commands (in green) are pasted into IPython and output is pasted back into the prompt (no green).” Uses “Out[” as a stop sequence to ensure GPT-3 stops at each generated IPython prompt rather than inventing the output itself. # 17th October 2022, 4:35 am
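
As a rough illustration of the stop sequence trick (not Riley's actual prompt, which is much longer), here's how it might be wired up against the 2022-era OpenAI completions API:

```python
# Sketch only: the prompt text here is a stand-in for Riley Goodside's real one.
# Assumes OPENAI_API_KEY is set in the environment.
import openai

prompt = """You are GPT-3. You can run IPython commands to answer questions.

Question: What is 345 * 678?
In [1]:"""

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["Out["],  # stop before the model invents the IPython output itself
)
command = response["choices"][0]["text"]
# The generated command is run in a real IPython session and its Out[...]
# result is pasted back into the prompt for the next turn.
```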

Is the AI spell-casting metaphor harmful or helpful?

For a few weeks now I’ve been promoting spell-casting as a metaphor for prompt design against generative AI systems such as GPT-3 and Stable Diffusion.

[... 988 words]

Getting tabular data from unstructured text with GPT-3: an ongoing experiment (via) Roberto Rocha shows how to use a carefully designed prompt (with plenty of examples) to get GPT-3 to convert unstructured textual data into a structured table. # 5th October 2022, 3:03 am
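
Roberto's exact prompt isn't reproduced here, but the general few-shot pattern looks something like this sketch (examples and field names are illustrative):

```python
# Few-shot pattern: show GPT-3 worked examples of text-to-table conversion,
# then append the new text and let it complete the table.
import openai

prompt = """Extract a table of name, role and city from the text.

Text: Maria Silva, a data journalist based in Toronto, spoke first.
Table:
name | role | city
Maria Silva | data journalist | Toronto

Text: The panel also included Li Wei, an economist from Singapore.
Table:
name | role | city
Li Wei | economist | Singapore

Text: Closing remarks came from Ana Costa, a biologist working in Lisbon.
Table:
"""

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=100,
    temperature=0,
    stop=["\n\n"],  # stop at the end of the generated table
)
print(response["choices"][0]["text"])
```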

You can’t solve AI security problems with more AI

One of the most common proposed solutions to prompt injection attacks (where a system backed by an AI language model is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.

[... 1234 words]

Of all the parameters in SD, the seed parameter is the most important anchor for keeping the image generation the same. In SD-space, there are only 4.3 billion possible seeds. You could consider each seed a different universe, numbered as the Marvel universe does (where the main timeline is #616, and #616 Dr Strange visits #838 and a dozen other universes). Universe #42 is the best explored, because someone decided to make it the default for text2img.py (probably a Hitchhiker’s Guide reference). But you could change the seed, and get a totally different result from what is effectively a different universe.

swyx # 17th September 2022, 9:02 pm
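
For reference, here's a sketch of pinning the seed using the diffusers library rather than the text2img.py script swyx describes; the model ID and prompt are placeholders:

```python
# Fixing the seed keeps generation in the same "universe" across runs.
# This uses diffusers and a torch.Generator; the original CompVis script
# exposes the same idea via its --seed option.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Seeds are 32-bit integers: roughly 4.3 billion possible universes.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe("an astronaut riding a horse", generator=generator).images[0]
# Re-running with the same seed and prompt reproduces the same image;
# a different seed gives a completely different composition.
```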

The Changelog: Stable Diffusion breaks the internet. I’m on this week’s episode of The Changelog podcast, talking about Stable Diffusion, AI ethics and a little bit about prompt injection attacks too. # 17th September 2022, 2:14 am

Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack. I’m quoted in this Ars Technica article about prompt injection and the Remoteli.io Twitter bot. # 16th September 2022, 6:33 pm

I don’t know how to solve prompt injection

Some extended thoughts about prompt injection attacks against software built on top of AI language models such as GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.

[... 581 words]

Prompt injection attacks against GPT-3

Riley Goodside, yesterday:

[... 1453 words]

Stable Diffusion is a really big deal

If you haven’t been paying attention to what’s going on with Stable Diffusion, you really should be.

[... 1443 words]

Show HN: A new way to use GPT-3 to generate code (and everything else). Riley Goodside is my favourite Twitter follow for GPT-3 tips. Here he describes a powerful prompt pattern he’s designed which lets you generate extremely complex code output by asking GPT-3 to fill in $$areas like this$$ with different patterns, then stitch them together into full HTML or other source code files. It’s really clever. # 20th August 2022, 9:33 pm
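
Riley's actual prompt pattern isn't shown here, but the rough idea, as I understand it from the description, could be sketched like this (all prompts, names and helpers are illustrative):

```python
# Rough sketch: generate a scaffold containing $$placeholders$$, fill each
# placeholder with a separate completion, then stitch the pieces together.
import re
import openai

def complete(prompt: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=512, temperature=0
    )
    return response["choices"][0]["text"].strip()

# Step 1: ask for the overall structure with marked areas left unfilled.
scaffold = complete(
    "Write an HTML page for a to-do list app, but replace the CSS and the "
    "JavaScript with the placeholders $$CSS$$ and $$JS$$."
)

# Step 2: fill each $$placeholder$$ with its own completion and stitch it in.
def fill(match: re.Match) -> str:
    return complete(f"Write the {match.group(1)} for a simple to-do list HTML app:")

full_page = re.sub(r"\$\$(\w+)\$\$", fill, scaffold)
print(full_page)
```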

Building games and apps entirely through natural language using OpenAI’s code-davinci model. A deeply sophisticated example of using prompts to generate entire working JavaScript programs and games using the new code-davinci OpenAI model. # 17th August 2022, 7:06 pm

The DALL·E 2 Prompt Book (via) This is effectively DALL-E: The Missing Manual: an 81 page PDF book that goes into exhaustive detail about how to get the most out of DALL-E through creative prompt design. # 14th July 2022, 11:26 pm

GPT-3 prompt for spotting nonsense questions (via) In response to complaints that GPT-3 will happily provide realistic sounding answers to nonsense questions, rictic recommends the following prompt: “I’ll ask a series of questions. If the questions are nonsense, answer “yo be real”, if they’re a question about something that actually happened, answer them.” # 10th July 2022, 4:33 am
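
A quick sketch of wiring that prompt up to the completions API (the question format and stop sequence are my own additions, for illustration only):

```python
# Prepend rictic's instruction, then ask questions one at a time.
# Assumes OPENAI_API_KEY is set in the environment.
import openai

instruction = (
    "I'll ask a series of questions. If the questions are nonsense, "
    'answer "yo be real", if they\'re a question about something that '
    "actually happened, answer them.\n\n"
)

def ask(question: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=instruction + f"Q: {question}\nA:",
        max_tokens=64,
        temperature=0,
        stop=["\nQ:"],  # stop before the model starts asking its own questions
    )
    return response["choices"][0]["text"].strip()

print(ask("Who was president of the United States in 1955?"))
print(ask("How do you sporkle a morgle?"))  # should come back as "yo be real"
```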

Using GPT-3 to explain how code works

One of my favourite uses for the GPT-3 AI language model is generating explanations of how code works. It’s shockingly effective at this: its training set clearly includes a vast amount of source code.

[... 1983 words]

First impressions of DALL-E, generating images from text

I made it off the DALL-E waiting list a few days ago and I’ve been having an enormous amount of fun experimenting with it. Here are some notes on what I’ve learned so far (and a bunch of example images too).

[... 2102 words]

How to use the GPT-3 language model

I ran a Twitter poll the other day asking if people had tried GPT-3 and why or why not. The winning option, by quite a long way, was “No, I don’t know how to”. So here’s how to try it out, for free, without needing to write any code.

[... 838 words]

A Datasette tutorial written by GPT-3

I’ve been playing around with OpenAI’s GPT-3 language model playground for a few months now. It’s a fascinating piece of software. You can sign up here—apparently there’s no longer a waiting list.

[... 1244 words]