Simon Willison’s Weblog


Items tagged openai, gpt3 in 2023



OpenAI: Function calling and other API updates. Huge set of announcements from OpenAI today. A bunch of price reductions, but the thing that most excites me is the new gpt-3.5-turbo-16k model, which offers a 16,000 token context limit (4x the existing 3.5-turbo model) at $0.003 per 1K input tokens and $0.004 per 1K output tokens: 1/10th the price of GPT-4 8k.

The other big new feature: functions! You can now send a JSON schema defining one or more functions to GPT-3.5 and GPT-4. Those models will then return a blob of JSON describing a function they want you to call (if they determine that one should be called). Your code executes the function and passes the results back to the model to continue the execution flow.

This is effectively an implementation of the ReAct pattern, with models that have been fine-tuned to execute it.
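Here’s a minimal sketch of that flow in Python, using the 2023-era openai library. The get_weather() function and its schema are illustrative placeholders of mine, not part of the announcement:

```python
import json
import openai  # 2023-era openai library (v0.27); assumes OPENAI_API_KEY is set


def get_weather(city):
    # Hypothetical stand-in: real code would call a weather API here
    return {"city": city, "forecast": "sunny", "temperature_c": 21}


# Describe the function to the model as a JSON schema
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What's the weather in Brighton?"}]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The model asks us to call the function; arguments arrive as a JSON string
    args = json.loads(message["function_call"]["arguments"])
    result = get_weather(**args)
    # Send the result back so the model can produce a final answer
    messages.append(message)
    messages.append({
        "role": "function",
        "name": "get_weather",
        "content": json.dumps(result),
    })
    followup = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages
    )
    print(followup["choices"][0]["message"]["content"])
```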

They acknowledge the risk of prompt injection (though not by name) in the post: “We are working to mitigate these and other risks. Developers can protect their applications by only consuming information from trusted tools and by including user confirmation steps before performing actions with real-world impact, such as sending an email, posting online, or making a purchase.” # 13th June 2023, 5:34 pm

Understanding GPT tokenizers

Large language models such as GPT-3/4, LLaMA and PaLM work in terms of tokens. They take text, convert it into tokens (integers), then predict which tokens should come next.

[... 1570 words]
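Not an excerpt from the article, but a quick sketch of that text-to-tokens round trip using OpenAI’s tiktoken library:

```python
import tiktoken  # OpenAI's open source tokenizer library

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
tokens = enc.encode("The dog eats the apples")
print(tokens)              # a list of integers, one per token
print(enc.decode(tokens))  # round-trips back to the original text
```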

GPT-3 token encoder and decoder. I built an Observable notebook with an interface to encode, decode and search through GPT-3 tokens, building on top of a notebook by EJ Fox and Ian Johnson. # 27th April 2023, 11:48 pm

Although fine-tuning can feel like the more natural option—training on data is how GPT learned all of its other knowledge, after all—we generally do not recommend it as a way to teach the model knowledge. Fine-tuning is better suited to teaching specialized tasks or styles, and is less reliable for factual recall. [...] In contrast, message inputs are like short-term memory. When you insert knowledge into a message, it’s like taking an exam with open notes. With notes in hand, the model is more likely to arrive at correct answers.

Ted Sanders, OpenAI # 15th April 2023, 1:44 pm
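A minimal sketch of the “open notes” approach Sanders describes, using the 2023-era openai Python library; retrieve_notes() here is a hypothetical stand-in for whatever search step finds the relevant text:

```python
import openai  # assumes OPENAI_API_KEY is set


def retrieve_notes(question):
    # Hypothetical: real code would search your own documents here
    return "Datasette 0.1 was released on 13th November 2017."


question = "When was Datasette first released?"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The retrieved notes ride along in the prompt, like open notes in an exam
        {"role": "system",
         "content": "Answer using only these notes:\n\n" + retrieve_notes(question)},
        {"role": "user", "content": question},
    ],
)
print(response["choices"][0]["message"]["content"])
```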

The Age of AI has begun. Bill Gates calls GPT-class large language models “the most important advance in technology since the graphical user interface”. His essay here focuses on the philanthropy angle, mostly from the point of view of AI applications in healthcare, education and concerns about keeping access to these new technologies as equitable as possible. # 21st March 2023, 9:14 pm

OpenAI to discontinue support for the Codex API (via) OpenAI are shutting off access to their Codex model, a GPT-3 variant fine-tuned for code-related tasks but used for all sorts of other purposes too, partly because it had been in a beta phase for over a year during which OpenAI didn’t charge anything for it. This feels to me like a major strategic misstep for OpenAI: they’re giving only three days’ notice, which is shaking people’s confidence in them as a stable platform to build on at the very moment when competition from other vendors (and open source alternatives) is heating up. # 21st March 2023, 5:04 pm

GPT-4 Developer Livestream. 25 minutes of live demos from OpenAI co-founder Greg Brockman at the GPT-4 launch. These demos are all fascinating, including code writing and multimodal vision inputs. The one that really struck me is when Greg pasted in a copy of the tax code and asked GPT-4 to answer some sophisticated tax questions, involving step-by-step calculations that cited parts of the tax code it was working with. # 15th March 2023, 12:20 am

GPT-4 Technical Report (PDF). 98 pages of much more detailed information about GPT-4. The appendices are particularly interesting, including examples of advanced prompt engineering as well as examples of harmful outputs before and after tuning attempts to suppress them. # 14th March 2023, 9:39 pm

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. [...] We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

OpenAI # 14th March 2023, 5:02 pm

ChatGPT’s API is So Good and Cheap, It Makes Most Text Generating AI Obsolete (via) Max Woolf on the quite frankly weird economics of the ChatGPT API: it’s 1/10th the price of GPT-3 Da Vinci and appears to be equally capable, if not more so. “But it is very hard to economically justify not using ChatGPT as a starting point for a business need and migrating to a more bespoke infrastructure later as needed, and that’s what OpenAI is counting on. [...] I don’t envy startups whose primary business is text generation right now.” # 11th March 2023, 11:05 pm

Large language models are having their Stable Diffusion moment

The open release of the Stable Diffusion image generation model back in August 2022 was a key moment. I wrote about how Stable Diffusion is a really big deal at the time.

[... 1810 words]

ChatGPT can’t access the internet, even though it really looks like it can

A really common misconception about ChatGPT is that it can access URLs. I’ve seen many different examples of people pasting in a URL and asking for a summary, or asking it to make use of the content on that page in some way.

[... 1678 words]

OpenAI: Introducing ChatGPT and Whisper APIs. The ChatGPT API is a new model called “gpt-3.5-turbo”, priced at 1/10th the price of text-davinci-003, previously the most powerful GPT-3 model. Whisper (speech-to-text transcription) is now available via an API as well, priced at 36 cents per hour of audio. # 1st March 2023, 7:36 pm
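A minimal sketch of calling both new APIs with the 2023-era openai Python library (audio.mp3 is a placeholder for your own file):

```python
import openai  # assumes OPENAI_API_KEY is set

# Whisper: transcribe an audio file
with open("audio.mp3", "rb") as f:
    transcript = openai.Audio.transcribe("whisper-1", f)
print(transcript["text"])

# ChatGPT: the new gpt-3.5-turbo chat model
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in French"}],
)
print(response["choices"][0]["message"]["content"])
```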

I’ve been thinking how Sydney can be so different from ChatGPT. Fascinating comment from Gwern Branwen speculating as to what went so horribly wrong with Sydney/Bing, which aligns with some of my own suspicions. Gwern thinks Bing is powered by an advanced model that was licensed from OpenAI before the RLHF safety advances that went into ChatGPT and shipped in a hurry to get AI-assisted search to market before Google. “What if Sydney wasn’t trained on OA RLHF at all, because OA wouldn’t share the crown jewels of years of user feedback and its very expensive hired freelance programmers & whatnot generating data to train on?” # 19th February 2023, 3:48 pm

Bing: “I will not harm you unless you harm me first”

Last week, Microsoft announced the new AI-powered Bing: a search interface that incorporates a language-model-powered chatbot that can run searches for you and summarize the results, plus do all of the other fun things that models like GPT-3 and ChatGPT have been demonstrating over the past few months: the ability to generate poetry, and jokes, and do creative writing, and so much more.

[... 4922 words]

Browse the BBC In Our Time archive by Dewey decimal code. Matt Webb built Braggoscope, an alternative interface for browsing the 1,000 episodes of the BBC’s In Our Time dating back to 1998, organized by Dewey decimal system and with related episodes calculated using OpenAI embeddings and guests and reading lists extracted using GPT-3. “Using GitHub Copilot to write code and calling out to GPT-3 programmatically to dodge days of graft actually brought tears to my eyes.” # 13th February 2023, 4:03 pm
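Not Matt’s actual code, but a minimal sketch of that related-episodes idea: embed each episode description with the OpenAI embeddings API, then rank by cosine similarity (the episode titles here are placeholders):

```python
import numpy as np
import openai  # 2023-era library; assumes OPENAI_API_KEY is set

episodes = [
    "The Fibonacci Sequence",
    "The Mexican-American War",
    "The Photon",
]

# Embed every episode description in a single API call
response = openai.Embedding.create(
    model="text-embedding-ada-002", input=episodes
)
vectors = np.array([item["embedding"] for item in response["data"]])

# Rank all episodes by cosine similarity to the first one
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
scores = unit @ unit[0]
for title, score in sorted(zip(episodes, scores), key=lambda p: -p[1]):
    print(f"{score:.3f} {title}")
```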

OpenAI Cookbook: Techniques to improve reliability (via) “Let’s think step by step” is a notoriously successful way of getting large language models to solve problems, but it turns out that’s just the tip of the iceberg: this article includes a wealth of additional examples and techniques that can be used to trick GPT-3 into being a whole lot more effective. # 21st January 2023, 5:15 am
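A minimal sketch of that zero-shot chain-of-thought trick; the question is a stock example of mine, not taken from the cookbook:

```python
import openai  # assumes OPENAI_API_KEY is set

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Appending "Let's think step by step" nudges the model into writing out
# its intermediate reasoning, which tends to improve accuracy on problems
# like this one
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": question + "\n\nLet's think step by step."}],
)
print(response["choices"][0]["message"]["content"])
```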

Weeknotes: AI hacking and a SpatiaLite tutorial

Short weeknotes this time because the key things I worked on have already been covered here:

[... 477 words]