Simon Willison’s Weblog

August 2025

43 posts: 8 entries, 20 links, 11 quotes, 4 notes

Aug. 9, 2025

You know what else we noticed in the interviews? Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition. We believe that means that we should update how we talk about (and measure) success when using these tools, and we should expect that after the initial efficiency gains our focus will be on raising the ceiling of the work and outcomes we can accomplish, which is a very different way of interpreting tool investments.

Thomas Dohmke, CEO, GitHub

# 6:37 am / careers, coding-agents, ai-assisted-programming, generative-ai, ai, github, llms

The issue with GPT-5 in a nutshell is that unless you pay for model switching & know to use GPT-5 Thinking or Pro, when you ask “GPT-5” you sometimes get the best available AI & sometimes get one of the worst AIs available and it might even switch within a single conversation.

Ethan Mollick, highlighting that GPT-5 (high) ranks top on Artificial Analysis while GPT-5 (minimal) ranks lower than GPT-4.1

# 4:13 pm / gpt-5, ethan-mollick, generative-ai, ai, llms

Aug. 10, 2025

the percentage of users using reasoning models each day is significantly increasing; for example, for free users we went from <1% to 7%, and for plus users from 7% to 24%.

Sam Altman, revealing quite how few people used the old model picker to upgrade from GPT-4o

# 11:09 pm / openai, llm-reasoning, ai, llms, gpt-5, sam-altman, generative-ai, chatgpt

Qwen3-4B-Thinking: “This is art—pelicans don’t ride bikes!”

I’ve fallen a few days behind keeping up with Qwen. They released two new 4B models last week: Qwen3-4B-Instruct-2507 and its thinking equivalent Qwen3-4B-Thinking-2507.

[... 991 words]

Aug. 11, 2025

Chromium Docs: The Rule Of 2. Alex Russell pointed me to this principle in the Chromium security documentation as similar to my description of the lethal trifecta. First added in 2019, the Chromium guideline states:

When you write code to parse, evaluate, or otherwise handle untrustworthy inputs from the Internet — which is almost everything we do in a web browser! — we like to follow a simple rule to make sure it's safe enough to do so. The Rule Of 2 is: Pick no more than 2 of

  • untrustworthy inputs;
  • unsafe implementation language; and
  • high privilege.

Venn diagram showing you should always use a safe language, a sandbox, or not be processing untrustworthy inputs in the first place.

Chromium uses this design pattern to help try to avoid the high severity memory safety bugs that come when untrustworthy inputs are handled by code running at high privilege.

Chrome Security Team will generally not approve landing a CL or new feature that involves all 3 of untrustworthy inputs, unsafe language, and high privilege. To solve this problem, you need to get rid of at least 1 of those 3 things.
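
Read as a policy check, the rule boils down to "deny anything that combines all three factors". Here's a toy Python sketch of that logic - the names are mine, purely illustrative, nothing from the Chromium codebase:

from dataclasses import dataclass

@dataclass
class Change:
    untrustworthy_inputs: bool  # parses data from the Internet
    unsafe_language: bool       # e.g. C++ rather than a memory-safe language
    high_privilege: bool        # runs in the browser process, not in a sandbox

def rule_of_two_ok(change: Change) -> bool:
    # A change may combine at most two of the three risk factors
    factors = [change.untrustworthy_inputs, change.unsafe_language, change.high_privilege]
    return sum(factors) <= 2

# A C++ parser for network data running in the browser process fails the check:
assert not rule_of_two_ok(Change(True, True, True))
# Moving that parser into a sandboxed (low privilege) process passes:
assert rule_of_two_ok(Change(True, True, False))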

# 4:02 am / alex-russell, browsers, chrome, security, lethal-trifecta

AI for data engineers with Simon Willison. I recorded an episode last week with Claire Giordano for the Talking Postgres podcast. The topic was "AI for data engineers" but we ended up covering an enjoyable range of different topics.

  • How I got started programming with a Commodore 64 - the tape drive for which inspired the name Datasette
  • Selfish motivations for TILs (force me to write up my notes) and open source (help me never have to solve the same problem twice)
  • LLMs have been good at SQL for a couple of years now. Here's how I used them for a complex PostgreSQL query that extracted alt text from my blog's images using regular expressions (see the sketch after this list)
  • Structured data extraction as the most economically valuable application of LLMs for data work
  • 2025 has been the year of tool calling in a loop ("agentic" if you like)
  • Thoughts on running MCPs securely - read-only database access, think about sandboxes, use PostgreSQL permissions, watch out for the lethal trifecta
  • Jargon guide: Agents, MCP, RAG, Tokens
  • How to get started learning to prompt: play with the models and "bring AI to the table" even for tasks that you don't think it can handle
  • "It's always a good day if you see a pelican"

# 5:16 am / postgresql, ai, generative-ai, llms, podcast-appearances

qwen-image-mps (via) Ivan Fioravanti built this Python CLI script for running the Qwen/Qwen-Image image generation model on an Apple silicon Mac, optionally using the Qwen-Image-Lightning LoRA to dramatically speed up generation.

Ivan has tested this on 512GB and 128GB machines and it ran really fast - 42 seconds on his M3 Ultra. I've run it on my 64GB M2 MacBook Pro - after quitting almost everything else - and it just about manages to output images after pegging my GPU (fans whirring, keyboard heating up) and occupying 60GB of my available RAM. With the LoRA option, running the script to generate an image took 9m7s on my machine.

Ivan merged my PR adding inline script dependencies for uv which means you can now run it like this:

uv run https://raw.githubusercontent.com/ivanfioravanti/qwen-image-mps/refs/heads/main/qwen-image-mps.py \
-p 'A vintage coffee shop full of raccoons, in a neon cyberpunk city' -f

The first time I ran this it downloaded the 57.7GB model from Hugging Face and stored it in my ~/.cache/huggingface/hub/models--Qwen--Qwen-Image directory. The -f option fetched an extra 1.7GB Qwen-Image-Lightning-8steps-V1.0.safetensors file to my working directory that sped up the generation.
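
Those inline script dependencies use the PEP 723 format: a commented TOML block at the top of the Python file that uv reads to create a throwaway environment before running the script. It looks something like this - the dependency list below is illustrative, not the script's actual one:

# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "torch",
#     "diffusers",
# ]
# ///

With that block in place, uv run resolves and installs the dependencies automatically - no requirements.txt or manual virtual environment setup needed.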

Here's the resulting image:

Digital artwork of a cyberpunk-style coffee shop populated entirely by raccoons as customers, with illegible neon signs visible in the windows, pendant lighting over the counter, menu boards on the wall, bottles on shelves behind the bar, and raccoons sitting at tables and the counter with coffee cups

# 6:19 am / macos, python, ai, generative-ai, uv, qwen, text-to-image, ai-in-china, ivan-fioravanti

If you've been experimenting with OpenAI's Codex CLI and have been frustrated that it's not possible to select text and copy it to the clipboard, at least when running in the Mac terminal (I genuinely didn't know it was possible to build a terminal app that disabled copy and paste), you should know that they fixed that in this issue last week.

The new 0.20.0 version from three days ago also completely removes the old TypeScript codebase in favor of Rust. Even installations via NPM now get the Rust version.

I originally installed Codex via Homebrew, so I had to run this command to get the updated version:

brew upgrade codex

Another Codex tip: to use GPT-5 (or any other specific OpenAI model) you can run it like this:

export OPENAI_DEFAULT_MODEL="gpt-5"
codex

This no longer works - see the update below.

I've been using a codex-5 script on my PATH containing this, because sometimes I like to live dangerously!

#!/usr/bin/env zsh
# Usage: codex-5 [additional args passed to `codex`]
export OPENAI_DEFAULT_MODEL="gpt-5"
exec codex --dangerously-bypass-approvals-and-sandbox "$@"

Update: It looks like GPT-5 is the default model in v0.20.0 already.

Also the environment variable I was using no longer does anything - it was removed in this commit (I used Codex Web to help figure that out). You can use the -m model_id command-line option instead.

# 4:06 pm / openai, ai, llms, gpt-5, rust, ai-assisted-programming, coding-agents, generative-ai

Reddit will block the Internet Archive. Well this sucks. Jay Peters for the Verge:

Reddit says that it has caught AI companies scraping its data from the Internet Archive’s Wayback Machine, so it’s going to start blocking the Internet Archive from indexing the vast majority of Reddit. The Wayback Machine will no longer be able to crawl post detail pages, comments, or profiles; instead, it will only be able to index the Reddit.com homepage, which effectively means Internet Archive will only be able to archive insights into which news headlines and posts were most popular on a given day.

# 6:11 pm / internet-archive, reddit, scraping, ai, training-data, ai-ethics

LLM 0.27, the annotated release notes: GPT-5 and improved tool calling

I shipped LLM 0.27 today (followed by a 0.27.1 with minor bug fixes), adding support for the new GPT-5 family of models from OpenAI plus a flurry of improvements to the tool calling features introduced in LLM 0.26. Here are the annotated release notes.

[... 1,174 words]

Aug. 12, 2025

I think there's been a lot of decisions over time that proved pretty consequential, but we made them very quickly as we have to. [...]

[On pricing] I had this kind of panic attack because we really needed to launch subscriptions because at the time we were taking the product down all the time. [...]

So what I did do is ship a Google Form to Discord with the four questions you're supposed to ask on how to price something.

But we went with the $20. We were debating something slightly higher at the time. I often wonder what would have happened because so many other companies ended up copying the $20 price point, so did we erase a bunch of market cap by pricing it this way?

Nick Turley, Head of ChatGPT, interviewed by Lenny Rachitsky

# 3:32 am / chatgpt, discord, generative-ai, openai, llm-pricing, ai, llms

Claude Sonnet 4 now supports 1M tokens of context (via) Gemini and OpenAI both have million token models, so it's good to see Anthropic catching up. This is 5x the previous 200,000 token context limit of the various Claude Sonnet models.

Anthropic have previously made 1 million tokens available to select customers. From the Claude 3 announcement in March 2024:

The Claude 3 family of models will initially offer a 200K context window upon launch. However, all three models are capable of accepting inputs exceeding 1 million tokens and we may make this available to select customers who need enhanced processing power.

This is also the first time I've seen Anthropic use prices that vary depending on context length:

  • Prompts ≤ 200K: $3/million input, $15/million output
  • Prompts > 200K: $6/million input, $22.50/million output

Gemini have been doing this for a while: Gemini 2.5 Pro is $1.25/$10 below 200,000 tokens and $2.50/$15 above 200,000.
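
To make the tiers concrete, here's a quick cost sketch in Python. I'm assuming the higher rate applies to the entire request once the prompt crosses 200K tokens, which is how I read the tier wording:

def sonnet_4_cost_usd(input_tokens, output_tokens):
    # Rates in $ per million tokens, switching tiers at 200K input tokens
    if input_tokens <= 200_000:
        input_rate, output_rate = 3.00, 15.00
    else:
        input_rate, output_rate = 6.00, 22.50
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

print(sonnet_4_cost_usd(150_000, 4_000))  # 0.51
print(sonnet_4_cost_usd(800_000, 4_000))  # 4.89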

Here's Anthropic's full documentation on the 1m token context window. You need to send a context-1m-2025-08-07 beta header in your request to enable it.
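
Here's a minimal sketch of sending that header with the anthropic Python SDK - the model ID is my assumption for Sonnet 4, so double-check it against Anthropic's model list:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Sonnet 4 model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this very long document: ..."}],
    # The documented beta header that enables the 1M token context window:
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
)
print(response.content[0].text)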

Note that this is currently restricted to "tier 4" users who have purchased at least $400 in API credits:

Long context support for Sonnet 4 is now in public beta on the Anthropic API for customers with Tier 4 and custom rate limits, with broader availability rolling out over the coming weeks.

# 6:14 pm / ai, generative-ai, llms, anthropic, claude, llm-pricing, long-context

Aug. 13, 2025

simonw/codespaces-llm. GitHub Codespaces provides full development environments in your browser, and is free to use for anyone with a GitHub account. Each environment has a full Linux container and a browser-based UI using VS Code.

I found out today that GitHub Codespaces come with a GITHUB_TOKEN environment variable... and that token works as an API key for accessing LLMs in the GitHub Models collection, which includes dozens of models from OpenAI, Microsoft, Mistral, xAI, DeepSeek, Meta and more.
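
GitHub Models speaks the OpenAI-compatible chat completions protocol, so you can also use that token directly. This sketch reflects my understanding of the endpoint and model naming - verify both against the GitHub Models documentation:

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],  # provided automatically in Codespaces
)
response = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "Fun facts about pelicans"}],
)
print(response.choices[0].message.content)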

Anthony Shaw's llm-github-models plugin for my LLM tool allows it to talk directly to GitHub Models. I filed a suggestion that it could pick up that GITHUB_TOKEN variable automatically and Anthony shipped v0.18.0 with that feature a few hours later.

... which means you can now run the following in any Python-enabled Codespaces container and get a working llm command:

pip install llm
llm install llm-github-models
llm models default github/gpt-4.1
llm "Fun facts about pelicans"

Setting the default model to github/gpt-4.1 means you get free (albeit rate-limited) access to that OpenAI model.

To save you from needing to even run that sequence of commands I've created a new GitHub repository, simonw/codespaces-llm, which pre-installs and runs those commands for you.

Anyone with a GitHub account can use this URL to launch a new Codespaces instance with a configured llm terminal command ready to use:

codespaces.new/simonw/codespaces-llm?quickstart=1

Screenshot of a GitHub Codespaces VS Code interface showing a README.md file for codespaces-llm repository. The file describes a GitHub Codespaces environment with LLM, Python 3.13, uv and the GitHub Copilot VS Code extension. It has a "Launch Codespace" button. Below shows a terminal tab with the command "llm 'Fun facts about pelicans'" which has generated output listing 5 pelican facts: 1. **Huge Beaks:** about their enormous beaks and throat pouches for scooping fish and water, some over a foot long; 2. **Fishing Technique:** about working together to herd fish into shallow water; 3. **Great Fliers:** about being strong fliers that migrate great distances and soar on thermals; 4. **Buoyant Bodies:** about having air sacs beneath skin and bones making them extra buoyant; 5. **Dive Bombing:** about Brown Pelicans diving dramatically from air into water to catch fish.

While putting this together I wrote up what I've learned about devcontainers so far as a TIL: Configuring GitHub Codespaces using devcontainers.

# 5:39 am / github, projects, ai, til, openai, generative-ai, llms, llm, github-codespaces, anthony-shaw
