Example dashboard

Various statistics from my blog.

Owned by simonw, visibility: Public

Entries

3242

SQL query:

```sql
select 'Entries' as label, count(*) as big_number from blog_entry
```

Blogmarks

8173

SQL query:

```sql
select 'Blogmarks' as label, count(*) as big_number from blog_blogmark
```

Quotations

1287

SQL query:

```sql
select 'Quotations' as label, count(*) as big_number from blog_quotation
```

Chart of number of entries per month over time

SQL query:

```sql
select '<h2>Chart of number of entries per month over time</h2>' as html
```

SQL query:

```sql
select to_char(date_trunc('month', created), 'YYYY-MM') as bar_label,
       count(*) as bar_quantity
from blog_entry
group by bar_label
order by bar_label
```

Ten most recent blogmarks (of 8173 total)

SQL query:

```sql
select '## Ten most recent blogmarks (of ' || count(*) || ' total)' as markdown from blog_blogmark
```

SQL query:

```sql
select link_title, link_url, commentary, created
from blog_blogmark
order by created desc
limit 10
```

llm-anthropic 0.23
https://github.com/simonw/llm-anthropic/releases/tag/0.23

New plugin release adding support for Claude Opus 4.5, including the new `thinking_effort` option:

```bash
llm install -U llm-anthropic
llm -m claude-opus-4.5 -o thinking_effort low 'muse on pelicans'
```

This took longer to release than I had hoped because it was blocked on Anthropic shipping [0.75.0](https://github.com/anthropics/anthropic-sdk-python/releases/tag/v0.75.0) of their Python library with support for thinking effort.

2025-11-25 05:26:34+00:00
LLM SVG Generation Benchmark
https://gally.net/temp/20251107pelican-alternatives/index.html

Here's a delightful project by Tom Gally, inspired by my [pelican SVG benchmark](https://simonwillison.net/tags/pelican-riding-a-bicycle/). He [asked Claude](https://gally.net/temp/20251107pelican-alternatives/about.html) to help create more prompts of the form `Generate an SVG of [A] [doing] [B]` and then ran 30 creative prompts against 9 frontier models - prompts like "an octopus operating a pipe organ" or "a starfish driving a bulldozer".

Here are some for "butterfly inspecting a steam engine":

![Gemini 3.0 Pro Preview drew the best steam engine with nice gradients and a butterfly hovering near the chimney. DeepSeek V3.2-Exp drew a floating brown pill with a hint of a chimney and a butterfly possibly on fire. GLM-4.6 did the second best steam engine with a butterfly nearby. Qwen3-VL-235B-A22B-Thinking did a steam engine that looks a bit like a chest on wheels and a weird purple circle.](https://static.simonwillison.net/static/2025/butterfly-inspecting-steam-engine.jpg)

And for "sloth steering an excavator":

![Claude Sonnet 4.5 drew the best excavator with a blobby sloth driving it. Claude Opus 4.5 did quite a blocky excavator with a sloth that isn't quite recognizable as a sloth. Grok Code Fast 1 drew a green alien standing on a set of grey blocks. Gemini 2.5 Pro did a good excavator with another blobby sloth.](https://static.simonwillison.net/static/2025/sloth-driving-excavator.jpg)

It's worth browsing the [whole collection](https://gally.net/temp/20251107pelican-alternatives/index.html), which gives a really good overall indication of which models are the best at SVG art.

2025-11-25 04:02:25+00:00
sqlite-utils 3.39
https://sqlite-utils.datasette.io/en/stable/changelog.html#v3-39

I got a report of [a bug](https://github.com/simonw/sqlite-utils/issues/687) in `sqlite-utils` concerning plugin installation - if you installed the package using `uv tool install`, further attempts to install plugins with `sqlite-utils install X` would fail, because `uv` doesn't bundle `pip` by default. I had the same bug with Datasette [a while ago](https://github.com/simonw/sqlite-utils/issues/687) - it turns out I forgot to apply the fix to `sqlite-utils`.

Since I was pushing a new dot-release I decided to integrate some of the non-breaking changes from the 4.0 alpha [I released last night](https://simonwillison.net/2025/Nov/24/sqlite-utils-40a1/).

I tried to have Claude Code do the backporting for me:

> create a new branch called 3.x starting with the 3.38 tag, then consult <https://github.com/simonw/sqlite-utils/issues/688> and cherry-pick the commits it lists in the second comment, then review each of the links in the first comment and cherry-pick those as well. After each cherry-pick run the command "just test" to confirm the tests pass and fix them if they don't. Look through the commit history on main since the 3.38 tag to help you with this task.

This worked reasonably well - [here's the terminal transcript](https://gistpreview.github.io/?83c7a7ea96d6b7763ad5d72d251ce1a6). It successfully argued me out of two of the larger changes, which would have added more complexity than I want in a small dot-release like this.

I still had to do a bunch of manual work to get everything up to scratch, which I carried out in [this PR](https://github.com/simonw/sqlite-utils/pull/689) - including adding comments there and then telling Claude Code:

> Apply changes from the review on this PR <https://github.com/simonw/sqlite-utils/pull/689>

Here's [the transcript from that](https://gistpreview.github.io/?f4c89636cc58fc7bf9820c06f2488b91).

The release is now out with the following release notes:

> - Fixed a bug with `sqlite-utils install` when the tool had been installed using `uv`. ([#687](https://github.com/simonw/sqlite-utils/issues/687))
> - The `--functions` argument now optionally accepts a path to a Python file as an alternative to a string full of code, and can be specified multiple times - see [Defining custom SQL functions](https://sqlite-utils.datasette.io/en/stable/cli.html#cli-query-functions). ([#659](https://github.com/simonw/sqlite-utils/issues/659))
> - `sqlite-utils` now requires Python 3.10 or higher.

2025-11-24 18:59:14+00:00
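To illustrate that `--functions` change, here's a minimal sketch. The file name and the `reverse_string` function are hypothetical; passing a file path (and repeating the flag) is per the release notes above:

```bash
# Hypothetical Python file defining a custom SQL function
cat > string_utils.py <<'EOF'
def reverse_string(s):
    return s[::-1]
EOF

# As of 3.39, --functions accepts a path to a Python file
# and can be passed multiple times
sqlite-utils query data.db \
  "select reverse_string('pelican')" \
  --functions string_utils.py
```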
"Good engineering management" is a fad https://lethain.com/good-eng-mgmt-is-a-fad/ Will Larson argues that the technology industry's idea of what makes a good engineering manager changes over time based on industry realities. ZIRP hypergrowth has been exchanged for a more cautious approach today, and expectations of managers has changed to match: > Where things get weird is that in each case a morality tale was subsequently superimposed on top of the transition [...] the industry will want different things from you as it evolves, and it will tell you that each of those shifts is because of some complex moral change, but it’s pretty much always about business realities changing. I particularly appreciated the section on core engineering management skills that stay constant no matter what: > 1. **Execution**: lead team to deliver expected tangible and intangible work. Fundamentally, management is about getting things done, and you’ll neither get an opportunity to begin managing, nor stay long as a manager, if your teams don’t execute. [...] > 2. **Team**: shape the team and the environment such that they succeed. This is *not* working for the team, nor is it working for your leadership, it is finding the balance between the two that works for both. [...] > 3. **Ownership**: navigate reality to make consistent progress, even when reality is difficult Finding a way to get things done, rather than finding a way that it not getting done is someone else’s fault. [...] > 4. **Alignment**: build shared understanding across leadership, stakeholders, your team, and the problem space. Finding a realistic plan that meets the moment, without surprising or being surprised by those around you. [...] Will goes on to list four additional growth skill "whose presence–or absence–determines how far you can go in your career". 2025-11-23 21:29:09+00:00
Agent design is still hard
https://lucumr.pocoo.org/2025/11/21/agents-are-hard/

Armin Ronacher presents a cornucopia of lessons learned from building agents over the past few months.

There are several agent abstraction libraries available now (my own [LLM library](https://llm.datasette.io/) is edging into that territory with its [tools feature](https://simonwillison.net/2025/May/27/llm-tools/)) but Armin has found that the abstractions are not worth adopting yet:

> […] the differences between models are significant enough that you will need to build your own agent abstraction. We have not found any of the solutions from these SDKs that build the right abstraction for an agent. I think this is partly because, despite the basic agent design being just a loop, there are subtle differences based on the tools you provide. These differences affect how easy or hard it is to find the right abstraction (cache control, different requirements for reinforcement, tool prompts, provider-side tools, etc.). Because the right abstraction is not yet clear, using the original SDKs from the dedicated platforms keeps you fully in control. […]
>
> This might change, but right now we would probably not use an abstraction when building an agent, at least until things have settled down a bit. The benefits do not yet outweigh the costs for us.

Armin introduces the new-to-me term **reinforcement**, where you remind the agent of things as it goes along:

> Every time the agent runs a tool you have the opportunity to not just return data that the tool produces, but also to feed more information back into the loop. For instance, you can remind the agent about the overall objective and the status of individual tasks. […] Another use of reinforcement is to inform the system about state changes that happened in the background.

Claude Code’s TODO list is another example of this pattern in action.

Testing and evals remains the single hardest problem in AI engineering:

> We find testing and evals to be the hardest problem here. This is not entirely surprising, but the agentic nature makes it even harder. Unlike prompts, you cannot just do the evals in some external system because there’s too much you need to feed into it. This means you want to do evals based on observability data or instrumenting your actual test runs. So far none of the solutions we have tried have convinced us that they found the right approach here.

Armin also has a follow-up post, [LLM APIs are a Synchronization Problem](https://lucumr.pocoo.org/2025/11/22/llm-apis/), which argues that the shape of current APIs hides too many details from us as developers, and the core challenge here is in synchronizing state between the tokens fed through the GPUs and our client applications - something that may benefit from alternative approaches developed by the local-first movement.

2025-11-23 00:49:39+00:00
We should all be using dependency cooldowns
https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns

William Woodruff gives a name to a sensible strategy for managing dependencies while reducing the chances of a surprise supply chain attack: **dependency cooldowns**.

Supply chain attacks happen when an attacker compromises a widely used open source package and publishes a new version with an exploit. These are usually spotted *very* quickly, so an attack often only has a few hours of effective window before the problem is identified and the compromised package is pulled. You are most at risk if you're automatically applying upgrades the same day they are released.

William says:

> I **love** cooldowns for several reasons:
>
> - They're empirically effective, per above. They won't stop *all* attackers, but they *do* stymie the majority of high-visibility, mass-impact supply chain attacks that have become more common.
> - They're *incredibly* easy to implement. Moreover, they're **literally free** to implement in most cases: most people can use [Dependabot's functionality](https://docs.github.com/en/code-security/dependabot/working-with-dependabot/dependabot-options-reference#cooldown-), [Renovate's functionality](https://docs.renovatebot.com/key-concepts/minimum-release-age/), or the functionality built directly into their package manager

The one counter-argument to this is that sometimes an upgrade fixes a security vulnerability, and in those cases every hour of delay in upgrading is an hour when an attacker could exploit the new issue against your software. I see that as an argument for carefully monitoring the release notes of your dependencies, and paying special attention to security advisories. I'm a big fan of the [GitHub Advisory Database](https://github.com/advisories) for that kind of information.

2025-11-21 17:27:33+00:00
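One concrete way to apply a cooldown at install time (a sketch, not taken from the linked post): uv's `--exclude-newer` flag makes the resolver ignore anything published after a given date, so pinning it a week into the past approximates a seven-day cooldown. The `date` invocation assumes GNU coreutils:

```bash
# Resolve dependencies as if nothing newer than 7 days ago exists,
# approximating a cooldown at install time (GNU date syntax)
uv pip install -r requirements.txt \
  --exclude-newer "$(date -d '7 days ago' +%Y-%m-%d)"
```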
Building more with GPT-5.1-Codex-Max
https://openai.com/index/gpt-5-1-codex-max/

Hot on the heels of yesterday's [Gemini 3 Pro release](https://simonwillison.net/2025/Nov/18/gemini-3/) comes a new model from OpenAI called GPT-5.1-Codex-Max. (Remember when GPT-5 was meant to bring in a new era of less confusing model names? That didn't last!)

It's currently only available through their [Codex CLI coding agent](https://developers.openai.com/codex/cli/), where it's the new default model:

> Starting today, GPT‑5.1-Codex-Max will replace GPT‑5.1-Codex as the default model in Codex surfaces. Unlike GPT‑5.1, which is a general-purpose model, we recommend using GPT‑5.1-Codex-Max and the Codex family of models only for agentic coding tasks in Codex or Codex-like environments.

It's not available via the API yet but should be shortly.

The timing of this release is interesting given that Gemini 3 Pro appears to have [aced almost all of the benchmarks](https://simonwillison.net/2025/Nov/18/gemini-3/#benchmarks) just yesterday. It's reminiscent of the period in 2024 when OpenAI consistently made big announcements that happened to coincide with Gemini releases.

OpenAI's self-reported [SWE-Bench Verified](https://openai.com/index/introducing-swe-bench-verified/) score is particularly notable: 76.5% for thinking level "high" and 77.9% for the new "xhigh". That was the one benchmark where Gemini 3 Pro was outperformed by Claude Sonnet 4.5 - Gemini 3 Pro got 76.2% and Sonnet 4.5 got 77.2%. OpenAI now have the highest scoring model there by a full 0.7 of a percentage point!

They also report a score of 58.1% on [Terminal Bench 2.0](https://www.tbench.ai/leaderboard/terminal-bench/2.0), beating Gemini 3 Pro's 54.2% (and Sonnet 4.5's 42.8%).

The most intriguing part of this announcement concerns the model's approach to long context problems:

> GPT‑5.1-Codex-Max is built for long-running, detailed work. It’s our first model natively trained to operate across multiple context windows through a process called *compaction*, coherently working over millions of tokens in a single task. [...]
>
> Compaction enables GPT‑5.1-Codex-Max to complete tasks that would have previously failed due to context-window limits, such as complex refactors and long-running agent loops by pruning its history while preserving the most important context over long horizons. In Codex applications, GPT‑5.1-Codex-Max automatically compacts its session when it approaches its context window limit, giving it a fresh context window. It repeats this process until the task is completed.

There's a lot of confusion [on Hacker News](https://news.ycombinator.com/item?id=45982649) about what this actually means. Claude Code already does a version of compaction, automatically summarizing previous turns when the context runs out. Does this just mean that Codex-Max is better at that process?

I had it draw me a couple of pelicans by typing "Generate an SVG of a pelican riding a bicycle" directly into the Codex CLI tool. Here's thinking level medium:

![A flat-style illustration shows a white, round-bodied bird with an orange beak pedaling a red-framed bicycle with thin black wheels along a sandy beach, with a calm blue ocean and clear sky in the background.](https://static.simonwillison.net/static/2025/codex-max-medium.jpg)

And here's thinking level "xhigh":

![A plump white bird with an orange beak and small black eyes crouches low on a blue bicycle with oversized dark wheels, shown racing forward with motion lines against a soft gradient blue sky.](https://static.simonwillison.net/static/2025/codex-max-xhigh.jpg)

I also tried xhigh on my [longer pelican test prompt](https://simonwillison.net/2025/Nov/18/gemini-3/#and-a-new-pelican-benchmark), which came out like this:

![A stylized dark gray bird with layered wings, a yellow head crest, and a long brown beak leans forward in a racing pose on a black-framed bicycle, riding across a glossy blue surface under a pale sky.](https://static.simonwillison.net/static/2025/codex-breeding-max-xhigh.jpg)

Also today: [GPT-5.1 Pro is rolling out today to all Pro users](https://x.com/openai/status/1991266192905179613). According to the [ChatGPT release notes](https://help.openai.com/en/articles/6825453-chatgpt-release-notes):

> GPT-5.1 Pro is rolling out today for all ChatGPT Pro users and is available in the model picker. GPT-5 Pro will remain available as a legacy model for 90 days before being retired.

That's a pretty fast deprecation cycle for the GPT-5 Pro model that was released just three months ago.

2025-11-19 23:15:10+00:00
llm-gemini 0.27
https://github.com/simonw/llm-gemini/releases/tag/0.27

New release of my LLM plugin for Google's Gemini models:

> - Support for nested schemas in Pydantic, thanks [Bill Pugh](https://github.com/billpugh). [#107](https://github.com/simonw/llm-gemini/pull/107)
> - Now tests against Python 3.14.
> - Support for YouTube URLs as attachments and the `media_resolution` option. Thanks, [Duane Milne](https://github.com/shuane). [#112](https://github.com/simonw/llm-gemini/pull/112)
> - New model: `gemini-3-pro-preview`. [#113](https://github.com/simonw/llm-gemini/issues/113)

The YouTube URL feature is particularly neat, taking advantage of [this API feature](https://ai.google.dev/gemini-api/docs/video-understanding#youtube). I used it against the [Google Antigravity launch video](https://simonwillison.net/2025/Nov/18/google-antigravity/):

```bash
llm -m gemini-3-pro-preview \
  -a 'https://www.youtube.com/watch?v=nTOVIGsqCuY' \
  'Summary, with detailed notes about what this thing is and how it differs from regular VS Code, then a complete detailed transcript with timestamps'
```

Here's [the result](https://gist.github.com/simonw/9f30318ab47e0d177b4b523bb71d9540). A spot-check of the timestamps against points in the video shows them to be exactly right.

2025-11-18 23:00:40+00:00
Google Antigravity
https://antigravity.google/

Google's other major release today to accompany [Gemini 3 Pro](https://simonwillison.net/2025/Nov/18/gemini-3/).

At first glance Antigravity is yet another Cursor-style VS Code fork - it's a desktop application you install that then signs in to your Google account and provides an IDE for agentic coding against their Gemini models. When you look closer it's actually a fair bit more interesting than that.

The best introduction right now is the official 14 minute [Learn the basics of Google Antigravity](https://www.youtube.com/watch?v=nTOVIGsqCuY) video on YouTube, where product engineer Kevin Hou (who previously worked at Windsurf) walks through the process of building an app.

There are some interesting new ideas in Antigravity. The application itself has three "surfaces" - an agent manager dashboard, a traditional VS Code style editor and deep integration with a browser via a new Chrome extension. This plays a similar role to Playwright MCP, allowing the agent to directly test the web applications it is building.

Antigravity also introduces the concept of "artifacts" (confusingly not at all similar to [Claude Artifacts](https://simonwillison.net/tags/claude-artifacts/)). These are Markdown documents that are automatically created as the agent works, for things like task lists, implementation plans and a "walkthrough" report showing what the agent has done once it finishes.

I tried using Antigravity to help [add support for Gemini 3](https://github.com/simonw/llm-gemini/issues/113) to my `llm-gemini` plugin.

![Screenshot of the VS Code interface showing an implementation plan to update the llm-gemini library to support the thinking_level parameter for Gemini 3 Pro Preview, with the Open Agent Manager sidebar active on the right.](https://static.simonwillison.net/static/2025/antigravity.jpg)

It worked OK at first, then gave me an "Agent execution terminated due to model provider overload. Please try again later" error. I'm going to give it another go after they've had a chance to work through those initial launch jitters.

2025-11-18 20:52:35+00:00
The fate of “small” open source
https://nolanlawson.com/2025/11/16/the-fate-of-small-open-source/

Nolan Lawson asks if LLM assistance means that the category of tiny open source libraries like his own [blob-util](https://github.com/nolanlawson/blob-util) is destined to fade away. Why take on additional supply chain risks adding another dependency when an LLM can likely kick out the subset of functionality needed by your own code to-order?

> I still believe in open source, and I’m still doing it (in fits and starts). But one thing has become clear to me: the era of small, low-value libraries like `blob-util` is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see `node:glob`, `structuredClone`, etc.), but LLMs are the final nail in the coffin.

I've been thinking about a similar issue myself recently as well. Quite a few of my own open source projects exist to solve problems that are frustratingly hard to figure out. [s3-credentials](https://github.com/simonw/s3-credentials) is a great example of this: it solves the problem of creating read-only or read-write credentials for an S3 bucket - something that I've always found infuriatingly difficult, since you need to know to craft an IAM policy that looks something [like this](https://s3-credentials.readthedocs.io/en/stable/policy-documents.html#read-only):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-s3-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectLegalHold",
        "s3:GetObjectRetention",
        "s3:GetObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::my-s3-bucket/*"
      ]
    }
  ]
}
```

Modern LLMs are very good at S3 IAM policies, to the point that if I needed to solve this problem today I doubt I would find it frustrating enough to justify finding or creating a reusable library to help.

2025-11-17 23:24:44+00:00
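For comparison, the tool mentioned in that entry wraps the whole policy dance in a single command. A sketch based on the s3-credentials documentation, with a placeholder bucket name:

```bash
# Create read-only credentials for an existing bucket
# (my-s3-bucket is a placeholder)
s3-credentials create my-s3-bucket --read-only
```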