<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: datasette</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/datasette.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2026-02-20T00:36:39+00:00</updated><author><name>Simon Willison</name></author><entry><title>datasette-files 0.1a1</title><link href="https://simonwillison.net/2026/Feb/20/datasette-files/#atom-tag" rel="alternate"/><published>2026-02-20T00:36:39+00:00</published><updated>2026-02-20T00:36:39+00:00</updated><id>https://simonwillison.net/2026/Feb/20/datasette-files/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-files/releases/tag/0.1a1"&gt;datasette-files 0.1a1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Upload files to Datasette&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-files-s3 0.1a0</title><link href="https://simonwillison.net/2026/Feb/20/datasette-files-s3/#atom-tag" rel="alternate"/><published>2026-02-20T00:32:25+00:00</published><updated>2026-02-20T00:32:25+00:00</updated><id>https://simonwillison.net/2026/Feb/20/datasette-files-s3/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-files-s3/releases/tag/0.1a0"&gt;datasette-files-s3 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;datasette-files S3 backend&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-files 0.1a0</title><link href="https://simonwillison.net/2026/Feb/19/datasette-files/#atom-tag" rel="alternate"/><published>2026-02-19T22:25:14+00:00</published><updated>2026-02-19T22:25:14+00:00</updated><id>https://simonwillison.net/2026/Feb/19/datasette-files/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-files/releases/tag/0.1a0"&gt;datasette-files 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Upload files to Datasette&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-showboat 0.1a1</title><link href="https://simonwillison.net/2026/Feb/18/datasette-showboat/#atom-tag" rel="alternate"/><published>2026-02-18T12:53:00+00:00</published><updated>2026-02-18T12:53:00+00:00</updated><id>https://simonwillison.net/2026/Feb/18/datasette-showboat/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-showboat/releases/tag/0.1a1"&gt;datasette-showboat 0.1a1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Datasette plugin for SHOWBOAT_REMOTE_URL&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/showboat"&gt;showboat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="showboat"/><category term="datasette"/></entry><entry><title>Two new Showboat tools: Chartroom and datasette-showboat</title><link href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#atom-tag" rel="alternate"/><published>2026-02-17T00:43:45+00:00</published><updated>2026-02-17T00:43:45+00:00</updated><id>https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#atom-tag</id><summary type="html">
    &lt;p&gt;I &lt;a href="https://simonwillison.net/2026/Feb/10/showboat-and-rodney/"&gt;introduced Showboat&lt;/a&gt; a week ago - my CLI tool that helps coding agents create Markdown documents that demonstrate the code that they have created. I've been finding new ways to use it on a daily basis, and I've just released two new tools to help get the best out of the Showboat pattern. &lt;a href="https://github.com/simonw/chartroom"&gt;Chartroom&lt;/a&gt; is a CLI charting tool that works well with Showboat, and &lt;a href="https://github.com/simonw/datasette-showboat"&gt;datasette-showboat&lt;/a&gt; lets Showboat's new remote publishing feature incrementally push documents to a Datasette instance.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#showboat-remote-publishing"&gt;Showboat remote publishing&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#datasette-showboat"&gt;datasette-showboat&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#chartroom"&gt;Chartroom&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#how-i-built-chartroom"&gt;How I built Chartroom&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#the-burgeoning-showboat-ecosystem"&gt;The burgeoning Showboat ecosystem&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="showboat-remote-publishing"&gt;Showboat remote publishing&lt;/h4&gt;
&lt;p&gt;I normally use Showboat in Claude Code for web (see &lt;a href="https://simonwillison.net/2026/Feb/16/rodney-claude-code/"&gt;note from this morning&lt;/a&gt;). I've used it in several different projects in the past few days, each of them with a prompt that looks something like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Use "uvx showboat --help" to perform a very thorough investigation of what happens if you use the Python sqlite-chronicle and sqlite-history-json libraries against the same SQLite database table&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's &lt;a href="https://github.com/simonw/research/blob/main/sqlite-chronicle-vs-history-json/demo.md"&gt;the resulting document&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Just telling Claude Code to run &lt;code&gt;uvx showboat --help&lt;/code&gt; is enough for it to learn how to use the tool - the &lt;a href="https://github.com/simonw/showboat/blob/main/help.txt"&gt;help text&lt;/a&gt; is designed to work as a sort of ad-hoc Skill document.&lt;/p&gt;
&lt;p&gt;The one catch with this approach is that I can't &lt;em&gt;see&lt;/em&gt; the new Showboat document until it's finished. I have to wait for Claude to commit the document plus embedded screenshots and push that to a branch in my GitHub repo - then I can view it through the GitHub interface.&lt;/p&gt;
&lt;p&gt;For a while I've been thinking it would be neat to have a remote web server of my own which Claude instances can submit updates to while they are working. Then this morning I realized Showboat might be the ideal mechanism to set that up...&lt;/p&gt;
&lt;p&gt;Showboat &lt;a href="https://github.com/simonw/showboat/releases/tag/v0.6.0"&gt;v0.6.0&lt;/a&gt; adds a new "remote" feature. It's almost invisible to users of the tool itself, instead being configured by an environment variable.&lt;/p&gt;
&lt;p&gt;Set a variable like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;&lt;span class="pl-k"&gt;export&lt;/span&gt; SHOWBOAT_REMOTE_URL=https://www.example.com/submit&lt;span class="pl-k"&gt;?&lt;/span&gt;token=xyz&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;And every time you run a &lt;code&gt;showboat init&lt;/code&gt; or &lt;code&gt;showboat note&lt;/code&gt; or &lt;code&gt;showboat exec&lt;/code&gt; or &lt;code&gt;showboat image&lt;/code&gt; command the resulting document fragments will be POSTed to that API endpoint, in addition to the Showboat Markdown file itself being updated.&lt;/p&gt;
&lt;p&gt;There are &lt;a href="https://github.com/simonw/showboat/blob/v0.6.0/README.md#remote-document-streaming"&gt;full details in the Showboat README&lt;/a&gt; - it's a very simple API format, using regular POST form variables or a multipart form upload for the image attached to &lt;code&gt;showboat image&lt;/code&gt;.&lt;/p&gt;
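The receiving side of this protocol is deliberately tiny. Here's a hypothetical sketch of what an endpoint has to do with each POST - note the form field names ("uuid", "chunk") are invented for illustration, the real field names are in the Showboat README:

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical sketch of a Showboat-style remote receiver. The field
# names ("uuid", "chunk") are invented for illustration - the real
# API format is documented in the Showboat README.
documents = {}  # document UUID, mapped to an ordered list of Markdown fragments

def handle_post(body, token, expected_token):
    """Append one document fragment from an urlencoded POST body."""
    if token != expected_token:
        return 403
    fields = parse_qs(body)
    documents.setdefault(fields["uuid"][0], []).append(fields["chunk"][0])
    return 200

# Simulate the kind of payload a "showboat note" command might send
# to the SHOWBOAT_REMOTE_URL endpoint:
body = urlencode({"uuid": "abc-123", "chunk": "Some *Markdown* text"})
status = handle_post(body, "secret123", "secret123")
```

Because each command only appends a fragment, the server can rebuild and serve the full document at any point while the agent is still working.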
&lt;h4 id="datasette-showboat"&gt;datasette-showboat&lt;/h4&gt;
&lt;p&gt;It's simple enough to build a webapp to receive these updates from Showboat, but I needed one that I could easily deploy and would work well with the rest of my personal ecosystem.&lt;/p&gt;
&lt;p&gt;So I had Claude Code write me a Datasette plugin that could act as a Showboat remote endpoint. I actually had this building at the same time as the Showboat remote feature, a neat example of running &lt;a href="https://simonwillison.net/2025/Oct/5/parallel-coding-agents/"&gt;parallel agents&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/simonw/datasette-showboat"&gt;datasette-showboat&lt;/a&gt;&lt;/strong&gt; is a Datasette plugin that adds a &lt;code&gt;/-/showboat&lt;/code&gt; endpoint to Datasette for viewing documents and a &lt;code&gt;/-/showboat/receive&lt;/code&gt; endpoint for receiving updates from Showboat.&lt;/p&gt;
&lt;p&gt;Here's a very quick way to try it out:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uvx --with datasette-showboat --prerelease=allow \
  datasette showboat.db --create \
  -s plugins.datasette-showboat.database showboat \
  -s plugins.datasette-showboat.token secret123 \
  --root --secret cookie-secret-123&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Click on the "sign in as root" link that shows up in the console, then navigate to &lt;a href="http://127.0.0.1:8001/-/showboat"&gt;http://127.0.0.1:8001/-/showboat&lt;/a&gt; to see the interface.&lt;/p&gt;
&lt;p&gt;Now set your environment variable to point to this instance:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;&lt;span class="pl-k"&gt;export&lt;/span&gt; SHOWBOAT_REMOTE_URL=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;http://127.0.0.1:8001/-/showboat/receive?token=secret123&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;And run Showboat like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uvx showboat init demo.md &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;Showboat Feature Demo&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Refresh that page and you should see this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2026/datasette-showboat-documents.jpg" alt="Title: Showboat. Remote viewer for Showboat documents. Showboat Feature Demo 2026-02-17 00:06 · 6 chunks, UUID. To send showboat output to this server, set the SHOWBOAT_REMOTE_URL environment variable: export SHOWBOAT_REMOTE_URL=&amp;quot;http://127.0.0.1:8001/-/showboat/receive?token=your-token&amp;quot;" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;Click through to the document, then start Claude Code or Codex or your agent of choice and prompt:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Run 'uvx showboat --help' and then use showboat to add to the existing demo.md document with notes and exec and image to demonstrate the tool - fetch a placekitten for the image demo.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The &lt;code&gt;init&lt;/code&gt; command assigns a UUID and title and sends those up to Datasette.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2026/datasette-showboat.gif" alt="Animated demo - in the foreground a terminal window runs Claude Code, which executes various Showboat commands. In the background a Firefox window where the Showboat Feature Demo adds notes then some bash commands, then a placekitten image." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;The best part of this is that it works in Claude Code for web. Run the plugin on a server somewhere (an exercise left up to the reader - I use &lt;a href="https://fly.io/"&gt;Fly.io&lt;/a&gt; to host mine) and set that &lt;code&gt;SHOWBOAT_REMOTE_URL&lt;/code&gt; environment variable in your Claude environment, then any time you tell it to use Showboat the document it creates will be transmitted to your server and viewable in real time.&lt;/p&gt;
&lt;p&gt;I built &lt;a href="https://simonwillison.net/2026/Feb/10/showboat-and-rodney/#rodney-cli-browser-automation-designed-to-work-with-showboat"&gt;Rodney&lt;/a&gt;, a CLI browser automation tool, specifically to work with Showboat. It makes it easy to have a Showboat document load up web pages, interact with them via clicks or injected JavaScript, and capture screenshots to embed in the Showboat document and show the effects.&lt;/p&gt;
&lt;p&gt;This is wildly useful for hacking on web interfaces using Claude Code for web, especially when coupled with the new remote publishing feature. I only got this stuff working this morning and I've already had several sessions where Claude Code has published screenshots of its work in progress, which I've then been able to provide feedback on directly in the Claude session while it's still working.&lt;/p&gt;
&lt;h3 id="chartroom"&gt;Chartroom&lt;/h3&gt;
&lt;p&gt;A few days ago I had another idea for a way to extend the Showboat ecosystem: what if Showboat documents could easily include charts?&lt;/p&gt;
&lt;p&gt;I sometimes fire up Claude Code for data analysis tasks, often telling it to download a SQLite database and then run queries against it to figure out interesting things from the data.&lt;/p&gt;
&lt;p&gt;With a simple CLI tool that produced PNG images I could have Claude use Showboat to build a document with embedded charts to help illustrate its findings.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/simonw/chartroom"&gt;Chartroom&lt;/a&gt;&lt;/strong&gt; is exactly that. It's effectively a thin wrapper around the excellent &lt;a href="https://matplotlib.org/"&gt;matplotlib&lt;/a&gt; Python library, designed to be used by coding agents to create charts that can be embedded in Showboat documents.&lt;/p&gt;
&lt;p&gt;Here's how to render a simple bar chart:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;&lt;span class="pl-c1"&gt;echo&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;name,value&lt;/span&gt;
&lt;span class="pl-s"&gt;Alice,42&lt;/span&gt;
&lt;span class="pl-s"&gt;Bob,28&lt;/span&gt;
&lt;span class="pl-s"&gt;Charlie,35&lt;/span&gt;
&lt;span class="pl-s"&gt;Diana,51&lt;/span&gt;
&lt;span class="pl-s"&gt;Eve,19&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;|&lt;/span&gt; uvx chartroom bar --csv \
  --title &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Sales by Person&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --ylabel &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Sales&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/simonw/chartroom/8812afc02e1310e9eddbb56508b06005ff2c0ed5/demo/1f6851ec-2026-02-14.png"&gt;&lt;img src="https://raw.githubusercontent.com/simonw/chartroom/8812afc02e1310e9eddbb56508b06005ff2c0ed5/demo/1f6851ec-2026-02-14.png" alt="A chart of those numbers, with a title and y-axis label" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It can also do line charts, bar charts, scatter charts, and histograms - as seen in &lt;a href="https://github.com/simonw/chartroom/blob/0.2.1/demo/README.md"&gt;this demo document&lt;/a&gt; that was built using Showboat.&lt;/p&gt;
&lt;p&gt;Chartroom can also generate alt text. If you add &lt;code&gt;-f alt&lt;/code&gt; to the above it will output the alt text for the chart instead of the image:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;&lt;span class="pl-c1"&gt;echo&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;name,value&lt;/span&gt;
&lt;span class="pl-s"&gt;Alice,42&lt;/span&gt;
&lt;span class="pl-s"&gt;Bob,28&lt;/span&gt;
&lt;span class="pl-s"&gt;Charlie,35&lt;/span&gt;
&lt;span class="pl-s"&gt;Diana,51&lt;/span&gt;
&lt;span class="pl-s"&gt;Eve,19&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;|&lt;/span&gt; uvx chartroom bar --csv \
  --title &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Sales by Person&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --ylabel &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Sales&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; -f alt&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Outputs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Sales by Person. Bar chart of value by name — Alice: 42, Bob: 28, Charlie: 35, Diana: 51, Eve: 19
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or you can use &lt;code&gt;-f html&lt;/code&gt; or &lt;code&gt;-f markdown&lt;/code&gt; to get the image tag with alt text directly:&lt;/p&gt;
&lt;div class="highlight highlight-text-md"&gt;&lt;pre&gt;&lt;span class="pl-s"&gt;![&lt;/span&gt;Sales by Person. Bar chart of value by name — Alice: 42, Bob: 28, Charlie: 35, Diana: 51, Eve: 19&lt;span class="pl-s"&gt;]&lt;/span&gt;&lt;span class="pl-s"&gt;(&lt;/span&gt;&lt;span class="pl-corl"&gt;/Users/simon/chart-7.png&lt;/span&gt;&lt;span class="pl-s"&gt;)&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I added support for Markdown images with alt text to Showboat in &lt;a href="https://github.com/simonw/showboat/releases/tag/v0.5.0"&gt;v0.5.0&lt;/a&gt;, to complement this feature of Chartroom.&lt;/p&gt;
&lt;p&gt;Finally, Chartroom has support for different &lt;a href="https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html"&gt;matplotlib styles&lt;/a&gt;. I had Claude build a Showboat document to demonstrate these all in one place - you can see that at &lt;a href="https://github.com/simonw/chartroom/blob/main/demo/styles.md"&gt;demo/styles.md&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="how-i-built-chartroom"&gt;How I built Chartroom&lt;/h4&gt;
&lt;p&gt;I started the Chartroom repository with my &lt;a href="https://github.com/simonw/click-app"&gt;click-app&lt;/a&gt; cookiecutter template, then told a fresh Claude Code for web session:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We are building a Python CLI tool which uses matplotlib to generate a PNG image containing a chart. It will have multiple sub commands for different chart types, controlled by command line options. Everything you need to know to use it will be available in the single "chartroom --help" output.&lt;/p&gt;
&lt;p&gt;It will accept data from files or standard input as CSV or TSV or JSON, similar to how sqlite-utils accepts data - clone simonw/sqlite-utils to /tmp for reference there. Clone matplotlib/matplotlib for reference as well&lt;/p&gt;
&lt;p&gt;It will also accept data from --sql path/to/sqlite.db "select ..." which runs in read-only mode&lt;/p&gt;
&lt;p&gt;Start by asking clarifying questions - do not use the ask user tool though it is broken - and generate a spec for me to approve&lt;/p&gt;
&lt;p&gt;Once approved proceed using red/green TDD running tests with "uv run pytest"&lt;/p&gt;
&lt;p&gt;Also while building maintain a demo/README.md document using the "uvx showboat --help" tool - each time you get a new chart type working commit the tests, implementation, root level README update and a new version of that demo/README.md document with an inline image demo of the new chart type (which should be a UUID image filename managed by the showboat image command and should be stored in the demo/ folder)&lt;/p&gt;
&lt;p&gt;Make sure "uv build" runs cleanly without complaining about extra directories but also ensure dist/ and uv.lock are in gitignore&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This got most of the work done. You can see the rest &lt;a href="https://github.com/simonw/chartroom/pulls?q=is%3Apr+is%3Aclosed"&gt;in the PRs&lt;/a&gt; that followed.&lt;/p&gt;
&lt;h4 id="the-burgeoning-showboat-ecosystem"&gt;The burgeoning Showboat ecosystem&lt;/h4&gt;
&lt;p&gt;The Showboat family of tools now consists of &lt;a href="https://github.com/simonw/showboat"&gt;Showboat&lt;/a&gt; itself, &lt;a href="https://github.com/simonw/rodney"&gt;Rodney&lt;/a&gt; for browser automation, &lt;a href="https://github.com/simonw/chartroom"&gt;Chartroom&lt;/a&gt; for charting and &lt;a href="https://github.com/simonw/datasette-showboat"&gt;datasette-showboat&lt;/a&gt; for streaming remote Showboat documents to Datasette.&lt;/p&gt;
&lt;p&gt;I'm enjoying how these tools can operate together based on a very loose set of conventions. If a tool can output a path to an image Showboat can include that image in a document. Any tool that can output text can be used with Showboat.&lt;/p&gt;
&lt;p&gt;I'll almost certainly be building more tools that fit this pattern. They're very quick to knock out!&lt;/p&gt;
&lt;p&gt;The environment variable mechanism for Showboat's remote streaming is a fun hack too - so far I'm just using it to stream documents somewhere else, but it's effectively a webhook extension mechanism that could likely be used for all sorts of things I haven't thought of yet.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-code"&gt;claude-code&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/showboat"&gt;showboat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/charting"&gt;charting&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="ai"/><category term="claude-code"/><category term="llms"/><category term="coding-agents"/><category term="ai-assisted-programming"/><category term="datasette"/><category term="generative-ai"/><category term="projects"/><category term="showboat"/><category term="charting"/></entry><entry><title>datasette-showboat 0.1a0</title><link href="https://simonwillison.net/2026/Feb/16/datasette-showboat/#atom-tag" rel="alternate"/><published>2026-02-16T19:43:11+00:00</published><updated>2026-02-16T19:43:11+00:00</updated><id>https://simonwillison.net/2026/Feb/16/datasette-showboat/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-showboat/releases/tag/0.1a0"&gt;datasette-showboat 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Datasette plugin for SHOWBOAT_REMOTE_URL&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/showboat"&gt;showboat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="showboat"/><category term="datasette"/></entry><entry><title>datasette-pins 0.1a7</title><link href="https://simonwillison.net/2026/Feb/5/datasette-pins/#atom-tag" rel="alternate"/><published>2026-02-05T16:34:33+00:00</published><updated>2026-02-05T16:34:33+00:00</updated><id>https://simonwillison.net/2026/Feb/5/datasette-pins/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-pins/releases/tag/0.1a7"&gt;datasette-pins 0.1a7&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Pin databases, tables, and other items to the Datasette homepage&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-youtube-embed 0.2</title><link href="https://simonwillison.net/2026/Feb/4/datasette-youtube-embed/#atom-tag" rel="alternate"/><published>2026-02-04T16:46:34+00:00</published><updated>2026-02-04T16:46:34+00:00</updated><id>https://simonwillison.net/2026/Feb/4/datasette-youtube-embed/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-youtube-embed/releases/tag/0.2"&gt;datasette-youtube-embed 0.2&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Turn YouTube URLs into embedded players in Datasette&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>Distributing Go binaries like sqlite-scanner through PyPI using go-to-wheel</title><link href="https://simonwillison.net/2026/Feb/4/distributing-go-binaries/#atom-tag" rel="alternate"/><published>2026-02-04T14:59:47+00:00</published><updated>2026-02-04T14:59:47+00:00</updated><id>https://simonwillison.net/2026/Feb/4/distributing-go-binaries/#atom-tag</id><summary type="html">
    &lt;p&gt;I've been exploring Go for building small, fast and self-contained binary applications recently. I'm enjoying how there's generally one obvious way to do things and the resulting code is boring and readable - and something that LLMs are very competent at writing. The one catch is distribution, but it turns out publishing Go binaries to PyPI means any Go binary can be just a &lt;code&gt;uvx package-name&lt;/code&gt; call away.&lt;/p&gt;
&lt;h4 id="sqlite-scanner"&gt;sqlite-scanner&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/sqlite-scanner"&gt;sqlite-scanner&lt;/a&gt; is my new Go CLI tool for scanning a filesystem for SQLite database files.&lt;/p&gt;
&lt;p&gt;It works by checking if the first 16 bytes of the file exactly match the SQLite magic number sequence &lt;code&gt;SQLite format 3\x00&lt;/code&gt;. It can search one or more folders recursively, spinning up concurrent goroutines to accelerate the scan. It streams out results as it finds them in plain text, JSON or newline-delimited JSON. It can optionally display the file sizes as well.&lt;/p&gt;
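That header check is trivial to replicate. Here's a minimal Python equivalent of the detection logic for a single file (the Go original adds the recursive, concurrent directory walk):

```python
import os
import sqlite3
import tempfile

MAGIC = b"SQLite format 3\x00"  # the 16-byte header every SQLite file starts with

def is_sqlite(path):
    """Return True if the file's first 16 bytes match the SQLite magic number."""
    with open(path, "rb") as f:
        return f.read(16) == MAGIC

# Compare a real SQLite database against a plain text file:
tmp = tempfile.mkdtemp()
db_path = os.path.join(tmp, "demo.db")
txt_path = os.path.join(tmp, "notes.txt")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE t (id INTEGER)")
conn.commit()
conn.close()
with open(txt_path, "w") as f:
    f.write("not a database")
```

Reading only 16 bytes per candidate file is what keeps a scan like this fast even across a large filesystem.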
&lt;p&gt;To try it out you can download a release from the &lt;a href="https://github.com/simonw/sqlite-scanner/releases"&gt;GitHub releases&lt;/a&gt; - and then &lt;a href="https://support.apple.com/en-us/102445"&gt;jump through macOS hoops&lt;/a&gt; to execute an "unsafe" binary. Or you can clone the repo and compile it with Go. Or... you can run the binary like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uvx sqlite-scanner
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By default this will search your current directory for SQLite databases. You can pass one or more directories as arguments:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uvx sqlite-scanner ~ /tmp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add &lt;code&gt;--json&lt;/code&gt; for JSON output, &lt;code&gt;--size&lt;/code&gt; to include file sizes or &lt;code&gt;--jsonl&lt;/code&gt; for newline-delimited JSON. Here's a demo:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uvx sqlite-scanner ~ --jsonl --size
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/sqlite-scanner-demo.gif" alt="running that command produces a sequence of JSON objects, each with a path and a size key" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;If you haven't been uv-pilled yet you can instead install &lt;code&gt;sqlite-scanner&lt;/code&gt; using &lt;code&gt;pip install sqlite-scanner&lt;/code&gt; and then run &lt;code&gt;sqlite-scanner&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To get a permanent copy with &lt;code&gt;uv&lt;/code&gt; use &lt;code&gt;uv tool install sqlite-scanner&lt;/code&gt;.&lt;/p&gt;
&lt;h4 id="how-the-python-package-works"&gt;How the Python package works&lt;/h4&gt;
&lt;p&gt;The reason this is worth doing is that &lt;code&gt;pip&lt;/code&gt;, &lt;code&gt;uv&lt;/code&gt; and &lt;a href="https://pypi.org/"&gt;PyPI&lt;/a&gt; will work together to identify the correct compiled binary for your operating system and architecture.&lt;/p&gt;
&lt;p&gt;This is driven by file names. If you visit &lt;a href="https://pypi.org/project/sqlite-scanner/#files"&gt;the PyPI downloads for sqlite-scanner&lt;/a&gt; you'll see the following files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-win_arm64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-win_amd64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-musllinux_1_2_x86_64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-musllinux_1_2_aarch64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-manylinux_2_17_x86_64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-manylinux_2_17_aarch64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-macosx_11_0_arm64.whl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sqlite_scanner-0.1.1-py3-none-macosx_10_9_x86_64.whl&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When I run &lt;code&gt;pip install sqlite-scanner&lt;/code&gt; or &lt;code&gt;uvx sqlite-scanner&lt;/code&gt; on my Apple Silicon Mac laptop Python's packaging magic ensures I get that &lt;code&gt;macosx_11_0_arm64.whl&lt;/code&gt; variant.&lt;/p&gt;
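The matching works off the tags baked into the filename. Per the wheel specification (PEP 427), a wheel is named {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl, so the relevant pieces fall out of a simple split:

```python
def parse_wheel_filename(filename):
    """Split a wheel filename into its PEP 427 components.
    (Ignores the optional build tag, which these wheels don't use.)"""
    stem = filename.removesuffix(".whl")
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "name": name,
        "version": version,
        "python_tag": python_tag,
        "abi_tag": abi_tag,
        "platform_tag": platform_tag,
    }

info = parse_wheel_filename("sqlite_scanner-0.1.1-py3-none-macosx_11_0_arm64.whl")
```

The py3-none pair says any Python 3 will do and no extension-module ABI is involved - the Go executable rides along as package data, so only the platform tag differs across the eight wheels.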
&lt;p&gt;Here's &lt;a href="https://tools.simonwillison.net/zip-wheel-explorer?url=https%3A%2F%2Ffiles.pythonhosted.org%2Fpackages%2F88%2Fb1%2F17a716635d2733fec53ba0a8267f85bd6b6cf882c6b29301bc711fba212c%2Fsqlite_scanner-0.1.1-py3-none-macosx_11_0_arm64.whl#sqlite_scanner/__init__.py"&gt;what's in the wheel&lt;/a&gt;, which is a zip file with a &lt;code&gt;.whl&lt;/code&gt; extension.&lt;/p&gt;
&lt;p&gt;In addition to the &lt;code&gt;bin/sqlite-scanner&lt;/code&gt; binary itself, the most important file is &lt;code&gt;sqlite_scanner/__init__.py&lt;/code&gt;, which includes the following:&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;import&lt;/span&gt; os
&lt;span class="pl-k"&gt;import&lt;/span&gt; stat
&lt;span class="pl-k"&gt;import&lt;/span&gt; subprocess
&lt;span class="pl-k"&gt;import&lt;/span&gt; sys


&lt;span class="pl-k"&gt;def&lt;/span&gt; &lt;span class="pl-en"&gt;get_binary_path&lt;/span&gt;():
    &lt;span class="pl-s"&gt;"""Return the path to the bundled binary."""&lt;/span&gt;
    &lt;span class="pl-s1"&gt;binary&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;os&lt;/span&gt;.&lt;span class="pl-c1"&gt;path&lt;/span&gt;.&lt;span class="pl-c1"&gt;join&lt;/span&gt;(&lt;span class="pl-s1"&gt;os&lt;/span&gt;.&lt;span class="pl-c1"&gt;path&lt;/span&gt;.&lt;span class="pl-c1"&gt;dirname&lt;/span&gt;(&lt;span class="pl-s1"&gt;__file__&lt;/span&gt;), &lt;span class="pl-s"&gt;"bin"&lt;/span&gt;, &lt;span class="pl-s"&gt;"sqlite-scanner"&lt;/span&gt;)
 
    &lt;span class="pl-c"&gt;# Ensure binary is executable on Unix&lt;/span&gt;
    &lt;span class="pl-k"&gt;if&lt;/span&gt; &lt;span class="pl-s1"&gt;sys&lt;/span&gt;.&lt;span class="pl-c1"&gt;platform&lt;/span&gt; &lt;span class="pl-c1"&gt;!=&lt;/span&gt; &lt;span class="pl-s"&gt;"win32"&lt;/span&gt;:
        &lt;span class="pl-s1"&gt;current_mode&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;os&lt;/span&gt;.&lt;span class="pl-c1"&gt;stat&lt;/span&gt;(&lt;span class="pl-s1"&gt;binary&lt;/span&gt;).&lt;span class="pl-c1"&gt;st_mode&lt;/span&gt;
        &lt;span class="pl-k"&gt;if&lt;/span&gt; &lt;span class="pl-c1"&gt;not&lt;/span&gt; (&lt;span class="pl-s1"&gt;current_mode&lt;/span&gt; &lt;span class="pl-c1"&gt;&amp;amp;&lt;/span&gt; &lt;span class="pl-s1"&gt;stat&lt;/span&gt;.&lt;span class="pl-c1"&gt;S_IXUSR&lt;/span&gt;):
            &lt;span class="pl-s1"&gt;os&lt;/span&gt;.&lt;span class="pl-c1"&gt;chmod&lt;/span&gt;(&lt;span class="pl-s1"&gt;binary&lt;/span&gt;, &lt;span class="pl-s1"&gt;current_mode&lt;/span&gt; &lt;span class="pl-c1"&gt;|&lt;/span&gt; &lt;span class="pl-s1"&gt;stat&lt;/span&gt;.&lt;span class="pl-c1"&gt;S_IXUSR&lt;/span&gt; &lt;span class="pl-c1"&gt;|&lt;/span&gt; &lt;span class="pl-s1"&gt;stat&lt;/span&gt;.&lt;span class="pl-c1"&gt;S_IXGRP&lt;/span&gt; &lt;span class="pl-c1"&gt;|&lt;/span&gt; &lt;span class="pl-s1"&gt;stat&lt;/span&gt;.&lt;span class="pl-c1"&gt;S_IXOTH&lt;/span&gt;)
 
    &lt;span class="pl-k"&gt;return&lt;/span&gt; &lt;span class="pl-s1"&gt;binary&lt;/span&gt;
 
 
&lt;span class="pl-k"&gt;def&lt;/span&gt; &lt;span class="pl-en"&gt;main&lt;/span&gt;():
    &lt;span class="pl-s"&gt;"""Execute the bundled binary."""&lt;/span&gt;
    &lt;span class="pl-s1"&gt;binary&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-en"&gt;get_binary_path&lt;/span&gt;()
 
    &lt;span class="pl-k"&gt;if&lt;/span&gt; &lt;span class="pl-s1"&gt;sys&lt;/span&gt;.&lt;span class="pl-c1"&gt;platform&lt;/span&gt; &lt;span class="pl-c1"&gt;==&lt;/span&gt; &lt;span class="pl-s"&gt;"win32"&lt;/span&gt;:
        &lt;span class="pl-c"&gt;# On Windows, use subprocess to properly handle signals&lt;/span&gt;
        &lt;span class="pl-s1"&gt;sys&lt;/span&gt;.&lt;span class="pl-c1"&gt;exit&lt;/span&gt;(&lt;span class="pl-s1"&gt;subprocess&lt;/span&gt;.&lt;span class="pl-c1"&gt;call&lt;/span&gt;([&lt;span class="pl-s1"&gt;binary&lt;/span&gt;] &lt;span class="pl-c1"&gt;+&lt;/span&gt; &lt;span class="pl-s1"&gt;sys&lt;/span&gt;.&lt;span class="pl-c1"&gt;argv&lt;/span&gt;[&lt;span class="pl-c1"&gt;1&lt;/span&gt;:]))
    &lt;span class="pl-k"&gt;else&lt;/span&gt;:
        &lt;span class="pl-c"&gt;# On Unix, exec replaces the process&lt;/span&gt;
        &lt;span class="pl-s1"&gt;os&lt;/span&gt;.&lt;span class="pl-c1"&gt;execvp&lt;/span&gt;(&lt;span class="pl-s1"&gt;binary&lt;/span&gt;, [&lt;span class="pl-s1"&gt;binary&lt;/span&gt;] &lt;span class="pl-c1"&gt;+&lt;/span&gt; &lt;span class="pl-s1"&gt;sys&lt;/span&gt;.&lt;span class="pl-c1"&gt;argv&lt;/span&gt;[&lt;span class="pl-c1"&gt;1&lt;/span&gt;:])&lt;/pre&gt;
&lt;p&gt;That &lt;code&gt;main()&lt;/code&gt; function - also called from &lt;code&gt;sqlite_scanner/__main__.py&lt;/code&gt; - locates the binary and executes it when the Python package itself is executed, using the &lt;code&gt;sqlite-scanner = sqlite_scanner:main&lt;/code&gt; entry point defined in the wheel.&lt;/p&gt;
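&lt;p&gt;For reference, that entry point corresponds to metadata along these lines, expressed here in &lt;code&gt;pyproject.toml&lt;/code&gt; form - this is a sketch; the generated wheel may declare it differently:&lt;/p&gt;

```toml
# Sketch of the console-script entry point in pyproject.toml form.
# The wheel built by go-to-wheel may encode this metadata differently.
[project]
name = "sqlite-scanner"
version = "0.1.1"

[project.scripts]
# Maps the `sqlite-scanner` command to the main() shim shown above
sqlite-scanner = "sqlite_scanner:main"
```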
&lt;h4 id="which-means-we-can-use-it-as-a-dependency"&gt;Which means we can use it as a dependency&lt;/h4&gt;
&lt;p&gt;Using PyPI as a distribution platform for Go binaries feels a tiny bit abusive, although &lt;a href="https://simonwillison.net/2022/May/23/bundling-binary-tools-in-python-wheels/"&gt;there is plenty of precedent&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I’ll justify it by pointing out that this means &lt;strong&gt;we can use Go binaries as dependencies&lt;/strong&gt; for other Python packages now.&lt;/p&gt;
&lt;p&gt;That's genuinely useful! It means that any functionality available in a cross-platform Go binary can now be subsumed into a Python package. Python is really good at running subprocesses, so this opens up a whole world of useful tricks that we can bake into our Python tools.&lt;/p&gt;
&lt;p&gt;To demonstrate this, I built &lt;a href="https://github.com/simonw/datasette-scan"&gt;datasette-scan&lt;/a&gt; - a new Datasette plugin which depends on &lt;code&gt;sqlite-scanner&lt;/code&gt; and then uses that Go binary to scan a folder for SQLite databases and attach them to a Datasette instance.&lt;/p&gt;
&lt;p&gt;Here's how to use that (without even installing anything first, thanks &lt;code&gt;uv&lt;/code&gt;) to explore any SQLite databases in your Downloads folder:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uv run --with datasette-scan datasette scan &lt;span class="pl-k"&gt;~&lt;/span&gt;/Downloads&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If you peek at the code you'll see it &lt;a href="https://github.com/simonw/datasette-scan/blob/1a2b6d1e6b04c8cd05f5676ff7daa877efd99f08/pyproject.toml#L14"&gt;depends on sqlite-scanner&lt;/a&gt; in &lt;code&gt;pyproject.toml&lt;/code&gt; and calls it using &lt;code&gt;subprocess.run()&lt;/code&gt; against &lt;code&gt;sqlite_scanner.get_binary_path()&lt;/code&gt; in its own &lt;a href="https://github.com/simonw/datasette-scan/blob/1a2b6d1e6b04c8cd05f5676ff7daa877efd99f08/datasette_scan/__init__.py#L38-L58"&gt;scan_directories() function&lt;/a&gt;.&lt;/p&gt;
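&lt;p&gt;The core of that pattern is simple enough to sketch. In this illustration the Python interpreter stands in for the Go binary, and the one-path-per-line output format is an assumption - in the real plugin the argv prefix would be &lt;code&gt;[sqlite_scanner.get_binary_path()]&lt;/code&gt;:&lt;/p&gt;

```python
import subprocess
import sys


def scan_directories(scanner_argv, directory):
    """Run a scanner binary against a directory and collect the paths it
    prints, one per line. The one-path-per-line format is an assumption
    for illustration; the real sqlite-scanner output may differ."""
    result = subprocess.run(
        scanner_argv + [directory],
        capture_output=True,
        text=True,
        check=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]


# The Python interpreter stands in for the Go binary here; the real
# plugin would use [sqlite_scanner.get_binary_path()] instead.
fake_scanner = [sys.executable, "-c", "print('/tmp/demo.db')"]
paths = scan_directories(fake_scanner, "/tmp")
print(paths)
```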
&lt;p&gt;I've been exploring this pattern for other, non-Go binaries recently - here's &lt;a href="https://github.com/simonw/tools/blob/main/python/livestream-gif.py"&gt;a recent script&lt;/a&gt; that depends on &lt;a href="https://pypi.org/project/static-ffmpeg/"&gt;static-ffmpeg&lt;/a&gt; to ensure that &lt;code&gt;ffmpeg&lt;/code&gt; is available for the script to use.&lt;/p&gt;
&lt;h4 id="building-python-wheels-from-go-packages-with-go-to-wheel"&gt;Building Python wheels from Go packages with go-to-wheel&lt;/h4&gt;
&lt;p&gt;After trying this pattern myself a couple of times I realized it would be useful to have a tool to automate the process.&lt;/p&gt;
&lt;p&gt;I first &lt;a href="https://claude.ai/share/2d9ced56-b3e8-4651-83cc-860b9b419187"&gt;brainstormed with Claude&lt;/a&gt; to check that there was no existing tool to do this. It pointed me to &lt;a href="https://www.maturin.rs/bindings.html#bin"&gt;maturin bin&lt;/a&gt; which helps distribute Rust projects using Python wheels, and &lt;a href="https://github.com/Bing-su/pip-binary-factory"&gt;pip-binary-factory&lt;/a&gt; which bundles all sorts of other projects, but did not identify anything that addressed the exact problem I was looking to solve.&lt;/p&gt;
&lt;p&gt;So I &lt;a href="https://gisthost.github.io/?41f04e4eb823b1ceb888d9a28c2280dd/index.html"&gt;had Claude Code for web build the first version&lt;/a&gt;, then refined the code locally on my laptop with the help of more Claude Code and a little bit of OpenAI Codex too, just to mix things up.&lt;/p&gt;
&lt;p&gt;The full documentation is in the &lt;a href="https://github.com/simonw/go-to-wheel"&gt;simonw/go-to-wheel&lt;/a&gt; repository. I've published that tool to PyPI so now you can run it using:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uvx go-to-wheel --help&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;sqlite-scanner&lt;/code&gt; package you can &lt;a href="https://pypi.org/project/sqlite-scanner/"&gt;see on PyPI&lt;/a&gt; was built using &lt;code&gt;go-to-wheel&lt;/code&gt; like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uvx go-to-wheel &lt;span class="pl-k"&gt;~&lt;/span&gt;/dev/sqlite-scanner \
  --set-version-var main.version \
  --version 0.1.1 \
  --readme README.md \
  --author &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Simon Willison&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; \
  --url https://github.com/simonw/sqlite-scanner \
  --description &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;Scan directories for SQLite databases&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This created a set of wheels in the &lt;code&gt;dist/&lt;/code&gt; folder. I tested one of them like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uv run --with dist/sqlite_scanner-0.1.1-py3-none-macosx_11_0_arm64.whl \
  sqlite-scanner --version&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;When that spat out the correct version number I was confident everything had worked as planned, so I pushed the whole set of wheels to PyPI using &lt;code&gt;twine upload&lt;/code&gt; like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uvx twine upload dist/&lt;span class="pl-k"&gt;*&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I had to paste in a PyPI API token I had saved previously.&lt;/p&gt;
&lt;h4 id="i-expect-to-use-this-pattern-a-lot"&gt;I expect to use this pattern a lot&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;sqlite-scanner&lt;/code&gt; is very clearly meant as a proof-of-concept for this wider pattern - Python is very much capable of recursively crawling a directory structure looking for files that start with a specific byte prefix on its own!&lt;/p&gt;
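&lt;p&gt;For illustration, here's roughly what that pure-Python equivalent looks like - every SQLite database file begins with the 16-byte header &lt;code&gt;SQLite format 3\0&lt;/code&gt;:&lt;/p&gt;

```python
import os

# Every SQLite database file begins with these 16 bytes
SQLITE_MAGIC = b"SQLite format 3\x00"


def find_sqlite_files(root):
    """Walk a directory tree and yield paths whose first 16 bytes match
    the SQLite header - a pure-Python stand-in for the Go scanner."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if f.read(16) == SQLITE_MAGIC:
                        yield path
            except OSError:
                continue  # skip unreadable files
```

&lt;p&gt;The Go version wins on speed and concurrency, not capability.&lt;/p&gt;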
&lt;p&gt;That said, I think there's a &lt;em&gt;lot&lt;/em&gt; to be said for this pattern. Go is a great complement to Python - it's fast, compiles to small self-contained binaries, has excellent concurrency support and a rich ecosystem of libraries.&lt;/p&gt;
&lt;p&gt;Like Python, Go has a strong standard library. It's particularly good for HTTP tooling - I've built several HTTP proxies in the past using Go's excellent &lt;code&gt;net/http/httputil.ReverseProxy&lt;/code&gt; handler.&lt;/p&gt;
&lt;p&gt;I've also been experimenting with &lt;a href="https://github.com/wazero/wazero"&gt;wazero&lt;/a&gt;, Go's robust and mature zero-dependency WebAssembly runtime, as part of my ongoing quest for the ideal sandbox for running untrusted code. &lt;a href="https://github.com/simonw/research/tree/main/wasm-repl-cli"&gt;Here's my latest experiment&lt;/a&gt; with that library.&lt;/p&gt;
&lt;p&gt;Being able to seamlessly integrate Go binaries into Python projects without the end user having to think about Go at all - they &lt;code&gt;pip install&lt;/code&gt; and everything Just Works - feels like a valuable addition to my toolbox.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/go"&gt;go&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pypi"&gt;pypi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/packaging"&gt;packaging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="uv"/><category term="go"/><category term="pypi"/><category term="packaging"/><category term="ai-assisted-programming"/><category term="python"/><category term="datasette"/><category term="projects"/><category term="sqlite"/></entry><entry><title>datasette-scan 0.1.1</title><link href="https://simonwillison.net/2026/Feb/4/datasette-scan/#atom-tag" rel="alternate"/><published>2026-02-04T05:26:45+00:00</published><updated>2026-02-04T05:26:45+00:00</updated><id>https://simonwillison.net/2026/Feb/4/datasette-scan/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-scan/releases/tag/0.1.1"&gt;datasette-scan 0.1.1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Datasette plugin for scanning for SQLite files&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-scan 0.1</title><link href="https://simonwillison.net/2026/Feb/4/datasette-scan-2/#atom-tag" rel="alternate"/><published>2026-02-04T04:52:46+00:00</published><updated>2026-02-04T04:52:46+00:00</updated><id>https://simonwillison.net/2026/Feb/4/datasette-scan-2/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-scan/releases/tag/0.1"&gt;datasette-scan 0.1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Datasette plugin for scanning for SQLite files&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>Introducing the Codex app</title><link href="https://simonwillison.net/2026/Feb/2/introducing-the-codex-app/#atom-tag" rel="alternate"/><published>2026-02-02T19:54:36+00:00</published><updated>2026-02-02T19:54:36+00:00</updated><id>https://simonwillison.net/2026/Feb/2/introducing-the-codex-app/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://openai.com/index/introducing-the-codex-app/"&gt;Introducing the Codex app&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;OpenAI just released a new macOS app for their Codex coding agent. I've had a few days of preview access - it's a solid app that provides a nice UI over the capabilities of the Codex CLI agent and adds some interesting new features, most notably first-class support for &lt;a href="https://developers.openai.com/codex/skills"&gt;Skills&lt;/a&gt;, and &lt;a href="https://developers.openai.com/codex/app/automations"&gt;Automations&lt;/a&gt; for running scheduled tasks.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Screenshot of a macOS desktop application with a dark sidebar and light main content area. Left sidebar shows navigation items &amp;quot;New thread&amp;quot;, &amp;quot;Automations&amp;quot;, &amp;quot;Skills&amp;quot;, and a &amp;quot;Threads&amp;quot; section containing two project folders: &amp;quot;local-codex-scratch&amp;quot; with tasks &amp;quot;Reply to greeting task 2h&amp;quot; and &amp;quot;List Codex.app contents 3h&amp;quot;, and &amp;quot;shot-scraper&amp;quot; with tasks &amp;quot;Validate tests on dependency... 3h&amp;quot;, &amp;quot;Document shot-scraper CLI usage 18h&amp;quot; (highlighted), and &amp;quot;Run this: uv run pytest -x 1d&amp;quot;. Bottom left shows &amp;quot;Personal&amp;quot;. Top bar displays &amp;quot;Document shot-scraper CLI usage shot-scraper&amp;quot; with &amp;quot;Open&amp;quot; and &amp;quot;Commit&amp;quot; dropdown buttons. Main content shows a conversation: user message &amp;quot;Build a skill describing how to use the shot-scraper CLI tool&amp;quot; followed by assistant responses: &amp;quot;I'll use the skill-creator skill, scan the repo for existing shot-scraper docs/commands, then draft a concise skill in this workspace and sanity-check it.&amp;quot; then &amp;quot;I've confirmed the skill-creator scripts exist. Next I'll inspect the CLI entrypoint and docs, then scaffold a skill under skills/ in this repo.&amp;quot; then &amp;quot;I've scaffolded skills/shot-scraper-cli/. I'll now replace the template SKILL.md with a task-focused skill and add a compact CLI reference under references/.&amp;quot; then &amp;quot;I'll run the skill packager/validator next to ensure the new skill structure is valid.&amp;quot; then &amp;quot;Done — I created and packaged the skill. Key outputs: • Skill definition: SKILL.md • Compact reference: cli-reference.md • Packaged skill: shot-scraper-cli.skill&amp;quot;. 
Bottom shows input field with placeholder &amp;quot;Ask for follow-up changes&amp;quot;, &amp;quot;+ Custom Medium&amp;quot; dropdown, and &amp;quot;Local&amp;quot; and &amp;quot;main&amp;quot; branch indicators." src="https://static.simonwillison.net/static/2026/codex-app.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The app is built with Electron and Node.js. Automations track their state in a SQLite database - here's what that looks like if you explore it with &lt;code&gt;uvx datasette ~/.codex/sqlite/codex-dev.db&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Database schema documentation on light gray background showing three tables: &amp;quot;automation_runs&amp;quot; (teal underlined link) with italic columns &amp;quot;thread_id, automation_id, status, read_at, thread_title, source_cwd, inbox_title, inbox_summary, created_at, updated_at, archived_user_message, archived_assistant_message, archived_reason&amp;quot; and &amp;quot;1 row&amp;quot;; &amp;quot;automations&amp;quot; (teal underlined link) with italic columns &amp;quot;id, name, prompt, status, next_run_at, last_run_at, cwds, rrule, created_at, updated_at&amp;quot; and &amp;quot;1 row&amp;quot;; &amp;quot;inbox_items&amp;quot; (teal underlined link) with italic columns &amp;quot;id, title, description, thread_id, read_at, created_at&amp;quot; and &amp;quot;0 rows&amp;quot;." src="https://static.simonwillison.net/static/2026/codex-dev-sqlite.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Here’s an interactive copy of that database &lt;a href="https://lite.datasette.io/?url=https%3A%2F%2Fgist.githubusercontent.com%2Fsimonw%2F274c4ecfaf959890011810e6881864fe%2Fraw%2F51fdf25c9426b76e9693ccc0d9254f64ceeef819%2Fcodex-dev.db#/codex-dev"&gt;in Datasette Lite&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The announcement gives us a hint at some usage numbers for Codex overall - the holiday spike is notable:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Since the launch of GPT‑5.2-Codex in mid-December, overall Codex usage has doubled, and in the past month, more than a million developers have used Codex.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Automations are currently restricted in that they can only run when your laptop is powered on. OpenAI promise that cloud-based automations are coming soon, which will resolve this limitation.&lt;/p&gt;
&lt;p&gt;They chose Electron so they could target other operating systems in the future, with Windows “&lt;a href="https://news.ycombinator.com/item?id=46859054#46859673"&gt;coming very soon&lt;/a&gt;”. OpenAI’s Alexander Embiricos noted &lt;a href="https://news.ycombinator.com/item?id=46859054#46859693"&gt;on the Hacker News thread&lt;/a&gt; that:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;it's taking us some time to get really solid sandboxing working on Windows, where there are fewer OS-level primitives for it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Like Claude Code, Codex is really a general agent harness disguised as a tool for programmers. OpenAI acknowledge that here:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Codex is built on a simple premise: everything is controlled by code. The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of technical and knowledge work. [...] We’ve focused on making Codex the best coding agent, which has also laid the foundation for it to become a strong agent for a broad range of knowledge work tasks that extend beyond writing code.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Claude Code had to &lt;a href="https://simonwillison.net/2026/Jan/12/claude-cowork/"&gt;rebrand to Cowork&lt;/a&gt; to better cover the general knowledge work case. OpenAI can probably get away with keeping the Codex name for both.&lt;/p&gt;
&lt;p&gt;OpenAI have made Codex available on free and &lt;a href="https://simonwillison.net/2026/Jan/16/chatgpt-ads/"&gt;Go&lt;/a&gt; plans for "a limited time" (update: Sam Altman &lt;a href="https://x.com/sama/status/2018437537103269909"&gt;says two months&lt;/a&gt;), during which they are also doubling the rate limits for paying users.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-agents"&gt;ai-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/codex-cli"&gt;codex-cli&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/electron"&gt;electron&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sandboxing"&gt;sandboxing&lt;/a&gt;&lt;/p&gt;



</summary><category term="sqlite"/><category term="ai-agents"/><category term="openai"/><category term="ai"/><category term="llms"/><category term="codex-cli"/><category term="coding-agents"/><category term="datasette"/><category term="generative-ai"/><category term="electron"/><category term="sandboxing"/></entry><entry><title>Datasette 1.0a24</title><link href="https://simonwillison.net/2026/Jan/29/datasette-10a24/#atom-tag" rel="alternate"/><published>2026-01-29T17:21:51+00:00</published><updated>2026-01-29T17:21:51+00:00</updated><id>https://simonwillison.net/2026/Jan/29/datasette-10a24/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.datasette.io/en/latest/changelog.html#a24-2026-01-29"&gt;Datasette 1.0a24&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;New Datasette alpha this morning. Key new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Datasette's &lt;code&gt;Request&lt;/code&gt; object can now handle &lt;code&gt;multipart/form-data&lt;/code&gt; file uploads via the new &lt;a href="https://docs.datasette.io/en/latest/internals.html#internals-formdata"&gt;await request.form(files=True)&lt;/a&gt;  method. I plan to use this for a &lt;code&gt;datasette-files&lt;/code&gt; plugin to support attaching files to rows of data.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://docs.datasette.io/en/latest/contributing.html#setting-up-a-development-environment"&gt;recommended development environment&lt;/a&gt; for hacking on Datasette itself now uses &lt;a href="https://github.com/astral-sh/uv"&gt;uv&lt;/a&gt;. Crucially, you can clone Datasette and run &lt;code&gt;uv run pytest&lt;/code&gt; to run the tests without needing to manually create a virtual environment or install dependencies first, thanks to the &lt;a href="https://til.simonwillison.net/uv/dependency-groups"&gt;dev dependency group pattern&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A new &lt;code&gt;?_extra=render_cell&lt;/code&gt; parameter for both table and row JSON pages to return the results of executing the &lt;a href="https://docs.datasette.io/en/latest/plugin_hooks.html#render-cell-row-value-column-table-database-datasette-request"&gt;render_cell() plugin hook&lt;/a&gt;. This should unlock new JavaScript UI features in the future.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;More details &lt;a href="https://docs.datasette.io/en/latest/changelog.html#a24-2026-01-29"&gt;in the release notes&lt;/a&gt;. I also invested a bunch of work in eliminating flaky tests that were intermittently failing in CI - I &lt;em&gt;think&lt;/em&gt; those are all handled now.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/annotated-release-notes"&gt;annotated-release-notes&lt;/a&gt;&lt;/p&gt;



</summary><category term="projects"/><category term="datasette"/><category term="python"/><category term="uv"/><category term="annotated-release-notes"/></entry><entry><title>datasette 1.0a24</title><link href="https://simonwillison.net/2026/Jan/29/datasette/#atom-tag" rel="alternate"/><published>2026-01-29T17:04:33+00:00</published><updated>2026-01-29T17:04:33+00:00</updated><id>https://simonwillison.net/2026/Jan/29/datasette/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette/releases/tag/1.0a24"&gt;datasette 1.0a24&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;An open source multi-tool for exploring and publishing data&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-visible-internal-db 0.1.1a2</title><link href="https://simonwillison.net/2026/Jan/26/datasette-visible-internal-db/#atom-tag" rel="alternate"/><published>2026-01-26T03:39:42+00:00</published><updated>2026-01-26T03:39:42+00:00</updated><id>https://simonwillison.net/2026/Jan/26/datasette-visible-internal-db/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-visible-internal-db/releases/tag/0.1.1a2"&gt;datasette-visible-internal-db 0.1.1a2&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Datasette plugin for making the internal database visible for debugging&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-llm 0.1a0</title><link href="https://simonwillison.net/2026/Jan/22/datasette-llm/#atom-tag" rel="alternate"/><published>2026-01-22T17:10:06+00:00</published><updated>2026-01-22T17:10:06+00:00</updated><id>https://simonwillison.net/2026/Jan/22/datasette-llm/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm/releases/tag/0.1a0"&gt;datasette-llm 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;LLM integration plugin for other plugins to depend on&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/><category term="llm"/></entry><entry><title>datasette-scale-to-zero 0.3.2</title><link href="https://simonwillison.net/2026/Jan/19/datasette-scale-to-zero/#atom-tag" rel="alternate"/><published>2026-01-19T20:46:15+00:00</published><updated>2026-01-19T20:46:15+00:00</updated><id>https://simonwillison.net/2026/Jan/19/datasette-scale-to-zero/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-scale-to-zero/releases/tag/0.3.2"&gt;datasette-scale-to-zero 0.3.2&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Quit Datasette if it has not received traffic for a specified time period&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-public 0.4a0</title><link href="https://simonwillison.net/2026/Jan/6/datasette-public/#atom-tag" rel="alternate"/><published>2026-01-06T16:44:15+00:00</published><updated>2026-01-06T16:44:15+00:00</updated><id>https://simonwillison.net/2026/Jan/6/datasette-public/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-public/releases/tag/0.4a0"&gt;datasette-public 0.4a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Make selected Datasette databases and tables visible to the public&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>The most popular blogs of Hacker News in 2025</title><link href="https://simonwillison.net/2026/Jan/2/most-popular-blogs-of-hacker-news/#atom-tag" rel="alternate"/><published>2026-01-02T19:10:43+00:00</published><updated>2026-01-02T19:10:43+00:00</updated><id>https://simonwillison.net/2026/Jan/2/most-popular-blogs-of-hacker-news/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://refactoringenglish.com/blog/2025-hn-top-5/"&gt;The most popular blogs of Hacker News in 2025&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Michael Lynch maintains &lt;a href="https://refactoringenglish.com/tools/hn-popularity/"&gt;HN Popularity Contest&lt;/a&gt;, a site that tracks personal blogs on Hacker News and scores them based on how well they perform on that platform.&lt;/p&gt;
&lt;p&gt;The engine behind the project is the &lt;a href="https://github.com/mtlynch/hn-popularity-contest-data/blob/master/data/domains-meta.csv"&gt;domains-meta.csv&lt;/a&gt; file on GitHub, a hand-curated list of known personal blogs with author, bio, and tag metadata, which Michael uses to separate personal blog posts from other types of content.&lt;/p&gt;
&lt;p&gt;I came top of the rankings in 2023, 2024 and 2025 but I'm listed &lt;a href="https://refactoringenglish.com/tools/hn-popularity/"&gt;in third place&lt;/a&gt; for all time behind Paul Graham and Brian Krebs.&lt;/p&gt;
&lt;p&gt;I dug around in the browser inspector and was delighted to find that the data powering the site is served with open CORS headers, which means you can easily explore it with external services like Datasette Lite.&lt;/p&gt;
&lt;p&gt;Here's a convoluted window function query Claude Opus 4.5 &lt;a href="https://claude.ai/share/8e1cb294-0ff0-4d5b-b83f-58e4c7fdb0d2"&gt;wrote for me&lt;/a&gt; which, for a given domain, shows where that domain ranked for each year since it first appeared in the dataset:&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-s"&gt;with yearly_scores as (&lt;/span&gt;
&lt;span class="pl-s"&gt;  select &lt;/span&gt;
&lt;span class="pl-s"&gt;    domain,&lt;/span&gt;
&lt;span class="pl-s"&gt;    strftime('%Y', date) as year,&lt;/span&gt;
&lt;span class="pl-s"&gt;    sum(score) as total_score,&lt;/span&gt;
&lt;span class="pl-s"&gt;    count(distinct date) as days_mentioned&lt;/span&gt;
&lt;span class="pl-s"&gt;  from "hn-data"&lt;/span&gt;
&lt;span class="pl-s"&gt;  group by domain, strftime('%Y', date)&lt;/span&gt;
&lt;span class="pl-s"&gt;),&lt;/span&gt;
&lt;span class="pl-s"&gt;ranked as (&lt;/span&gt;
&lt;span class="pl-s"&gt;  select &lt;/span&gt;
&lt;span class="pl-s"&gt;    domain,&lt;/span&gt;
&lt;span class="pl-s"&gt;    year,&lt;/span&gt;
&lt;span class="pl-s"&gt;    total_score,&lt;/span&gt;
&lt;span class="pl-s"&gt;    days_mentioned,&lt;/span&gt;
&lt;span class="pl-s"&gt;    rank() over (partition by year order by total_score desc) as rank&lt;/span&gt;
&lt;span class="pl-s"&gt;  from yearly_scores&lt;/span&gt;
&lt;span class="pl-s"&gt;)&lt;/span&gt;
&lt;span class="pl-s"&gt;select &lt;/span&gt;
&lt;span class="pl-s"&gt;  r.year,&lt;/span&gt;
&lt;span class="pl-s"&gt;  r.total_score,&lt;/span&gt;
&lt;span class="pl-s"&gt;  r.rank,&lt;/span&gt;
&lt;span class="pl-s"&gt;  r.days_mentioned&lt;/span&gt;
&lt;span class="pl-s"&gt;from ranked r&lt;/span&gt;
&lt;span class="pl-s"&gt;where r.domain = :domain&lt;/span&gt;
&lt;span class="pl-s"&gt;  and r.year &amp;gt;= (&lt;/span&gt;
&lt;span class="pl-s"&gt;    select min(strftime('%Y', date)) &lt;/span&gt;
&lt;span class="pl-s"&gt;    from "hn-data"&lt;/span&gt;
&lt;span class="pl-s"&gt;    where domain = :domain&lt;/span&gt;
&lt;span class="pl-s"&gt;  )&lt;/span&gt;
&lt;span class="pl-s"&gt;order by r.year desc&lt;/span&gt;&lt;/pre&gt;

&lt;p&gt;(I just noticed that the last &lt;code&gt;and r.year &amp;gt;= (&lt;/code&gt; clause isn't actually needed here.)&lt;/p&gt;
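&lt;p&gt;The ranking logic is easy to sanity-check locally with Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; module - the sample rows below are invented for illustration, and window functions require SQLite 3.25 or later:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    create table "hn-data" (domain text, date text, score integer);
    insert into "hn-data" values
      ('example.com', '2024-01-01', 100),
      ('example.com', '2024-06-01', 50),
      ('other.net',   '2024-03-01', 200);
    """
)
# Same shape as the blog post's query: total score per domain per year,
# then rank() partitioned by year
rows = conn.execute(
    """
    with yearly_scores as (
      select domain, strftime('%Y', date) as year, sum(score) as total_score
      from "hn-data"
      group by domain, year
    )
    select domain, year, total_score,
           rank() over (partition by year order by total_score desc) as rank
    from yearly_scores
    order by rank
    """
).fetchall()
print(rows)
```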
&lt;p&gt;My &lt;a href="https://lite.datasette.io/?csv=https://hn-popularity.cdn.refactoringenglish.com/hn-data.csv#/data?sql=with+yearly_scores+as+%28%0A++select+%0A++++domain%2C%0A++++strftime%28%27%25Y%27%2C+date%29+as+year%2C%0A++++sum%28score%29+as+total_score%2C%0A++++count%28distinct+date%29+as+days_mentioned%0A++from+%22hn-data%22%0A++group+by+domain%2C+strftime%28%27%25Y%27%2C+date%29%0A%29%2C%0Aranked+as+%28%0A++select+%0A++++domain%2C%0A++++year%2C%0A++++total_score%2C%0A++++days_mentioned%2C%0A++++rank%28%29+over+%28partition+by+year+order+by+total_score+desc%29+as+rank%0A++from+yearly_scores%0A%29%0Aselect+%0A++r.year%2C%0A++r.total_score%2C%0A++r.rank%2C%0A++r.days_mentioned%0Afrom+ranked+r%0Awhere+r.domain+%3D+%3Adomain%0A++and+r.year+%3E%3D+%28%0A++++select+min%28strftime%28%27%25Y%27%2C+date%29%29+%0A++++from+%22hn-data%22%0A++++where+domain+%3D+%3Adomain%0A++%29%0Aorder+by+r.year+desc&amp;amp;domain=simonwillison.net"&gt;simonwillison.net results&lt;/a&gt; show me ranked 3rd in 2022, 30th in 2021 and 85th back in 2007 - though I expect there are many personal blogs from that year which haven't yet been manually added to Michael's list.&lt;/p&gt;
&lt;p&gt;Also useful is that every domain gets its own CORS-enabled CSV file with details of the actual Hacker News submissions from that domain, e.g. &lt;code&gt;https://hn-popularity.cdn.refactoringenglish.com/domains/simonwillison.net.csv&lt;/code&gt;. Here's &lt;a href="https://lite.datasette.io/?csv=https://hn-popularity.cdn.refactoringenglish.com/domains/simonwillison.net.csv#/data/simonwillison"&gt;that one in Datasette Lite&lt;/a&gt;.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=46465819"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/sql"&gt;sql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette-lite"&gt;datasette-lite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cors"&gt;cors&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/hacker-news"&gt;hacker-news&lt;/a&gt;&lt;/p&gt;



</summary><category term="sql"/><category term="sqlite"/><category term="datasette"/><category term="datasette-lite"/><category term="cors"/><category term="hacker-news"/></entry><entry><title>datasette-turnstile 0.1a0</title><link href="https://simonwillison.net/2025/Dec/30/datasette-turnstile/#atom-tag" rel="alternate"/><published>2025-12-30T19:23:09+00:00</published><updated>2025-12-30T19:23:09+00:00</updated><id>https://simonwillison.net/2025/Dec/30/datasette-turnstile/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette-turnstile/releases/tag/0.1a0"&gt;datasette-turnstile 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Configurable CAPTCHAs for Datasette paths using Cloudflare Turnstile&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>Under the hood of Canada Spends with Brendan Samek</title><link href="https://simonwillison.net/2025/Dec/9/canada-spends/#atom-tag" rel="alternate"/><published>2025-12-09T23:52:05+00:00</published><updated>2025-12-09T23:52:05+00:00</updated><id>https://simonwillison.net/2025/Dec/9/canada-spends/#atom-tag</id><summary type="html">
    &lt;p&gt;I talked to Brendan Samek about &lt;a href="https://canadaspends.com/"&gt;Canada Spends&lt;/a&gt;, a project from &lt;a href="https://www.buildcanada.com/"&gt;Build Canada&lt;/a&gt; that makes Canadian government financial data accessible and explorable using a combination of Datasette, a neat custom frontend, Ruby ingestion scripts, &lt;a href="https://sqlite-utils.datasette.io/"&gt;sqlite-utils&lt;/a&gt; and pieces of LLM-powered PDF extraction.&lt;/p&gt;
&lt;p&gt;Here's &lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po"&gt;the video on YouTube&lt;/a&gt;.&lt;/p&gt;
&lt;iframe style="margin-bottom: 1.5em;" width="560" height="315" src="https://www.youtube-nocookie.com/embed/T8xiMgmb8po" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="allowfullscreen"&gt; &lt;/iframe&gt;

&lt;p&gt;Sections within that video:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=177s"&gt;02:57&lt;/a&gt; Data sources and the PDF problem&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=351s"&gt;05:51&lt;/a&gt; Crowdsourcing financial data across Canada&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=447s"&gt;07:27&lt;/a&gt; Datasette demo: Search and facets&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=753s"&gt;12:33&lt;/a&gt; Behind the scenes: Ingestion code&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=1044s"&gt;17:24&lt;/a&gt; Data quality horror stories&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=1246s"&gt;20:46&lt;/a&gt; Using Gemini to extract PDF data&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=T8xiMgmb8po&amp;amp;t=1524s"&gt;25:24&lt;/a&gt; Why SQLite is perfect for data distribution&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="build-canada-and-canada-spends"&gt;Build Canada and Canada Spends&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://www.buildcanada.com/"&gt;Build Canada&lt;/a&gt; is a volunteer-driven non-profit that launched in February 2025 - here's &lt;a href="https://www.canadianaffairs.news/2025/09/26/builders-at-the-gate-inside-the-civic-movement-to-jolt-canada-out-of-stagnation/"&gt;some background information&lt;/a&gt; on the organization, which has a strong pro-entrepreneurship and pro-technology angle.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://canadaspends.com/"&gt;Canada Spends&lt;/a&gt; is their project to make Canadian government financial data more accessible and explorable. It includes a tax sources and sinks visualizer and a searchable database of government contracts, plus a collection of tools covering financial data from different levels of government.&lt;/p&gt;
&lt;h4 id="datasette-for-data-exploration"&gt;Datasette for data exploration&lt;/h4&gt;
&lt;p&gt;The project maintains a Datasette instance at &lt;a href="https://api.canadasbuilding.com/"&gt;api.canadasbuilding.com&lt;/a&gt; containing the data they have gathered and processed from multiple data sources - currently more than 2 million rows, plus a combined search index across a denormalized copy of that data.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/api-canadasbuilding-com-canada-spends.jpg" alt="  Datasette UI for a canada-spends database.  aggregated-contracts-under-10k:  year, contract_goods_number_of, contracts_goods_original_value, contracts_goods_amendment_value, contract_service_number_of, contracts_service_original_value, contracts_service_amendment_value, contract_construction_number_of, contracts_construction_original_value, contracts_construction_amendment_value, acquisition_card_transactions_number_of, acquisition_card_transactions_total_value, owner_org, owner_org_title  487 rows cihr_grants  external_id, title, project_lead_name, co_researchers, institution, province, country, competition_year, award_amount, program, program_type, theme, research_subject, keywords, abstract, duration, source_url  53,420 rows contracts-over-10k:   reference_number, procurement_id, vendor_name, vendor_postal_code, buyer_name, contract_date, economic_object_code, description_en, description_fr, contract_period_start, delivery_date, contract_value, original_value, amendment_value, comments_en, comments_fr, additional_comments_en, additional_comments_fr, agreement_type_code, trade_agreement, land_claims, commodity_type, commodity_code, country_of_vendor, solicitation_procedure, limited_tendering_reason, trade_agreement_exceptions, indigenous_business, indigenous_business_excluding_psib, intellectual_property, potential_commercial_exploitation, former_public_servant, contracting_entity, standing_offer_number, instrument_type, ministers_office, number_of_bids, article_6_exceptions, award_criteria, socioeconomic_indicator, reporting_period, owner_org, owner_org_title  1,172,575 rows global_affairs_grants:   id, projectNumber, dateModified, title, description, status, start, end, countries, executingAgencyPartner, DACSectors, maximumContribution, ContributingOrganization, expectedResults, resultsAchieved, aidType, collaborationType, financeType, flowType, 
reportingOrganisation, programName, selectionMechanism, policyMarkers, regions, alternameImPositions, budgets, Locations, otherIdentifiers, participatingOrgs, programDataStructure, relatedActivities, transactions  2,378 rows nserc_grants:   title, award_summary, application_id, competition_year, fiscal_year, project_lead_name, institution, department, province, award_amount, installment, program, selection_committee, research_subject, area_of_application, co-researchers, partners, external_id, source_url  701,310 rows sshrc_grants:   id, title, program, fiscal_year, competition_year, applicant, organization, amount, discipline, area_of_research, co_applicant, keywords, source_url  213,085 rows transfers:   FSCL_YR, MINC, MINE, MINF, DepartmentNumber-Numéro-de-Ministère, DEPT_EN_DESC, DEPT_FR_DESC, RCPNT_CLS_EN_DESC, RCPNT_CLS_FR_DESC, RCPNT_NML_EN_DESC, RCPNT_NML_FR_DESC, CTY_EN_NM, CTY_FR_NM, PROVTER_EN, PROVTER_FR, CNTRY_EN_NM, CNTRY_FR_NM, TOT_CY_XPND_AMT, AGRG_PYMT_AMT  357,797 rows  Download SQLite DB: canada-spends.db 2.4 GB Powered by Datasette · Queries took 24.733ms " style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;h4 id="processing-pdfs"&gt;Processing PDFs&lt;/h4&gt;
&lt;p&gt;The highest quality government financial data comes from the audited financial statements that every Canadian government department is required to publish. As is so often the case with government data, these are usually published as PDFs.&lt;/p&gt;
&lt;p&gt;Brendan has been using Gemini to help extract data from those PDFs. Since this is accounting data the numbers can be summed and cross-checked to help validate the LLM didn't make any obvious mistakes.&lt;/p&gt;
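&lt;p&gt;A minimal sketch of that cross-checking idea: after the LLM extracts line items and a reported total from a statement, verify the items actually sum to the total before accepting the extraction. The field names here are hypothetical, not taken from the Canada Spends codebase.&lt;/p&gt;

```python
# Sketch of validating LLM-extracted accounting data by summing
# line items and comparing against the statement's reported total.
# Field names ("line_items", "amount", "reported_total") are
# hypothetical illustrations, not the actual Canada Spends schema.


def validate_extraction(extracted):
    """Return True if extracted line items sum to the reported total."""
    line_item_sum = sum(item["amount"] for item in extracted["line_items"])
    # Allow a small tolerance for rounding in published statements
    return abs(line_item_sum - extracted["reported_total"]) < 1


page = {
    "line_items": [
        {"label": "Salaries", "amount": 1_200_000},
        {"label": "Transfers", "amount": 800_000},
    ],
    "reported_total": 2_000_000,
}
print(validate_extraction(page))  # → True
```

&lt;p&gt;An extraction that fails this check can be flagged for a retry or manual review rather than silently loaded into the database.&lt;/p&gt;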
&lt;h4 id="further-reading"&gt;Further reading&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://datasette.io/"&gt;datasette.io&lt;/a&gt;, the official website for Datasette&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sqlite-utils.datasette.io/"&gt;sqlite-utils.datasette.io&lt;/a&gt; for more on &lt;code&gt;sqlite-utils&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://canadaspends.com/"&gt;Canada Spends&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/BuildCanada/CanadaSpends"&gt;BuildCanada/CanadaSpends&lt;/a&gt; on GitHub&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/youtube"&gt;youtube&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/politics"&gt;politics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/data-journalism"&gt;data-journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite-utils"&gt;sqlite-utils&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="youtube"/><category term="politics"/><category term="sqlite"/><category term="datasette"/><category term="data-journalism"/><category term="sqlite-utils"/></entry><entry><title>datasette-redirects 0.1</title><link href="https://simonwillison.net/2025/Dec/3/datasette-redirects/#atom-tag" rel="alternate"/><published>2025-12-03T05:29:25+00:00</published><updated>2025-12-03T05:29:25+00:00</updated><id>https://simonwillison.net/2025/Dec/3/datasette-redirects/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-redirects/releases/tag/0.1"&gt;datasette-redirects 0.1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Configure redirects for a Datasette instance&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-extract 0.2a1</title><link href="https://simonwillison.net/2025/Dec/3/datasette-extract/#atom-tag" rel="alternate"/><published>2025-12-03T05:13:57+00:00</published><updated>2025-12-03T05:13:57+00:00</updated><id>https://simonwillison.net/2025/Dec/3/datasette-extract/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-extract/releases/tag/0.2a1"&gt;datasette-extract 0.2a1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Import unstructured data (text and images) into structured tables&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette 1.0a23</title><link href="https://simonwillison.net/2025/Dec/3/datasette/#atom-tag" rel="alternate"/><published>2025-12-03T03:21:50+00:00</published><updated>2025-12-03T03:21:50+00:00</updated><id>https://simonwillison.net/2025/Dec/3/datasette/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/datasette/releases/tag/1.0a23"&gt;datasette 1.0a23&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;An open source multi-tool for exploring and publishing data&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>datasette-extract 0.2a0</title><link href="https://simonwillison.net/2025/Nov/26/datasette-extract/#atom-tag" rel="alternate"/><published>2025-11-26T16:50:47+00:00</published><updated>2025-11-26T16:50:47+00:00</updated><id>https://simonwillison.net/2025/Nov/26/datasette-extract/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-extract/releases/tag/0.2a0"&gt;datasette-extract 0.2a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Import unstructured data (text and images) into structured tables&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>Highlights from my appearance on the Data Renegades podcast with CL Kao and Dori Wilson</title><link href="https://simonwillison.net/2025/Nov/26/data-renegades-podcast/#atom-tag" rel="alternate"/><published>2025-11-26T00:29:11+00:00</published><updated>2025-11-26T00:29:11+00:00</updated><id>https://simonwillison.net/2025/Nov/26/data-renegades-podcast/#atom-tag</id><summary type="html">
    &lt;p&gt;I talked with CL Kao and Dori Wilson for an episode of their new &lt;a href="https://www.heavybit.com/library/podcasts/data-renegades"&gt;Data Renegades podcast&lt;/a&gt; titled &lt;a href="https://www.heavybit.com/library/podcasts/data-renegades/ep-2-data-journalism-unleashed-with-simon-willison"&gt;Data Journalism Unleashed with Simon Willison&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I fed the transcript into Claude Opus 4.5 to extract this list of topics with timestamps and illustrative quotes. It did such a good job I'm using what it produced almost verbatim here - I tidied it up a tiny bit and added a bunch of supporting links.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;What is data journalism and why it's the most interesting application of data analytics [02:03]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"There's this whole field of data journalism, which is using data and databases to try and figure out stories about the world. It's effectively data analytics, but applied to the world of news gathering. And I think it's fascinating. I think it is the single most interesting way to apply this stuff because everything is in scope for a journalist."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The origin story of Django at a small Kansas newspaper [02:31]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"We had a year's paid internship from university where we went to work &lt;a href="https://simonwillison.net/2025/Jul/13/django-birthday/"&gt;for this local newspaper&lt;/a&gt; in Kansas with this chap &lt;a href="https://holovaty.com/"&gt;Adrian Holovaty&lt;/a&gt;. And at the time we thought we were building a content management system."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Building the "Downloads Page" - a dynamic radio player of local bands [03:24]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Adrian built a feature of the site called &lt;a href="https://web.archive.org/web/20070320083540/https://www.lawrence.com/downloads/"&gt;the Downloads Page&lt;/a&gt;. And what it did is it said, okay, who are the bands playing at venues this week? And then we'll construct a little radio player of MP3s of music of bands who are playing in Lawrence in this week."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Working at The Guardian on data-driven reporting projects [04:44]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I just love that challenge of building tools that journalists can use to investigate stories and then that you can use to help tell those stories. Like if you give your audience a searchable database to back up the story that you're presenting, I just feel that's a great way of building more credibility in the reporting process."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Washington Post's opioid crisis data project and sharing with local newspapers [05:22]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Something the Washington Post did that I thought was extremely forward thinking is that they shared [&lt;a href="https://www.washingtonpost.com/national/2019/08/12/post-released-deas-data-pain-pills-heres-what-local-journalists-are-using-it/?utm_source=chatgpt.com"&gt;the opioid files&lt;/a&gt;] with other newspapers. They said, 'Okay, we're a big national newspaper, but these stories are at a local level. So what can we do so that the local newspaper and different towns can dive into that data for us?'"&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;NICAR conference and the collaborative, non-competitive nature of data journalism [07:00]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"It's all about trying to figure out what is the most value we can get out of this technology as an industry as a whole."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://www.ire.org/training/conferences/nicar-2026/"&gt;NICAR 2026&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ProPublica and the Baltimore Banner as examples of nonprofit newsrooms [09:02]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"The &lt;a href="https://www.thebanner.com/"&gt;Baltimore Banner&lt;/a&gt; are a nonprofit newsroom. They have a hundred employees now for the city of Baltimore. This is an enormously, it's a very healthy newsroom. They do amazing data reporting... And I believe they're almost breaking even on subscription revenue [correction, &lt;a href="https://localnewsinitiative.northwestern.edu/posts/2025/11/10/baltimore-local-media-resurgence/"&gt;not yet&lt;/a&gt;], which is astonishing."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The "shower revelation" that led to Datasette - SQLite on serverless hosting [10:31]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"It was literally a shower revelation. I was in the shower thinking about serverless and I thought, 'hang on a second. So you can't use Postgres on serverless hosting, but if it's a read-only database, could you use SQLite? Could you just take that data, bake it into a blob of a SQLite file, ship that as part of the application just as another asset, and then serve things on top of that?'"&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Datasette's plugin ecosystem and the vision of solving data publishing [12:36]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"In the past I've thought about it like how Pinterest solved scrapbooking and WordPress solved blogging, who's going to solve data like publishing tables full of data on the internet? So that was my original goal."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unexpected Datasette use cases: Copenhagen electricity grid, Brooklyn Cemetery [13:59]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Somebody was doing research on the Brooklyn Cemetery and they got hold of the original paper files of who was buried in the Brooklyn Cemetery. They digitized those, loaded the results into Datasette and now it tells the story of immigration to New York."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bellingcat using Datasette to investigate leaked Russian food delivery data [14:40]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"It turns out the Russian FSB, their secret police, have an office that's not near any restaurants and they order food all the time. And so this database could tell you what nights were the FSB working late and what were the names and phone numbers of the FSB agents who ordered food... And I'm like, 'Wow, that's going to get me thrown out of a window.'"&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://www.bellingcat.com/news/rest-of-world/2022/04/01/food-delivery-leak-unmasks-russian-security-agents/"&gt;Bellingcat: Food Delivery Leak Unmasks Russian Security Agents&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The frustration of open source: no feedback on how people use your software [16:14]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"An endless frustration in open source is that you really don't get the feedback on what people are actually doing with it."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open office hours on Fridays to learn how people use Datasette [16:49]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I have an &lt;a href="https://calendly.com/swillison/datasette-office-hours"&gt;open office hours Calendly&lt;/a&gt;, where the invitation is, if you use my software or want to use my software, grab 25 minutes to talk to me about it. And that's been a revelation. I've had hundreds of conversations in the past few years with people."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data cleaning as the universal complaint - 95% of time spent cleaning [17:34]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I know every single person I talk to in data complains about the cleaning that everyone says, 'I spend 95% of my time cleaning the data and I hate it.'"&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Version control problems in data teams - Python scripts on laptops without Git [17:43]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I used to work for a large company that had a whole separate data division and I learned at one point that they weren't using Git for their scripts. They had Python scripts, littering laptops left, right and center and lots of notebooks and very little version control, which upset me greatly."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Carpentries organization teaching scientists Git and software fundamentals [18:12]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"There's an organization called &lt;a href="https://carpentries.org/"&gt;The Carpentries&lt;/a&gt;. Basically they teach scientists to use Git. Their entire thing is scientists are all writing code these days. Nobody ever sat them down and showed them how to use the UNIX terminal or Git or version control or write tests. We should do that."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data documentation as an API contract problem [21:11]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"A coworker of mine said, you do realize that this should be a documented API interface, right? Your data warehouse view of your project is something that you should be responsible for communicating to the rest of the organization and we weren't doing it."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The importance of "view source" on business reports [23:21]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"If you show somebody a report, you need to have view source on those reports... somebody would say 25% of our users did this thing. And I'm thinking I need to see the query because I knew where all of the skeletons were buried and often that 25% was actually a 50%."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fact-checking process for data reporting [24:16]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Their stories are fact checked, no story goes out the door without someone else fact checking it and without an editor approving it. And it's the same for data. If they do a piece of data reporting, a separate data reporter has to audit those numbers and maybe even produce those numbers themselves in a separate way before they're confident enough to publish them."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Queries as first-class citizens with version history and comments [27:16]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I think the queries themselves need to be first class citizens where like I want to see a library of queries that my team are using and each one I want to know who built it and when it was built. And I want to see how that's changed over time and be able to post comments on it."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Two types of documentation: official docs vs. temporal/timestamped notes [29:46]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"There's another type of documentation which I call temporal documentation where effectively it's stuff where you say, 'Okay, it's Friday, the 31st of October and this worked.' But the timestamp is very prominent and if somebody looks that in six months time, there's no promise that it's still going to be valid to them."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Starting an internal blog without permission - instant credibility [30:24]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"The key thing is you need to start one of these without having to ask permission first. You just one day start, you can do it in a Google Doc, right?... It gives you so much credibility really quickly because nobody else is doing it."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Building a search engine across seven documentation systems [31:35]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"It turns out, once you get a search engine over the top, it's good documentation. You just have to know where to look for it. And if you are the person who builds the search engine, you secretly control the company."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The TIL (Today I Learned) blog approach - celebrating learning basics [33:05]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I've done &lt;a href="https://til.simonwillison.net/"&gt;TILs&lt;/a&gt; about 'for loops' in Bash, right? Because okay, everyone else knows how to do that. I didn't... It's a value statement where I'm saying that if you've been a professional software engineer for 25 years, you still don't know everything. You should still celebrate figuring out how to learn 'for loops' in Bash."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Coding agents like Claude Code and their unexpected general-purpose power [34:53]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"They pretend to be programming tools but actually they're basically a sort of general agent because they can do anything that you can do by typing commands into a Unix shell, which is everything."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Skills for Claude - markdown files for census data, visualization, newsroom standards [36:16]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Imagine a markdown file for census data. Here's where to get census data from. Here's what all of the columns mean. Here's how to derive useful things from that. And then you have another skill for here's how to visualize things on a map using D3... At the Washington Post, our data standards are this and this and this."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://simonwillison.net/2025/Oct/16/claude-skills/"&gt;Claude Skills are awesome, maybe a bigger deal than MCP&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The absurd 2025 reality: cutting-edge AI tools use 1980s terminal interfaces [38:22]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"The terminal is now accessible to people who never learned the terminal before 'cause you don't have to remember all the commands because the LLM knows the commands for you. But isn't that fascinating that the cutting edge software right now is it's like 1980s style— I love that. It's not going to last. That's a current absurdity for 2025."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cursor for data? Generic agent loops vs. data-specific IDEs [38:18]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"More of a notebook interface makes a lot more sense than a Claude Code style terminal 'cause a Jupyter Notebook is effectively a terminal, it's just in your browser and it can show you charts."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Future of BI tools: prompt-driven, instant dashboard creation [39:54]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"You can copy and paste a big chunk of JSON data from somewhere into [an LLM] and say build me a dashboard. And they do such a good job. Like they will just decide, oh this is a time element so we'll do a bar chart over time and these numbers feel big so we'll put those in a big green box."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Three exciting LLM applications: text-to-SQL, data extraction, data enrichment [43:06]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"LLMs are stunningly good at outputting SQL queries. Especially if you give them extra metadata about the columns. Maybe a couple of example queries and stuff."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;LLMs extracting structured data from scanned PDFs at 95-98% accuracy [43:36]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"You file a freedom of information request and you get back horrifying scanned PDFs with slightly wonky angles and you have to get the data out of those. LLMs for a couple of years now have been so good at, 'here's a page of a police report, give me back JSON with the name of the arresting officer and the date of the incident and the description,' and they just do it."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data enrichment: running cheap models in loops against thousands of records [44:36]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"There's something really exciting about the cheaper models, Gemini Flash 2.5 Lite, things like that. Being able to run those in a loop against thousands of records feels very valuable to me as well."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://enrichments.datasette.io/"&gt;datasette-enrichments&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Multimodal LLMs for images, audio transcription, and video processing [45:42]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"At one point I calculated that using Google's least expensive model, if I wanted to generate captions for like 70,000 photographs in my personal photo library, it would cost me like $13 or something. Wildly inexpensive."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Correction: with Gemini 1.5 Flash 8B &lt;a href="https://simonwillison.net/2025/May/15/building-on-llms/#llm-tutorial-intro.009.jpeg"&gt;it would cost 173.25 cents&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;First programming language: hated C++, loved PHP and Commodore 64 BASIC [46:54]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I hated C++ 'cause I got my parents to buy me a book on it when I was like 15 and I did not make any progress with Borland C++ compiler... Actually, my first program language was Commodore 64 BASIC. And I did love that. Like I tried to build a database in Commodore 64 BASIC back when I was like six years old or something."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Biggest production bug: crashing The Guardian's MPs expenses site with a progress bar [47:46]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I tweeted a screenshot of that progress bar and said, 'Hey, look, we have a progress bar.' And 30 seconds later the site crashed because I was using SQL queries to count all 17,000 documents just for this one progress bar."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://simonwillison.net/2009/Dec/20/crowdsourcing/"&gt;Crowdsourced document analysis and MP expenses&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Favorite test dataset: San Francisco's tree list, updated several times a week [48:44]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"There's &lt;a href="https://data.sfgov.org/City-Infrastructure/Street-Tree-List/tkzw-k3nq"&gt;195,000 trees in this CSV file&lt;/a&gt; and it's got latitude and longitude and species and age when it was planted... and get this, it's updated several times a week... most working days, somebody at San Francisco City Hall updates their database of trees, and I can't figure out who."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Showrunning TV shows as a management model - transferring vision to lieutenants [50:07]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Your job is to transfer your vision into their heads so they can go and have the meetings with the props department and the set design and all of those kinds of things... I used to sniff at the idea of a vision when I was young and stupid. And now I'm like, no, the vision really is everything because if everyone understands the vision, they can make decisions you delegate to them."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://okbjgm.weebly.com/uploads/3/1/5/0/31506003/11_laws_of_showrunning_nice_version.pdf"&gt;The Eleven Laws of Showrunning&lt;/a&gt; by Javier Grillo-Marxuach&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hot take: all executable code with business value must be in version control [52:21]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I think it's inexcusable to have executable code that has business value that is not in version control somewhere."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hacker News automation: GitHub Actions scraping for notifications [52:45]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I've got &lt;a href="https://simonwillison.net/2022/Mar/14/scraping-web-pages-shot-scraper/"&gt;a GitHub actions thing&lt;/a&gt; that runs a piece of software I wrote called &lt;a href="https://shot-scraper.datasette.io/"&gt;shot-scraper&lt;/a&gt; that runs Playwright, that loads up a browser in GitHub actions to scrape that webpage and turn the results into JSON, which then get turned into an atom feed, which I subscribe to in NetNewsWire."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dream project: whale detection camera with Gemini AI [53:47]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"I want to point a camera at the ocean and take a snapshot every minute and feed it into Google Gemini or something and just say, is there a whale yes or no? That would be incredible. I want push notifications when there's a whale."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Favorite podcast: Mark Steel's in Town (hyperlocal British comedy) [54:23]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"Every episode he goes to a small town in England and he does a comedy set in a local venue about the history of the town. And so he does very deep research... I love that sort of like hyperlocal, like comedy, that sort of British culture thing."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://www.bbc.co.uk/programmes/b00rtbk8/episodes/player"&gt;Mark Steel's in Town&lt;/a&gt; available episodes&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Favorite fiction genre: British wizards caught up in bureaucracy [55:06]&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"My favorite genre of fiction is British wizards who get caught up in bureaucracy... I just really like that contrast of like magical realism and very clearly researched government paperwork and filings."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://www.antipope.org/charlie/blog-static/2020/10/the-laundry-files-an-updated-c.html"&gt;The Laundry Files&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Rivers_of_London_(book_series)"&gt;Rivers of London&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/The_Rook_(novel)"&gt;The Rook&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
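The Hacker News automation above ends with scraped results being "turned into an atom feed". As a toy sketch of just that last step (my own illustration, not Simon's actual code, assuming the scrape produced a list of `{title, url}` dicts):

```python
import xml.etree.ElementTree as ET

# Toy sketch: turn scraped results (a hypothetical list of {title, url}
# dicts, as shot-scraper's JSON output might provide) into a minimal
# Atom feed that a reader like NetNewsWire can subscribe to.
def items_to_atom(items, feed_title="Scraped notifications"):
    ns = "http://www.w3.org/2005/Atom"
    feed = ET.Element(f"{{{ns}}}feed")
    ET.SubElement(feed, f"{{{ns}}}title").text = feed_title
    for item in items:
        entry = ET.SubElement(feed, f"{{{ns}}}entry")
        ET.SubElement(entry, f"{{{ns}}}title").text = item["title"]
        ET.SubElement(entry, f"{{{ns}}}link", href=item["url"])
        ET.SubElement(entry, f"{{{ns}}}id").text = item["url"]
    return ET.tostring(feed, encoding="unicode")

feed_xml = items_to_atom([
    {"title": "Example mention", "url": "https://example.com/item/1"},
])
print(feed_xml)
```

A real feed would also need `updated` timestamps and author metadata to validate, but the shape is the same.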

&lt;h4 id="podcast-colophon"&gt;Colophon&lt;/h4&gt;

&lt;p&gt;I used a Claude Project for the initial analysis, pasting in the HTML of the transcript since that included &lt;code&gt;&amp;lt;span data-timestamp="425"&amp;gt;&lt;/code&gt; elements. The project uses the following custom instructions:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You will be given a transcript of a podcast episode. Find the most interesting quotes in that transcript - quotes that best illustrate the overall themes, and quotes that introduce surprising ideas or express things in a particularly clear or engaging or spicy way. Answer just with those quotes - long quotes are fine.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I then added a follow-up prompt saying:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Now construct a bullet point list of key topics where each item includes the mm:ss in square braces at the end&lt;/p&gt;
&lt;p&gt;Then suggest a very comprehensive list of supporting links I could find&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Then one more follow-up:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;Add an illustrative quote to every one of those key topics you identified&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Here's &lt;a href="https://claude.ai/share/b2b83b99-c506-4865-8d40-dee290723ac9"&gt;the full Claude transcript&lt;/a&gt; of the analysis.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/data-journalism"&gt;data-journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/podcast-appearances"&gt;podcast-appearances&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/data"&gt;data&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="data-journalism"/><category term="podcast-appearances"/><category term="data"/><category term="ai"/><category term="django"/><category term="datasette"/></entry><entry><title>Nano Banana Pro aka gemini-3-pro-image-preview is the best available image generation model</title><link href="https://simonwillison.net/2025/Nov/20/nano-banana-pro/#atom-tag" rel="alternate"/><published>2025-11-20T16:32:25+00:00</published><updated>2025-11-20T16:32:25+00:00</updated><id>https://simonwillison.net/2025/Nov/20/nano-banana-pro/#atom-tag</id><summary type="html">
    &lt;p&gt;Hot on the heels of Tuesday's &lt;a href="https://simonwillison.net/2025/Nov/18/gemini-3/"&gt;Gemini 3 Pro&lt;/a&gt; release, today it's &lt;a href="https://blog.google/technology/ai/nano-banana-pro/"&gt;Nano Banana Pro&lt;/a&gt;, also known as &lt;a href="https://deepmind.google/models/gemini-image/pro/"&gt;Gemini 3 Pro Image&lt;/a&gt;. I've had a few days of preview access and this is an &lt;em&gt;astonishingly&lt;/em&gt; capable image generation model.&lt;/p&gt;
&lt;p&gt;As is often the case, the most useful low-level details can be found in &lt;a href="https://ai.google.dev/gemini-api/docs/image-generation#gemini-3-capabilities"&gt;the API documentation&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Designed to tackle the most challenging workflows through advanced reasoning, it excels at complex, multi-turn creation and modification tasks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-resolution output&lt;/strong&gt;: Built-in generation capabilities for 1K, 2K, and 4K visuals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced text rendering&lt;/strong&gt;: Capable of generating legible, stylized text for infographics, menus, diagrams, and marketing assets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grounding with Google Search&lt;/strong&gt;: The model can use Google Search as a tool to verify facts and generate imagery based on real-time data (e.g., current weather maps, stock charts, recent events).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thinking mode&lt;/strong&gt;: The model utilizes a "thinking" process to reason through complex prompts. It generates interim "thought images" (visible in the backend but not charged) to refine the composition before producing the final high-quality output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Up to 14 reference images&lt;/strong&gt;: You can now mix up to 14 reference images to produce the final image.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;[...] These 14 images can include the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Up to 6 images of objects with high-fidelity to include in the final image&lt;/li&gt;
&lt;li&gt;Up to 5 images of humans to maintain character consistency&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;There is also a short (6 page) &lt;a href="https://storage.googleapis.com/deepmind-media/Model-Cards/Gemini-3-Pro-Image-Model-Card.pdf"&gt;model card PDF&lt;/a&gt; which lists the following as "new capabilities" compared to the previous Nano Banana: Multi character editing, Chart editing, Text editing, Factuality - Edu, Multi-input 1-3, Infographics, Doodle editing, Visual design.&lt;/p&gt;
&lt;h4 id="trying-out-some-detailed-instruction-image-prompts"&gt;Trying out some detailed instruction image prompts&lt;/h4&gt;
&lt;p&gt;Max Woolf published &lt;a href="https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nano-banana"&gt;the definitive guide to prompting Nano Banana&lt;/a&gt; just a few days ago. I decided to try his example prompts against the new model, requesting results in 4K.&lt;/p&gt;
&lt;p&gt;Here's what I got for his first test prompt, using Google's &lt;a href="https://aistudio.google.com/"&gt;AI Studio&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Create an image of a three-dimensional pancake in the shape of a skull, garnished on top with blueberries and maple syrup.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/pancake-skull-1.jpg" alt="A very detailed quality photo of a skull made of pancake batter, blueberries on top, maple syrup dripping down, maple syrup bottle in the background." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;The result came out as a 24.1MB, 5632 × 3072 pixel PNG file. I don't want to serve that on my own blog so here's &lt;a href="https://drive.google.com/file/d/1QV3pcW1KfbTRQscavNh6ld9PyqG4BRes/view?usp=drive_link"&gt;a Google Drive link for the original&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Then I ran his follow-up prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Make ALL of the following edits to the image:
- Put a strawberry in the left eye socket.
- Put a blackberry in the right eye socket.
- Put a mint garnish on top of the pancake.
- Change the plate to a plate-shaped chocolate-chip cookie.
- Add happy people to the background.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/pancake-skull-2.jpg" alt="It's the exact same skull with the requested edits made - mint garnish on the blueberries, a strawberry in the left hand eye socket (from our perspective, technically the skull's right hand socket), a blackberry in the other, the plate is now a plate-sized chocolate chip cookie (admittedly on a regular plate) and there are four happy peo ple in the background." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;I'll note that it did put the plate-sized cookie on a regular plate. Here's &lt;a href="https://drive.google.com/file/d/18AzhM-BUZAfLGoHWl6MQW_UW9ju4km-i/view?usp=drive_link"&gt;the 24.9MB PNG&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The new model isn't cheap. Here's &lt;a href="https://ai.google.dev/gemini-api/docs/pricing#gemini-3-pro-image-preview"&gt;the API pricing&lt;/a&gt;: it's 24 cents for a 4K image and 13.4 cents for a 1K or 2K image. Image inputs are 0.11 cents (just over 1/10th of a cent) each - an earlier version of their pricing page incorrectly said 6.7 cents each but that's now been fixed.&lt;/p&gt;
&lt;p&gt;Unlike most of Google's other models it also isn't available for free via AI Studio: you have to configure an API key with billing in order to use the model there.&lt;/p&gt;
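Those per-image prices make back-of-envelope budgeting straightforward. Here's a minimal sketch of my own (assuming the prices quoted above stay current; the function name is mine, not part of any API):

```python
# Rough cost estimator based on the API prices quoted above:
# 24 cents per 4K image, 13.4 cents per 1K or 2K image,
# 0.11 cents per input (reference) image.
PRICE_CENTS = {"4k": 24.0, "2k": 13.4, "1k": 13.4}
INPUT_IMAGE_CENTS = 0.11

def estimate_cost_cents(num_images, size="4k", reference_images_per_call=0):
    # Each call pays for one output image plus any reference images sent in.
    per_call = PRICE_CENTS[size] + reference_images_per_call * INPUT_IMAGE_CENTS
    return round(num_images * per_call, 2)

# 100 4K generations, each mixing in 3 reference images:
print(estimate_cost_cents(100, "4k", 3) / 100)  # cost in dollars
```

So a hundred 4K generations with a few reference images each lands in the $24 range - noticeably pricier than most image models, as noted above.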
&lt;h4 id="creating-an-infographic"&gt;Creating an infographic&lt;/h4&gt;
&lt;p&gt;So this thing is great at following instructions. How about rendering text?&lt;/p&gt;
&lt;p&gt;I tried this prompt, this time using the Gemini consumer app in "thinking" mode (which now uses Nano Banana Pro for image generation). &lt;a href="https://gemini.google.com/share/d40fe391f309"&gt;Here's a share link&lt;/a&gt; - my prompt was:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Infographic explaining how the Datasette open source project works&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is a great opportunity to test its ability to run searches (aka "Grounding with Google Search"). Here's what it created based on that 9 word prompt:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/nano-banana-datasette.jpg" alt="Described by Gemini 3 Pro: A technical architecture diagram titled &amp;quot;How Datasette Works: From Raw Data to Explorable API,&amp;quot; illustrating a workflow from left to right. 1. &amp;quot;RAW DATA SOURCES&amp;quot; include &amp;quot;CSV&amp;quot;, &amp;quot;JSON&amp;quot;, &amp;quot;Excel (XLSX)&amp;quot;, and &amp;quot;Log Files&amp;quot;. 2. These flow into &amp;quot;DATA PREPARATION &amp;amp; CONVERSION&amp;quot; using tools &amp;quot;csvs-to-sqlite&amp;quot; and &amp;quot;sqlite-utils&amp;quot; to create a &amp;quot;SQLite DATABASE&amp;quot;. 3. This feeds into the central &amp;quot;DATASETTE APPLICATION CORE,&amp;quot; a stack comprising &amp;quot;Data Ingestion (Read-Only)&amp;quot;, &amp;quot;Query Engine (SQL)&amp;quot;, &amp;quot;API Layer (JSON)&amp;quot;, and &amp;quot;Web UI Rendering&amp;quot;. 4. A &amp;quot;PLUGIN ECOSYSTEM&amp;quot; connects below the core, listing &amp;quot;Vega Charts&amp;quot;, &amp;quot;Cluster Maps&amp;quot;, &amp;quot;Full-Text Search&amp;quot;, and &amp;quot;Custom Renderers&amp;quot;. 5. To the right, the core branches into &amp;quot;OUTPUT &amp;amp; INTERFACES&amp;quot;: an &amp;quot;Interactive Web Interface (Explore, Filter, Facet)&amp;quot;, a &amp;quot;JSON API (Programmatic Access)&amp;quot; showing a snippet {&amp;quot;data&amp;quot;: [...]}, and a &amp;quot;SQL EDITOR Custom SQL Queries&amp;quot; showing SELECT * FROM.... 6. The API output connects to &amp;quot;PUBLISHING &amp;amp; DEPLOYMENT&amp;quot; via a terminal command datasette publish cloudrun my.db leading to deployment targets &amp;quot;Heroku&amp;quot;, &amp;quot;Google Cloud Run&amp;quot;, &amp;quot;Fly.io&amp;quot;, and &amp;quot;Vercel&amp;quot;." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;That's a pretty extraordinary result for such a simple prompt. The text is all spelled correctly and rendered without glitches. The content is solid too - it even included logos for the most popular publish platforms, and a tiny thumbnail of the Datasette UI which is close-enough for an infographic.&lt;/p&gt;
&lt;p&gt;"Data Ingestion (Read-Only)" is a bit off. Everything else looks great to me.&lt;/p&gt;
&lt;p&gt;Here's the thinking trace for that image generation process:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Initiating Datasette Infographic&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I'm starting a detailed infographic about the Datasette project. I plan to use Google Search to gather specific information about its core processes, which I will then visualize in a multi-panel diagram.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Developing the Datasette workflow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I am visualizing the Datasette workflow as described in the search results. I'm focusing on the conversion of data from different formats into a SQLite database, which is then presented as a web application complete with a user interface and JSON output.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Visualizing Datasette Workflow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I've sketched out the initial structure for the infographic, focusing on a flowchart style. The key components include data input from diverse sources, conversion to SQLite, the Datasette core application, its main features (web UI, JSON API, SQL querying), ecosystem plugins, and finally, deployment options. I aim for clear icons and labels to represent each step. Now I will focus on refining the flow and clarifying the relationships between the parts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Defining Core Datasette Stages&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I've sketched out the initial infographic structure. I have identified the key components, focusing on visual representation of the data flow and the key benefits and features of the system. I have made updates to the flowchart with clear icons and labels. Now I'm working to expand the data input section and refine the visual representation of each step.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Constructing Datasette Infographic&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I am translating the current Datasette workflow outline into visual elements. The input section will show icons for different data formats. I've sketched the data conversion stage with clear illustrations of the necessary tools. I will add a depiction of the Datasette core application, showcasing its interfaces and functions. The plugin ecosystem and deployment options will follow. Now, I am refining the visual representation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Analyzing Datasette's Architecture&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I'm currently working on representing the Datasette open-source project's architecture visually. I've broken it down into key components: raw data processing, SQLite conversion, core application functionalities, plugin integration, and deployment methods. I'm focusing on creating a clear, linear representation to show the workflow.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mapping the Datasette Flow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I've crafted a comprehensive graphic to illustrate Datasette's operation. It spans the data ingestion, SQLite conversion, main application features, plugin interactions, and deployment stages, aiming for clarity and a logical flow.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4 id="detecting-generated-images-with-synthid"&gt;Detecting generated images with SynthID&lt;/h4&gt;
&lt;p&gt;This note from the &lt;a href="https://blog.google/technology/ai/nano-banana-pro/"&gt;announcement post&lt;/a&gt; caught my eye:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We believe it's critical to know when an image is AI-generated. This is why all media generated by Google's tools are embedded with our imperceptible &lt;a href="https://deepmind.google/models/synthid/"&gt;SynthID&lt;/a&gt; digital watermark.&lt;/p&gt;
&lt;p&gt;Today, we are putting a powerful verification tool directly in consumers' hands: you can now upload an image into the Gemini app and simply ask if it was generated by Google AI, thanks to SynthID technology. We are starting with images, but will expand to audio and video soon.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Last night I used Nano Banana Pro to generate a fake photograph of raccoons stealing our food delivery, then scrubbed out the little diamond icon using the Apple Photos "cleanup" tool. I uploaded that to the Gemini app and asked "Was this image created with AI?":&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/nano-banana-detected.jpg" alt="Screenshot of a mobile chat interface displaying a conversation about AI image detection. The user has uploaded a photo showing two raccoons on a porch; one raccoon reaches inside a paper bag a bench while the other stands on the ground looking up at it. The conversation title reads &amp;quot;AI Image Creation Confirmed&amp;quot;. The user asks, &amp;quot;Was this image created with ai?&amp;quot; The AI response, labeled &amp;quot;Analysis &amp;amp; 1 more&amp;quot;, states: &amp;quot;Yes, it appears that all or part of this image was created with Google AI. SynthID detected a watermark in 25-50% of the image.&amp;quot;" style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;It replied:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Yes, it appears that all or part of this image was created with Google AI. SynthID detected a watermark in 25-50% of the image.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Presumably that 25-50% figure is because the rest of the photo was taken by me - it was just the raccoons that were added by Nano Banana Pro.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/text-to-image"&gt;text-to-image&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nano-banana"&gt;nano-banana&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="gemini"/><category term="ai"/><category term="llms"/><category term="text-to-image"/><category term="llm-release"/><category term="datasette"/><category term="google"/><category term="nano-banana"/><category term="generative-ai"/></entry><entry><title>How I automate my Substack newsletter with content from my blog</title><link href="https://simonwillison.net/2025/Nov/19/how-i-automate-my-substack-newsletter/#atom-tag" rel="alternate"/><published>2025-11-19T22:00:34+00:00</published><updated>2025-11-19T22:00:34+00:00</updated><id>https://simonwillison.net/2025/Nov/19/how-i-automate-my-substack-newsletter/#atom-tag</id><summary type="html">
    &lt;p&gt;I sent out &lt;a href="https://simonw.substack.com/p/trying-out-gemini-3-pro-with-audio"&gt;my weekly-ish Substack newsletter&lt;/a&gt; this morning and took the opportunity to record &lt;a href="https://www.youtube.com/watch?v=BoPZltKDM-s"&gt;a YouTube video&lt;/a&gt; demonstrating my process and describing the different components that make it work. There's a &lt;em&gt;lot&lt;/em&gt; of digital duct tape involved, taking the content from Django+Heroku+PostgreSQL to GitHub Actions to SQLite+Datasette+Fly.io to JavaScript+Observable and finally to Substack.&lt;/p&gt;

&lt;p&gt;&lt;lite-youtube videoid="BoPZltKDM-s" js-api="js-api"
  title="How I automate my Substack newsletter with content from my blog"
  playlabel="Play: How I automate my Substack newsletter with content from my blog"
&gt; &lt;/lite-youtube&gt;&lt;/p&gt;

&lt;p&gt;The core process is the same as I described &lt;a href="https://simonwillison.net/2023/Apr/4/substack-observable/"&gt;back in 2023&lt;/a&gt;. I have an Observable notebook called &lt;a href="https://observablehq.com/@simonw/blog-to-newsletter"&gt;blog-to-newsletter&lt;/a&gt; which fetches content from my blog's database, filters out anything that has been in the newsletter before, formats what's left as HTML and offers a big "Copy rich text newsletter to clipboard" button.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2025/copy-to-newsletter.jpg" alt="Screenshot of the interface. An item in a list says 9080: Trying out Gemini 3 Pro with audio transcription and a new pelican benchmark. A huge button reads Copy rich text newsletter to clipboard - below is a smaller button that says Copy just the links/quotes/TILs. A Last X days slider is set to 2. There are checkboxes for SKip content sent in prior newsletters and only include post content prior to the cutoff comment." style="max-width: 100%;" /&gt;&lt;/p&gt;
&lt;p&gt;I click that button, paste the result into the Substack editor, tweak a few things and hit send. The whole process usually takes just a few minutes.&lt;/p&gt;
&lt;p&gt;I make very minor edits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I set the title and the subheading for the newsletter. This is often a direct copy of the title of the featured blog post.&lt;/li&gt;
&lt;li&gt;Substack turns YouTube URLs into embeds, which often isn't what I want - especially if I have a YouTube URL inside a code example.&lt;/li&gt;
&lt;li&gt;Blocks of preformatted text often have an extra blank line at the end, which I remove.&lt;/li&gt;
&lt;li&gt;Occasionally I'll make a content edit - removing a piece of content that doesn't fit the newsletter, or fixing a time reference like "yesterday" that doesn't make sense any more.&lt;/li&gt;
&lt;li&gt;I pick the featured image for the newsletter and add some tags.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That's the whole process!&lt;/p&gt;
&lt;h4 id="the-observable-notebook"&gt;The Observable notebook&lt;/h4&gt;
&lt;p&gt;The most important cell in the Observable notebook is this one:&lt;/p&gt;
&lt;div class="highlight highlight-source-js"&gt;&lt;pre&gt;&lt;span class="pl-s1"&gt;raw_content&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-kos"&gt;{&lt;/span&gt;
  &lt;span class="pl-c1"&gt;return&lt;/span&gt; &lt;span class="pl-s1"&gt;await&lt;/span&gt; &lt;span class="pl-kos"&gt;(&lt;/span&gt;
    &lt;span class="pl-k"&gt;await&lt;/span&gt; &lt;span class="pl-en"&gt;fetch&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;
      &lt;span class="pl-s"&gt;`https://datasette.simonwillison.net/simonwillisonblog.json?sql=&lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;${&lt;/span&gt;&lt;span class="pl-en"&gt;encodeURIComponent&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;span class="pl-s1"&gt;        &lt;span class="pl-s1"&gt;sql&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;span class="pl-s1"&gt;      &lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt;&amp;amp;_shape=array&amp;amp;numdays=&lt;span class="pl-s1"&gt;&lt;span class="pl-kos"&gt;${&lt;/span&gt;&lt;span class="pl-s1"&gt;numDays&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/span&gt;`&lt;/span&gt;
    &lt;span class="pl-kos"&gt;)&lt;/span&gt;
  &lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;json&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This uses the JavaScript &lt;code&gt;fetch()&lt;/code&gt; function to pull data from my blog's Datasette instance, using a very complex SQL query that is composed elsewhere in the notebook.&lt;/p&gt;
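The same `?sql=...&_shape=array` pattern works from any HTTP client. A quick sketch in Python (the helper function and the simplified query are my own illustration, standing in for the notebook's much longer SQL):

```python
import urllib.parse

# Build a Datasette JSON API URL: /database.json?sql=<encoded query>&_shape=array
# returns the query results as a plain JSON array of row objects.
# Named :parameters in the SQL are supplied as extra query-string params.
def datasette_json_url(base_url, sql, **params):
    query = urllib.parse.urlencode({"sql": sql, "_shape": "array", **params})
    return f"{base_url}?{query}"

url = datasette_json_url(
    "https://datasette.simonwillison.net/simonwillisonblog.json",
    "select id, title, created from blog_entry "
    "where created >= date('now', '-' || :numdays || ' days') "
    "order by created desc",
    numdays=7,
)
print(url)
```

Fetching that URL (with `urllib.request` or `fetch()`) returns the rows ready for templating, which is all the notebook's `raw_content` cell is doing.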
&lt;p&gt;Here's a link to &lt;a href="https://datasette.simonwillison.net/simonwillisonblog?sql=with+content+as+%28%0D%0A++select%0D%0A++++id%2C%0D%0A++++%27entry%27+as+type%2C%0D%0A++++title%2C%0D%0A++++created%2C%0D%0A++++slug%2C%0D%0A++++%27%3Ch3%3E%3Ca+href%3D%22%27+%7C%7C+%27https%3A%2F%2Fsimonwillison.net%2F%27+%7C%7C+strftime%28%27%25Y%2F%27%2C+created%29%0D%0A++++++%7C%7C+substr%28%27JanFebMarAprMayJunJulAugSepOctNovDec%27%2C+%28strftime%28%27%25m%27%2C+created%29+-+1%29+*+3+%2B+1%2C+3%29+%0D%0A++++++%7C%7C+%27%2F%27+%7C%7C+cast%28strftime%28%27%25d%27%2C+created%29+as+integer%29+%7C%7C+%27%2F%27+%7C%7C+slug+%7C%7C+%27%2F%27+%7C%7C+%27%22%3E%27+%0D%0A++++++%7C%7C+title+%7C%7C+%27%3C%2Fa%3E+-+%27+%7C%7C+date%28created%29+%7C%7C+%27%3C%2Fh3%3E%27+%7C%7C+body%0D%0A++++++as+html%2C%0D%0A++++%27null%27+as+json%2C%0D%0A++++%27%27+as+external_url%0D%0A++from+blog_entry%0D%0A++union+all%0D%0A++select%0D%0A++++id%2C%0D%0A++++%27blogmark%27+as+type%2C%0D%0A++++link_title%2C%0D%0A++++created%2C%0D%0A++++slug%2C%0D%0A++++%27%3Cp%3E%3Cstrong%3ELink%3C%2Fstrong%3E+%27+%7C%7C+date%28created%29+%7C%7C+%27+%3Ca+href%3D%22%27%7C%7C+link_url+%7C%7C+%27%22%3E%27%0D%0A++++++%7C%7C+link_title+%7C%7C+%27%3C%2Fa%3E%3A%3C%2Fp%3E%3Cp%3E%27+%7C%7C+%27+%27+%7C%7C+replace%28commentary%2C+%27%0D%0A%27%2C+%27%3Cbr%3E%27%29+%7C%7C+%27%3C%2Fp%3E%27%0D%0A++++++as+html%2C%0D%0A++++json_object%28%0D%0A++++++%27created%27%2C+date%28created%29%2C%0D%0A++++++%27link_url%27%2C+link_url%2C%0D%0A++++++%27link_title%27%2C+link_title%2C%0D%0A++++++%27commentary%27%2C+commentary%2C%0D%0A++++++%27use_markdown%27%2C+use_markdown%0D%0A++++%29+as+json%2C%0D%0A++link_url+as+external_url%0D%0A++from+blog_blogmark%0D%0A++union+all%0D%0A++select%0D%0A++++id%2C%0D%0A++++%27quotation%27+as+type%2C%0D%0A++++source%2C%0D%0A++++created%2C%0D%0A++++slug%2C%0D%0A++++%27%3Cstrong%3Equote%3C%2Fstrong%3E+%27+%7C%7C+date%28created%29+%7C%7C%0D%0A++++%27%3Cblockquote%3E%3Cp%3E%3Cem%3E%27+%7C%7C%0D%0A++++replace%28quotation%2C+%27%0D%0A%27%2C+%27%3Cbr%3E%27%29+%7C%7C+%0D%0A++++%27%3C%2Fem%3E%3C%2Fp%3E%3C%2Fblockquote%3E%3Cp%3E%3Ca+href%3D%22%27+%7C%7C%0D%0A++++coalesce%28source_url%2C+%27%23%27%29+%7C%7C+%27%22%3E%27+%7C%7C+source+%7C%7C+%27%3C%2Fa%3E%27+%7C%7C%0D%0A++++case+%0D%0A++++++++when+nullif%28trim%28context%29%2C+%27%27%29+is+not+null+%0D%0A++++++++then+%27%2C+%27+%7C%7C+context+%0D%0A++++++++else+%27%27+%0D%0A++++end+%7C%7C%0D%0A++++%27%3C%2Fp%3E%27+as+html%2C%0D%0A++++%27null%27+as+json%2C%0D%0A++++source_url+as+external_url%0D%0A++from+blog_quotation%0D%0A++union+all%0D%0A++select%0D%0A++++id%2C%0D%0A++++%27note%27+as+type%2C%0D%0A++++case%0D%0A++++++when+title+is+not+null+and+title+%3C%3E+%27%27+then+title%0D%0A++++++else+%27Note+on+%27+%7C%7C+date%28created%29%0D%0A++++end%2C%0D%0A++++created%2C%0D%0A++++slug%2C%0D%0A++++%27No+HTML%27%2C%0D%0A++++json_object%28%0D%0A++++++%27created%27%2C+date%28created%29%2C%0D%0A++++++%27link_url%27%2C+%27https%3A%2F%2Fsimonwillison.net%2F%27+%7C%7C+strftime%28%27%25Y%2F%27%2C+created%29%0D%0A++++++%7C%7C+substr%28%27JanFebMarAprMayJunJulAugSepOctNovDec%27%2C+%28strftime%28%27%25m%27%2C+created%29+-+1%29+*+3+%2B+1%2C+3%29+%0D%0A++++++%7C%7C+%27%2F%27+%7C%7C+cast%28strftime%28%27%25d%27%2C+created%29+as+integer%29+%7C%7C+%27%2F%27+%7C%7C+slug+%7C%7C+%27%2F%27%2C%0D%0A++++++%27link_title%27%2C+%27%27%2C%0D%0A++++++%27commentary%27%2C+body%2C%0D%0A++++++%27use_markdown%27%2C+1%0D%0A++++%29%2C%0D%0A++++%27%27+as+external_url%0D%0A++from+blog_note%0D%0A++union+all%0D%0A++select%0D%0A++++rowid%2C%0D%0A++++%27til%27+as+type%2C%0D%0A++++title%2C%0D%0A++++created%2C%0D%0A++++%27null%27+as+slug%2C%0D%0A++++%27%3Cp%3E%3Cstrong%3ETIL%3C%2Fstrong%3E+%27+%7C%7C+date%28created%29+%7C%7C+%27+%3Ca+href%3D%22%27%7C%7C+%27https%3A%2F%2Ftil.simonwillison.net%2F%27+%7C%7C+topic+%7C%7C+%27%2F%27+%7C%7C+slug+%7C%7C+%27%22%3E%27+%7C%7C+title+%7C%7C+%27%3C%2Fa%3E%3A%27+%7C%7C+%27+%27+%7C%7C+substr%28html%2C+1%2C+instr%28html%2C+%27%3C%2Fp%3E%27%29+-+1%29+%7C%7C+%27+%26%238230%3B%3C%2Fp%3E%27+as+html%2C%0D%0A++++%27null%27+as+json%2C%0D%0A++++%27https%3A%2F%2Ftil.simonwillison.net%2F%27+%7C%7C+topic+%7C%7C+%27%2F%27+%7C%7C+slug+as+external_url%0D%0A++from+til%0D%0A%29%2C%0D%0Acollected+as+%28%0D%0A++select%0D%0A++++id%2C%0D%0A++++type%2C%0D%0A++++title%2C%0D%0A++++case%0D%0A++++++when+type+%3D+%27til%27%0D%0A++++++then+external_url%0D%0A++++++else+%27https%3A%2F%2Fsimonwillison.net%2F%27+%7C%7C+strftime%28%27%25Y%2F%27%2C+created%29%0D%0A++++++%7C%7C+substr%28%27JanFebMarAprMayJunJulAugSepOctNovDec%27%2C+%28strftime%28%27%25m%27%2C+created%29+-+1%29+*+3+%2B+1%2C+3%29+%7C%7C+%0D%0A++++++%27%2F%27+%7C%7C+cast%28strftime%28%27%25d%27%2C+created%29+as+integer%29+%7C%7C+%27%2F%27+%7C%7C+slug+%7C%7C+%27%2F%27%0D%0A++++++end+as+url%2C%0D%0A++++created%2C%0D%0A++++html%2C%0D%0A++++json%2C%0D%0A++++external_url%2C%0D%0A++++case%0D%0A++++++when+type+%3D+%27entry%27+then+%28%0D%0A++++++++select+json_group_array%28tag%29%0D%0A++++++++from+blog_tag%0D%0A++++++++join+blog_entry_tags+on+blog_tag.id+%3D+blog_entry_tags.tag_id%0D%0A++++++++where+blog_entry_tags.entry_id+%3D+content.id%0D%0A++++++%29%0D%0A++++++when+type+%3D+%27blogmark%27+then+%28%0D%0A++++++++select+json_group_array%28tag%29%0D%0A++++++++from+blog_tag%0D%0A++++++++join+blog_blogmark_tags+on+blog_tag.id+%3D+blog_blogmark_tags.tag_id%0D%0A++++++++where+blog_blogmark_tags.blogmark_id+%3D+content.id%0D%0A++++++%29%0D%0A++++++when+type+%3D+%27quotation%27+then+%28%0D%0A++++++++select+json_group_array%28tag%29%0D%0A++++++++from+blog_tag%0D%0A++++++++join+blog_quotation_tags+on+blog_tag.id+%3D+blog_quotation_tags.tag_id%0D%0A++++++++where+blog_quotation_tags.quotation_id+%3D+content.id%0D%0A++++++%29%0D%0A++++++else+%27%5B%5D%27%0D%0A++++end+as+tags%0D%0A++from+content%0D%0A++where+created+%3E%3D+date%28%27now%27%2C+%27-%27+%7C%7C+%3Anumdays+%7C%7C+%27+days%27%29+++%0D%0A++order+by+created+desc%0D%0A%29%0D%0Aselect+id%2C+type%2C+title%2C+url%2C+created%2C+html%2C+json%2C+external_url%2C+tags%0D%0Afrom+collected+%0D%0Aorder+by+%0D%0A++case+type+%0D%0A++++when+%27entry%27+then+0+%0D%0A++++else+1+%0D%0A++end%2C%0D%0A++case+type+%0D%0A++++when+%27entry%27+then+created+%0D%0A++++else+-strftime%28%27%25s%27%2C+created%29+%0D%0A++end+desc%3B&amp;amp;numdays=7"&gt;see and execute that query&lt;/a&gt; directly in Datasette. It's 143 lines of convoluted SQL that assembles most of the HTML for the newsletter using SQLite string concatenation! An illustrative snippet:&lt;/p&gt;
&lt;div class="highlight highlight-source-sql"&gt;&lt;pre&gt;with content &lt;span class="pl-k"&gt;as&lt;/span&gt; (
  &lt;span class="pl-k"&gt;select&lt;/span&gt;
    id,
    &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;entry&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;as&lt;/span&gt; type,
    title,
    created,
    slug,
    &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&amp;lt;h3&amp;gt;&amp;lt;a href="&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;https://simonwillison.net/&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; strftime(&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;%Y/&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;, created)
      &lt;span class="pl-k"&gt;||&lt;/span&gt; substr(&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;JanFebMarAprMayJunJulAugSepOctNovDec&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;, (strftime(&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;%m&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;, created) &lt;span class="pl-k"&gt;-&lt;/span&gt; &lt;span class="pl-c1"&gt;1&lt;/span&gt;) &lt;span class="pl-k"&gt;*&lt;/span&gt; &lt;span class="pl-c1"&gt;3&lt;/span&gt; &lt;span class="pl-k"&gt;+&lt;/span&gt; &lt;span class="pl-c1"&gt;1&lt;/span&gt;, &lt;span class="pl-c1"&gt;3&lt;/span&gt;) 
      &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;/&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; cast(strftime(&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;%d&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;, created) &lt;span class="pl-k"&gt;as&lt;/span&gt; &lt;span class="pl-k"&gt;integer&lt;/span&gt;) &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;/&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; slug &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;/&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;"&amp;gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; 
      &lt;span class="pl-k"&gt;||&lt;/span&gt; title &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&amp;lt;/a&amp;gt; - &lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-k"&gt;date&lt;/span&gt;(created) &lt;span class="pl-k"&gt;||&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&amp;lt;/h3&amp;gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;||&lt;/span&gt; body
      &lt;span class="pl-k"&gt;as&lt;/span&gt; html,
    &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;null&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;as&lt;/span&gt; json,
    &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;as&lt;/span&gt; external_url
  &lt;span class="pl-k"&gt;from&lt;/span&gt; blog_entry
  &lt;span class="pl-k"&gt;union all&lt;/span&gt;
  &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; ...&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;My blog's URLs look like &lt;code&gt;/2025/Nov/18/gemini-3/&lt;/code&gt; - this SQL constructs the three-letter month abbreviation from the month number using a substring operation.&lt;/p&gt;
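That substring trick is easy to try in isolation - a minimal sketch using Python's sqlite3 module (the sample date is mine, not from the blog's database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same trick as the query above: twelve month names packed into one
# string, sliced at the 1-based offset (month - 1) * 3 + 1.
(abbrev,) = conn.execute(
    """
    select substr('JanFebMarAprMayJunJulAugSepOctNovDec',
                  (strftime('%m', :d) - 1) * 3 + 1, 3)
    """,
    {"d": "2025-11-18"},
).fetchone()
print(abbrev)  # Nov
```

SQLite coerces the '11' returned by strftime to a number, so the offset works out to 31 and the slice picks out characters 31-33.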
&lt;p&gt;This is a &lt;em&gt;terrible&lt;/em&gt; way to assemble HTML, but I've stuck with it because it amuses me.&lt;/p&gt;
&lt;p&gt;The rest of the Observable notebook takes that data, filters out anything that links to content mentioned in the previous newsletters and composes it into a block of HTML that can be copied using that big button.&lt;/p&gt;
&lt;p&gt;Here's the recipe it uses to turn HTML into rich text content on a clipboard suitable for Substack. I can't remember how I figured this out, but it's very effective:&lt;/p&gt;
&lt;div class="highlight highlight-source-js"&gt;&lt;pre&gt;&lt;span class="pl-v"&gt;Object&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;assign&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;
  &lt;span class="pl-en"&gt;html&lt;/span&gt;&lt;span class="pl-s"&gt;`&lt;span class="pl-kos"&gt;&amp;lt;&lt;/span&gt;&lt;span class="pl-ent"&gt;button&lt;/span&gt; &lt;span class="pl-c1"&gt;style&lt;/span&gt;="&lt;span class="pl-s"&gt;font-size: 1.4em; padding: 0.3em 1em; font-weight: bold;&lt;/span&gt;"&lt;span class="pl-kos"&gt;&amp;gt;&lt;/span&gt;Copy rich text newsletter to clipboard`&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt;
  &lt;span class="pl-kos"&gt;{&lt;/span&gt;
    &lt;span class="pl-en"&gt;onclick&lt;/span&gt;: &lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt; &lt;span class="pl-c1"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="pl-kos"&gt;{&lt;/span&gt;
      &lt;span class="pl-k"&gt;const&lt;/span&gt; &lt;span class="pl-s1"&gt;htmlContent&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;newsletterHTML&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-c"&gt;// Create a temporary element to hold the HTML content&lt;/span&gt;
      &lt;span class="pl-k"&gt;const&lt;/span&gt; &lt;span class="pl-s1"&gt;tempElement&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-smi"&gt;document&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;createElement&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"div"&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-s1"&gt;tempElement&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-c1"&gt;innerHTML&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;htmlContent&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-smi"&gt;document&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-c1"&gt;body&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;appendChild&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s1"&gt;tempElement&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-c"&gt;// Select the HTML content&lt;/span&gt;
      &lt;span class="pl-k"&gt;const&lt;/span&gt; &lt;span class="pl-s1"&gt;range&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-smi"&gt;document&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;createRange&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-s1"&gt;range&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;selectNode&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s1"&gt;tempElement&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-c"&gt;// Copy the selected HTML content to the clipboard&lt;/span&gt;
      &lt;span class="pl-k"&gt;const&lt;/span&gt; &lt;span class="pl-s1"&gt;selection&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-smi"&gt;window&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;getSelection&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-s1"&gt;selection&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;removeAllRanges&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-s1"&gt;selection&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;addRange&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s1"&gt;range&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-smi"&gt;document&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;execCommand&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"copy"&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-s1"&gt;selection&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;removeAllRanges&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
      &lt;span class="pl-smi"&gt;document&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-c1"&gt;body&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;removeChild&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s1"&gt;tempElement&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
    &lt;span class="pl-kos"&gt;}&lt;/span&gt;
  &lt;span class="pl-kos"&gt;}&lt;/span&gt;
&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="from-django-postgresql-to-datasette-sqlite"&gt;From Django+Postgresql to Datasette+SQLite&lt;/h4&gt;
&lt;p&gt;My blog itself is a Django application hosted on Heroku, with data stored in Heroku PostgreSQL. Here's &lt;a href="https://github.com/simonw/simonwillisonblog"&gt;the source code for that Django application&lt;/a&gt;. I use the Django admin as my CMS.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://datasette.io/"&gt;Datasette&lt;/a&gt; provides a JSON API over a SQLite database... which means something needs to convert that PostgreSQL database into a SQLite database that Datasette can use.&lt;/p&gt;
&lt;p&gt;My system for doing that lives in the &lt;a href="https://github.com/simonw/simonwillisonblog-backup"&gt;simonw/simonwillisonblog-backup&lt;/a&gt; GitHub repository. It uses a scheduled GitHub Actions workflow that runs every two hours, fetching the latest data from PostgreSQL and converting it to SQLite.&lt;/p&gt;
&lt;p&gt;My &lt;a href="https://github.com/simonw/db-to-sqlite"&gt;db-to-sqlite&lt;/a&gt; tool is responsible for that conversion. I call it &lt;a href="https://github.com/simonw/simonwillisonblog-backup/blob/dc5b9df272134ce051a5280b4de6d4daa9b2a9fc/.github/workflows/backup.yml#L44-L62"&gt;like this&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;db-to-sqlite \
  &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;$(&lt;/span&gt;heroku config:get DATABASE_URL -a simonwillisonblog &lt;span class="pl-k"&gt;|&lt;/span&gt; sed s/postgres:/postgresql+psycopg2:/&lt;span class="pl-pds"&gt;)&lt;/span&gt;&lt;/span&gt; \
  simonwillisonblog.db \
  --table auth_permission \
  --table auth_user \
  --table blog_blogmark \
  --table blog_blogmark_tags \
  --table blog_entry \
  --table blog_entry_tags \
  --table blog_quotation \
  --table blog_quotation_tags \
  --table blog_note \
  --table blog_note_tags \
  --table blog_tag \
  --table blog_previoustagname \
  --table blog_series \
  --table django_content_type \
  --table redirects_redirect&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;That &lt;code&gt;heroku config:get DATABASE_URL&lt;/code&gt; command uses Heroku credentials from an environment variable to fetch the connection URL for my blog's PostgreSQL database, and the &lt;code&gt;sed&lt;/code&gt; call fixes a small difference in the URL scheme.&lt;/p&gt;
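The scheme rewrite exists because db-to-sqlite takes a SQLAlchemy-style connection string, which spells the driver out as postgresql+psycopg2: where Heroku returns a postgres: prefix. A hypothetical Python equivalent of that sed call (the function name is mine):

```python
def fix_scheme(database_url):
    # Rewrite only the leading scheme, like the sed call above,
    # leaving the rest of the Heroku URL untouched.
    if database_url.startswith("postgres:"):
        return database_url.replace("postgres:", "postgresql+psycopg2:", 1)
    return database_url

print(fix_scheme("postgres://user:pw@host:5432/simonwillisonblog"))
# postgresql+psycopg2://user:pw@host:5432/simonwillisonblog
```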
&lt;p&gt;&lt;code&gt;db-to-sqlite&lt;/code&gt; can then export that data and write it to a SQLite database file called &lt;code&gt;simonwillisonblog.db&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--table&lt;/code&gt; options specify the tables that should be included in the export.&lt;/p&gt;
&lt;p&gt;The repository does more than just that conversion: it also exports the resulting data to JSON files that live in the repository, which gives me a &lt;a href="https://github.com/simonw/simonwillisonblog-backup/commits/main/simonwillisonblog"&gt;commit history&lt;/a&gt; of changes I make to my content. This is a cheap way to get a revision history of my blog content without having to mess around with detailed history tracking inside the Django application itself.&lt;/p&gt;
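The real export scripts live in that repository; here is a minimal sketch of the underlying idea - dumping every table in a SQLite file to one pretty-printed, diff-friendly JSON file per table (the helper name and file layout are my own invention, not the repo's):

```python
import json
import sqlite3
from pathlib import Path

def export_tables_to_json(db_path, out_dir):
    # Hypothetical helper: one JSON file per table, so that git diffs
    # between backup runs show row-level changes to the content.
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    tables = [r[0] for r in conn.execute(
        "select name from sqlite_master where type = 'table' order by name")]
    for table in tables:
        rows = [dict(r) for r in conn.execute(f'select * from "{table}"')]
        (out / f"{table}.json").write_text(
            json.dumps(rows, indent=2, default=str) + "\n")
    conn.close()
    return tables
```

Committing those files after each run is what turns a plain backup into a revision history.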
&lt;p&gt;At the &lt;a href="https://github.com/simonw/simonwillisonblog-backup/blob/dc5b9df272134ce051a5280b4de6d4daa9b2a9fc/.github/workflows/backup.yml#L200-L204"&gt;end of my GitHub Actions workflow&lt;/a&gt; is this code that publishes the resulting database to Datasette running on &lt;a href="https://fly.io/"&gt;Fly.io&lt;/a&gt; using the &lt;a href="https://datasette.io/plugins/datasette-publish-fly"&gt;datasette publish fly&lt;/a&gt; plugin:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;datasette publish fly simonwillisonblog.db \
  -m metadata.yml \
  --app simonwillisonblog-backup \
  --branch 1.0a2 \
  --extra-options &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;--setting sql_time_limit_ms 15000 --setting truncate_cells_html 10000 --setting allow_facet off&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; \
  --install datasette-block-robots \
  &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; ... more plugins&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;As you can see, there are a lot of moving parts! Surprisingly, it all mostly just works - I rarely have to intervene in the process, and the cost of those different components is pleasantly low.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/newsletter"&gt;newsletter&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/youtube"&gt;youtube&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/blogging"&gt;blogging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/observable"&gt;observable&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sql"&gt;sql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/fly"&gt;fly&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/github-actions"&gt;github-actions&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/heroku"&gt;heroku&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/postgresql"&gt;postgresql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/substack"&gt;substack&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="newsletter"/><category term="youtube"/><category term="datasette"/><category term="blogging"/><category term="observable"/><category term="javascript"/><category term="sql"/><category term="sqlite"/><category term="fly"/><category term="github-actions"/><category term="heroku"/><category term="postgresql"/><category term="django"/><category term="substack"/></entry><entry><title>datasette-demo-for-llm-accountant 0.1a0</title><link href="https://simonwillison.net/2025/Nov/18/datasette-demo-for-llm-accountant/#atom-tag" rel="alternate"/><published>2025-11-18T01:27:10+00:00</published><updated>2025-11-18T01:27:10+00:00</updated><id>https://simonwillison.net/2025/Nov/18/datasette-demo-for-llm-accountant/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-demo-for-llm-accountant/releases/tag/0.1a0"&gt;datasette-demo-for-llm-accountant 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Demo app for datasette-llm-accountant&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>datasette-llm-accountant 0.1a0</title><link href="https://simonwillison.net/2025/Nov/18/datasette-llm-accountant/#atom-tag" rel="alternate"/><published>2025-11-18T01:23:03+00:00</published><updated>2025-11-18T01:23:03+00:00</updated><id>https://simonwillison.net/2025/Nov/18/datasette-llm-accountant/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm-accountant/releases/tag/0.1a0"&gt;datasette-llm-accountant 0.1a0&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;LLM accounting for Datasette&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry></feed>