Example dashboard

Various statistics from my blog.

Owned by simonw, visibility: Public

Entries

3290

SQL query
select 'Entries' as label, count(*) as big_number from blog_entry

Blogmarks

8341

SQL query
select 'Blogmarks' as label, count(*) as big_number from blog_blogmark

Quotations

1374

SQL query
select 'Quotations' as label, count(*) as big_number from blog_quotation

Chart of number of entries per month over time

SQL query
select '<h2>Chart of number of entries per month over time</h2>' as html
SQL query
select to_char(date_trunc('month', created), 'YYYY-MM') as bar_label,
count(*) as bar_quantity from blog_entry group by bar_label order by bar_label
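
`to_char()` and `date_trunc()` are PostgreSQL functions, so this dashboard evidently queries a PostgreSQL database. Against a SQLite-backed Datasette the same monthly rollup would use `strftime()` instead; a minimal sketch using an in-memory stand-in for the `blog_entry` table:

```python
import sqlite3

# Hypothetical stand-in for the blog_entry table, just to show the
# SQLite equivalent of the PostgreSQL chart query.
conn = sqlite3.connect(":memory:")
conn.execute("create table blog_entry (created text)")
conn.executemany(
    "insert into blog_entry (created) values (?)",
    [
        ("2026-04-01T10:00:00",),
        ("2026-04-15T17:13:14",),
        ("2026-03-02T09:30:00",),
    ],
)

# strftime('%Y-%m', ...) plays the role of
# to_char(date_trunc('month', created), 'YYYY-MM')
rows = conn.execute(
    """
    select strftime('%Y-%m', created) as bar_label,
           count(*) as bar_quantity
    from blog_entry
    group by bar_label
    order by bar_label
    """
).fetchall()
print(rows)  # [('2026-03', 1), ('2026-04', 2)]
```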

Ten most recent blogmarks (of 8341 total)

SQL query
select '## Ten most recent blogmarks (of ' || count(*) || ' total)' as markdown from blog_blogmark
SQL query
select link_title, link_url, commentary, created from blog_blogmark order by created desc limit 10

10 rows

link_title link_url commentary created
Gemini 3.1 Flash TTS https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-tts/ Google released Gemini 3.1 Flash TTS today, a new text-to-speech model that can be directed using prompts. It's presented via the standard Gemini API using `gemini-3.1-flash-tts-preview` as the model ID, but can only output audio files. The [prompting guide](https://ai.google.dev/gemini-api/docs/speech-generation#transcript-tags) is surprising, to say the least. Here's their example prompt to generate just a few short sentences of audio:

    # AUDIO PROFILE: Jaz R.
    ## "The Morning Hype"

    ## THE SCENE: The London Studio
    It is 10:00 PM in a glass-walled studio overlooking the moonlit London skyline, but inside, it is blindingly bright. The red "ON AIR" tally light is blazing. Jaz is standing up, not sitting, bouncing on the balls of their heels to the rhythm of a thumping backing track. Their hands fly across the faders on a massive mixing desk. It is a chaotic, caffeine-fueled cockpit designed to wake up an entire nation.

    ### DIRECTOR'S NOTES
    Style:
    * The "Vocal Smile": You must hear the grin in the audio. The soft palate is always raised to keep the tone bright, sunny, and explicitly inviting.
    * Dynamics: High projection without shouting. Punchy consonants and elongated vowels on excitement words (e.g., "Beauuutiful morning").

    Pace: Speaks at an energetic pace, keeping up with the fast music. Speaks with a "bouncing" cadence. High-speed delivery with fluid transitions — no dead air, no gaps.

    Accent: Jaz is from Brixton, London

    ### SAMPLE CONTEXT
    Jaz is the industry standard for Top 40 radio, high-octane event promos, or any script that requires a charismatic Estuary accent and 11/10 infectious energy.

    #### TRANSCRIPT
    [excitedly] Yes, massive vibes in the studio! You are locked in and it is absolutely popping off in London right now. If you're stuck on the tube, or just sat there pretending to work... stop it. Seriously, I see you. [shouting] Turn this up! We've got the project roadmap landing in three, two... let's go!

Here's what I got using that example prompt:

<audio controls style="width: 100%">
 <source src="https://static.simonwillison.net/static/2026/gemini-flash-tts-london.wav" type="audio/wav">
 Your browser does not support the audio element.
</audio>

Then I modified it to say "Jaz is from Newcastle" and "... requires a charismatic Newcastle accent" and got this result:

<audio controls style="width: 100%">
 <source src="https://static.simonwillison.net/static/2026/gemini-flash-tts-newcastle.wav" type="audio/wav">
 Your browser does not support the audio element.
</audio>

Here's Exeter, Devon for good measure:

<audio controls style="width: 100%">
 <source src="https://static.simonwillison.net/static/2026/gemini-flash-tts-devon.wav" type="audio/wav">
 Your browser does not support the audio element.
</audio>

I [had Gemini 3.1 Pro](https://gemini.google.com/share/dd0fba5a83c4) vibe code [this UI for trying it out](https://tools.simonwillison.net/gemini-flash-tts):

![Screenshot of a "Gemini 3.1 Flash TTS" web application interface. At the top is an "API Key" field with a masked password. Below is a "TTS Mode" section with a dropdown set to "Multi-Speaker (Conversation)". "Speaker 1 Name" is set to "Joe" with "Speaker 1 Voice" set to "Puck (Upbeat)". "Speaker 2 Name" is set to "Jane" with "Speaker 2 Voice" set to "Kore (Firm)". Under "Script / Prompt" is a tip reading "Tip: Format your text as a script using the Exact Speaker Names defined above." The script text area contains "TTS the following conversation between Joe and Jane:\n\nJoe: How's it going today Jane?\nJane: \[yawn\] Not too bad, how about you?" A blue "Generate Audio" button is below. At the bottom is a "Success!" message with an audio player showing 00:00 / 00:06 and a "Download WAV" link.](https://static.simonwillison.net/static/2026/gemini-flash-tts.jpg)

2026-04-15 17:13:14+00:00
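
If this model follows the pattern of Google's earlier speech generation endpoints, the API returns raw 16-bit PCM audio (typically 24 kHz mono) rather than a finished file, and you wrap it in a WAV container yourself. A sketch of that wrapping step with Python's stdlib `wave` module - the sample rate is my assumption and the sample data here is synthetic silence, not real model output:

```python
import io
import wave

def pcm_to_wav(pcm_bytes: bytes, rate: int = 24000, channels: int = 1) -> bytes:
    """Wrap raw 16-bit little-endian PCM samples in a WAV container."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(channels)
        wav.setsampwidth(2)  # 16-bit samples = 2 bytes each
        wav.setframerate(rate)
        wav.writeframes(pcm_bytes)
    return buf.getvalue()

# Synthetic stand-in for model output: 0.1 seconds of silence
silence = b"\x00\x00" * 2400
wav_bytes = pcm_to_wav(silence)
assert wav_bytes[:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
```

The resulting bytes can be written straight to a `.wav` file or served with a `audio/wav` content type.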
Zig 0.16.0 release notes: "Juicy Main" https://ziglang.org/download/0.16.0/release-notes.html#Juicy-Main Zig has *really good* release notes - comprehensive, detailed, and with relevant usage examples for each of the new features. Of particular note in the newly released Zig 0.16.0 is what they are calling "Juicy Main" - a dependency injection feature for your program's `main()` function where accepting a `process.Init` parameter grants access to a struct of useful properties:

```zig
const std = @import("std");

pub fn main(init: std.process.Init) !void {
    // general purpose allocator for temporary heap allocations:
    const gpa = init.gpa;
    // default Io implementation:
    const io = init.io;
    // access to environment variables:
    std.log.info("{d} env vars", .{init.environ_map.count()});
    // access to CLI arguments
    const args = try init.minimal.args.toSlice(
        init.arena.allocator()
    );
}
```

2026-04-15 01:59:21+00:00
datasette PR #2689: Replace token-based CSRF with Sec-Fetch-Site header protection https://github.com/simonw/datasette/pull/2689 Datasette has long protected against CSRF attacks using CSRF tokens, implemented using my [asgi-csrf](https://github.com/simonw/asgi-csrf) Python library. These are something of a pain to work with - you need to scatter forms in templates with `<input type="hidden" name="csrftoken" value="{{ csrftoken() }}">` lines and then selectively disable CSRF protection for APIs that are intended to be called from outside the browser. I've been following Filippo Valsorda's research here with interest, described in [this detailed essay from August 2025](https://words.filippo.io/csrf/) and shipped [as part of Go 1.25](https://tip.golang.org/doc/go1.25#nethttppkgnethttp) that same month. I've now landed the same change in Datasette. Here's the PR description - Claude Code did much of the work (across 10 commits, closely guided by me and cross-reviewed by GPT-5.4) but I've decided to start writing these PR descriptions by hand, partly to make them more concise and also as an exercise in keeping myself honest. > - New CSRF protection middleware inspired by Go 1.25 and [this research](https://words.filippo.io/csrf/) by Filippo Valsorda. This replaces the old CSRF token based protection. > - Removes all instances of `<input type="hidden" name="csrftoken" value="{{ csrftoken() }}">` in the templates - they are no longer needed. > - Removes the `def skip_csrf(datasette, scope):` plugin hook defined in `datasette/hookspecs.py` and its documentation and tests. > - Updated [CSRF protection documentation](https://docs.datasette.io/en/latest/internals.html#csrf-protection) to describe the new approach. > - Upgrade guide now [describes the CSRF change](https://docs.datasette.io/en/latest/upgrade_guide.html#csrf-protection-is-now-header-based). 2026-04-14 23:58:53+00:00
Trusted access for the next era of cyber defense https://openai.com/index/scaling-trusted-access-for-cyber-defense/ OpenAI's answer to [Claude Mythos](https://simonwillison.net/2026/Apr/7/project-glasswing/) appears to be a new model called GPT-5.4-Cyber: > In preparation for increasingly more capable models from OpenAI over the next few months, we are fine-tuning our models specifically to enable defensive cybersecurity use cases, starting today with a variant of GPT‑5.4 trained to be cyber-permissive: GPT‑5.4‑Cyber. They're also extending a program they launched in February (which I had missed) called [Trusted Access for Cyber](https://openai.com/index/trusted-access-for-cyber/), where users can verify their identity (via a photo of a government-issued ID processed by [Persona](https://withpersona.com/)) to gain "reduced friction" access to OpenAI's models for cybersecurity work. Honestly, this OpenAI announcement is difficult to follow. Unsurprisingly they don't mention Anthropic at all, but much of the piece emphasizes their many years of existing cybersecurity work and their goal to "democratize access" to these tools, hence the emphasis on that self-service verification flow from February. If you want access to their best security tools you still need to go through an extra Google Form application process though, which doesn't feel particularly different to me from Anthropic's [Project Glasswing](https://www.anthropic.com/glasswing). 2026-04-14 21:23:59+00:00
Cybersecurity Looks Like Proof of Work Now https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html The UK's AI Safety Institute recently published [Our evaluation of Claude Mythos Preview’s cyber capabilities](https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities), their own independent analysis of [Claude Mythos](https://simonwillison.net/2026/Apr/7/project-glasswing/) which backs up Anthropic's claims that it is exceptionally effective at identifying security vulnerabilities. Drew Breunig notes that AISI's report shows that the more tokens (and hence money) they spent the better the result they got, which leads to a strong economic incentive to spend as much as possible on security reviews: > If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: **to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them**. An interesting result of this is that open source libraries become *more* valuable, since the tokens spent securing them can be shared across all of their users. This directly counters the idea that the low cost of vibe-coding up a replacement for an open source library makes those open source projects less attractive. 2026-04-14 19:41:48+00:00
SQLite 3.53.0 https://sqlite.org/releaselog/3_53_0.html SQLite 3.52.0 was withdrawn so this is a pretty big release with a whole lot of accumulated user-facing and internal improvements. Some that stood out to me: - `ALTER TABLE` can now add and remove `NOT NULL` and `CHECK` constraints - I've previously used my own [sqlite-utils transform() method](https://sqlite-utils.datasette.io/en/stable/python-api.html#changing-not-null-status) for this. - New [json_array_insert() function](https://sqlite.org/json1.html#jarrayins) and its `jsonb` equivalent. - Significant improvements to [CLI mode](https://sqlite.org/climode.html), including result formatting. The result formatting improvements come from a new library, the [Query Results Formatter](https://sqlite.org/src/file/ext/qrf). I [had Claude Code](https://github.com/simonw/tools/pull/266) (on my phone) compile that to WebAssembly and build [this playground interface](https://tools.simonwillison.net/sqlite-qrf) for trying that out. 2026-04-11 19:56:53+00:00
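
The `ALTER TABLE` improvement is welcome because, prior to 3.53, adding a `NOT NULL` constraint meant rebuilding the table - which is what `sqlite-utils`' `transform()` automates. A minimal sketch of that rebuild pattern, using a hypothetical `entries` table and the stdlib `sqlite3` module (which works on any SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table entries (id integer primary key, title text)")
conn.execute("insert into entries (title) values ('First post')")

# Pre-3.53 there is no ALTER TABLE ... to add NOT NULL, so rebuild:
# create a new table with the constraint, copy the data, drop the old
# table, rename the new one into place.
with conn:
    conn.execute(
        "create table entries_new "
        "(id integer primary key, title text not null)"
    )
    conn.execute("insert into entries_new select id, title from entries")
    conn.execute("drop table entries")
    conn.execute("alter table entries_new rename to entries")

# The constraint is now enforced:
try:
    conn.execute("insert into entries (title) values (null)")
except sqlite3.IntegrityError:
    print("NOT NULL enforced")
```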
GLM-5.1: Towards Long-Horizon Tasks https://z.ai/blog/glm-5.1 Chinese AI lab Z.ai's latest model is a giant 754B parameter 1.51TB (on [Hugging Face](https://huggingface.co/zai-org/GLM-5.1)) MIT-licensed monster - the same size as their previous GLM-5 release, and sharing the [same paper](https://huggingface.co/papers/2602.15763). It's available [via OpenRouter](https://openrouter.ai/z-ai/glm-5.1) so I asked it to draw me a pelican:

    llm install llm-openrouter
    llm -m openrouter/z-ai/glm-5.1 'Generate an SVG of a pelican on a bicycle'

And something new happened... unprompted, the model [decided to give me](https://gist.github.com/simonw/af7170f54256cc007ef28a8721564be8) an HTML page that included both the SVG and a separate set of CSS animations! The SVG was excellent, and might be my new favorite from an open weights model:

![The bicycle is red and has a frame the correct shape and wheels with spokes. The pelican is a perky little fella.](https://static.simonwillison.net/static/2026/glm-5.1-pelican.png)

But the animation [broke it](https://gisthost.github.io/?73bb6808b18c2482f66e5f082c75f36e):

![Animation - the wheels and pedals rotate, the clouds move... and the pelican has vanished, but there is a little blob bobbing up and down in the top left corner.](https://static.simonwillison.net/static/2026/glm-5.1-broken-light-lossy.gif)

That's the pelican, floating up in the top left corner. I usually don't do follow-up prompts for the pelican test, but in this case I made an exception:

    llm -c 'the animation is a bit broken, the pelican ends up positioned off the screen at the top right'

GLM 5.1 replied:

> The issue is that CSS `transform` animations on SVG elements override the SVG `transform` attribute used for positioning, causing the pelican to lose its placement and fly off to the top-right. The fix is to separate positioning (SVG attribute) from animation (inner group) and use `<animateTransform>` for SVG rotations since it handles coordinate systems correctly.
And spat out [fresh HTML](https://static.simonwillison.net/static/2026/glm-5.1-pelican-fixed.html) which fixed the problem!

![Now everything is right - the bicycle rotates correctly, the pelican sits on it and bobs up and down, and its lower beak moves slightly as well.](https://static.simonwillison.net/static/2026/glm-5.1-pelican-fixed-medium-lossy.gif)

I particularly like the animation of the beak, which is described in the SVG comments like so:

```xml
<!-- Pouch (lower beak) with wobble -->
<g>
  <path d="M42,-58 Q43,-50 48,-42 Q55,-35 62,-38 Q70,-42 75,-60 L42,-58 Z"
        fill="url(#pouchGrad)" stroke="#b06008" stroke-width="1" opacity="0.9"/>
  <path d="M48,-50 Q55,-46 60,-52"
        fill="none" stroke="#c06a08" stroke-width="0.8" opacity="0.6"/>
  <animateTransform attributeName="transform" type="scale"
        values="1,1; 1.03,0.97; 1,1" dur="0.75s"
        repeatCount="indefinite" additive="sum"/>
</g>
```

**Update**: On Bluesky [@charles.capps.me suggested](https://bsky.app/profile/charles.capps.me/post/3miwrn42mjc2t) a "NORTH VIRGINIA OPOSSUM ON AN E-SCOOTER" and...

![This is so great. It's dark, the possum is clearly a possum, it's riding an escooter, lovely animation, tail bobbing up and down, caption says NORTH VIRGINIA OPOSSUM, CRUISING THE COMMONWEALTH SINCE DUSK - only glitch is that it occasionally blinks and the eyes fall off the face](https://static.simonwillison.net/static/2026/glm-possum-escooter.gif.gif)

The HTML+SVG comments on that one include `/* Earring sparkle */`, `<!-- Opossum fur gradient -->`, `<!-- Distant treeline silhouette - Virginia pines -->`, `<!-- Front paw on handlebar -->` - here's [the transcript](https://gist.github.com/simonw/1864b89f5304eba03c3ded4697e156c4) and the [HTML result](https://static.simonwillison.net/static/2026/glm-possum-escooter.html). 2026-04-07 21:25:14+00:00
Google AI Edge Gallery https://apps.apple.com/nl/app/google-ai-edge-gallery/id6749645337 Terrible name, really great app: this is Google's official app for running their Gemma 4 models (the E2B and E4B sizes, plus some members of the Gemma 3 family) directly on your iPhone. It works *really* well. The E2B model is a 2.54GB download and is both fast and genuinely useful. The app also provides "ask questions about images" and audio transcription (up to 30s) with the two small Gemma 4 models, and has an interesting "skills" demo which demonstrates tool calling against eight different interactive widgets, each implemented as an HTML page (though sadly the source code is not visible): interactive-map, kitchen-adventure, calculate-hash, text-spinner, mood-tracker, mnemonic-password, query-wikipedia, and qr-code. <img src="https://static.simonwillison.net/static/2026/gemini-agent-skills.jpg" alt="Screenshot of an &quot;Agent Skills&quot; chat interface using the Gemma-4-E2B-it model. The user prompt reads &quot;Show me the Castro Theatre on a map.&quot; The model response, labeled &quot;Model on GPU,&quot; shows it &quot;Called JS skill &#39;interactive-map/index.html&#39;&quot; and displays an embedded Google Map centered on a red pin at The Castro Theatre in San Francisco, with nearby landmarks visible including Starbelly, Cliff&#39;s Variety, Blind Butcher, GLBT Historical Society Museum, and Fable. An &quot;Open in Maps&quot; link and &quot;View in full screen&quot; button are shown. Below the map, the model states &quot;The interactive map view for the Castro Theatre has been shown.&quot; with a response time of 2.4 s. A text input field with &quot;Type prompt...&quot; placeholder, a &quot;+&quot; button, and a &quot;Skills&quot; button appear at the bottom." style="max-width: min(400px, 100%); margin: 0 auto; display: block;"> (That demo did freeze the app when I tried to add a follow-up prompt though.) 
This is the first time I've seen a local model vendor release an official app for trying out their models on an iPhone. Sadly it's missing permanent logs - conversations with this app are ephemeral. 2026-04-06 05:18:26+00:00
Eight years of wanting, three months of building with AI https://lalitm.com/post/building-syntaqlite-ai/ Lalit Maganti provides one of my favorite pieces of long-form writing on agentic engineering I've seen in ages. They spent eight years thinking about and then three months building [syntaqlite](https://github.com/lalitMaganti/syntaqlite), which they describe as "[high-fidelity devtools that SQLite deserves](https://lalitm.com/post/syntaqlite/)". The goal was to provide fast, robust and comprehensive linting and verifying tools for SQLite, suitable for use in language servers and other development tools - a parser, formatter, and verifier for SQLite queries. I've found myself wanting this kind of thing in the past myself, hence my (far less production-ready) [sqlite-ast](https://simonwillison.net/2026/Jan/30/sqlite-ast-2/) project from a few months ago. Lalit had been procrastinating on this project for years, because of the inevitable tedium of needing to work through 400+ grammar rules to help build a parser. That's exactly the kind of tedious work that coding agents excel at! Claude Code helped get over that initial hump and build the first prototype: > AI basically let me put aside all my doubts on technical calls, my uncertainty of building the right thing and my reluctance to get started by giving me very concrete problems to work on. Instead of “I need to understand how SQLite’s parsing works”, it was “I need to get AI to suggest an approach for me so I can tear it up and build something better". I work so much better with concrete prototypes to play with and code to look at than endlessly thinking about designs in my head, and AI lets me get to that point at a pace I could not have dreamed about before. Once I took the first step, every step after that was so much easier. That first vibe-coded prototype worked great as a proof of concept, but they eventually made the decision to throw it away and start again from scratch. 
AI worked great for the low level details but did not produce a coherent high-level architecture: > I found that AI made me procrastinate on key design decisions. Because refactoring was cheap, I could always say “I’ll deal with this later.” And because AI could refactor at the same industrial scale it generated code, the cost of deferring felt low. But it wasn’t: deferring decisions corroded my ability to think clearly because the codebase stayed confusing in the meantime. The second attempt took a lot longer and involved a great deal more human-in-the-loop decision making, but the result is a robust library that can stand the test of time. It's worth setting aside some time to read this whole thing - it's full of non-obvious downsides to working heavily with AI, as well as a detailed explanation of how they overcame those hurdles. The key idea I took away from this concerns AI's weakness in terms of design and architecture: > When I was working on something where I didn’t even know what I wanted, AI was somewhere between unhelpful and harmful. The architecture of the project was the clearest case: I spent weeks in the early days following AI down dead ends, exploring designs that felt productive in the moment but collapsed under scrutiny. In hindsight, I have to wonder if it would have been faster just thinking it through without AI in the loop at all. > > But expertise alone isn’t enough. Even when I understood a problem deeply, AI still struggled if the task had no objectively checkable answer. Implementation has a right answer, at least at a local level: the code compiles, the tests pass, the output matches what you asked for. Design doesn’t. We’re still arguing about OOP decades after it first took off. 2026-04-05 23:54:18+00:00
A visual guide to Gemma 4 https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-gemma-4 Maarten Grootendorst joined Google DeepMind two months ago and has been working on the Gemma 4 release. 2026-04-04 16:08:19+00:00