<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: Notes</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/atom/notes/" rel="self"/><id>http://simonwillison.net/</id><updated>2026-04-13T20:59:00+00:00</updated><author><name>Simon Willison</name></author><entry><title>Steve Yegge</title><link href="https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-notes" rel="alternate"/><published>2026-04-13T20:59:00+00:00</published><updated>2026-04-13T20:59:00+00:00</updated><id>https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-notes</id><summary type="html">&lt;p&gt;&lt;a href="https://twitter.com/steve_yegge/status/2043747998740689171"&gt;Steve Yegge&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I was chatting with my buddy at Google, who's been a tech director there for about 20 years, about their AI adoption. Craziest convo I've had all year.&lt;/p&gt;
&lt;p&gt;The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too. [...]&lt;/p&gt;
&lt;p&gt;There has been an industry-wide hiring freeze for 18+ months, during which time nobody has been moving jobs. So there are no clued-in people coming in from the outside to tell Google how far behind they are, how utterly mediocre they have become as an eng org.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://twitter.com/addyosmani/status/2043812343508021460"&gt;Addy Osmani&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;On behalf of @Google, this post doesn't match the state of agentic coding at our company. Over 40K SWEs use agentic coding weekly here. Googlers have access to our own versions of @antigravity, @geminicli, custom models, skills, CLIs and MCPs for our daily work. Orchestrators, agent loops, virtual SWE teams and many other systems are actively available to folks. [...]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href="https://twitter.com/demishassabis/status/2043867486320222333"&gt;Demis Hassabis&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Maybe tell your buddy to do some actual work and to stop spreading absolute nonsense. This post is completely false and just pure clickbait.&lt;/p&gt;
&lt;/blockquote&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/addy-osmani"&gt;addy-osmani&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/steve-yegge"&gt;steve-yegge&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="addy-osmani"/><category term="steve-yegge"/><category term="google"/><category term="generative-ai"/><category term="agentic-engineering"/><category term="ai"/><category term="llms"/></entry><entry><title>Gemma 4 audio with MLX</title><link href="https://simonwillison.net/2026/Apr/12/mlx-audio/#atom-notes" rel="alternate"/><published>2026-04-12T23:57:53+00:00</published><updated>2026-04-12T23:57:53+00:00</updated><id>https://simonwillison.net/2026/Apr/12/mlx-audio/#atom-notes</id><summary type="html">&lt;p&gt;Thanks to a &lt;a href="https://twitter.com/RahimNathwani/status/2039961945613209852"&gt;tip from Rahim Nathwani&lt;/a&gt;, here's a &lt;code&gt;uv run&lt;/code&gt; recipe for transcribing an audio file on macOS using the 10.28 GB &lt;a href="https://huggingface.co/google/gemma-4-E2B"&gt;Gemma 4 E2B model&lt;/a&gt; with MLX and &lt;a href="https://github.com/Blaizzy/mlx-vlm"&gt;mlx-vlm&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uv run --python 3.13 --with mlx_vlm --with torchvision --with gradio \
  mlx_vlm.generate \
  --model google/gemma-4-e2b-it \
  --audio file.wav \
  --prompt "Transcribe this audio" \
  --max-tokens 500 \
  --temperature 1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;audio controls style="width: 100%"&gt;
  &lt;source src="https://static.simonwillison.net/static/2026/demo-audio-for-gemma.wav" type="audio/wav"&gt;
  Your browser does not support the audio element.
&lt;/audio&gt;&lt;/p&gt;
&lt;p&gt;I tried it on &lt;a href="https://static.simonwillison.net/static/2026/demo-audio-for-gemma.wav"&gt;this 14 second &lt;code&gt;.wav&lt;/code&gt; file&lt;/a&gt; and it output the following:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This front here is a quick voice memo. I want to try it out with MLX VLM. Just going to see if it can be transcribed by Gemma and how that works.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;(That was supposed to be "This right here..." and "... how well that works", but I can hear why it misinterpreted those as "front" and "how that works".)&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mlx"&gt;mlx&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemma"&gt;gemma&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/speech-to-text"&gt;speech-to-text&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;&lt;/p&gt;

</summary><category term="uv"/><category term="mlx"/><category term="ai"/><category term="gemma"/><category term="llms"/><category term="speech-to-text"/><category term="python"/><category term="generative-ai"/></entry><entry><title>Kākāpō parrots</title><link href="https://simonwillison.net/2026/Apr/10/kakapo/#atom-notes" rel="alternate"/><published>2026-04-10T19:07:02+00:00</published><updated>2026-04-10T19:07:02+00:00</updated><id>https://simonwillison.net/2026/Apr/10/kakapo/#atom-notes</id><summary type="html">&lt;p&gt;Lenny &lt;a href="https://twitter.com/lennysan/status/2042615413494939943"&gt;posted&lt;/a&gt; another snippet from &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;our 1 hour 40 minute podcast recording&lt;/a&gt; and it's about kākāpō parrots!&lt;/p&gt;
&lt;p&gt;&lt;video
  src="https://static.simonwillison.net/static/2026/kakapo-lenny.mp4"
  poster="https://static.simonwillison.net/static/2026/kakapo-lenny.jpg"
  controls
  preload="none"
  playsinline
  style="display:block; max-width:400px; width:100%; height:auto; margin:0 auto"
&gt;&lt;track src="https://static.simonwillison.net/static/cors-allow/2026/kakapo-lenny.vtt" kind="captions" srclang="en" label="English"&gt;&lt;/video&gt;
&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/kakapo"&gt;kakapo&lt;/a&gt;&lt;/p&gt;

</summary><category term="kakapo"/></entry><entry><title>ChatGPT voice mode is a weaker model</title><link href="https://simonwillison.net/2026/Apr/10/voice-mode-is-weaker/#atom-notes" rel="alternate"/><published>2026-04-10T15:56:02+00:00</published><updated>2026-04-10T15:56:02+00:00</updated><id>https://simonwillison.net/2026/Apr/10/voice-mode-is-weaker/#atom-notes</id><summary type="html">&lt;p&gt;I think it's non-obvious to many people that the OpenAI voice mode runs on a much older, much weaker model - it feels like the AI that you can talk to should be the smartest AI but it really isn't.&lt;/p&gt;
&lt;p&gt;If you ask ChatGPT voice mode for its knowledge cutoff date it tells you April 2024 - it's a GPT-4o era model.&lt;/p&gt;
&lt;p&gt;This thought was inspired by &lt;a href="https://twitter.com/karpathy/status/2042334451611693415"&gt;this Andrej Karpathy tweet&lt;/a&gt; about the growing gap in understanding of AI capability based on the access points and domains people are using the models with:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;[...] It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and &lt;em&gt;at the same time&lt;/em&gt;, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems.&lt;/p&gt;
&lt;p&gt;This part really works and has made dramatic strides because 2 properties:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;these domains offer explicit reward functions that are verifiable meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge),  but also&lt;/li&gt;
&lt;li&gt;they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/andrej-karpathy"&gt;andrej-karpathy&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/chatgpt"&gt;chatgpt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="andrej-karpathy"/><category term="generative-ai"/><category term="openai"/><category term="chatgpt"/><category term="ai"/><category term="llms"/></entry><entry><title>The cognitive impact of coding agents</title><link href="https://simonwillison.net/2026/Apr/3/cognitive-cost/#atom-notes" rel="alternate"/><published>2026-04-03T23:57:04+00:00</published><updated>2026-04-03T23:57:04+00:00</updated><id>https://simonwillison.net/2026/Apr/3/cognitive-cost/#atom-notes</id><summary type="html">&lt;p&gt;A fun thing about &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;recording a podcast&lt;/a&gt; with a professional like Lenny Rachitsky is that his team know how to slice the resulting video up into TikTok-sized short form vertical videos. Here's &lt;a href="https://x.com/lennysan/status/2039845666680176703"&gt;one he shared on Twitter today&lt;/a&gt; which ended up attracting over 1.1m views!&lt;/p&gt;
&lt;p&gt;&lt;video
  src="https://static.simonwillison.net/static/2026/cognitive-cost.mp4"
  poster="https://static.simonwillison.net/static/2026/cognitive-cost-poster.jpg"
  controls
  preload="none"
  playsinline
  style="display:block; max-width:400px; width:100%; height:auto; margin:0 auto"
&gt;&lt;track src="https://static.simonwillison.net/static/2026/cognitive-cost.vtt" kind="captions" srclang="en" label="English"&gt;&lt;/video&gt;
&lt;/p&gt;
&lt;p&gt;That was 48 seconds. Our &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;full conversation&lt;/a&gt; lasted 1 hour 40 minutes.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/podcast-appearances"&gt;podcast-appearances&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-ethics"/><category term="coding-agents"/><category term="agentic-engineering"/><category term="generative-ai"/><category term="podcast-appearances"/><category term="ai"/><category term="llms"/><category term="cognitive-debt"/></entry><entry><title>March 2026 sponsors-only newsletter</title><link href="https://simonwillison.net/2026/Apr/2/march-newsletter/#atom-notes" rel="alternate"/><published>2026-04-02T05:15:04+00:00</published><updated>2026-04-02T05:15:04+00:00</updated><id>https://simonwillison.net/2026/Apr/2/march-newsletter/#atom-notes</id><summary type="html">&lt;p&gt;I just sent the March edition of my &lt;a href="https://github.com/sponsors/simonw/"&gt;sponsors-only monthly newsletter&lt;/a&gt;. If you are a sponsor (or if you start a sponsorship now) you can &lt;a href="https://github.com/simonw-private/monthly/blob/main/2026-03-march.md"&gt;access it here&lt;/a&gt;. In this month's newsletter:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;More agentic engineering patterns&lt;/li&gt;
&lt;li&gt;Streaming experts with MoE models on a Mac&lt;/li&gt;
&lt;li&gt;Model releases in March&lt;/li&gt;
&lt;li&gt;Vibe porting&lt;/li&gt;
&lt;li&gt;Supply chain attacks against PyPI and NPM&lt;/li&gt;
&lt;li&gt;Stuff I shipped&lt;/li&gt;
&lt;li&gt;What I'm using, March 2026 edition&lt;/li&gt;
&lt;li&gt;And a couple of museums&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/8b5fa061937842659dbcd5bd676ce0e8"&gt;a copy of the February newsletter&lt;/a&gt; as a preview of what you'll get. Pay $10/month to stay a month ahead of the free copy!&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/newsletter"&gt;newsletter&lt;/a&gt;&lt;/p&gt;

</summary><category term="newsletter"/></entry><entry><title>Streaming experts</title><link href="https://simonwillison.net/2026/Mar/24/streaming-experts/#atom-notes" rel="alternate"/><published>2026-03-24T05:09:03+00:00</published><updated>2026-03-24T05:09:03+00:00</updated><id>https://simonwillison.net/2026/Mar/24/streaming-experts/#atom-notes</id><summary type="html">&lt;p&gt;I wrote about Dan Woods' experiments with &lt;strong&gt;streaming experts&lt;/strong&gt; &lt;a href="https://simonwillison.net/2026/Mar/18/llm-in-a-flash/"&gt;the other day&lt;/a&gt;, the trick where you run larger Mixture-of-Experts models on hardware that doesn't have enough RAM to fit the entire model by instead streaming the necessary expert weights from SSD for each token that you process.&lt;/p&gt;
&lt;p&gt;Five days ago Dan was running Qwen3.5-397B-A17B in 48GB of RAM. Today &lt;a href="https://twitter.com/seikixtc/status/2036246162936910322"&gt;@seikixtc reported&lt;/a&gt; running the colossal Kimi K2.5 - a 1 trillion parameter model with 32B active weights at any one time - in 96GB of RAM on an M2 Max MacBook Pro.&lt;/p&gt;
&lt;p&gt;And &lt;a href="https://twitter.com/anemll/status/2035901335984611412"&gt;@anemll showed&lt;/a&gt; that same Qwen3.5-397B-A17B model running on an iPhone, albeit at just 0.6 tokens/second - &lt;a href="https://github.com/Anemll/flash-moe/tree/iOS-App"&gt;iOS repo here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I think this technique has legs. Dan and his fellow tinkerers are continuing to run &lt;a href="https://simonwillison.net/tags/autoresearch/"&gt;autoresearch loops&lt;/a&gt; in order to find yet more optimizations to squeeze more performance out of these models.&lt;/p&gt;
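&lt;p&gt;For a sense of what the trick looks like in code, here's a deliberately tiny Python sketch - the shapes, file layout and routing here are all invented for illustration, and bear no relation to the real MLX implementations these folks are using:&lt;/p&gt;

```python
# Toy sketch of "streaming experts": keep only the router in RAM and
# pull each token's top-k expert weights from disk on demand.
# All shapes, file names and the routing scheme are illustrative only.
import os
import tempfile
import numpy as np

D, N_EXPERTS, TOP_K = 8, 16, 2

# Write each expert's weight matrix to its own file, standing in for
# the expert shards of a Mixture-of-Experts checkpoint on SSD.
tmp = tempfile.mkdtemp()
rng = np.random.default_rng(0)
for e in range(N_EXPERTS):
    rng.standard_normal((D, D), dtype=np.float32).tofile(
        os.path.join(tmp, f"expert-{e}.bin"))

router = rng.standard_normal((D, N_EXPERTS), dtype=np.float32)

def moe_step(x):
    """One token's MoE layer: route, then stream only the chosen experts."""
    scores = x @ router
    chosen = np.argsort(scores)[-TOP_K:]      # top-k expert ids
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts
    out = np.zeros(D, dtype=np.float32)
    for w, e in zip(weights, chosen):
        # np.memmap reads the expert's weights lazily from disk; only
        # the bytes this token actually touches get paged into RAM.
        W = np.memmap(os.path.join(tmp, f"expert-{e}.bin"),
                      dtype=np.float32, mode="r", shape=(D, D))
        out += w * (x @ W)
    return out

y = moe_step(rng.standard_normal(D, dtype=np.float32))
print(y.shape)  # (8,)
```

&lt;p&gt;The key idea is that &lt;code&gt;np.memmap&lt;/code&gt; only pages in the weights for the experts a token actually routes to, so peak RAM tracks the active parameter count rather than the total.&lt;/p&gt;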
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: Now Daniel Isaac &lt;a href="https://twitter.com/danpacary/status/2036480556045836603"&gt;got Kimi K2.5 working&lt;/a&gt; on a 128GB M4 Max at ~1.7 tokens/second.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/definitions"&gt;definitions&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/autoresearch"&gt;autoresearch&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/kimi"&gt;kimi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/qwen"&gt;qwen&lt;/a&gt;&lt;/p&gt;

</summary><category term="definitions"/><category term="llms"/><category term="ai"/><category term="autoresearch"/><category term="generative-ai"/><category term="kimi"/><category term="local-llms"/><category term="qwen"/></entry><entry><title>Beats now have notes</title><link href="https://simonwillison.net/2026/Mar/23/beats-now-have-notes/#atom-notes" rel="alternate"/><published>2026-03-23T02:13:13+00:00</published><updated>2026-03-23T02:13:13+00:00</updated><id>https://simonwillison.net/2026/Mar/23/beats-now-have-notes/#atom-notes</id><summary type="html">&lt;p&gt;Last month I &lt;a href="https://simonwillison.net/2026/Feb/20/beats/"&gt;added a feature I call beats&lt;/a&gt; to this blog, pulling in some of my other content from &lt;a href="https://simonwillison.net/elsewhere/"&gt;external sources&lt;/a&gt; and including it on the homepage, search and various archive pages on the site.&lt;/p&gt;
&lt;p&gt;On any given day these frequently outnumber my regular posts. They were looking a little bit thin and were lacking any form of explanation beyond a link, so I've added the ability to annotate them with a "note" which now shows up as part of their display.&lt;/p&gt;
&lt;p&gt;Here's what that looks like &lt;a href="https://simonwillison.net/2026/Mar/22/"&gt;for the content I published yesterday&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img class="blogmark-image" style="width:80%" src="https://static.simonwillison.net/static/2026/beats-notes.jpg" alt="Screenshot of part of my blog homepage showing four &amp;quot;beats&amp;quot; entries from March 22, 2026, each tagged as RESEARCH or TOOL, with titles like &amp;quot;PCGamer Article Performance Audit&amp;quot; and &amp;quot;DNS Lookup&amp;quot;, now annotated with short descriptive notes explaining the context behind each linked item."&gt;&lt;/p&gt;
&lt;p&gt;I've also updated the &lt;a href="https://simonwillison.net/atom/everything/"&gt;/atom/everything/&lt;/a&gt; Atom feed to include any beats that I've attached notes to.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/atom"&gt;atom&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/blogging"&gt;blogging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/site-upgrades"&gt;site-upgrades&lt;/a&gt;&lt;/p&gt;

</summary><category term="atom"/><category term="blogging"/><category term="site-upgrades"/></entry><entry><title>February sponsors-only newsletter</title><link href="https://simonwillison.net/2026/Mar/2/february-newsletter/#atom-notes" rel="alternate"/><published>2026-03-02T14:53:15+00:00</published><updated>2026-03-02T14:53:15+00:00</updated><id>https://simonwillison.net/2026/Mar/2/february-newsletter/#atom-notes</id><summary type="html">&lt;p&gt;I just sent the February edition of my &lt;a href="https://github.com/sponsors/simonw/"&gt;sponsors-only monthly newsletter&lt;/a&gt;. If you are a sponsor (or if you start a sponsorship now) you can &lt;a href="https://github.com/simonw-private/monthly/blob/main/2026-02-february.md"&gt;access it here&lt;/a&gt;. In this month's newsletter:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;More OpenClaw, and Claws in general&lt;/li&gt;
&lt;li&gt;I started a not-quite-a-book about Agentic Engineering&lt;/li&gt;
&lt;li&gt;StrongDM, Showboat and Rodney&lt;/li&gt;
&lt;li&gt;Kākāpō breeding season&lt;/li&gt;
&lt;li&gt;Model releases&lt;/li&gt;
&lt;li&gt;What I'm using, February 2026 edition&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/36f567d1b3f8bb4ab4d872d477fbb295"&gt;a copy of the January newsletter&lt;/a&gt; as a preview of what you'll get. Pay $10/month to stay a month ahead of the free copy!&lt;/p&gt;
&lt;p&gt;I use Claude as a proofreader for spelling and grammar via &lt;a href="https://simonwillison.net/guides/agentic-engineering-patterns/prompts/#proofreader"&gt;this prompt&lt;/a&gt; which also asks it to "Spot any logical errors or factual mistakes". I'm delighted to report that Claude Opus 4.6 called me out on this one:&lt;/p&gt;
&lt;p&gt;&lt;img alt="5. &amp;quot;No new chicks for four years (due to a lack of fruiting rimu trees)&amp;quot;
The phrasing &amp;quot;lack of fruiting rimu trees&amp;quot; is slightly imprecise. The issue isn't that rimu trees failed to fruit at all, but that there was no mass fruiting (masting) event, which is the specific trigger for kākāpō breeding. Consider &amp;quot;due to a lack of rimu masting&amp;quot; or &amp;quot;due to a lack of mass rimu fruiting.&amp;quot;" src="https://static.simonwillison.net/static/2026/claude-fact-check.jpg" /&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/newsletter"&gt;newsletter&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/kakapo"&gt;kakapo&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude"&gt;claude&lt;/a&gt;&lt;/p&gt;

</summary><category term="newsletter"/><category term="kakapo"/><category term="claude"/></entry><entry><title>My current policy on AI writing for my blog</title><link href="https://simonwillison.net/2026/Mar/1/ai-writing/#atom-notes" rel="alternate"/><published>2026-03-01T16:06:43+00:00</published><updated>2026-03-01T16:06:43+00:00</updated><id>https://simonwillison.net/2026/Mar/1/ai-writing/#atom-notes</id><summary type="html">&lt;p&gt;Because I write about LLMs (and maybe because of my &lt;a href="https://simonwillison.net/2026/Feb/15/em-dashes/"&gt;em dash text replacement code&lt;/a&gt;) a lot of people assume that the writing on my blog is partially or fully created by those LLMs.&lt;/p&gt;
&lt;p&gt;My current policy on this is that if text expresses opinions or has "I" pronouns attached to it then it's written by me. I don't let LLMs speak for me in this way.&lt;/p&gt;
&lt;p&gt;I'll let an LLM update code documentation or even write a README for my project but I'll edit that to ensure it doesn't express opinions or say things like "This is designed to help make code easier to maintain" - because that's an expression of a rationale that the LLM just made up.&lt;/p&gt;
&lt;p&gt;I use LLMs to proofread text I publish on my blog. I just shared &lt;a href="https://simonwillison.net/guides/agentic-engineering-patterns/prompts/#proofreader"&gt;my current prompt for that here&lt;/a&gt;.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/writing"&gt;writing&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/blogging"&gt;blogging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-ethics"/><category term="writing"/><category term="generative-ai"/><category term="blogging"/><category term="ai"/><category term="llms"/></entry><entry><title>Reply guy</title><link href="https://simonwillison.net/2026/Feb/23/reply-guy/#atom-notes" rel="alternate"/><published>2026-02-23T13:11:57+00:00</published><updated>2026-02-23T13:11:57+00:00</updated><id>https://simonwillison.net/2026/Feb/23/reply-guy/#atom-notes</id><summary type="html">&lt;p&gt;The latest scourge of Twitter is AI bots that reply to your tweets with generic, banal commentary slop, often accompanied by a question to "drive engagement" and waste as much of your time as possible.&lt;/p&gt;
&lt;p&gt;I just &lt;a href="https://twitter.com/simonw/status/2025918174894673986"&gt;found out&lt;/a&gt; that the category name for this genre of software is &lt;strong&gt;reply guy&lt;/strong&gt; tools. Amazing.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/twitter"&gt;twitter&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/slop"&gt;slop&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/definitions"&gt;definitions&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-ethics"/><category term="twitter"/><category term="slop"/><category term="generative-ai"/><category term="definitions"/><category term="ai"/><category term="llms"/></entry><entry><title>Recovering lost code</title><link href="https://simonwillison.net/2026/Feb/19/recovering-lost-code/#atom-notes" rel="alternate"/><published>2026-02-19T23:48:35+00:00</published><updated>2026-02-19T23:48:35+00:00</updated><id>https://simonwillison.net/2026/Feb/19/recovering-lost-code/#atom-notes</id><summary type="html">&lt;p&gt;Reached the stage of parallel agent psychosis where I've lost a whole feature - I know I had it yesterday, but I can't seem to find the branch or worktree or cloud instance or checkout with it in.&lt;/p&gt;
&lt;p&gt;... found it! Turns out I'd been hacking on a random prototype in &lt;code&gt;/tmp&lt;/code&gt; and then my computer crashed and rebooted and I lost the code... but it's all still there in &lt;code&gt;~/.claude/projects/&lt;/code&gt; session logs and Claude Code can extract it out and spin up the missing feature again.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/parallel-agents"&gt;parallel-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-code"&gt;claude-code&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="parallel-agents"/><category term="coding-agents"/><category term="claude-code"/><category term="generative-ai"/><category term="ai"/><category term="llms"/></entry><entry><title>Experimenting with sponsorship for my blog and newsletter</title><link href="https://simonwillison.net/2026/Feb/19/sponsorship/#atom-notes" rel="alternate"/><published>2026-02-19T05:44:29+00:00</published><updated>2026-02-19T05:44:29+00:00</updated><id>https://simonwillison.net/2026/Feb/19/sponsorship/#atom-notes</id><summary type="html">&lt;p&gt;I've long been resistant to the idea of accepting sponsorship for my blog. I value my credibility as an independent voice, and I don't want to risk compromising that reputation.&lt;/p&gt;
&lt;p&gt;Then I learned about Troy Hunt's &lt;a href="https://www.troyhunt.com/sponsorship/"&gt;approach to sponsorship&lt;/a&gt;, which he first wrote about &lt;a href="https://www.troyhunt.com/im-now-offering-sponsorship-of-this-blog/"&gt;in 2016&lt;/a&gt;. Troy runs with a simple text row in the page banner - no JavaScript, no cookies, unobtrusive while providing value to the sponsor. I can live with that!&lt;/p&gt;
&lt;p&gt;Accepting sponsorship in this way helps me maintain my independence while offsetting the opportunity cost of not taking a full-time job.&lt;/p&gt;
&lt;p&gt;To start with I'm selling sponsorship by the week. Sponsors get that unobtrusive banner across my blog and also their sponsored message at the top of &lt;a href="https://simonw.substack.com/"&gt;my newsletter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Screenshot of my blog's homepage. Below the Simon Willison's Weblog heading and list of tags is a new blue page-wide banner reading &amp;quot;Sponsored by: Teleport - Secure, Govern, and Operate Al at Engineering Scale. Learn more&amp;quot;." src="https://static.simonwillison.net/static/2026/sponsor-banner.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I &lt;strong&gt;will not write content in exchange for sponsorship&lt;/strong&gt;. I hope the sponsors I work with understand that my credibility as an independent voice is a key reason I have an audience, and compromising that trust would be bad for everyone.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.freemanandforrest.com/"&gt;Freeman &amp;amp; Forrest&lt;/a&gt; helped me set up and sell my first slots. Thanks also to &lt;a href="https://t3.gg/"&gt;Theo Browne&lt;/a&gt; for helping me think through my approach.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/newsletter"&gt;newsletter&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/blogging"&gt;blogging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/troy-hunt"&gt;troy-hunt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/site-upgrades"&gt;site-upgrades&lt;/a&gt;&lt;/p&gt;

</summary><category term="newsletter"/><category term="blogging"/><category term="troy-hunt"/><category term="site-upgrades"/></entry><entry><title>Typing without having to type</title><link href="https://simonwillison.net/2026/Feb/18/typing/#atom-notes" rel="alternate"/><published>2026-02-18T18:56:56+00:00</published><updated>2026-02-18T18:56:56+00:00</updated><id>https://simonwillison.net/2026/Feb/18/typing/#atom-notes</id><summary type="html">&lt;p&gt;25+ years into my career as a programmer I think I may &lt;em&gt;finally&lt;/em&gt; be coming around to preferring type hints or even strong typing. I resisted those in the past because they slowed down the rate at which I could iterate on code, especially in the REPL environments that were key to my productivity. But if a coding agent is doing all that &lt;em&gt;typing&lt;/em&gt; for me, the benefits of explicitly defining all of those types are suddenly much more attractive.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/programming"&gt;programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/programming-languages"&gt;programming-languages&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/static-typing"&gt;static-typing&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-assisted-programming"/><category term="programming"/><category term="programming-languages"/><category term="static-typing"/></entry><entry><title>Nano Banana Pro diff to webcomic</title><link href="https://simonwillison.net/2026/Feb/17/release-notes-webcomic/#atom-notes" rel="alternate"/><published>2026-02-17T04:51:58+00:00</published><updated>2026-02-17T04:51:58+00:00</updated><id>https://simonwillison.net/2026/Feb/17/release-notes-webcomic/#atom-notes</id><summary type="html">&lt;p&gt;Given the threat of &lt;a href="https://simonwillison.net/tags/cognitive-debt/"&gt;cognitive debt&lt;/a&gt; brought on by AI-accelerated software development leading to more projects and less deep understanding of how they work and what they actually do, it's interesting to consider artifacts that might be able to help.&lt;/p&gt;
&lt;p&gt;Nathan Baschez &lt;a href="https://twitter.com/nbaschez/status/2023501535343509871"&gt;on Twitter&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;my current favorite trick for reducing "cognitive debt" (h/t @simonw) is to ask the LLM to write two versions of the plan:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The version for it (highly technical and detailed)&lt;/li&gt;
&lt;li&gt;The version for me (an entertaining essay designed to build my intuition)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Works great&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This inspired me to try something new. I generated &lt;a href="https://github.com/simonw/showboat/compare/v0.5.0...v0.6.0.diff"&gt;the diff&lt;/a&gt; between v0.5.0 and v0.6.0 of my Showboat project - which introduced &lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#showboat-remote-publishing"&gt;the remote publishing feature&lt;/a&gt; - and dumped that into Nano Banana Pro with the prompt:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Create a webcomic that explains the new feature as clearly and entertainingly as possible&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's &lt;a href="https://gemini.google.com/share/cce6da8e5083"&gt;what it produced&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt="A six-panel comic strip illustrating a tool called &amp;quot;Showboat&amp;quot; for live-streaming document building. Panel 1, titled &amp;quot;THE OLD WAY: Building docs was a lonely voyage. You finished it all before anyone saw it.&amp;quot;, shows a sad bearded man on a wooden boat labeled &amp;quot;THE LOCALHOST&amp;quot; holding papers and saying &amp;quot;Almost done... then I have to export and email the HTML...&amp;quot;. Panel 2, titled &amp;quot;THE UPGRADE: Just set the environment variable!&amp;quot;, shows the same man excitedly plugging in a device with a speech bubble reading &amp;quot;ENV VAR: SHOWBOAT_REMOTE_URL&amp;quot; and the sound effect &amp;quot;*KA-CHUNK!*&amp;quot;. Panel 3, titled &amp;quot;init establishes the uplink and generates a unique UUID beacon.&amp;quot;, shows the man typing at a keyboard with a terminal reading &amp;quot;$ showboat init 'Live Demo'&amp;quot;, a satellite dish transmitting to a floating label &amp;quot;UUID: 550e84...&amp;quot;, and a monitor reading &amp;quot;WAITING FOR STREAM...&amp;quot;. Panel 4, titled &amp;quot;Every note and exec is instantly beamed to the remote viewer!&amp;quot;, shows the man coding with sound effects &amp;quot;*HAMMER!*&amp;quot;, &amp;quot;ZAP!&amp;quot;, &amp;quot;ZAP!&amp;quot;, &amp;quot;BANG!&amp;quot; as red laser beams shoot from a satellite dish to a remote screen displaying &amp;quot;NOTE: Step 1...&amp;quot; and &amp;quot;SUCCESS&amp;quot;. Panel 5, titled &amp;quot;Even image files are teleported in real-time!&amp;quot;, shows a satellite dish firing a cyan beam with the sound effect &amp;quot;*FOOMP!*&amp;quot; toward a monitor displaying a bar chart. Panel 6, titled &amp;quot;You just build. 
The audience gets the show live.&amp;quot;, shows the man happily working at his boat while a crowd of cheering people watches a projected screen reading &amp;quot;SHOWBOAT LIVE STREAM: Live Demo&amp;quot;, with a label &amp;quot;UUID: 550e84...&amp;quot; and one person in the foreground eating popcorn." src="https://static.simonwillison.net/static/2026/nano-banana-diff.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Good enough to publish with the release notes? I don't think so. I'm sharing it here purely to demonstrate the idea. Creating assets like this as a personal tool for thinking about novel ways to explain a feature feels worth exploring further.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/nano-banana"&gt;nano-banana&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/text-to-image"&gt;text-to-image&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/showboat"&gt;showboat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;&lt;/p&gt;

</summary><category term="nano-banana"/><category term="gemini"/><category term="llms"/><category term="cognitive-debt"/><category term="generative-ai"/><category term="ai"/><category term="text-to-image"/><category term="showboat"/><category term="ai-assisted-programming"/></entry></feed>