| Singing the gospel of collective efficacy |
https://interconnected.org/home/2026/01/30/efficacy |
Lovely piece from Matt Webb about how you can "just do things" to help make your community better for everyone:
> Similarly we all love when the swifts visit (beautiful birds), so somebody started a group to get swift nest boxes made and installed collectively, then applied for subsidy funding, then got everyone to chip in such that people who couldn’t afford it could have their boxes paid for, and now suddenly we’re all writing to MPs and following the legislation to include swift nesting sites in new build houses. Etc.
>
> It’s called *collective efficacy*, the belief that you can make a difference by acting together.
My current favorite "you can just do things" is a bit of a stretch, but apparently you can just build a successful software company for 20 years and then use the proceeds to [start a theater in Baltimore](https://bmoreart.com/2024/09/the-voxel-is-a-cutting-edge-theater-experiment.html) (for "research") and give the space away to artists for free. |
2026-01-31 01:22:15+00:00 |
| We gotta talk about AI as a programming tool for the arts |
https://www.tiktok.com/@chris_ashworth/video/7600801037292768525 |
Chris Ashworth is the creator and CEO of [QLab](https://en.wikipedia.org/wiki/QLab), a macOS software package for “cue-based, multimedia playback” which is designed to automate lighting and audio for live theater productions.
I recently started following him on TikTok where he posts about his business and theater automation in general - Chris founded [the Voxel](https://voxel.org/faq/) theater in Baltimore, which QLab uses as a combined performance venue, teaching hub and research lab (here's [a profile of the theater](https://bmoreart.com/2024/09/the-voxel-is-a-cutting-edge-theater-experiment.html)) - and the resulting videos offer a fascinating glimpse into a world I know virtually nothing about.
[This latest TikTok](https://www.tiktok.com/@chris_ashworth/video/7600801037292768525) describes his Claude Opus moment: he used Claude Code to build a custom lighting design application for a *very* niche project, putting together a useful tool in just a few days that he would never have been able to spare the time for otherwise.
Chris works full time in the arts and comes at generative AI from a position of rational distrust. It's interesting to see him working through that tension and acknowledging that there are valuable applications here for building tools for the community he serves.
> I have been at least gently skeptical about all this stuff for the last two years. Every time I checked in on it, I thought it was garbage, wasn't interested in it, wasn't useful. [...] But as a programmer, if you hear something like, this is changing programming, it's important to go check it out once in a while. So I went and checked it out a few weeks ago. And it's different. It's astonishing. [...]
>
> One thing I learned in this exercise is that it can't make you a fundamentally better programmer than you already are. It can take a person who is a bad programmer and make them faster at making bad programs. And I think it can take a person who is a good programmer and, from what I've tested so far, make them faster at making good programs. [...] You see programmers out there saying, "I'm shipping code I haven't looked at and don't understand." I'm terrified by that. I think that's awful. But if you're capable of understanding the code that it's writing, and directing, designing, editing, deleting, being quality control on it, it's kind of astonishing. [...]
>
> The positive thing I see here, and I think is worth coming to terms with, is this is an application that I would never have had time to write as a professional programmer. Because the audience is three people. [...] There's no way it was worth it to me to spend my energy of 20 years designing and implementing software for artists to build an app for three people that is this level of polish. And it took me a few days. [...]
>
> I know there are a lot of people who really hate this technology, and in some ways I'm among them. But I think we've got to come to terms with this is a career-changing moment. And I really hate that I'm saying that because I didn't believe it for the last two years. [...] It's like having a room full of power tools. I wouldn't want to send an untrained person into a room full of power tools because they might chop off their fingers. But if someone who knows how to use tools has the option to have both hand tools and a power saw and a power drill and a lathe, there's a lot of work they can do with those tools at a lot faster speed. |
2026-01-30 03:51:53+00:00 |
| How “95%” escaped into the world – and why so many believed it |
https://www.exponentialview.co/p/how-95-escaped-into-the-world |
. |
2026-01-29 20:18:55+00:00 |
| Datasette 1.0a24 |
https://docs.datasette.io/en/latest/changelog.html#a24-2026-01-29 |
New Datasette alpha this morning. Key new features:
- Datasette's `Request` object can now handle `multipart/form-data` file uploads via the new [await request.form(files=True)](https://docs.datasette.io/en/latest/internals.html#internals-formdata) method. I plan to use this for a `datasette-files` plugin to support attaching files to rows of data.
- The [recommended development environment](https://docs.datasette.io/en/latest/contributing.html#setting-up-a-development-environment) for hacking on Datasette itself now uses [uv](https://github.com/astral-sh/uv). Crucially, you can clone Datasette and run `uv run pytest` to run the tests without needing to manually create a virtual environment or install dependencies first, thanks to the [dev dependency group pattern](https://til.simonwillison.net/uv/dependency-groups).
- A new `?_extra=render_cell` parameter for both table and row JSON pages to return the results of executing the [render_cell() plugin hook](https://docs.datasette.io/en/latest/plugin_hooks.html#render-cell-row-value-column-table-database-datasette-request). This should unlock new JavaScript UI features in the future - see the sketch after this list.
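Here's a minimal TypeScript sketch of what consuming that last item from a JavaScript UI might look like. The table URL is made up and I haven't verified the exact shape of the JSON that comes back, so treat this as a sketch rather than documented usage:

    // Hypothetical table URL; the render_cell extra is requested alongside
    // the normal row data. The precise response shape isn't covered in the
    // release notes, so this just logs whatever the API returns.
    async function fetchRenderedCells(): Promise<void> {
      const response = await fetch("/mydb/mytable.json?_extra=render_cell");
      if (!response.ok) {
        throw new Error(`Datasette returned ${response.status}`);
      }
      console.log(await response.json());
    }

    fetchRenderedCells().catch(console.error);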
More details [in the release notes](https://docs.datasette.io/en/latest/changelog.html#a24-2026-01-29). I also invested a bunch of work in eliminating flaky tests that were intermittently failing in CI - I *think* those are all handled now. |
2026-01-29 17:21:51+00:00 |
| The Five Levels: from Spicy Autocomplete to the Dark Factory |
https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/ |
Dan Shapiro proposes a five-level model of AI-assisted programming, inspired by the five (or rather six, it's zero-indexed) [levels of driving automation](https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-05/Level-of-Automation-052522-tag.pdf).
<ol start="0">
<li><strong>Spicy autocomplete</strong>, aka original GitHub Copilot or copying and pasting snippets from ChatGPT.</li>
<li>The <strong>coding intern</strong>, writing unimportant snippets and boilerplate with full human review.</li>
<li>The <strong>junior developer</strong>, pair programming with the model but still reviewing every line.</li>
<li>The <strong>developer</strong>. Most code is generated by AI, and you take on the role of full-time code reviewer.</li>
<li>The <strong>engineering team</strong>. You're more of an engineering manager or product/program/project manager. You collaborate on specs and plans, the agents do the work.</li>
<li>The <strong>dark software factory</strong>, like a factory run by robots where the lights are out because robots don't need to see.</li>
</ol>
Dan says about that last category:
> At level 5, it's not really a car any more. You're not really running anybody else's software any more. And your software process isn't really a software process any more. It's a black box that turns specs into software.
>
> Why Dark? Maybe you've heard of the Fanuc Dark Factory, [the robot factory staffed by robots](https://www.organizedergi.com/News/5493/robots-the-maker-of-robots-in-fanuc-s-dark-factory). It's dark, because it's a place where humans are neither needed nor welcome.
>
> I know a handful of people who are doing this. They're small teams, less than five people. And what they're doing is nearly unbelievable -- and it will likely be our future.
I've talked to one team that's doing the pattern hinted at here. It was *fascinating*. The key characteristics:
- Nobody reviews AI-produced code, ever. They don't even look at it.
- The goal of the system is to prove that the system works. A huge amount of the coding agent work goes into testing and tooling and simulating related systems and running demos.
- The role of the humans is to design that system - to find new patterns that can help the agents work more effectively and demonstrate that the software they are building is robust and effective.
It was a tiny team and the stuff they had built in just a few months looked very convincing to me. Some of them had 20+ years of experience as software developers working on systems with high reliability requirements, so they were not approaching this from a naive perspective.
I'm hoping they come out of stealth soon because I can't really share more details than this. |
2026-01-28 21:44:29+00:00 |
| One Human + One Agent = One Browser From Scratch |
https://emsh.cat/one-human-one-agent-one-browser/ |
embedding-shapes was [so infuriated](https://emsh.cat/cursor-implied-success-without-evidence/) by the hype around Cursor's [FastRender browser project](https://simonwillison.net/2026/Jan/23/fastrender/) - thousands of parallel agents producing ~1.6 million lines of Rust - that they were inspired to have a go at building a web browser using coding agents themselves.
The result is [one-agent-one-browser](https://github.com/embedding-shapes/one-agent-one-browser) and it's *really* impressive. Over three days they drove a single Codex CLI agent to build 20,000 lines of Rust that successfully renders HTML+CSS with no Rust crate dependencies at all - though it does (reasonably) use Windows, macOS and Linux system frameworks for image and text rendering.
I installed the [1MB macOS binary release](https://github.com/embedding-shapes/one-agent-one-browser/releases/tag/0.1.0) and ran it against my blog:
    chmod 755 ~/Downloads/one-agent-one-browser-macOS-ARM64
    ~/Downloads/one-agent-one-browser-macOS-ARM64 https://simonwillison.net/
Here's the result:

It even rendered my SVG feed subscription icon! A PNG image is missing from the page, which looks like an intermittent bug (there's code to render PNGs).
The code is pretty readable too - here's [the flexbox implementation](https://github.com/embedding-shapes/one-agent-one-browser/blob/0.1.0/src/layout/flex.rs).
I had thought that "build a web browser" was the ideal prompt to really stretch the capabilities of coding agents - and that it would take sophisticated multi-agent harnesses (as seen in the Cursor project) and millions of lines of code to achieve.
Turns out one agent driven by a talented engineer, three days and 20,000 lines of Rust is enough to get a very solid basic renderer working!
I'm going to upgrade my [prediction for 2029](https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/#3-years-someone-will-build-a-new-browser-using-mainly-ai-assisted-coding-and-it-won-t-even-be-a-surprise): I think we're going to get a *production-grade* web browser built by a small team using AI assistance by then. |
2026-01-27 16:58:08+00:00 |
| Kimi K2.5: Visual Agentic Intelligence |
https://www.kimi.com/blog/kimi-k2-5.html |
Kimi K2 landed [in July](https://simonwillison.net/2025/Jul/11/kimi-k2/) as a 1 trillion parameter open weight LLM. It was joined by Kimi K2 Thinking [in November](https://simonwillison.net/2025/Nov/6/kimi-k2-thinking/) which added reasoning capabilities. Now they've made it multi-modal: the K2 models were text-only, but the new 2.5 can handle image inputs as well:
> Kimi K2.5 builds on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens. Built as a native multimodal model, K2.5 delivers state-of-the-art coding and vision capabilities and a self-directed agent swarm paradigm.
The "self-directed agent swarm paradigm" claim there means improved long-sequence tool calling and training on how to break down tasks for multiple agents to work on at once:
> For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls. Compared with a single-agent setup, this reduces execution time by up to 4.5x. The agent swarm is automatically created and orchestrated by Kimi K2.5 without any predefined subagents or workflow.
I used the [OpenRouter Chat UI](https://openrouter.ai/moonshotai/kimi-k2.5) to have it "Generate an SVG of a pelican riding a bicycle", and it did [quite well](https://gist.github.com/simonw/32a85e337fbc6ee935d10d89726c0476):

As a more interesting test, I decided to exercise the claims around multi-agent planning with this prompt:
> I want to build a Datasette plugin that offers a UI to upload files to an S3 bucket and stores information about them in a SQLite table. Break this down into ten tasks suitable for execution by parallel coding agents.
Here's [the full response](https://gist.github.com/simonw/ee2583b2eb5706400a4737f56d57c456). It produced ten realistic tasks and reasoned through the dependencies between them. For comparison here's the same prompt [against Claude Opus 4.5](https://claude.ai/share/df9258e7-97ba-4362-83da-76d31d96196f) and [against GPT-5.2 Thinking](https://chatgpt.com/share/6978d48c-3f20-8006-9c77-81161f899104).
The [Hugging Face repository](https://huggingface.co/moonshotai/Kimi-K2.5) is 595GB. The model uses Kimi's janky "modified MIT" license, which adds the following clause:
> Our only modification part is that, if the Software (or any derivative works thereof) is used for any of your commercial products or services that have more than 100 million monthly active users, or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2.5" on the user interface of such product or service.
Given the model's size, I expect one way to run it locally would be with MLX and a pair of $10,000 512GB RAM M3 Ultra Mac Studios. That setup has [been demonstrated to work](https://twitter.com/awnihannun/status/1943723599971443134) with previous trillion parameter K2 models. |
2026-01-27 15:07:41+00:00 |
| the browser is the sandbox |
https://aifoc.us/the-browser-is-the-sandbox/ |
Paul Kinlan is a web platform developer advocate at Google and recently turned his attention to coding agents. He quickly identified the importance of a robust sandbox for agents to operate in and put together these detailed notes on how the web browser can help:
> This got me thinking about the browser. Over the last 30 years, we have built a sandbox specifically designed to run incredibly hostile, untrusted code from anywhere on the web, the instant a user taps a URL. [...]
>
> Could you build something like Cowork in the browser? Maybe. To find out, I built a demo called [Co-do](http://co-do.xyz) that tests this hypothesis. In this post I want to discuss the research I've done to see how far we can get, and determine if the browser's ability to run untrusted code is useful (and good enough) for enabling software to do more for us directly on our computer.
Paul then describes how the three key aspects of a sandbox - filesystem, network access and safe code execution - can be handled by browser technologies: the [File System Access API](https://developer.chrome.com/docs/capabilities/web-apis/file-system-access) (still Chrome-only as far as I can tell), CSP headers with `<iframe sandbox>` and WebAssembly in Web Workers.
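To make the `<iframe sandbox>` part concrete, here's a small TypeScript sketch of the general pattern - my own minimal example, not code from Co-do. Untrusted code runs inside a sandboxed iframe with an opaque origin, and a CSP `<meta>` tag inside the frame restricts what it can load:

    // Minimal sketch (not Co-do's actual code): run untrusted script inside a
    // sandboxed iframe. Omitting allow-same-origin gives the frame an opaque
    // origin, and the CSP meta tag blocks it from loading external resources.
    const untrustedCode = `console.log("hello from the sandbox");`;

    const frame = document.createElement("iframe");
    frame.setAttribute("sandbox", "allow-scripts");
    frame.srcdoc = `<!doctype html>
    <meta http-equiv="Content-Security-Policy"
          content="default-src 'none'; script-src 'unsafe-inline'">
    <script>${untrustedCode}</script>`;
    document.body.appendChild(frame);

Getting network rules to apply reliably to that inner frame is the fiddly part, which is what Paul's double-iframe technique mentioned below is about.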
Co-do is a very interesting demo that illustrates all of these ideas in a single application:

You select a folder full of files, configure an LLM provider and set an API key; Co-do then uses CSP-approved API calls to interact with that provider and provides a chat interface with tools for interacting with those files. It does indeed feel similar to [Claude Cowork](https://simonwillison.net/2026/Jan/12/claude-cowork/) but without running a multi-GB local container to provide the sandbox.
My biggest complaint about `<iframe sandbox>` remains how thinly documented it is, especially across different browsers. Paul's post has all sorts of useful details on that which I've not encountered elsewhere, including a complex [double-iframe technique](https://aifoc.us/the-browser-is-the-sandbox/#the-double-iframe-technique) to help apply network rules to the inner of the two frames.
Thanks to this post I also learned about the `webkitdirectory` attribute for `<input type="file">`, which turns out to work in Firefox, Safari *and* Chrome and gives the page read-only access to a full directory of files at once. I had Claude knock up a [webkitdirectory demo](https://tools.simonwillison.net/webkitdirectory) to try it out and I'll certainly be using it for projects in the future.
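The basic pattern looks like this - a minimal TypeScript sketch, not the actual source of that demo:

    // Minimal sketch: let the user pick a directory, then list every file
    // in it with its path relative to the chosen folder. Read-only access.
    const input = document.createElement("input");
    input.type = "file";
    input.webkitdirectory = true; // non-standard name, but works in Chrome, Firefox and Safari
    input.addEventListener("change", () => {
      for (const file of Array.from(input.files ?? [])) {
        console.log(file.webkitRelativePath, file.size, "bytes");
      }
    });
    document.body.appendChild(input);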
 |
2026-01-25 23:51:32+00:00 |
| Kākāpō Cam: Rakiura live stream |
https://www.doc.govt.nz/our-work/kakapo-recovery/what-we-do/kakapo-cam-rakiura-live-stream/ |
Critical update for this year's [Kākāpō breeding season](https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/#1-year-k-k-p-parrots-will-have-an-outstanding-breeding-season): the New Zealand Department of Conservation have a livestream running of Rakiura's nest!
> You’re looking at the underground nest of 23-year-old Rakiura. She has chosen this same site to nest for all seven breeding seasons since 2008, a large cavity under a rātā tree. Because she returns to the site so reliably, we’ve been able to make modifications over the years to keep it safe and dry, including adding a well-placed hatch for monitoring eggs and chicks.
Rakiura is a legendary Kākāpō:
> Rakiura hatched on 19 February 2002 on Whenua Hou/Codfish Island. She is the offspring of Flossie and Bill. Her name comes from the te reo Māori name for Stewart Island, the place where most of the founding kākāpō population originated.
>
> Rakiura has nine living descendants, three females and six males, across six breeding seasons. In 2008 came Tōitiiti, in 2009 Tamahou and Te Atapō, in 2011 Tia and Tūtoko, in 2014 Taeatanga and Te Awa, in 2019 Mati-mā and Tautahi. She also has many grandchicks.
She laid her first egg of the season at 4:30pm NZ time on 22nd January. The livestream went live shortly afterwards, once she committed to this nest.
The stream is [on YouTube](https://www.youtube.com/watch?v=BfGL7A2YgUY). I [used Claude Code](https://gisthost.github.io/?dc78322de89a2191c593215f109c65d7/index.html) to write [a livestream-gif.py script](https://tools.simonwillison.net/python/#livestream-gifpy) and used that to capture this sped-up video of the last few hours of footage, within which you can catch a glimpse of the egg!
<video autoplay muted loop controls playsinline style="width: 100%;">
<source src="https://static.simonwillison.net/static/2026/kakapo-timelapse.mp4" type="video/mp4">
</video> |
2026-01-25 04:53:01+00:00 |
| Don't "Trust the Process" |
https://www.youtube.com/watch?v=4u94juYwLLM |
Jenny Wen, Design Lead at Anthropic (and previously Director of Design at Figma) gave a provocative keynote at Hatch Conference in Berlin last September.

Jenny argues that the Design Process - user research leading to personas leading to user journeys leading to wireframes... all before anything gets built - may be outdated for today's world.
> **Hypothesis**: In a world where anyone can make anything — what matters is your ability to choose and curate what you make.
In place of the Process, designers should lean into prototypes. AI makes these much more accessible and less time-consuming than they used to be.
Watching this talk made me think about how AI-assisted programming significantly reduces the cost of building the *wrong* thing. Previously if the design wasn't right you could waste months of development time building in the wrong direction, which was a very expensive mistake. If a wrong direction wastes just a few days instead we can take more risks and be much more proactive in exploring the problem space.
I've always been a compulsive prototyper though, so this is very much playing into my own existing biases! |
2026-01-24 23:31:03+00:00 |