Simon Willison’s Weblog


Quotations


I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession.

Giles Turnbull, AI and the human voice

# 8th April 2026, 3:18 pm / ai-ethics, writing, ai

From anonymized U.S. ChatGPT data, we are seeing:

  • ~2M weekly messages on health insurance
  • ~600K weekly messages [classified as healthcare] from people living in “hospital deserts” (30 min drive to nearest hospital)
  • 7 out of 10 msgs happen outside clinic hours

Chengpeng Mou, Head of Business Finance, OpenAI

# 5th April 2026, 9:47 pm / ai-ethics, generative-ai, openai, chatgpt, ai, llms

[GitHub] platform activity is surging. There were 1 billion commits in 2025. Now, it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't.)

GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.

Kyle Daigle, COO, GitHub

# 4th April 2026, 2:20 am / github, github-actions
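The commit-rate claim in the GitHub quote above holds up as back-of-envelope arithmetic. A minimal sketch (the 275M/week figure is from the quote; the 52-week extrapolation assumes the linear growth the quote itself flags as unrealistic):

```python
# Sanity-check: 275M commits/week, extrapolated linearly over a year
weekly_commits = 275_000_000
annual_commits = weekly_commits * 52
print(f"{annual_commits / 1e9:.1f}B commits/year")  # 14.3B — "on pace for 14 billion"
```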

On the kernel security list we've seen a huge bump in reports. We were at between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year, with the difference being almost entirely AI slop, and now since the beginning of the year we're at around 5-10 per day depending on the day (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.

And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.

Willy Tarreau, lead developer, HAProxy

# 3rd April 2026, 9:48 pm / security, linux, generative-ai, ai, llms, ai-security-research

The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

I'm spending hours per day on this now. It's intense.

Daniel Stenberg, lead developer of curl

# 3rd April 2026, 9:46 pm / daniel-stenberg, security, curl, generative-ai, ai, llms, ai-security-research

Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.

Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

Greg Kroah-Hartman, Linux kernel maintainer (bio), in conversation with Steven J. Vaughan-Nichols

# 3rd April 2026, 9:44 pm / security, linux, generative-ai, ai, llms, ai-security-research

I want to argue that AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long-term.

Soohoon Choi, Slop Is Not Necessarily The Future

# 1st April 2026, 2:07 am / slop, ai-assisted-programming, generative-ai, agentic-engineering, ai, llms

Note that the main issues people currently unknowingly face with local models mostly revolve around the harness and some intricacies of model chat templates and prompt construction. Sometimes there are even pure inference bugs. From typing the task in the client to the actual result, there is a long chain of components that at the moment are not only fragile but also developed by different parties. So it's difficult to consolidate the entire stack, and you have to keep in mind that what you are currently observing is very probably still broken in some subtle way along that chain.

Georgi Gerganov, explaining why it's hard to find local models that work well with coding agents

# 30th March 2026, 9:31 pm / coding-agents, generative-ai, ai, local-llms, llms, georgi-gerganov

The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it’ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon. [...]

But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.

So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the “right” way the easy way for developers building apps with them. Architecture!

While I’m vibing (I call it vibing now, not coding and not vibe coding), I am looking at lines of code less than ever before, and thinking about architecture more than ever before.

Matt Webb, An appreciation for (technical) architecture

# 28th March 2026, 12:04 pm / matt-webb, ai, llms, vibe-coding, coding-agents, ai-assisted-programming, generative-ai, agentic-engineering, definitions

FWIW, IANDBL, TINLA, etc., I don’t currently see any basis for concluding that chardet 7.0.0 is required to be released under the LGPL. AFAIK no one including Mark Pilgrim has identified persistence of copyrightable expressive material from earlier versions in 7.0.0 nor has anyone articulated some viable alternate theory of license violation. [...]

Richard Fontana, LGPLv3 co-author, weighing in on the chardet relicensing situation

# 27th March 2026, 9:11 pm / open-source, ai-ethics, llms, ai, generative-ai, ai-assisted-programming

I really think "give AI total control of my computer and therefore my entire life" is going to look so foolish in retrospect that everyone who went for this is going to look as dumb as Jimmy Fallon holding up a picture of his Bored Ape

Christopher Mims, Technology columnist at The Wall Street Journal

# 24th March 2026, 8:35 pm / ai, security

slop is something that takes more human effort to consume than it took to produce. When my coworker sends me raw Gemini output he’s not expressing his freedom to create, he’s disrespecting the value of my time

Neurotica, @schwarzgerat.bsky.social

# 23rd March 2026, 11:31 pm / ai-ethics, slop, generative-ai, ai, llms

I have been doing this for years, and the hardest parts of the job were never about typing out code. I have always struggled most with understanding systems, debugging things that made no sense, designing architectures that wouldn't collapse under heavy load, and making decisions that would save months of pain later.

None of these problems can be solved by LLMs. They can suggest code, help with boilerplate, and sometimes act as a sounding board. But they don't understand the system, they don't carry context in their "minds", and they certainly don't know why a decision is right or wrong.

And most importantly, they don't choose. That part is still yours. The real work of software development, the part that makes someone valuable, is knowing what should exist in the first place, and why.

David Abram, The machine didn't take your craft. You gave it up.

# 23rd March 2026, 6:56 pm / careers, ai-assisted-programming, generative-ai, ai, llms

Congrats to the @cursor_ai team on the launch of Composer 2!

We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support.

Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ hosted RL and inference platform as part of an authorized commercial partnership.

Kimi.ai @Kimi_Moonshot, responding to reports that Composer 2 was built on top of Kimi K2.5

# 20th March 2026, 8:29 pm / kimi, generative-ai, ai, cursor, llms, ai-in-china

Great news—we’ve hit our (very modest) performance goals for the CPython JIT over a year early for macOS AArch64, and a few months early for x86_64 Linux. The 3.15 alpha JIT is about 11-12% faster on macOS AArch64 than the tail-calling interpreter, and 5-6% faster than the standard interpreter on x86_64 Linux.

Ken Jin, Python 3.15’s JIT is now back on track

# 17th March 2026, 9:48 pm / python

If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of an LLM is hurting Django as a whole. [...]

For a reviewer, it’s demoralizing to communicate with a facade of a human.

This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.

Tim Schilling, Give Django your time and money, not your tokens

# 17th March 2026, 4:13 pm / ai-ethics, open-source, generative-ai, ai, django, llms

The point of the blackmail exercise was to have something to describe to policymakers—results that are visceral enough to land with people, and make misalignment risk actually salient in practice for people who had never thought about it before.

A member of Anthropic’s alignment-science team, as told to Gideon Lewis-Kraus

# 16th March 2026, 9:38 pm / ai-ethics, anthropic, claude, generative-ai, ai, llms

Tidbit: the software-based camera indicator light in the MacBook Neo runs in the secure exclave part of the chip, so it is almost as secure as the hardware indicator light. What that means in practice is that even a kernel-level exploit would not be able to turn on the camera without the light appearing on screen. It runs in a privileged environment separate from the kernel and blits the light directly onto the screen hardware.

Guilherme Rambo, in a text message to John Gruber

# 16th March 2026, 8:34 pm / hardware, apple, privacy, john-gruber

GitHub’s slopocalypse – the flood of AI-generated spam PRs and issues – has made Jazzband’s model of open membership and shared push access untenable.

Jazzband was designed for a world where the worst case was someone accidentally merging the wrong PR. In a world where only 1 in 10 AI-generated PRs meets project standards, where curl had to shut down its bug bounty because confirmation rates dropped below 5%, and where GitHub’s own response was a kill switch to disable pull requests entirely – an organization that gives push access to everyone who joins simply can’t operate safely anymore.

Jannis Leidel, Sunsetting Jazzband

# 14th March 2026, 6:41 pm / ai-ethics, open-source, python, ai, github

Simply put: It’s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own. It took me about five days. I am now using the best piece of accounting software I’ve ever used. It’s blazing fast. Entirely local. Handles multiple currencies and pulls daily (historical) conversion rates. It’s able to ingest any CSV I throw at it and represent it in my dashboard as needed. It knows US and Japan tax requirements, and formats my expenses and medical bills appropriately for my accountants. I feed it past returns to learn from. I dump 1099s and K1s and PDFs from hospitals into it, and it categorizes and organizes and packages them all as needed. It reconciles international wire transfers, taking into account small variations in FX rates and time for the transfers to complete. It learns as I categorize expenses and categorizes automatically going forward. It’s easy to do spot checks on data. If I find an anomaly, I can talk directly to Claude and have us brainstorm a batched solution, often saving me from having to manually modify hundreds of entries. And often resulting in a new, small, feature tweak. The software feels organic and pliable in a form perfectly shaped to my hand, able to conform to any hunk of data I throw at it. It feels like bushwhacking with a lightsaber.

Craig Mod, Software Bonkers

# 13th March 2026, 5:14 pm / vibe-coding, ai-assisted-programming, generative-ai, ai, llms

Here's what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.

Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.

Now there's a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.

Les Orchard, Grief and the AI Split

# 12th March 2026, 4:28 pm / les-orchard, ai-assisted-programming, generative-ai, ai, llms, careers, deep-blue

It is hard for less experienced developers to appreciate how rarely architecting for future requirements / applications turns out net-positive.

John Carmack, a tweet in June 2021

# 11th March 2026, 2:47 pm / john-carmack, software-engineering, yagni

What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Joseph Weizenbaum, creator of ELIZA, in 1976 (via)

# 8th March 2026, 2:59 pm / ai-ethics, ai, computer-history, internet-archive

Questions for developers:

  • “What’s the one area you’re afraid to touch?”
  • “When’s the last time you deployed on a Friday?”
  • “What broke in production in the last 90 days that wasn’t caught by tests?”

Questions for the CTO/EM:

  • “What feature has been blocked for over a year?”
  • “Do you have real-time error visibility right now?”
  • “What was the last feature that took significantly longer than estimated?”

Questions for business stakeholders:

  • “Are there features that got quietly turned off and never came back?”
  • “Are there things you’ve stopped promising customers?”

Ally Piechowski, How to Audit a Rails Codebase

# 6th March 2026, 9:58 pm / technical-debt, software-engineering, rails

Shock! Shock! I learned yesterday that an open problem I'd been working on for several weeks had just been solved by Claude Opus 4.6 - Anthropic's hybrid reasoning model that had been released three weeks earlier! It seems that I'll have to revise my opinions about "generative AI" one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving.

Donald Knuth, Claude's Cycles

# 3rd March 2026, 11:59 pm / november-2025-inflection, claude, generative-ai, ai, llms, donald-knuth, llm-reasoning, anthropic

I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: [date saved, if available] - memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. After the code block, confirm whether that is the complete set or if any remain.

claude.com/import-memory, Anthropic's "import your memories to Claude" feature is a prompt

# 1st March 2026, 11:21 am / prompt-engineering, llm-memory, anthropic, claude, generative-ai, ai, llms

It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow. [...]

Andrej Karpathy

# 26th February 2026, 7:03 pm / andrej-karpathy, coding-agents, ai-assisted-programming, generative-ai, agentic-engineering, ai, llms, november-2025-inflection

If people are only using this a couple of times a week at most, and can’t think of anything to do with it on the average day, it hasn’t changed their life. OpenAI itself admits the problem, talking about a ‘capability gap’ between what the models can do and what people do with them, which seems to me like a way to avoid saying that you don’t have clear product-market fit.

Hence, OpenAI’s ad project is partly just about covering the cost of serving the 90% or more of users who don’t pay (and capturing an early lead with advertisers and early learning in how this might work), but more strategically, it’s also about making it possible to give those users the latest and most powerful (i.e. expensive) models, in the hope that this will deepen their engagement.

Benedict Evans, How will OpenAI compete?

# 26th February 2026, 3:44 am / openai, chatgpt, benedict-evans, ai

It’s also reasonable for people who entered technology in the last couple of decades because it was a good job, or because they enjoyed coding, to look at this moment with a real feeling of loss. That feeling of loss, though, can be hard to understand emotionally for people my age who entered tech because we were addicted to the feeling of agency it gave us. The web was objectively awful as a technology, and genuinely amazing, and nobody got into it because programming in Perl was somehow aesthetically delightful.

Kellan Elliott-McCrea, Code has always been the easy part

# 25th February 2026, 3:30 am / perl, generative-ai, kellan-elliott-mccrea, agentic-engineering, ai, llms, deep-blue

The paper asked me to explain vibe coding, and I did so, because I think something big is coming there, and I'm deep in, and I worry that normal people are not able to see it and I want them to be prepared. But people can't just read something and hate you quietly; they can't see that you have provided them with a utility or a warning; they need their screech. You are distributed to millions of people, and become the local proxy for the emotions of maybe dozens of people, who disagree and demand your attention, and because you are the one in the paper you need to welcome them with a pastor's smile and deep empathy, and if you speak a word in your own defense they'll screech even louder.

Paul Ford, on writing about vibe coding for the New York Times

# 23rd February 2026, 4 pm / vibe-coding, new-york-times, paul-ford