<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: Quotations</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/atom/quotations/" rel="self"/><id>http://simonwillison.net/</id><updated>2026-04-08T15:18:49+00:00</updated><author><name>Simon Willison</name></author><entry><title>Quoting Giles Turnbull</title><link href="https://simonwillison.net/2026/Apr/8/giles-turnbull/#atom-quotations" rel="alternate"/><published>2026-04-08T15:18:49+00:00</published><updated>2026-04-08T15:18:49+00:00</updated><id>https://simonwillison.net/2026/Apr/8/giles-turnbull/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://gilest.org/notes/2026/human-ai/"&gt;&lt;p&gt;I have a feeling that &lt;strong&gt;everyone likes using AI tools to try doing someone else’s profession&lt;/strong&gt;. They’re much less keen when someone else uses it for their profession.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://gilest.org/notes/2026/human-ai/"&gt;Giles Turnbull&lt;/a&gt;, AI and the human voice&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/writing"&gt;writing&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-ethics"/><category term="writing"/><category term="ai"/></entry><entry><title>Quoting Chengpeng Mou</title><link href="https://simonwillison.net/2026/Apr/5/chengpeng-mou/#atom-quotations" rel="alternate"/><published>2026-04-05T21:47:06+00:00</published><updated>2026-04-05T21:47:06+00:00</updated><id>https://simonwillison.net/2026/Apr/5/chengpeng-mou/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://twitter.com/cpmou2022/status/2040606209800290404"&gt;&lt;p&gt;From anonymized U.S. ChatGPT data, we are seeing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;~2M weekly messages on health insurance&lt;/li&gt;
&lt;li&gt;~600K weekly messages [classified as healthcare] from people living in “hospital deserts” (30 min drive to nearest hospital)&lt;/li&gt;
&lt;li&gt;7 out of 10 msgs happen outside clinic hours&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/cpmou2022/status/2040606209800290404"&gt;Chengpeng Mou&lt;/a&gt;, Head of Business Finance, OpenAI&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/chatgpt"&gt;chatgpt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-ethics"/><category term="generative-ai"/><category term="openai"/><category term="chatgpt"/><category term="ai"/><category term="llms"/></entry><entry><title>Quoting Kyle Daigle</title><link href="https://simonwillison.net/2026/Apr/4/kyle-daigle/#atom-quotations" rel="alternate"/><published>2026-04-04T02:20:17+00:00</published><updated>2026-04-04T02:20:17+00:00</updated><id>https://simonwillison.net/2026/Apr/4/kyle-daigle/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://twitter.com/kdaigle/status/2040164759836778878"&gt;&lt;p&gt;[GitHub] platform activity is surging. There were 1 billion commits in 2025. Now, it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't.)&lt;/p&gt;
&lt;p&gt;GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/kdaigle/status/2040164759836778878"&gt;Kyle Daigle&lt;/a&gt;, COO, GitHub&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/github"&gt;github&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/github-actions"&gt;github-actions&lt;/a&gt;&lt;/p&gt;

</summary><category term="github"/><category term="github-actions"/></entry><entry><title>Quoting Willy Tarreau</title><link href="https://simonwillison.net/2026/Apr/3/willy-tarreau/#atom-quotations" rel="alternate"/><published>2026-04-03T21:48:22+00:00</published><updated>2026-04-03T21:48:22+00:00</updated><id>https://simonwillison.net/2026/Apr/3/willy-tarreau/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://lwn.net/Articles/1065620/"&gt;&lt;p&gt;On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.&lt;/p&gt;
&lt;p&gt;And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://lwn.net/Articles/1065620/"&gt;Willy Tarreau&lt;/a&gt;, Lead Software Developer. HAPROXY&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/linux"&gt;linux&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;

</summary><category term="security"/><category term="linux"/><category term="generative-ai"/><category term="ai"/><category term="llms"/><category term="ai-security-research"/></entry><entry><title>Quoting Daniel Stenberg</title><link href="https://simonwillison.net/2026/Apr/3/daniel-stenberg/#atom-quotations" rel="alternate"/><published>2026-04-03T21:46:07+00:00</published><updated>2026-04-03T21:46:07+00:00</updated><id>https://simonwillison.net/2026/Apr/3/daniel-stenberg/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://mastodon.social/@bagder/116336957584445742"&gt;&lt;p&gt;The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.&lt;/p&gt;
&lt;p&gt;I'm spending hours per day on this now. It's intense.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://mastodon.social/@bagder/116336957584445742"&gt;Daniel Stenberg&lt;/a&gt;, lead developer of cURL&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/daniel-stenberg"&gt;daniel-stenberg&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/curl"&gt;curl&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;

</summary><category term="daniel-stenberg"/><category term="security"/><category term="curl"/><category term="generative-ai"/><category term="ai"/><category term="llms"/><category term="ai-security-research"/></entry><entry><title>Quoting Greg Kroah-Hartman</title><link href="https://simonwillison.net/2026/Apr/3/greg-kroah-hartman/#atom-quotations" rel="alternate"/><published>2026-04-03T21:44:41+00:00</published><updated>2026-04-03T21:44:41+00:00</updated><id>https://simonwillison.net/2026/Apr/3/greg-kroah-hartman/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/"&gt;&lt;p&gt;Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.&lt;/p&gt;
&lt;p&gt;Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/"&gt;Greg Kroah-Hartman&lt;/a&gt;, Linux kernel maintainer (&lt;a href="https://en.wikipedia.org/wiki/Greg_Kroah-Hartman"&gt;bio&lt;/a&gt;), in conversation with Steven J. Vaughan-Nichols&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/linux"&gt;linux&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;

</summary><category term="security"/><category term="linux"/><category term="generative-ai"/><category term="ai"/><category term="llms"/><category term="ai-security-research"/></entry><entry><title>Quoting Soohoon Choi</title><link href="https://simonwillison.net/2026/Apr/1/soohoon-choi/#atom-quotations" rel="alternate"/><published>2026-04-01T02:07:16+00:00</published><updated>2026-04-01T02:07:16+00:00</updated><id>https://simonwillison.net/2026/Apr/1/soohoon-choi/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://www.greptile.com/blog/ai-slopware-future"&gt;&lt;p&gt;I want to argue that AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long-term.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://www.greptile.com/blog/ai-slopware-future"&gt;Soohoon Choi&lt;/a&gt;, Slop Is Not Necessarily The Future&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/slop"&gt;slop&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="slop"/><category term="ai-assisted-programming"/><category term="generative-ai"/><category term="agentic-engineering"/><category term="ai"/><category term="llms"/></entry><entry><title>Quoting Georgi Gerganov</title><link href="https://simonwillison.net/2026/Mar/30/georgi-gerganov/#atom-quotations" rel="alternate"/><published>2026-03-30T21:31:02+00:00</published><updated>2026-03-30T21:31:02+00:00</updated><id>https://simonwillison.net/2026/Mar/30/georgi-gerganov/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://twitter.com/ggerganov/status/2038674698809102599"&gt;&lt;p&gt;Note that the main issues that people currently unknowingly face with local models mostly revolve around the harness and some intricacies around model chat templates and prompt construction. Sometimes there are even pure inference bugs. From typing the task in the client to the actual result, there is a long chain of components that atm are not only fragile - are also developed by different parties. So it's difficult to consolidate the entire stack and you have to keep in mind that what you are currently observing is with very high probability still broken in some subtle way along that chain.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/ggerganov/status/2038674698809102599"&gt;Georgi Gerganov&lt;/a&gt;, explaining why it's hard to find local models that work well with coding agents&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/georgi-gerganov"&gt;georgi-gerganov&lt;/a&gt;&lt;/p&gt;

</summary><category term="coding-agents"/><category term="generative-ai"/><category term="ai"/><category term="local-llms"/><category term="llms"/><category term="georgi-gerganov"/></entry><entry><title>Quoting Matt Webb</title><link href="https://simonwillison.net/2026/Mar/28/matt-webb/#atom-quotations" rel="alternate"/><published>2026-03-28T12:04:26+00:00</published><updated>2026-03-28T12:04:26+00:00</updated><id>https://simonwillison.net/2026/Mar/28/matt-webb/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://interconnected.org/home/2026/03/28/architecture"&gt;&lt;p&gt;The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it’ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon. [...]&lt;/p&gt;
&lt;p&gt;But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.&lt;/p&gt;
&lt;p&gt;So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the “right” way the easy way for developers building apps with them. Architecture!&lt;/p&gt;
&lt;p&gt;While I’m vibing (I call it vibing now, not coding and not vibe coding) while I’m vibing, I am looking at lines of code less than ever before, and thinking about architecture more than ever before.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://interconnected.org/home/2026/03/28/architecture"&gt;Matt Webb&lt;/a&gt;, An appreciation for (technical) architecture&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/matt-webb"&gt;matt-webb&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vibe-coding"&gt;vibe-coding&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/definitions"&gt;definitions&lt;/a&gt;&lt;/p&gt;

</summary><category term="matt-webb"/><category term="ai"/><category term="llms"/><category term="vibe-coding"/><category term="coding-agents"/><category term="ai-assisted-programming"/><category term="generative-ai"/><category term="agentic-engineering"/><category term="definitions"/></entry><entry><title>Quoting Richard Fontana</title><link href="https://simonwillison.net/2026/Mar/27/richard-fontana/#atom-quotations" rel="alternate"/><published>2026-03-27T21:11:17+00:00</published><updated>2026-03-27T21:11:17+00:00</updated><id>https://simonwillison.net/2026/Mar/27/richard-fontana/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://github.com/chardet/chardet/issues/334#issuecomment-4098524555"&gt;&lt;p&gt;FWIW, IANDBL, TINLA, etc., I don’t currently see any basis for concluding that chardet 7.0.0 is required to be released under the LGPL. AFAIK no one including Mark Pilgrim has identified persistence of copyrightable expressive material from earlier versions in 7.0.0 nor has anyone articulated some viable alternate theory of license violation. [...]&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://github.com/chardet/chardet/issues/334#issuecomment-4098524555"&gt;Richard Fontana&lt;/a&gt;, LGPLv3 co-author, weighing in on the &lt;a href="https://simonwillison.net/2026/Mar/5/chardet/"&gt;chardet relicensing situation&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/open-source"&gt;open-source&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;&lt;/p&gt;

</summary><category term="open-source"/><category term="ai-ethics"/><category term="llms"/><category term="ai"/><category term="generative-ai"/><category term="ai-assisted-programming"/></entry><entry><title>Quoting Christopher Mims</title><link href="https://simonwillison.net/2026/Mar/24/christopher-mims/#atom-quotations" rel="alternate"/><published>2026-03-24T20:35:52+00:00</published><updated>2026-03-24T20:35:52+00:00</updated><id>https://simonwillison.net/2026/Mar/24/christopher-mims/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://bsky.app/profile/mims.bsky.social/post/3mhsux67xpk2d"&gt;&lt;p&gt;I really think "give AI total control of my computer and therefore my entire life" is going to look so foolish in retrospect that everyone who went for this is going to look as dumb as Jimmy Fallon holding up a picture of his Bored Ape&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://bsky.app/profile/mims.bsky.social/post/3mhsux67xpk2d"&gt;Christopher Mims&lt;/a&gt;, Technology columnist at The Wall Street Journal&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai"/><category term="security"/></entry><entry><title>Quoting Neurotica</title><link href="https://simonwillison.net/2026/Mar/23/neurotica/#atom-quotations" rel="alternate"/><published>2026-03-23T23:31:45+00:00</published><updated>2026-03-23T23:31:45+00:00</updated><id>https://simonwillison.net/2026/Mar/23/neurotica/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://bsky.app/profile/schwarzgerat.bsky.social/post/3mhqu5dogos2v"&gt;&lt;p&gt;slop is something that takes more human effort to consume than it took to produce. When my coworker sends me raw Gemini output he’s not expressing his freedom to create, he’s disrespecting the value of my time&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://bsky.app/profile/schwarzgerat.bsky.social/post/3mhqu5dogos2v"&gt;Neurotica&lt;/a&gt;, @schwarzgerat.bsky.social&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/slop"&gt;slop&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="ai-ethics"/><category term="slop"/><category term="generative-ai"/><category term="ai"/><category term="llms"/></entry><entry><title>Quoting David Abram</title><link href="https://simonwillison.net/2026/Mar/23/david-abram/#atom-quotations" rel="alternate"/><published>2026-03-23T18:56:18+00:00</published><updated>2026-03-23T18:56:18+00:00</updated><id>https://simonwillison.net/2026/Mar/23/david-abram/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://www.davidabram.dev/musings/the-machine-didnt-take-your-craft/"&gt;&lt;p&gt;I have been doing this for years, and the hardest parts of the job were never about typing out code. I have always struggled most with understanding systems, debugging things that made no sense, designing architectures that wouldn't collapse under heavy load, and making decisions that would save months of pain later.&lt;/p&gt;
&lt;p&gt;None of these problems can be solved by LLMs. They can suggest code, help with boilerplate, sometimes can act as a sounding board. But they don't understand the system, they don't carry context in their "minds", and they certainly don't know why a decision is right or wrong.&lt;/p&gt;
&lt;p&gt;And most importantly, they don't choose. That part is still yours. The real work of software development, the part that makes someone valuable, is knowing what should exist in the first place, and why.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://www.davidabram.dev/musings/the-machine-didnt-take-your-craft/"&gt;David Abram&lt;/a&gt;, The machine didn't take your craft. You gave it up.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/careers"&gt;careers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;

</summary><category term="careers"/><category term="ai-assisted-programming"/><category term="generative-ai"/><category term="ai"/><category term="llms"/></entry><entry><title>Quoting Kimi.ai @Kimi_Moonshot</title><link href="https://simonwillison.net/2026/Mar/20/cursor-on-kimi/#atom-quotations" rel="alternate"/><published>2026-03-20T20:29:23+00:00</published><updated>2026-03-20T20:29:23+00:00</updated><id>https://simonwillison.net/2026/Mar/20/cursor-on-kimi/#atom-quotations</id><summary type="html">&lt;blockquote cite="https://twitter.com/Kimi_Moonshot/status/2035074972943831491"&gt;&lt;p&gt;Congrats to the &lt;a href="https://x.com/cursor_ai"&gt;@cursor_ai&lt;/a&gt; team on the launch of Composer 2!&lt;/p&gt;
&lt;p&gt;We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining &amp;amp; high-compute RL training is the open model ecosystem we love to support.&lt;/p&gt;
&lt;p&gt;Note: Cursor accesses Kimi-k2.5 via &lt;a href="https://x.com/FireworksAI_HQ"&gt;@FireworksAI_HQ&lt;/a&gt; hosted RL and inference platform as part of an authorized commercial partnership.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/Kimi_Moonshot/status/2035074972943831491"&gt;Kimi.ai @Kimi_Moonshot&lt;/a&gt;, responding to reports that Composer 2 was built on top of Kimi K2.5&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/kimi"&gt;kimi&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cursor"&gt;cursor&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-in-china"&gt;ai-in-china&lt;/a&gt;&lt;/p&gt;

&lt;/summary&gt;&lt;category term="kimi"/&gt;&lt;category term="generative-ai"/&gt;&lt;category term="ai"/&gt;&lt;category term="cursor"/&gt;&lt;category term="llms"/&gt;&lt;category term="ai-in-china"/&gt;&lt;/entry&gt;&lt;entry&gt;&lt;title&gt;Quoting Ken Jin&lt;/title&gt;&lt;link href="https://simonwillison.net/2026/Mar/17/ken-jin/#atom-quotations" rel="alternate"/&gt;&lt;published&gt;2026-03-17T21:48:26+00:00&lt;/published&gt;&lt;updated&gt;2026-03-17T21:48:26+00:00&lt;/updated&gt;&lt;id&gt;https://simonwillison.net/2026/Mar/17/ken-jin/#atom-quotations&lt;/id&gt;&lt;summary type="html"&gt;&amp;lt;blockquote cite="https://fidget-spinner.github.io/posts/jit-on-track.html"&amp;gt;&amp;lt;p&amp;gt;Great news—we’ve hit our (very modest) performance goals for the CPython JIT over a year early for macOS AArch64, and a few months early for x86_64 Linux. The 3.15 alpha JIT is about &amp;lt;strong&amp;gt;11-12%&amp;lt;/strong&amp;gt; faster on macOS AArch64 than the tail calling interpreter, and &amp;lt;strong&amp;gt;5-6%&amp;lt;/strong&amp;gt; faster than the standard interpreter on x86_64 Linux.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://fidget-spinner.github.io/posts/jit-on-track.html"&gt;Ken Jin&lt;/a&gt;, Python 3.15’s JIT is now back on track&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;&lt;/p&gt;

</summary><category term="python"/></entry></feed>