<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: cognitive-debt</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/cognitive-debt.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2026-04-03T23:57:04+00:00</updated><author><name>Simon Willison</name></author><entry><title>The cognitive impact of coding agents</title><link href="https://simonwillison.net/2026/Apr/3/cognitive-cost/#atom-tag" rel="alternate"/><published>2026-04-03T23:57:04+00:00</published><updated>2026-04-03T23:57:04+00:00</updated><id>https://simonwillison.net/2026/Apr/3/cognitive-cost/#atom-tag</id><summary type="html">
    &lt;p&gt;A fun thing about &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;recording a podcast&lt;/a&gt; with a professional like Lenny Rachitsky is that his team know how to slice the resulting video up into TikTok-sized short form vertical videos. Here's &lt;a href="https://x.com/lennysan/status/2039845666680176703"&gt;one he shared on Twitter today&lt;/a&gt; which ended up attracting over 1.1m views!&lt;/p&gt;
&lt;p&gt;&lt;video
  src="https://static.simonwillison.net/static/2026/cognitive-cost.mp4"
  poster="https://static.simonwillison.net/static/2026/cognitive-cost-poster.jpg"
  controls
  preload="none"
  playsinline
  style="display:block; max-width:400px; width:100%; height:auto; margin:0 auto"
&gt;&lt;track src="https://static.simonwillison.net/static/2026/cognitive-cost.vtt" kind="captions" srclang="en" label="English"&gt;&lt;/video&gt;
&lt;/p&gt;
&lt;p&gt;That was 48 seconds. Our &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;full conversation&lt;/a&gt; lasted 1 hour 40 minutes.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/podcast-appearances"&gt;podcast-appearances&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="podcast-appearances"/><category term="ai-ethics"/><category term="coding-agents"/><category term="cognitive-debt"/><category term="agentic-engineering"/></entry><entry><title>Thoughts on slowing the fuck down</title><link href="https://simonwillison.net/2026/Mar/25/thoughts-on-slowing-the-fuck-down/#atom-tag" rel="alternate"/><published>2026-03-25T21:47:17+00:00</published><updated>2026-03-25T21:47:17+00:00</updated><id>https://simonwillison.net/2026/Mar/25/thoughts-on-slowing-the-fuck-down/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://news.ycombinator.com/item?id=47517539"&gt;Thoughts on slowing the fuck down&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Mario Zechner created the &lt;a href="https://github.com/badlogic/pi-mono"&gt;Pi agent framework&lt;/a&gt; used by OpenClaw, giving considerable credibility to his opinions on current trends in agentic engineering. He's not impressed:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Agents and humans both make mistakes, but agent mistakes accumulate much faster:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there's only so many booboos the human can introduce in a codebase per day. [...]&lt;/p&gt;
&lt;p&gt;With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that's unsustainable. You have removed yourself from the loop, so you don't even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it's too late. [...]&lt;/p&gt;
&lt;p&gt;You have zero fucking idea what's going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I think Mario is exactly right about this. Agents let us move &lt;em&gt;so much faster&lt;/em&gt;, but this speed also means that changes which we would normally have considered over the course of weeks are landing in a matter of hours.&lt;/p&gt;
&lt;p&gt;It's so easy to let the codebase evolve outside of our abilities to reason clearly about it. &lt;a href="https://simonwillison.net/tags/cognitive-debt/"&gt;Cognitive debt&lt;/a&gt; is real.&lt;/p&gt;
&lt;p&gt;Mario recommends slowing down:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.&lt;/p&gt;
&lt;p&gt;Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. [...]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I'm not convinced writing by hand is the best way to address this, but it's absolutely the case that we need the discipline to find a new balance of speed vs. mental thoroughness now that typing out the code is no longer anywhere close to being the bottleneck on writing software.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="coding-agents"/><category term="cognitive-debt"/><category term="agentic-engineering"/></entry><entry><title>Interactive explanations</title><link href="https://simonwillison.net/guides/agentic-engineering-patterns/interactive-explanations/#atom-tag" rel="alternate"/><published>2026-02-28T23:09:39+00:00</published><updated>2026-02-28T23:09:39+00:00</updated><id>https://simonwillison.net/guides/agentic-engineering-patterns/interactive-explanations/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;em&gt;&lt;a href="https://simonwillison.net/guides/agentic-engineering-patterns/"&gt;Agentic Engineering Patterns&lt;/a&gt; &amp;gt;&lt;/em&gt;&lt;/p&gt;
    &lt;p&gt;When we lose track of how code written by our agents works we take on &lt;strong&gt;cognitive debt&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For a lot of things this doesn't matter: if the code fetches some data from a database and outputs it as JSON, the implementation details are likely simple enough that we don't need to care. We can try out the new feature and make a very solid guess at how it works, then glance over the code to be sure.&lt;/p&gt;
&lt;p&gt;Often though the details really do matter. If the core of our application becomes a black box that we don't fully understand we can no longer confidently reason about it, which makes planning new features harder and eventually slows our progress in the same way that accumulated technical debt does.&lt;/p&gt;
&lt;p&gt;How do we pay down cognitive debt? By improving our understanding of how the code works.&lt;/p&gt;
&lt;p&gt;One of my favorite ways to do that is by building &lt;strong&gt;interactive explanations&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="understanding-word-clouds"&gt;Understanding word clouds&lt;/h2&gt;
&lt;p&gt;In &lt;a href="https://minimaxir.com/2026/02/ai-agent-coding/"&gt;An AI agent coding skeptic tries AI agent coding, in excessive detail&lt;/a&gt; Max Woolf mentioned testing LLMs' Rust abilities with the prompt &lt;code&gt;Create a Rust app that can create "word cloud" data visualizations given a long input text&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This captured my imagination: I've always wanted to know how word clouds work, so I fired off an &lt;a href="https://simonwillison.net/2025/Nov/6/async-code-research/"&gt;asynchronous research project&lt;/a&gt; - &lt;a href="https://github.com/simonw/research/pull/91#issue-4002426963"&gt;initial prompt here&lt;/a&gt;, &lt;a href="https://github.com/simonw/research/tree/main/rust-wordcloud"&gt;code and report here&lt;/a&gt; - to explore the idea.&lt;/p&gt;
&lt;p&gt;This worked really well: Claude Code for web built me a Rust CLI tool that could produce images like this one:&lt;/p&gt;
&lt;p&gt;&lt;img alt="A word cloud, many words, different colors and sizes, larger words in the middle." src="https://raw.githubusercontent.com/simonw/research/refs/heads/main/rust-wordcloud/wordcloud.png" /&gt;&lt;/p&gt;
&lt;p&gt;But how does it actually work?&lt;/p&gt;
&lt;p&gt;Claude's report said it uses "&lt;strong&gt;Archimedean spiral placement&lt;/strong&gt; with per-word random angular offset for natural-looking layouts". This did not help me much!&lt;/p&gt;
&lt;p&gt;I requested a &lt;a href="https://simonwillison.net/guides/agentic-engineering-patterns/linear-walkthroughs/"&gt;linear walkthrough&lt;/a&gt; of the codebase - here's &lt;a href="https://github.com/simonw/research/blob/main/rust-wordcloud/walkthrough.md"&gt;that walkthrough&lt;/a&gt; (and &lt;a href="https://github.com/simonw/research/commit/2cb8c62477173ef6a4c2e274be9f712734df6126"&gt;the prompt&lt;/a&gt;). This helped me understand the structure of the Rust code, but I still didn't have an intuitive understanding of how that "Archimedean spiral placement" part actually worked.&lt;/p&gt;
&lt;p&gt;So I asked for an &lt;strong&gt;animated explanation&lt;/strong&gt;. I did this by pasting a link to that existing &lt;code&gt;walkthrough.md&lt;/code&gt; document into a Claude Code session along with the following:&lt;/p&gt;
&lt;pre&gt;Fetch https://raw.githubusercontent.com/simonw/research/refs/heads/main/rust-wordcloud/walkthrough.md to /tmp using curl so you can read the whole thing

Inspired by that, build animated-word-cloud.html - a page that accepts pasted text (which it persists in the `#fragment` of the URL such that a page loaded with that `#` populated will use that text as input and auto-submit it) such that when you submit the text it builds a word cloud using the algorithm described in that document but does it animated, to make the algorithm as clear to understand. Include a slider for the animation which can be paused and the speed adjusted or even stepped through frame by frame while paused. At any stage the visible in-progress word cloud can be downloaded as a PNG.&lt;/pre&gt;
&lt;p&gt;You can &lt;a href="https://tools.simonwillison.net/animated-word-cloud"&gt;play with the result here&lt;/a&gt;. Here's an animated GIF demo:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Words appear on the word cloud one at a time, with little boxes showing where the algorithm is attempting to place them - if those boxes overlap an existing word it tries again." src="https://static.simonwillison.net/static/2026/animated-word-cloud-demo.gif" /&gt;&lt;/p&gt;
&lt;p&gt;This was using Claude Opus 4.6, which turns out to have quite good taste when it comes to building explanatory animations.&lt;/p&gt;
&lt;p&gt;If you watch the animation closely you can see that for each word it attempts to place it somewhere on the page, showing a box, then checks if that box intersects an existing word. If it does, it continues trying to find a good spot, moving outward in a spiral from the center.&lt;/p&gt;
&lt;p&gt;I found that this animation really helped make the way the algorithm worked click for me.&lt;/p&gt;
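&lt;p&gt;The loop the animation visualizes can be sketched in a few lines of Python - a minimal sketch, using invented parameters (canvas size, spiral growth rate, step size) rather than the values from the actual Rust code:&lt;/p&gt;

```python
import math
import random


def place_words(words, width=800, height=600, growth=2.0, step=0.1):
    """Greedy word-cloud layout sketch: for each word, walk outward
    from the center along an Archimedean spiral (r = growth * theta)
    and keep the first position whose bounding box overlaps nothing
    already placed. `words` is a list of (word, (w, h)) pairs,
    largest first."""
    placed = []  # (x, y, w, h) boxes already on the canvas
    result = []

    def overlaps(x, y, w, h):
        return any(x < px + pw and px < x + w and
                   y < py + ph and py < y + h
                   for px, py, pw, ph in placed)

    for word, (w, h) in words:
        offset = random.uniform(0, 2 * math.pi)  # per-word random angular offset
        theta = 0.0
        while True:
            r = growth * theta  # radius grows linearly with the angle
            x = width / 2 + r * math.cos(theta + offset) - w / 2
            y = height / 2 + r * math.sin(theta + offset) - h / 2
            if not overlaps(x, y, w, h):
                placed.append((x, y, w, h))
                result.append((word, (x, y, w, h)))
                break
            theta += step  # no room here: try the next point along the spiral
    return result
```

&lt;p&gt;The key property is that the radius grows linearly with the angle, so failed placement attempts drift steadily outward from the center - exactly the motion visible in the animation.&lt;/p&gt;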
&lt;p&gt;I have long been a fan of animations and interactive interfaces to help explain different concepts. A good coding agent can produce these on demand to help explain code - its own code or code written by others.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/explorables"&gt;explorables&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="ai"/><category term="llms"/><category term="coding-agents"/><category term="ai-assisted-programming"/><category term="cognitive-debt"/><category term="generative-ai"/><category term="explorables"/><category term="agentic-engineering"/></entry><entry><title>Nano Banana Pro diff to webcomic</title><link href="https://simonwillison.net/2026/Feb/17/release-notes-webcomic/#atom-tag" rel="alternate"/><published>2026-02-17T04:51:58+00:00</published><updated>2026-02-17T04:51:58+00:00</updated><id>https://simonwillison.net/2026/Feb/17/release-notes-webcomic/#atom-tag</id><summary type="html">
    &lt;p&gt;Given the threat of &lt;a href="https://simonwillison.net/tags/cognitive-debt/"&gt;cognitive debt&lt;/a&gt; brought on by AI-accelerated software development leading to more projects and less deep understanding of how they work and what they actually do, it's interesting to consider artifacts that might be able to help.&lt;/p&gt;
&lt;p&gt;Nathan Baschez &lt;a href="https://twitter.com/nbaschez/status/2023501535343509871"&gt;on Twitter&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;my current favorite trick for reducing "cognitive debt" (h/t @simonw) is to ask the LLM to write two versions of the plan:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The version for it (highly technical and detailed)&lt;/li&gt;
&lt;li&gt;The version for me (an entertaining essay designed to build my intuition)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Works great&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This inspired me to try something new. I generated &lt;a href="https://github.com/simonw/showboat/compare/v0.5.0...v0.6.0.diff"&gt;the diff&lt;/a&gt; between v0.5.0 and v0.6.0 of my Showboat project - which introduced &lt;a href="https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#showboat-remote-publishing"&gt;the remote publishing feature&lt;/a&gt; - and dumped that into Nano Banana Pro with the prompt:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Create a webcomic that explains the new feature as clearly and entertainingly as possible&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's &lt;a href="https://gemini.google.com/share/cce6da8e5083"&gt;what it produced&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt="A six-panel comic strip illustrating a tool called &amp;quot;Showboat&amp;quot; for live-streaming document building. Panel 1, titled &amp;quot;THE OLD WAY: Building docs was a lonely voyage. You finished it all before anyone saw it.&amp;quot;, shows a sad bearded man on a wooden boat labeled &amp;quot;THE LOCALHOST&amp;quot; holding papers and saying &amp;quot;Almost done... then I have to export and email the HTML...&amp;quot;. Panel 2, titled &amp;quot;THE UPGRADE: Just set the environment variable!&amp;quot;, shows the same man excitedly plugging in a device with a speech bubble reading &amp;quot;ENV VAR: SHOWBOAT_REMOTE_URL&amp;quot; and the sound effect &amp;quot;*KA-CHUNK!*&amp;quot;. Panel 3, titled &amp;quot;init establishes the uplink and generates a unique UUID beacon.&amp;quot;, shows the man typing at a keyboard with a terminal reading &amp;quot;$ showboat init 'Live Demo'&amp;quot;, a satellite dish transmitting to a floating label &amp;quot;UUID: 550e84...&amp;quot;, and a monitor reading &amp;quot;WAITING FOR STREAM...&amp;quot;. Panel 4, titled &amp;quot;Every note and exec is instantly beamed to the remote viewer!&amp;quot;, shows the man coding with sound effects &amp;quot;*HAMMER!*&amp;quot;, &amp;quot;ZAP!&amp;quot;, &amp;quot;ZAP!&amp;quot;, &amp;quot;BANG!&amp;quot; as red laser beams shoot from a satellite dish to a remote screen displaying &amp;quot;NOTE: Step 1...&amp;quot; and &amp;quot;SUCCESS&amp;quot;. Panel 5, titled &amp;quot;Even image files are teleported in real-time!&amp;quot;, shows a satellite dish firing a cyan beam with the sound effect &amp;quot;*FOOMP!*&amp;quot; toward a monitor displaying a bar chart. Panel 6, titled &amp;quot;You just build. 
The audience gets the show live.&amp;quot;, shows the man happily working at his boat while a crowd of cheering people watches a projected screen reading &amp;quot;SHOWBOAT LIVE STREAM: Live Demo&amp;quot;, with a label &amp;quot;UUID: 550e84...&amp;quot; and one person in the foreground eating popcorn." src="https://static.simonwillison.net/static/2026/nano-banana-diff.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Good enough to publish with the release notes? I don't think so. I'm sharing it here purely to demonstrate the idea. Creating assets like this as a personal tool for thinking about novel ways to explain a feature feels worth exploring further.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/text-to-image"&gt;text-to-image&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nano-banana"&gt;nano-banana&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/showboat"&gt;showboat&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="gemini"/><category term="text-to-image"/><category term="nano-banana"/><category term="showboat"/><category term="cognitive-debt"/></entry><entry><title>The AI Vampire</title><link href="https://simonwillison.net/2026/Feb/15/the-ai-vampire/#atom-tag" rel="alternate"/><published>2026-02-15T23:59:36+00:00</published><updated>2026-02-15T23:59:36+00:00</updated><id>https://simonwillison.net/2026/Feb/15/the-ai-vampire/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163"&gt;The AI Vampire&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Steve Yegge's take on agent fatigue, and its relationship to burnout.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Let's pretend you're the only person at your company using AI.&lt;/p&gt;
&lt;p&gt;In Scenario A, you decide you're going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.&lt;/p&gt;
&lt;p&gt;In that scenario, your employer captures 100% of the value from &lt;em&gt;you&lt;/em&gt; adopting AI. You get nothing, or at any rate, it ain't gonna be 9x your salary. And everyone hates you now.&lt;/p&gt;
&lt;p&gt;And you're &lt;em&gt;exhausted.&lt;/em&gt; You're tired, Boss. You got nothing for it.&lt;/p&gt;
&lt;p&gt;Congrats, you were just drained by a company. I've been drained to the point of burnout several times in my career, even at Google once or twice. But now with AI, it's oh, so much easier.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Steve reports needing more sleep due to the cognitive burden involved in agentic engineering, and notes that four hours of agent work a day is a more realistic pace:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I’ve argued that AI has turned us all into Jeff Bezos, by automating the easy work, and leaving us with all the difficult decisions, summaries, and problem-solving. I find that I am only really comfortable working at that pace for short bursts of a few hours once or occasionally twice a day, even with lots of practice.&lt;/p&gt;
&lt;/blockquote&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://cosocial.ca/@timbray/116076167774984883"&gt;Tim Bray&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/steve-yegge"&gt;steve-yegge&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;



</summary><category term="steve-yegge"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="ai-ethics"/><category term="coding-agents"/><category term="cognitive-debt"/><category term="agentic-engineering"/></entry><entry><title>How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</title><link href="https://simonwillison.net/2026/Feb/15/cognitive-debt/#atom-tag" rel="alternate"/><published>2026-02-15T05:20:11+00:00</published><updated>2026-02-15T05:20:11+00:00</updated><id>https://simonwillison.net/2026/Feb/15/cognitive-debt/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/"&gt;How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This piece by Margaret-Anne Storey is the best explanation of the term &lt;strong&gt;cognitive debt&lt;/strong&gt; I've seen so far.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Cognitive debt&lt;/em&gt;, a term gaining &lt;a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/"&gt;traction&lt;/a&gt; recently, instead communicates the notion that the debt compounded from going fast lives in the brains of the developers and affects their lived experiences and abilities to “go fast” or to make changes. Even if AI agents produce code that could be easy to understand, the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Margaret-Anne expands on this further with an anecdote about a student team she coached:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I've experienced this myself on some of my more ambitious vibe-code-adjacent projects. I've been experimenting with prompting entire new features into existence without reviewing their implementations and, while it works surprisingly well, I've found myself getting lost in my own projects.&lt;/p&gt;
&lt;p&gt;I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://martinfowler.com/fragments/2026-02-13.html"&gt;Martin Fowler&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/definitions"&gt;definitions&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vibe-coding"&gt;vibe-coding&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="definitions"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="vibe-coding"/><category term="cognitive-debt"/></entry><entry><title>AI Doesn’t Reduce Work—It Intensifies It</title><link href="https://simonwillison.net/2026/Feb/9/ai-intensifies-work/#atom-tag" rel="alternate"/><published>2026-02-09T16:43:07+00:00</published><updated>2026-02-09T16:43:07+00:00</updated><id>https://simonwillison.net/2026/Feb/9/ai-intensifies-work/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it"&gt;AI Doesn’t Reduce Work—It Intensifies It&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Aruna Ranganathan and Xingqi Maggie Ye from Berkeley Haas School of Business report initial findings in HBR from their April to December 2025 study of 200 employees at a "U.S.-based technology company".&lt;/p&gt;
&lt;p&gt;This captures an effect I've been observing in my own work with LLMs: the productivity boost these things can provide is &lt;em&gt;exhausting&lt;/em&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in parallel, or reviving long-deferred tasks because AI could “handle them” in the background. They did this, in part, because they felt they had a “partner” that could help them move through their workload.&lt;/p&gt;
&lt;p&gt;While this sense of having a “partner” enabled a feeling of momentum, the reality was a continual switching of attention, frequent checking of AI outputs, and a growing number of open tasks. This created cognitive load and a sense of always juggling, even as the work felt productive.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I'm frequently finding myself with work on two or three projects running parallel. I can get &lt;em&gt;so much done&lt;/em&gt;, but after just an hour or two my mental energy for the day feels almost entirely depleted.&lt;/p&gt;
&lt;p&gt;I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.&lt;/p&gt;
&lt;p&gt;The HBR piece calls for organizations to build an "AI practice" that structures how AI is used to help avoid burnout and counter effects that "make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity".&lt;/p&gt;
&lt;p&gt;I think we've just disrupted decades of existing intuition about sustainable working practices. It's going to take a while and some discipline to find a good new balance.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=46945755"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/careers"&gt;careers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="careers"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-assisted-programming"/><category term="ai-ethics"/><category term="cognitive-debt"/></entry><entry><title>Quoting Tom Dale</title><link href="https://simonwillison.net/2026/Feb/6/tom-dale/#atom-tag" rel="alternate"/><published>2026-02-06T23:41:31+00:00</published><updated>2026-02-06T23:41:31+00:00</updated><id>https://simonwillison.net/2026/Feb/6/tom-dale/#atom-tag</id><summary type="html">
    &lt;blockquote cite="https://twitter.com/tomdale/status/2019828626972131441"&gt;&lt;p&gt;I don't know why this week became the tipping point, but nearly every software engineer I've talked to is experiencing some degree of mental health crisis.&lt;/p&gt;
&lt;p&gt;[...] Many people assuming I meant job loss anxiety but that's just one presentation. I'm seeing near-manic episodes triggered by watching software shift from scarce to abundant. Compulsive behaviors around agent usage. Dissociative awe at the temporal compression of change. It's not fear necessarily just the cognitive overload from living in an inflection point.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/tomdale/status/2019828626972131441"&gt;Tom Dale&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/careers"&gt;careers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="careers"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-ethics"/><category term="coding-agents"/><category term="cognitive-debt"/></entry><entry><title>Quoting Simon Højberg</title><link href="https://simonwillison.net/2025/Oct/8/simon-hojberg/#atom-tag" rel="alternate"/><published>2025-10-08T18:08:32+00:00</published><updated>2025-10-08T18:08:32+00:00</updated><id>https://simonwillison.net/2025/Oct/8/simon-hojberg/#atom-tag</id><summary type="html">
    &lt;blockquote cite="https://hojberg.xyz/the-programmer-identity-crisis/"&gt;&lt;p&gt;The cognitive debt of LLM-laden coding extends beyond disengagement of our craft. We’ve all heard the stories. Hyped up, vibed up, slop-jockeys with attention spans shorter than the framework-hopping JavaScript devs of the early 2010s, sling their sludge in pull requests and design docs, discouraging collaboration and disrupting teams. Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://hojberg.xyz/the-programmer-identity-crisis/"&gt;Simon Højberg&lt;/a&gt;, The Programmer Identity Crisis&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/code-review"&gt;code-review&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-misuse"&gt;ai-misuse&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="code-review"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="ai-ethics"/><category term="ai-misuse"/><category term="cognitive-debt"/></entry><entry><title>Cognitive load is what matters</title><link href="https://simonwillison.net/2024/Dec/26/cognitive-load-is-what-matters/#atom-tag" rel="alternate"/><published>2024-12-26T06:01:08+00:00</published><updated>2024-12-26T06:01:08+00:00</updated><id>https://simonwillison.net/2024/Dec/26/cognitive-load-is-what-matters/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://minds.md/zakirullin/cognitive"&gt;Cognitive load is what matters&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Excellent living document (the underlying repo has &lt;a href="https://github.com/zakirullin/cognitive-load/commits/main/"&gt;625 commits&lt;/a&gt; since being created in May 2023) maintained by Artem Zakirullin about minimizing the cognitive load needed to understand and maintain software.&lt;/p&gt;
&lt;p&gt;This all rings very true to me. I judge the quality of a piece of code by how easy it is to change, and anything that causes me to take on more cognitive load - unraveling a class hierarchy, reading through dozens of tiny methods - reduces the quality of the code by that metric.&lt;/p&gt;
&lt;p&gt;Lots of accumulated snippets of wisdom in this one.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Mantras like "methods should be shorter than 15 lines of code" or "classes should be small" turned out to be somewhat wrong.&lt;/p&gt;
&lt;/blockquote&gt;
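&lt;p&gt;A hypothetical illustration of that trade-off (all names invented): the same calculation split across tiny methods forces the reader to chase indirection, while one short function reads in a single pass.&lt;/p&gt;

```python
class OrderTiny:
    """Logic split across tiny methods: each one is trivial in
    isolation, but the reader must chase five hops to see what
    `total` actually computes."""

    def total(self, items):
        return self._apply_tax(self._subtotal(items))

    def _subtotal(self, items):
        return sum(self._line_price(item) for item in items)

    def _line_price(self, item):
        return item["price"] * item["qty"]

    def _apply_tax(self, subtotal):
        return subtotal * (1 + self._tax_rate())

    def _tax_rate(self):
        return 0.08


def order_total(items, tax_rate=0.08):
    """The same computation as one short function, readable at a glance."""
    subtotal = sum(item["price"] * item["qty"] for item in items)
    return subtotal * (1 + tax_rate)
```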

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/karpathy/status/1872038630405054853?s=46"&gt;@karpathy&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/programming"&gt;programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/software-engineering"&gt;software-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="programming"/><category term="software-engineering"/><category term="cognitive-debt"/></entry></feed>