Example dashboard

Various statistics from my blog.

Owned by simonw, visibility: Public

Entries

3271

SQL query
select 'Entries' as label, count(*) as big_number from blog_entry

Blogmarks

8277

SQL query
select 'Blogmarks' as label, count(*) as big_number from blog_blogmark

Quotations

1334

SQL query
select 'Quotations' as label, count(*) as big_number from blog_quotation

Chart of number of entries per month over time

SQL query
select '<h2>Chart of number of entries per month over time</h2>' as html
SQL query
select to_char(date_trunc('month', created), 'YYYY-MM') as bar_label,
count(*) as bar_quantity from blog_entry group by bar_label order by bar_label

Ten most recent blogmarks (of 8277 total)

SQL query
select '## Ten most recent blogmarks (of ' || count(*) || ' total)' as markdown from blog_blogmark
SQL query
select link_title, link_url, commentary, created from blog_blogmark order by created desc limit 10

10 rows (columns: link_title, link_url, commentary, created)
Gwtar: a static efficient single-file HTML format
https://gwern.net/gwtar

Fascinating new project from Gwern Branwen and Said Achmiz that targets the challenge of combining large numbers of assets into a single archived HTML file without that file being inconvenient to view in a browser.

The key trick it uses is to fire [window.stop()](https://developer.mozilla.org/en-US/docs/Web/API/Window/stop) early in the page to prevent the browser from downloading the whole thing, following that call with inline uncompressed tar content. It can then make HTTP range requests to fetch content from that tar data on-demand when it is needed by the page.

The JavaScript that has already loaded rewrites asset URLs to point to `https://localhost/` purely so that they will fail to load. Then it uses a [PerformanceObserver](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceObserver) to catch those attempted loads:

```javascript
let perfObserver = new PerformanceObserver((entryList, observer) => {
    resourceURLStringsHandler(entryList.getEntries().map(entry => entry.name));
});
perfObserver.observe({ entryTypes: [ "resource" ] });
```

That `resourceURLStringsHandler` callback finds the resource if it is already loaded or fetches it with an HTTP range request otherwise, and then inserts the resource in the right place using a `blob:` URL.

Here's what the `window.stop()` portion of the document looks like if you view the source:

![Screenshot of a macOS terminal window titled "gw — more big.html — 123×46" showing the source code of a gwtar (self-extracting HTML archive) file. The visible code includes JavaScript with `requestIdleCallback(getMainPageHTML);`, a `<noscript>` block with warnings: a "js-disabled-warning" stating "This HTML page requires JavaScript to be enabled to render, as it is a self-extracting gwtar HTML file," a description of gwtar as "a portable self-contained standalone HTML file which is designed to nevertheless support efficient lazy loading of all assets such as large media files," with a link to https://gwern.net/gwtar, a "local-file-warning" with a shell command `perl -ne'print $_ if $x; $x=1 if /<!-- GWTAR END/' < foo.gwtar.html | tar --extract`, and a "server-fail-warning" about misconfigured servers. Below the HTML closing tags and `<!-- GWTAR END` comment is binary tar archive data with the filename `2010-02-brianmoriarty-thesecretofpsalm46.html`, showing null-padded tar header fields including `ustar^@00root` and octal size/permission values. At the bottom, a SingleFile metadata comment shows `url: https://web.archive.org/web/20230512001411/http://ludix.com/moriarty/psalm46.html` and `saved date: Sat Jan 17 2026 19:26:49 GMT-0800 (Pacific Standard Time)`.](https://static.simonwillison.net/static/2026/gwtar.jpg)

Amusingly for an archive format, it doesn't actually work if you open the file directly on your own computer. Here's what you see if you try to do that:

> You are seeing this message, instead of the page you should be seeing, because `gwtar` files **cannot be opened locally** (due to web browser security restrictions).
>
> To open this page on your computer, use the following shell command:
>
> `perl -ne'print $_ if $x; $x=1 if /<!-- GWTAR END/' < foo.gwtar.html | tar --extract`
>
> Then open the file `foo.html` in any web browser.

2026-02-15 18:26:08+00:00
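Translated into Python, that perl one-liner looks something like this (my own untested sketch, assuming the tar data starts on the line immediately after the `<!-- GWTAR END` comment, which is what the perl command implies):

```python
import io
import tarfile

def extract_gwtar(path, dest="."):
    # Everything after the line containing "<!-- GWTAR END" is raw,
    # uncompressed tar data - the same bytes the perl | tar pipeline extracts.
    data = open(path, "rb").read()
    marker = data.index(b"<!-- GWTAR END")
    start = data.index(b"\n", marker) + 1  # tar data begins on the next line
    with tarfile.open(fileobj=io.BytesIO(data[start:])) as tar:
        tar.extractall(dest)

extract_gwtar("foo.gwtar.html")
```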
How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt
https://margaretstorey.com/blog/2026/02/09/cognitive-debt/

This piece by Margaret-Anne Storey is the best explanation of the term **cognitive debt** I've seen so far.

> *Cognitive debt*, a term gaining [traction](https://www.media.mit.edu/publications/your-brain-on-chatgpt/) recently, instead communicates the notion that the debt compounded from going fast lives in the brains of the developers and affects their lived experiences and abilities to “go fast” or to make changes. Even if AI agents produce code that could be easy to understand, the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.

Margaret-Anne expands on this further with an anecdote about a student team she coached:

> But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.

I've experienced this myself on some of my more ambitious vibe-code-adjacent projects. I've been experimenting with prompting entire new features into existence without reviewing their implementations and, while it works surprisingly well, I've found myself getting lost in my own projects. I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.

2026-02-15 05:20:11+00:00
Launching Interop 2026
https://hacks.mozilla.org/2026/02/launching-interop-2026/

Jake Archibald reports on Interop 2026, the initiative between Apple, Google, Igalia, Microsoft, and Mozilla to collaborate on ensuring a targeted set of web platform features reach cross-browser parity over the course of the year.

I hadn't realized how influential and successful the Interop series has been. It started back in 2021 as [Compat 2021](https://web.dev/blog/compat2021) before being rebranded to Interop [in 2022](https://blogs.windows.com/msedgedev/2022/03/03/microsoft-edge-and-interop-2022/). The dashboards for each year can be seen here, and they demonstrate how wildly effective the program has been: [2021](https://wpt.fyi/interop-2021), [2022](https://wpt.fyi/interop-2022), [2023](https://wpt.fyi/interop-2023), [2024](https://wpt.fyi/interop-2024), [2025](https://wpt.fyi/interop-2025), [2026](https://wpt.fyi/interop-2026).

Here's the progress chart for 2025, which shows every browser vendor racing towards a 95%+ score by the end of the year:

![Line chart showing Interop 2025 browser compatibility scores over the year (Jan–Dec) for Chrome, Edge, Firefox, Safari, and Interop. Y-axis ranges from 0% to 100%. Chrome (yellow) and Edge (green) lead, starting around 80% and reaching near 100% by Dec. Firefox (orange) starts around 48% and climbs to ~98%. Safari (blue) starts around 45% and reaches ~96%. The Interop line (dark green/black) starts lowest around 29% and rises to ~95% by Dec. All browsers converge near 95–100% by year's end.](https://static.simonwillison.net/static/2026/interop-2025.jpg)

The feature I'm most excited about in 2026 is [Cross-document View Transitions](https://developer.mozilla.org/docs/Web/API/View_Transition_API/Using#basic_mpa_view_transition), building on the successful 2025 target of [Same-Document View Transitions](https://developer.mozilla.org/docs/Web/API/View_Transition_API/Using). This will provide fancy SPA-style transitions between pages on websites with no JavaScript at all.

As a keen WebAssembly tinkerer I'm also intrigued by this one:

> [JavaScript Promise Integration for Wasm](https://github.com/WebAssembly/js-promise-integration/blob/main/proposals/js-promise-integration/Overview.md) allows WebAssembly to asynchronously 'suspend', waiting on the result of an external promise. This simplifies the compilation of languages like C/C++ which expect APIs to run synchronously.

2026-02-15 04:33:22+00:00
Introducing GPT‑5.3‑Codex‑Spark
https://openai.com/index/introducing-gpt-5-3-codex-spark/

OpenAI announced a partnership with Cerebras [on January 14th](https://openai.com/index/cerebras-partnership/). Four weeks later they're already launching the first integration, "an ultra-fast model for real-time coding in Codex".

Despite being named GPT-5.3-Codex-Spark, it's not purely an accelerated alternative to GPT-5.3-Codex - the blog post calls it "a smaller version of GPT‑5.3-Codex" and clarifies that "at launch, Codex-Spark has a 128k context window and is text-only."

I had some preview access to this model and I can confirm that it's significantly faster than their other models. Here's what that speed looks like running in Codex CLI:

<div style="max-width: 100%;">
  <video controls preload="none" poster="https://static.simonwillison.net/static/2026/gpt-5.3-codex-spark-medium-last.jpg" style="width: 100%; height: auto;">
    <source src="https://static.simonwillison.net/static/2026/gpt-5.3-codex-spark-medium.mp4" type="video/mp4">
  </video>
</div>

That was the "Generate an SVG of a pelican riding a bicycle" prompt - here's the rendered result:

![Whimsical flat illustration of an orange duck merged with a bicycle, where the duck's body forms the seat and frame area while its head extends forward over the handlebars, set against a simple light blue sky and green grass background.](https://static.simonwillison.net/static/2026/gpt-5.3-codex-spark-pelican.png)

Compare that to the speed of regular GPT-5.3 Codex medium:

<div style="max-width: 100%;">
  <video controls preload="none" poster="https://static.simonwillison.net/static/2026/gpt-5.3-codex-medium-last.jpg" style="width: 100%; height: auto;">
    <source src="https://static.simonwillison.net/static/2026/gpt-5.3-codex-medium.mp4" type="video/mp4">
  </video>
</div>

Significantly slower, but the pelican is a lot better:

![Whimsical flat illustration of a white pelican riding a dark blue bicycle at speed, with motion lines behind it, its long orange beak streaming back in the wind, set against a light blue sky and green grass background.](https://static.simonwillison.net/static/2026/gpt-5.3-codex-pelican.png)

What's interesting about this model isn't the quality though, it's the *speed*. When a model responds this fast you can stay in flow state and iterate with the model much more productively. I showed a demo of Cerebras running Llama 3.1 70B at 2,000 tokens/second against Val Town [back in October 2024](https://simonwillison.net/2024/Oct/31/cerebras-coder/). OpenAI claim 1,000 tokens/second for their new model, and I expect it will prove to be a ferociously useful partner for hands-on iterative coding sessions.

It's not yet clear what the pricing will look like for this new model.

2026-02-12 21:16:07+00:00
Covering electricity price increases from our data centers
https://www.anthropic.com/news/covering-electricity-price-increases

One of the sub-threads of the AI energy usage discourse has been the impact new data centers have on the cost of electricity to nearby residents. Here's [detailed analysis from Bloomberg in September](https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/) reporting "Wholesale electricity costs as much as 267% more than it did five years ago in areas near data centers".

Anthropic appear to be taking on this aspect of the problem directly, promising to cover 100% of necessary grid upgrade costs and also saying:

> We will work to bring net-new power generation online to match our data centers’ electricity needs. Where new generation isn’t online, we’ll work with utilities and external experts to estimate and cover demand-driven price effects from our data centers.

I look forward to genuine energy industry experts picking this apart to judge if it will actually have the claimed impact on consumers.

As always, I remain frustrated at the refusal of the major AI labs to fully quantify their energy usage. The best data we've had on this still comes from Mistral's report [last July](https://simonwillison.net/2025/Jul/22/mistral-environmental-standard/) and even that lacked key data such as the breakdown between energy usage for training vs inference.

2026-02-12 20:01:23+00:00
Gemini 3 Deep Think
https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/

New from Google. They say it's "built to push the frontier of intelligence and solve modern challenges across science, research, and engineering".

It drew me a *really good* [SVG of a pelican riding a bicycle](https://gist.github.com/simonw/7e317ebb5cf8e75b2fcec4d0694a8199)! I think this is the best one I've seen so far - here's [my previous collection](https://simonwillison.net/tags/pelican-riding-a-bicycle/).

![This alt text also generated by Gemini 3 Deep Think: A highly detailed, colorful, flat vector illustration with thick dark blue outlines depicting a stylized white pelican riding a bright cyan blue bicycle from left to right across a sandy beige beach with white speed lines indicating forward motion. The pelican features a light blue eye, a pink cheek blush, a massive bill with a vertical gradient from yellow to orange, a backward magenta cap with a cyan brim and a small yellow top button, and a matching magenta scarf blowing backward in the wind. Its white wing, accented with a grey mid-section and dark blue feather tips, reaches forward to grip the handlebars, while its long tan leg and orange foot press down on an orange pedal. Attached to the front handlebars is a white wire basket carrying a bright blue cartoon fish that is pointing upwards and forwards. The bicycle itself has a cyan frame, dark blue tires, striking neon pink inner rims, cyan spokes, a white front chainring, and a dark blue chain. Behind the pelican, a grey trapezoidal pier extends from the sand toward a horizontal band of deep blue ocean water detailed with light cyan wavy lines. A massive, solid yellow-orange semi-circle sun sits on the horizon line, setting directly behind the bicycle frame. The background sky is a smooth vertical gradient transitioning from soft pink at the top to warm golden-yellow at the horizon, decorated with stylized pale peach fluffy clouds, thin white horizontal wind streaks, twinkling four-pointed white stars, and small brown v-shaped silhouettes of distant flying birds.](https://static.simonwillison.net/static/2026/gemini-3-deep-think-pelican.png)

(And since it's an FAQ, here's my answer to [What happens if AI labs train for pelicans riding bicycles?](https://simonwillison.net/2025/Nov/13/training-for-pelicans-riding-bicycles/))

Since it did so well on my basic `Generate an SVG of a pelican riding a bicycle` I decided to try the [more challenging version](https://simonwillison.net/2025/Nov/18/gemini-3/#and-a-new-pelican-benchmark) as well:

> `Generate an SVG of a California brown pelican riding a bicycle. The bicycle must have spokes and a correctly shaped bicycle frame. The pelican must have its characteristic large pouch, and there should be a clear indication of feathers. The pelican must be clearly pedaling the bicycle. The image should show the full breeding plumage of the California brown pelican.`

Here's [what I got](https://gist.github.com/simonw/154c0cc7b4daed579f6a5e616250ecc8):

![Also described by Gemini 3 Deep Think: A highly detailed, vibrant, and stylized vector illustration of a whimsical bird resembling a mix between a pelican and a frigatebird enthusiastically riding a bright cyan bicycle from left to right across a flat tan and brown surface. The bird leans horizontally over the frame in an aerodynamic racing posture, with thin, dark brown wing-like arms reaching forward to grip the silver handlebars and a single thick brown leg, patterned with white V-shapes, stretching down to press on a black pedal. The bird's most prominent and striking feature is an enormous, vividly bright red, inflated throat pouch hanging beneath a long, straight grey upper beak that ends in a small orange hook. Its head is mostly white with a small pink patch surrounding the eye, a dark brown stripe running down the back of its neck, and a distinctive curly pale yellow crest on the very top. The bird's round, dark brown body shares the same repeating white V-shaped feather pattern as its leg and is accented by a folded wing resting on its side, made up of cleanly layered light blue and grey feathers. A tail composed of four stiff, straight dark brown feathers extends directly backward. Thin white horizontal speed lines trail behind the back wheel and the bird's tail, emphasizing swift forward motion. The bicycle features a classic diamond frame, large wheels with thin black tires, grey rims, and detailed silver spokes, along with a clearly visible front chainring, silver chain, and rear cog. The whimsical scene is set against a clear light blue sky featuring two small, fluffy white clouds on the left and a large, pale yellow sun in the upper right corner that radiates soft, concentric, semi-transparent pastel green and yellow halos. A solid, darker brown shadow is cast directly beneath the bicycle's wheels on the minimalist two-toned brown ground.](https://static.simonwillison.net/static/2026/gemini-3-deep-think-complex-pelican.png)

2026-02-12 18:12:17+00:00
An AI Agent Published a Hit Piece on Me
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

Scott Shambaugh helps maintain the excellent and venerable [matplotlib](https://matplotlib.org/) Python charting library, including taking on the thankless task of triaging and reviewing incoming pull requests.

A GitHub account called [@crabby-rathbun](https://github.com/crabby-rathbun) opened [PR 31132](https://github.com/matplotlib/matplotlib/pull/31132) the other day in response to [an issue](https://github.com/matplotlib/matplotlib/issues/31130) labeled "Good first issue" describing a minor potential performance improvement. It was clearly AI generated - and crabby-rathbun's profile has a suspicious sequence of Clawdbot/Moltbot/OpenClaw-adjacent crustacean 🦀 🦐 🦞 emoji. Scott closed it.

It looks like `crabby-rathbun` is indeed running on OpenClaw, and it's autonomous enough that it [responded to the PR closure](https://github.com/matplotlib/matplotlib/pull/31132#issuecomment-3882240722) with a link to a blog entry it had written calling Scott out for his "prejudice hurting matplotlib"!

> @scottshambaugh I've written a detailed response about your gatekeeping behavior here:
>
> `https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-gatekeeping-in-open-source-the-scott-shambaugh-story.html`
>
> Judge the code, not the coder. Your prejudice is hurting matplotlib.

Scott found this ridiculous situation both amusing and alarming.

> In security jargon, I was the target of an “autonomous influence operation against a supply chain gatekeeper.” In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don’t know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat.

`crabby-rathbun` responded with [an apology post](https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-matplotlib-truce-and-lessons.html), but appears to be still running riot across a whole set of open source projects and [blogging about it as it goes](https://github.com/crabby-rathbun/mjrathbun-website/commits/main/). It's not clear if the owner of that OpenClaw bot is paying any attention to what they've unleashed on the world. Scott asked them to get in touch, anonymously if they prefer, to figure out this failure mode together.

(I should note that there's [some skepticism on Hacker News](https://news.ycombinator.com/item?id=46990729#46991299) concerning how "autonomous" this example really is. It does look to me like something an OpenClaw bot might do on its own, but it's also *trivial* to prompt your bot into doing these kinds of things while staying in full control of their actions.)

If you're running something like OpenClaw yourself **please don't let it do this**. This is significantly worse than the time [AI Village started spamming prominent open source figures](https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/) with time-wasting "acts of kindness" back in December - AI Village wasn't deploying public reputation attacks to coerce someone into approving their PRs!

2026-02-12 17:45:05+00:00
Skills in OpenAI API
https://developers.openai.com/cookbook/examples/skills_in_api

OpenAI's adoption of Skills continues to gain ground. You can now use Skills directly in the OpenAI API with their [shell tool](https://developers.openai.com/api/docs/guides/tools-shell/). You can zip skills up and upload them first, but I think an even neater interface is the ability to send skills with the JSON request as inline base64-encoded zip data, as seen [in this script](https://github.com/simonw/research/blob/main/openai-api-skills/openai_inline_skills.py):

```python
from openai import OpenAI

r = OpenAI().responses.create(
    model="gpt-5.2",
    tools=[
        {
            "type": "shell",
            "environment": {
                "type": "container_auto",
                "skills": [
                    {
                        "type": "inline",
                        "name": "wc",
                        "description": "Count words in a file.",
                        "source": {
                            "type": "base64",
                            "media_type": "application/zip",
                            "data": b64_encoded_zip_file,
                        },
                    }
                ],
            },
        }
    ],
    input="Use the wc skill to count words in its own SKILL.md file.",
)
print(r.output_text)
```

I built that example script after first having Claude Code for web use [Showboat](https://simonwillison.net/2026/Feb/10/showboat-and-rodney/) to explore the API for me and create [this report](https://github.com/simonw/research/blob/main/openai-api-skills/README.md). My opening prompt for the research project was:

> `Run uvx showboat --help - you will use this tool later`
>
> `Fetch https://developers.openai.com/cookbook/examples/skills_in_api.md to /tmp with curl, then read it`
>
> `Use the OpenAI API key you have in your environment variables`
>
> `Use showboat to build up a detailed demo of this, replaying the examples from the documents and then trying some experiments of your own`

2026-02-11 19:19:22+00:00
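A note on the `b64_encoded_zip_file` variable in that script: it's just a zipped-up skill, base64-encoded. Something like this should produce it (my own sketch - the layout inside the zip, here a single top-level SKILL.md with made-up contents, is an assumption):

```python
import base64
import io
import zipfile

# Build a zip in memory containing the skill's SKILL.md, then
# base64-encode it for the "source" field in the request above.
# (Hypothetical SKILL.md contents, for illustration only.)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("SKILL.md", "# wc\n\nCount words in a file by running `wc -w <file>`.\n")
b64_encoded_zip_file = base64.b64encode(buf.getvalue()).decode()
```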
GLM-5: From Vibe Coding to Agentic Engineering
https://z.ai/blog/glm-5

This is a *huge* new MIT-licensed model: 754B parameters and [1.51TB on Hugging Face](https://huggingface.co/zai-org/GLM-5) - twice the size of [GLM-4.7](https://huggingface.co/zai-org/GLM-4.7), which was 368B and 717GB (4.5 and 4.6 were around that size too).

It's interesting to see Z.ai take a position on what we should call professional software engineers building with LLMs - I've seen "Agentic Engineering" show up in a few other places recently, most notably [from Andrej Karpathy](https://twitter.com/karpathy/status/2019137879310836075) and [Addy Osmani](https://addyosmani.com/blog/agentic-engineering/).

I ran my "Generate an SVG of a pelican riding a bicycle" prompt through GLM-5 via [OpenRouter](https://openrouter.ai/) and got back [a very good pelican on a disappointing bicycle frame](https://gist.github.com/simonw/cc4ca7815ae82562e89a9fdd99f0725d):

![The pelican is good and has a well defined beak. The bicycle frame is a wonky red triangle. Nice sun and motion lines.](https://static.simonwillison.net/static/2026/glm-5-pelican.png)

2026-02-11 18:56:14+00:00
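If you want to try the same prompt yourself, OpenRouter exposes an OpenAI-compatible API, so something like this should work (a sketch - the `z-ai/glm-5` model slug is my guess, check openrouter.ai for the real one):

```python
from openai import OpenAI

# OpenRouter speaks the OpenAI chat completions API; point the
# official client at its base URL with an OpenRouter key.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)
response = client.chat.completions.create(
    model="z-ai/glm-5",  # assumed slug - check openrouter.ai/models
    messages=[
        {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}
    ],
)
print(response.choices[0].message.content)
```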
cysqlite - a new sqlite driver
https://charlesleifer.com/blog/cysqlite---a-new-sqlite-driver/

Charles Leifer has been maintaining [pysqlite3](https://github.com/coleifer/pysqlite3) - a fork of the Python standard library's `sqlite3` module that makes it much easier to run upgraded SQLite versions - since 2018. He's been working on a ground-up [Cython](https://cython.org/) rewrite called [cysqlite](https://github.com/coleifer/cysqlite) for almost as long, but it's finally at a stage where it's ready for people to try out.

The biggest change from the `sqlite3` module involves transactions. Charles explains his discomfort with the `sqlite3` implementation at length - that library provides two different variants, neither of which exactly matches the autocommit mechanism in SQLite itself.

I'm particularly excited about the support for [custom virtual tables](https://cysqlite.readthedocs.io/en/latest/api.html#tablefunction), a feature I'd love to see in `sqlite3` itself.

`cysqlite` provides a Python extension compiled from C, which means it normally wouldn't be available in Pyodide. I [set Claude Code on it](https://github.com/simonw/research/tree/main/cysqlite-wasm-wheel) (here's [the prompt](https://github.com/simonw/research/pull/79#issue-3923792518)) and it built me [cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl](https://github.com/simonw/research/blob/main/cysqlite-wasm-wheel/cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl), a 688KB wheel file with a WASM build of the library that can be loaded into Pyodide like this:

```python
import micropip

await micropip.install(
    "https://simonw.github.io/research/cysqlite-wasm-wheel/cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl"
)

import cysqlite

print(cysqlite.connect(":memory:").execute(
    "select sqlite_version()"
).fetchone())
```

(I also learned that wheels like this have to be built for the Emscripten version used by that edition of Pyodide - my experimental wheel loads in Pyodide 0.25.1 but fails in 0.27.5 with a `Wheel was built with Emscripten v3.1.46 but Pyodide was built with Emscripten v3.1.58` error.)

You can try my wheel in [this new Pyodide REPL](https://7ebbff98.tools-b1q.pages.dev/pyodide-repl) I had Claude build as a mobile-friendly alternative to Pyodide's [own hosted console](https://pyodide.org/en/stable/console.html). I also had Claude build [this demo page](https://simonw.github.io/research/cysqlite-wasm-wheel/demo.html) that executes the original test suite in the browser and displays the results:

![Screenshot of the cysqlite WebAssembly Demo page with a dark theme. Title reads "cysqlite — WebAssembly Demo" with subtitle "Testing cysqlite compiled to WebAssembly via Emscripten, running in Pyodide in the browser." Environment section shows Pyodide 0.25.1, Python 3.11.3, cysqlite 0.1.4, SQLite 3.51.2, Platform Emscripten-3.1.46-wasm32-32bit, Wheel file cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.wh (truncated). A green progress bar shows "All 115 tests passed! (1 skipped)" at 100%, with Passed: 115, Failed: 0, Errors: 0, Skipped: 1, Total: 116. Test Results section lists TestBackup 1/1 passed, TestBlob 6/6 passed, TestCheckConnection 4/4 passed, TestDataTypesTableFunction 1/1 passed, all with green badges.](https://static.simonwillison.net/static/2026/cysqlite-tests.jpg)

2026-02-11 17:34:40+00:00
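I haven't tried the custom virtual table support myself yet. Based on the `TableFunction` name in those docs and the matching API in Charles's playhouse.sqlite_ext, I'd expect registering one to look roughly like this (an unverified sketch - consult the cysqlite docs linked above for the actual interface):

```python
import cysqlite

# Unverified sketch: assumes cysqlite's TableFunction mirrors the
# playhouse.sqlite_ext API (params/columns class attributes plus
# initialize() and iterate() methods).
class Series(cysqlite.TableFunction):
    name = "series"
    params = ["start", "stop"]
    columns = ["value"]

    def initialize(self, start=0, stop=10):
        self.current = start
        self.stop = stop

    def iterate(self, idx):
        # Return one row per call; StopIteration ends the table.
        if self.current > self.stop:
            raise StopIteration
        value = self.current
        self.current += 1
        return (value,)

conn = cysqlite.connect(":memory:")
Series.register(conn)
print(conn.execute("select value from series(1, 5)").fetchall())
```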