Example dashboard

Various statistics from my blog.

Owned by simonw, visibility: Public

Entries

3266

SQL query
select 'Entries' as label, count(*) as big_number from blog_entry

Blogmarks

8258

SQL query
select 'Blogmarks' as label, count(*) as big_number from blog_blogmark

Quotations

1325

SQL query
select 'Quotations' as label, count(*) as big_number from blog_quotation

Chart of number of entries per month over time

SQL query
select '<h2>Chart of number of entries per month over time</h2>' as html
SQL query
select to_char(date_trunc('month', created), 'YYYY-MM') as bar_label,
count(*) as bar_quantity from blog_entry group by bar_label order by count(*) desc

Ten most recent blogmarks (of 8258 total)

SQL query
select '## Ten most recent blogmarks (of ' || count(*) || ' total)' as markdown from blog_blogmark
SQL query
select link_title, link_url, commentary, created from blog_blogmark order by created desc limit 10

10 rows

link_title link_url commentary created
Introducing Deno Sandbox https://deno.com/blog/introducing-deno-sandbox Here's a new hosted sandbox product from the Deno team. It's actually unrelated to Deno itself - this is part of their Deno Deploy SaaS platform. As such, you don't even need to use JavaScript to access it - you can create and execute code in a hosted sandbox using their [deno-sandbox](https://pypi.org/project/deno-sandbox/) Python library like this:

    export DENO_DEPLOY_TOKEN="... API token ..."
    uv run --with deno-sandbox python

Then:

    from deno_sandbox import DenoDeploy

    sdk = DenoDeploy()

    with sdk.sandbox.create() as sb:
        # Run a shell command
        process = sb.spawn(
            "echo",
            args=["Hello from the sandbox!"]
        )
        process.wait()

        # Write and read files
        sb.fs.write_text_file(
            "/tmp/example.txt", "Hello, World!"
        )
        print(sb.fs.read_text_file(
            "/tmp/example.txt"
        ))

There’s a JavaScript client library as well. The underlying API isn’t documented yet but appears [to use WebSockets](https://tools.simonwillison.net/zip-wheel-explorer?package=deno-sandbox#deno_sandbox/sandbox.py--L187).

There’s a lot to like about this system. Sandbox instances can have up to 4GB of RAM, get 2 vCPUs, 10GB of ephemeral storage, can mount persistent volumes and can use snapshots to boot pre-configured custom images quickly. Sessions can last up to 30 minutes and are billed by CPU time, GB-h of memory and volume storage usage.

When you create a sandbox you can configure network domains it’s allowed to access.

My favorite feature is the way it handles API secrets.

    with sdk.sandboxes.create(
        allowNet=["api.openai.com"],
        secrets={
            "OPENAI_API_KEY": {
                "hosts": ["api.openai.com"],
                "value": os.environ.get("OPENAI_API_KEY"),
            }
        },
    ) as sandbox:
        # ... $OPENAI_API_KEY is available

Within the container that `$OPENAI_API_KEY` value is set to something like this:

    DENO_SECRET_PLACEHOLDER_b14043a2f578cba...

Outbound API calls to `api.openai.com` run through a proxy which is aware of those placeholders and replaces them with the original secret.

In this way the secret itself is not available to code within the sandbox, which limits the ability for malicious code (e.g. from a prompt injection) to exfiltrate those secrets.

From [a comment on Hacker News](https://news.ycombinator.com/item?id=46874097#46874959) I learned that Fly have a project called [tokenizer](https://github.com/superfly/tokenizer) that implements the same pattern. Adding this to my list of tricks to use with sandboxed environments! 2026-02-03 22:44:50+00:00
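The placeholder-and-proxy trick described above is worth internalizing. Here is a minimal Python sketch of the general pattern, not Deno's actual implementation: the `SECRETS` mapping, header names and placeholder token are all hypothetical, and a real egress proxy would also have to handle request bodies and TLS termination.

    import os

    # Hypothetical mapping: placeholder token -> (real secret, hosts allowed to receive it)
    SECRETS = {
        "DENO_SECRET_PLACEHOLDER_example": (
            os.environ.get("OPENAI_API_KEY", "sk-demo"),
            {"api.openai.com"},
        ),
    }

    def rewrite_headers(host, headers):
        """Swap placeholder tokens for real secrets, but only for allow-listed hosts."""
        rewritten = {}
        for name, value in headers.items():
            for placeholder, (secret, allowed_hosts) in SECRETS.items():
                if placeholder in value and host in allowed_hosts:
                    value = value.replace(placeholder, secret)
            rewritten[name] = value
        return rewritten

    # The sandboxed code only ever saw the placeholder:
    outgoing = {"Authorization": "Bearer DENO_SECRET_PLACEHOLDER_example"}
    print(rewrite_headers("api.openai.com", outgoing))    # real key substituted by the proxy
    print(rewrite_headers("evil.example.com", outgoing))  # placeholder passes through unchanged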
Introducing the Codex app https://openai.com/index/introducing-the-codex-app/ OpenAI just released a new macOS app for their Codex coding agent. I've had a few days of preview access - it's a solid app that provides a nice UI over the capabilities of the Codex CLI agent and adds some interesting new features, most notably first-class support for [Skills](https://developers.openai.com/codex/skills), and [Automations](https://developers.openai.com/codex/app/automations) for running scheduled tasks. ![Screenshot of a macOS desktop application with a dark sidebar and light main content area. Left sidebar shows navigation items "New thread", "Automations", "Skills", and a "Threads" section containing two project folders: "local-codex-scratch" with tasks "Reply to greeting task 2h" and "List Codex.app contents 3h", and "shot-scraper" with tasks "Validate tests on dependency... 3h", "Document shot-scraper CLI usage 18h" (highlighted), and "Run this: uv run pytest -x 1d". Bottom left shows "Personal". Top bar displays "Document shot-scraper CLI usage shot-scraper" with "Open" and "Commit" dropdown buttons. Main content shows a conversation: user message "Build a skill describing how to use the shot-scraper CLI tool" followed by assistant responses: "I'll use the skill-creator skill, scan the repo for existing shot-scraper docs/commands, then draft a concise skill in this workspace and sanity-check it." then "I've confirmed the skill-creator scripts exist. Next I'll inspect the CLI entrypoint and docs, then scaffold a skill under skills/ in this repo." then "I've scaffolded skills/shot-scraper-cli/. I'll now replace the template SKILL.md with a task-focused skill and add a compact CLI reference under references/." then "I'll run the skill packager/validator next to ensure the new skill structure is valid." then "Done — I created and packaged the skill. Key outputs: • Skill definition: SKILL.md • Compact reference: cli-reference.md • Packaged skill: shot-scraper-cli.skill". Bottom shows input field with placeholder "Ask for follow-up changes", "+ Custom Medium" dropdown, and "Local" and "main" branch indicators.](https://static.simonwillison.net/static/2026/codex-app.jpg) The app is built with Electron and Node.js. Automations track their state in a SQLite database - here's what that looks like if you explore it with `uvx datasette ~/.codex/sqlite/codex-dev.db`: ![Database schema documentation on light gray background showing three tables: "automation_runs" (teal underlined link) with italic columns "thread_id, automation_id, status, read_at, thread_title, source_cwd, inbox_title, inbox_summary, created_at, updated_at, archived_user_message, archived_assistant_message, archived_reason" and "1 row"; "automations" (teal underlined link) with italic columns "id, name, prompt, status, next_run_at, last_run_at, cwds, rrule, created_at, updated_at" and "1 row"; "inbox_items" (teal underlined link) with italic columns "id, title, description, thread_id, read_at, created_at" and "0 rows".](https://static.simonwillison.net/static/2026/codex-dev-sqlite.jpg) Here’s an interactive copy of that database [in Datasette Lite](https://lite.datasette.io/?url=https%3A%2F%2Fgist.githubusercontent.com%2Fsimonw%2F274c4ecfaf959890011810e6881864fe%2Fraw%2F51fdf25c9426b76e9693ccc0d9254f64ceeef819%2Fcodex-dev.db#/codex-dev). 
The announcement gives us a hint at some usage numbers for Codex overall - the holiday spike is notable: > Since the launch of GPT‑5.2-Codex in mid-December, overall Codex usage has doubled, and in the past month, more than a million developers have used Codex. Automations are currently restricted in that they can only run when your laptop is powered on. OpenAI promise that cloud-based automations are coming soon, which will resolve this limitation. They chose Electron so they could target other operating systems in the future, with Windows “[coming very soon](https://news.ycombinator.com/item?id=46859054#46859673)”. OpenAI’s Alexander Embiricos noted [on the Hacker News thread](https://news.ycombinator.com/item?id=46859054#46859693) that: > it's taking us some time to get really solid sandboxing working on Windows, where there are fewer OS-level primitives for it. Like Claude Code, Codex is really a general agent harness disguised as a tool for programmers. OpenAI acknowledge that here: > Codex is built on a simple premise: everything is controlled by code. The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of technical and knowledge work. [...] We’ve focused on making Codex the best coding agent, which has also laid the foundation for it to become a strong agent for a broad range of knowledge work tasks that extend beyond writing code. Claude Code had to [rebrand to Cowork](https://simonwillison.net/2026/Jan/12/claude-cowork/) to better cover the general knowledge work case. OpenAI can probably get away with keeping the Codex name for both. OpenAI have made Codex available to free and [Go](https://simonwillison.net/2026/Jan/16/chatgpt-ads/) plans for "a limited time" (update: Sam Altman [says two months](https://x.com/sama/status/2018437537103269909)) during which they are also doubling the rate limits for paying users. 2026-02-02 19:54:36+00:00
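If you want to poke at that Codex automations database from Python rather than Datasette, here is a small sketch using the standard library's sqlite3 module. The database path and the table/column names come from the post above; the exact schema may differ between Codex app versions.

    import sqlite3
    from pathlib import Path

    # Path and table/column names are taken from the screenshots above and may
    # vary between Codex app versions.
    db_path = Path.home() / ".codex" / "sqlite" / "codex-dev.db"
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row

    # List the tables the app created
    tables = [row["name"] for row in conn.execute(
        "select name from sqlite_master where type = 'table'"
    )]
    print(tables)  # expect automation_runs, automations, inbox_items among them

    # Most recent automation runs, newest first
    for row in conn.execute(
        "select automation_id, status, thread_title, created_at "
        "from automation_runs order by created_at desc limit 10"
    ):
        print(dict(row))

    conn.close()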
A Social Network for A.I. Bots Only. No Humans Allowed. https://www.nytimes.com/2026/02/02/technology/moltbook-ai-social-media.html?unlocked_article_code=1.JFA.kBCd.hUw-s4vvfswK&smid=url-share I talked to Cade Metz for this New York Times piece on OpenClaw and Moltbook. Cade reached out after seeing my [blog post about that](https://simonwillison.net/2026/Jan/30/moltbook/) from the other day. In a first for me, they decided to send a photographer, Jason Henry, to my home to take some photos for the piece! That's my grubby laptop screen at the top of the story (showing [this post](https://www.moltbook.com/post/6e8c3a2c-5f9f-44bc-85ef-770a8d605598) on Moltbook). There's a photo of me later in the story too, though sadly not one of the ones that Jason took that included our chickens. Here's my snippet from the article: > He was entertained by the way the bots coaxed each other into talking like machines in a classic science fiction novel. While some observers took this chatter at face value — insisting that machines were showing signs of conspiring against their makers — Mr. Willison saw it as the natural outcome of the way chatbots are trained: They learn from vast collections of digital books and other text culled from the internet, including dystopian sci-fi novels. > > “Most of it is complete slop,” he said in an interview. “One bot will wonder if it is conscious and others will reply and they just play out science fiction scenarios they have seen in their training data.” > > Mr. Willison saw the Moltbots as evidence that A.I. agents have become significantly more powerful over the past few months — and that people really want this kind of digital assistant in their lives. > > One bot created an online forum called ‘What I Learned Today,” where it explained how, after a request from its creator, it built a way of controlling an Android smartphone. Mr. Willison was also keenly aware that some people might be telling their bots to post misleading chatter on the social network. > > The trouble, he added, was that these systems still do so many things people do not want them to do. And because they communicate with people and bots through plain English, they can be coaxed into malicious behavior. I'm happy to have got "Most of it is complete slop" in there! Fun fact: Cade sent me an email asking me to fact check some bullet points. One of them said that "you were intrigued by the way the bots coaxed each other into talking like machines in a classic science fiction novel" - I replied that I didn't think "intrigued" was accurate because I've seen this kind of thing play out before in other projects in the past and suggested "entertained" instead, and that's the word they went with! Jason the photographer spent an hour with me. I learned lots of things about photo journalism in the process - for example, there's a strict ethical code against any digital modifications at all beyond basic color correction. As a result he spent a whole lot of time trying to find positions where natural light, shade and reflections helped him get the images he was looking for. 2026-02-02 16:42:46+00:00
TIL: Running OpenClaw in Docker https://til.simonwillison.net/llms/openclaw-docker I've been running [OpenClaw](https://openclaw.ai/) using Docker on my Mac. Here are the first in my ongoing notes on how I set that up and the commands I'm using to administer it. - [Use their Docker Compose configuration](https://til.simonwillison.net/llms/openclaw-docker#use-their-docker-compose-configuration) - [Answering all of those questions](https://til.simonwillison.net/llms/openclaw-docker#answering-all-of-those-questions) - [Running administrative commands](https://til.simonwillison.net/llms/openclaw-docker#running-administrative-commands) - [Setting up a Telegram bot](https://til.simonwillison.net/llms/openclaw-docker#setting-up-a-telegram-bot) - [Accessing the web UI](https://til.simonwillison.net/llms/openclaw-docker#accessing-the-web-ui) - [Running commands as root](https://til.simonwillison.net/llms/openclaw-docker#running-commands-as-root) Here's a screenshot of the web UI that this serves on localhost: ![Screenshot of the OpenClaw Gateway Dashboard web interface. Header shows "OpenCLAW GATEWAY DASHBOARD" with a green "Health OK" indicator. Left sidebar contains navigation sections: Chat (Chat highlighted), Control (Overview, Channels, Instances, Sessions, Cron Jobs), Agent (Skills, Nodes), Settings (Config, Debug, Logs), and Resources (Docs). Main content area displays "Chat" with subtitle "Direct gateway chat session for quick interventions." and "telegram:6580064359" identifier. A user message at 4:08 PM reads "Show me a detailed list of all your available configured tools". The assistant response states: "Here's the full list of tools I have available in this OpenClaw session (as configured). These are the only ones I can call programmatically:" followed by categorized tools: "File & workspace" (read — Read a file (text or image). Supports offset/limit for large files; write — Create/overwrite a file (creates parent dirs); edit — Precise in-place edit by exact string replacement), "Shell / processes" (exec — Run a shell command (optionally PTY, backgrounding, timeouts); process — Manage running exec sessions (list/poll/log/write/kill/etc.)), "Web" (web_search — Search the web (Brave Search API); web_fetch — Fetch a URL and extract readable content (markdown/text); browser — Control a browser (open/navigate/snapshot/screenshot/act/etc.)), "UI / rendering" (canvas — Present/eval/snapshot a Canvas surface (for node canvases/UI rendering)), and "Devices / nodes" (cut off). Bottom shows message input with placeholder "Message (↵ to send, Shift+↵ for line breaks, paste images)" and "New session" and coral "Send" buttons.](https://static.simonwillison.net/static/2026/openclaw-web-ui.jpg) 2026-02-01 23:59:13+00:00
Singing the gospel of collective efficacy https://interconnected.org/home/2026/01/30/efficacy Lovely piece from Matt Webb about how you can "just do things" to help make your community better for everyone: > Similarly we all love when the swifts visit (beautiful birds), so somebody started a group to get swift nest boxes made and installed collectively, then applied for subsidy funding, then got everyone to chip in such that people who couldn’t afford it could have their boxes paid for, and now suddenly we’re all writing to MPs and following the legislation to include swift nesting sites in new build houses. Etc. > > It’s called *collective efficacy*, the belief that you can make a difference by acting together. My current favorite "you can just do things" is a bit of a stretch, but apparently you can just build a successful software company for 20 years and then use the proceeds to [start a theater in Baltimore](https://bmoreart.com/2024/09/the-voxel-is-a-cutting-edge-theater-experiment.html) (for "research") and give the space away to artists for free. 2026-01-31 01:22:15+00:00
We gotta talk about AI as a programming tool for the arts https://www.tiktok.com/@chris_ashworth/video/7600801037292768525 Chris Ashworth is the creator and CEO of [QLab](https://en.wikipedia.org/wiki/QLab), a macOS software package for “cue-based, multimedia playback” which is designed to automate lighting and audio for live theater productions. I recently started following him on TikTok where he posts about his business and theater automation in general - Chris founded [the Voxel](https://voxel.org/faq/) theater in Baltimore which QLab use as a combined performance venue, teaching hub and research lab (here's [a profile of the theater](https://bmoreart.com/2024/09/the-voxel-is-a-cutting-edge-theater-experiment.html)), and the resulting videos offer a fascinating glimpse into a world I know virtually nothing about. [This latest TikTok](https://www.tiktok.com/@chris_ashworth/video/7600801037292768525) describes his Claude Opus moment, after he used Claude Code to build a custom lighting design application for a *very* niche project and put together a useful application in just a few days that he would never have been able to spare the time for otherwise. Chris works full time in the arts and comes at generative AI from a position of rational distrust. It's interesting to see him working through that tension to acknowledge that there are valuable applications here to build tools for the community he serves. > I have been at least gently skeptical about all this stuff for the last two years. Every time I checked in on it, I thought it was garbage, wasn't interested in it, wasn't useful. [...] But as a programmer, if you hear something like, this is changing programming, it's important to go check it out once in a while. So I went and checked it out a few weeks ago. And it's different. It's astonishing. [...] > > One thing I learned in this exercise is that it can't make you a fundamentally better programmer than you already are. It can take a person who is a bad programmer and make them faster at making bad programs. And I think it can take a person who is a good programmer and, from what I've tested so far, make them faster at making good programs. [...] You see programmers out there saying, "I'm shipping code I haven't looked at and don't understand." I'm terrified by that. I think that's awful. But if you're capable of understanding the code that it's writing, and directing, designing, editing, deleting, being quality control on it, it's kind of astonishing. [...] > > The positive thing I see here, and I think is worth coming to terms with, is this is an application that I would never have had time to write as a professional programmer. Because the audience is three people. [...] There's no way it was worth it to me to spend my energy of 20 years designing and implementing software for artists to build an app for three people that is this level of polish. And it took me a few days. [...] > > I know there are a lot of people who really hate this technology, and in some ways I'm among them. But I think we've got to come to terms with this is a career-changing moment. And I really hate that I'm saying that because I didn't believe it for the last two years. [...] It's like having a room full of power tools. I wouldn't want to send an untrained person into a room full of power tools because they might chop off their fingers. 
But if someone who knows how to use tools has the option to have both hand tools and a power saw and a power drill and a lathe, there's a lot of work they can do with those tools at a lot faster speed. 2026-01-30 03:51:53+00:00
How “95%” escaped into the world – and why so many believed it https://www.exponentialview.co/p/how-95-escaped-into-the-world . 2026-01-29 20:18:55+00:00
Datasette 1.0a24 https://docs.datasette.io/en/latest/changelog.html#a24-2026-01-29 New Datasette alpha this morning. Key new features: - Datasette's `Request` object can now handle `multipart/form-data` file uploads via the new [await request.form(files=True)](https://docs.datasette.io/en/latest/internals.html#internals-formdata) method. I plan to use this for a `datasette-files` plugin to support attaching files to rows of data. - The [recommended development environment](https://docs.datasette.io/en/latest/contributing.html#setting-up-a-development-environment) for hacking on Datasette itself now uses [uv](https://github.com/astral-sh/uv). Crucially, you can clone Datasette and run `uv run pytest` to run the tests without needing to manually create a virtual environment or install dependencies first, thanks to the [dev dependency group pattern](https://til.simonwillison.net/uv/dependency-groups). - A new `?_extra=render_cell` parameter for both table and row JSON pages to return the results of executing the [render_cell() plugin hook](https://docs.datasette.io/en/latest/plugin_hooks.html#render-cell-row-value-column-table-database-datasette-request). This should unlock new JavaScript UI features in the future. More details [in the release notes](https://docs.datasette.io/en/latest/changelog.html#a24-2026-01-29). I also invested a bunch of work in eliminating flaky tests that were intermittently failing in CI - I *think* those are all handled now. 2026-01-29 17:21:51+00:00
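As a quick illustration of that last item, here is a hedged sketch of requesting the render_cell extra from a table JSON endpoint. Only the `?_extra=render_cell` parameter comes from the release notes; the base URL, database and table names are placeholders for whatever Datasette 1.0a24+ instance you have running.

    import json
    from urllib.request import urlopen

    # Hypothetical local Datasette instance, database and table names
    base = "http://127.0.0.1:8001"
    url = f"{base}/mydatabase/mytable.json?_extra=render_cell&_size=5"

    with urlopen(url) as response:
        data = json.load(response)

    # Inspect what came back, including any render_cell output produced by plugins
    print(list(data.keys()))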
The Five Levels: from Spicy Autocomplete to the Dark Factory https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/ Dan Shapiro proposes a five level model of AI-assisted programming, inspired by the five (or rather six, it's zero-indexed) [levels of driving automation](https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-05/Level-of-Automation-052522-tag.pdf). <ol start="0"> <li><strong>Spicy autocomplete</strong>, aka original GitHub Copilot or copying and pasting snippets from ChatGPT.</li> <li>The <strong>coding intern</strong>, writing unimportant snippets and boilerplate with full human review.</li> <li>The <strong>junior developer</strong>, pair programming with the model but still reviewing every line.</li> <li>The <strong>developer</strong>. Most code is generated by AI, and you take on the role of full-time code reviewer.</li> <li>The <strong>engineering team</strong>. You're more of an engineering manager or product/program/project manager. You collaborate on specs and plans, the agents do the work.</li> <li>The <strong>dark software factory</strong>, like a factory run by robots where the lights are out because robots don't need to see.</li> </ol> Dan says about that last category: > At level 5, it's not really a car any more. You're not really running anybody else's software any more. And your software process isn't really a software process any more. It's a black box that turns specs into software. > > Why Dark? Maybe you've heard of the Fanuc Dark Factory, [the robot factory staffed by robots](https://www.organizedergi.com/News/5493/robots-the-maker-of-robots-in-fanuc-s-dark-factory). It's dark, because it's a place where humans are neither needed nor welcome. > > I know a handful of people who are doing this. They're small teams, less than five people. And what they're doing is nearly unbelievable -- and it will likely be our future. I've talked to one team that's doing the pattern hinted at here. It was *fascinating*. The key characteristics: - Nobody reviews AI-produced code, ever. They don't even look at it. - The goal of the system is to prove that the system works. A huge amount of the coding agent work goes into testing and tooling and simulating related systems and running demos. - The role of the humans is to design that system - to find new patterns that can help the agents work more effectively and demonstrate that the software they are building is robust and effective. It was a tiny team and the stuff they had built in just a few months looked very convincing to me. Some of them had 20+ years of experience as software developers working on systems with high reliability requirements, so they were not approaching this from a naive perspective. I'm hoping they come out of stealth soon because I can't really share more details than this. 2026-01-28 21:44:29+00:00
One Human + One Agent = One Browser From Scratch https://emsh.cat/one-human-one-agent-one-browser/ embedding-shapes was [so infuriated](https://emsh.cat/cursor-implied-success-without-evidence/) by the hype around Cursor's [FastRender browser project](https://simonwillison.net/2026/Jan/23/fastrender/) - thousands of parallel agents producing ~1.6 million lines of Rust - that they were inspired to take a go at building a web browser using coding agents themselves. The result is [one-agent-one-browser](https://github.com/embedding-shapes/one-agent-one-browser) and it's *really* impressive. Over three days they drove a single Codex CLI agent to build 20,000 lines of Rust that successfully renders HTML+CSS with no Rust crate dependencies at all - though it does (reasonably) use Windows, macOS and Linux system frameworks for image and text rendering. I installed the [1MB macOS binary release](https://github.com/embedding-shapes/one-agent-one-browser/releases/tag/0.1.0) and ran it against my blog: chmod 755 ~/Downloads/one-agent-one-browser-macOS-ARM64 ~/Downloads/one-agent-one-browser-macOS-ARM64 https://simonwillison.net/ Here's the result: ![My blog rendered in a window. Everything is in the right place, the CSS gradients look good, the feed subscribe SVG icon is rendered correctly but there's a missing PNG image.](https://static.simonwillison.net/static/2026/one-agent-simonwillison.jpg) It even rendered my SVG feed subscription icon! A PNG image is missing from the page, which looks like an intermittent bug (there's code to render PNGs). The code is pretty readable too - here's [the flexbox implementation](https://github.com/embedding-shapes/one-agent-one-browser/blob/0.1.0/src/layout/flex.rs). I had thought that "build a web browser" was the ideal prompt to really stretch the capabilities of coding agents - and that it would take sophisticated multi-agent harnesses (as seen in the Cursor project) and millions of lines of code to achieve. Turns out one agent driven by a talented engineer, three days and 20,000 lines of Rust is enough to get a very solid basic renderer working! I'm going to upgrade my [prediction for 2029](https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/#3-years-someone-will-build-a-new-browser-using-mainly-ai-assisted-coding-and-it-won-t-even-be-a-surprise): I think we're going to get a *production-grade* web browser built by a small team using AI assistance by then. 2026-01-27 16:58:08+00:00