Simon Willison’s Weblog


Upgrading GitHub.com to MySQL 8.0 (via) I love a good zero-downtime upgrade story, and this is a fine example of the genre. GitHub spent a year upgrading MySQL from 5.7 to 8 across 1200+ hosts, covering 300+ TB of data serving 5.5 million queries per second. The key technique was extremely carefully managed replication, plus tricks like leaving enough 5.7 replicas available to handle a rollback should one be needed.

# 10th December 2023, 8:36 pm / github, mysql, ops, replication, zero-downtime

ast-grep (via) There are a lot of interesting things about this year-old project.

sg (an alias for ast-grep) is a CLI tool for running AST-based searches against code, built in Rust on top of the Tree-sitter parsing library. You can run commands like this:

sg -p 'await await_me_maybe($ARG)' datasette --lang python

To search the datasette directory for code that matches the search pattern, in a syntax-aware way.

It works across 19 different languages, and can handle search-and-replace too, so it can work as a powerful syntax-aware refactoring tool.

My favourite detail is how it’s packaged. You can install the CLI utility using Homebrew, Cargo, npm or pip/pipx—each of which will give you a CLI tool you can start running. On top of that it provides API bindings for Rust, JavaScript and Python!
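
Here's a rough sketch of running that same pattern through the Python bindings. The package and class names below are my reading of the ast-grep-py documentation, so treat them as assumptions:

# Sketch using the ast-grep Python bindings (pip install ast-grep-py) -
# names here are my reading of the docs, not verified against a release.
from ast_grep_py import SgRoot

source = """
async def view(datasette):
    return await await_me_maybe(datasette.some_hook)
"""

root = SgRoot(source, "python")   # parse the source with Tree-sitter
node = root.root()

# Same pattern as the CLI example; $ARG is a metavariable
for match in node.find_all(pattern="await await_me_maybe($ARG)"):
    arg = match.get_match("ARG")  # the node captured by $ARG
    print(match.text(), "->", arg.text())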

# 10th December 2023, 7:56 pm / cli, javascript, python, search, tools, rust, treesitter

3D Gaussian Splatting—Why Graphics Will Never Be The Same (via) Gaussian splatting is an intriguing new approach to 3D computer graphics that’s getting a lot of buzz at the moment. This 2m11s YouTube video is the best condensed explanation I’ve seen of the key idea.

# 9th December 2023, 6:06 am / 3d, graphics

Announcing Purple Llama: Towards open trust and safety in the new world of generative AI (via) New from Meta AI, Purple Llama is “an umbrella project featuring open trust and safety tools and evaluations meant to level the playing field for developers to responsibly deploy generative AI models and experiences”.

There are three components: a 27 page “Responsible Use Guide”, a new open model called Llama Guard and CyberSec Eval, “a set of cybersecurity safety evaluations benchmarks for LLMs”.

Disappointingly, despite this being an initiative around trustworthy LLM development, prompt injection is mentioned exactly once, in the Responsible Use Guide, with an incorrect description that characterizes it as involving “attempts to circumvent content restrictions”!

The Llama Guard model is interesting: it’s a fine-tune of Llama 2 7B designed to help spot “toxic” content in input or output from a model, effectively an openly released alternative to OpenAI’s moderation API endpoint.

The CyberSec Eval benchmarks focus on two concepts: generation of insecure code, and preventing models from assisting attackers in generating new attacks. I don’t think either of those are anywhere near as important as prompt injection mitigation.

My hunch is that the reason prompt injection didn’t get much coverage in this is that, like the rest of us, Meta’s AI research teams have no idea how to fix it yet!

# 8th December 2023, 6:36 am / facebook, security, ai, prompt-injection, generative-ai, llms, meta, llm-release

Standard Webhooks 1.0.0 (via) A loose specification for implementing webhooks, put together by a technical steering committee that includes representatives from Zapier, Twilio and more.

These recommendations look great to me. Even if you don’t follow them precisely, this document is still worth reviewing any time you consider implementing webhooks—it covers a bunch of non-obvious challenges, such as responsible retry scheduling, thin-vs-thick hook payloads, authentication, custom HTTP headers and protecting against server-side request forgery attacks.
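
To make the authentication piece concrete, here's a rough Python sketch of the HMAC signature scheme the spec describes. The header names, the whsec_ secret prefix and the v1, signature prefix are my reading of the spec, not a reference implementation:

# Rough sketch of Standard Webhooks-style signature verification.
# Header names, the "whsec_" secret prefix and the "v1," signature prefix
# are assumptions from my reading of the spec, not a reference implementation.
import base64, hashlib, hmac

def verify_webhook(secret: str, headers: dict, payload: bytes) -> bool:
    msg_id = headers["webhook-id"]
    timestamp = headers["webhook-timestamp"]
    signatures = headers["webhook-signature"].split()

    # The signed content is "{id}.{timestamp}.{raw body}"
    signed_content = f"{msg_id}.{timestamp}.".encode() + payload
    key = base64.b64decode(secret.removeprefix("whsec_"))
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()

    # Multiple space-separated signatures allow for secret rotation
    return any(
        hmac.compare_digest(expected, sig.split(",", 1)[-1])
        for sig in signatures
        if sig.startswith("v1,")
    )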

# 8th December 2023, 4:16 am / security, webhooks

SVG Tutorial: Learn SVG through 25 examples (via) Hunor Márton Borbély published this fantastic advent calendar of tutorials for learning SVG, from the basics up to advanced concepts like animation and interactivity.

# 7th December 2023, 6:47 pm / svg

Long context prompting for Claude 2.1. Claude 2.1 has a 200,000 token context, enough for around 500 pages of text. Convincing it to answer a question based on a single sentence buried deep within that content can be difficult, but Anthropic found that adding “Assistant: Here is the most relevant sentence in the context:” to the end of the prompt was enough to raise Claude 2.1’s score from 27% to 98% on their evaluation.
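
Here's a sketch of that trick using the Anthropic Python SDK's text completions API. The document, question and parameters are placeholders; the important part is the sentence appended after the final Assistant: turn:

# Sketch of the prompt-suffix trick - document, question and parameters
# here are placeholders, only the appended sentence is the actual technique.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

long_document = open("huge-context.txt").read()
question = "What is the best thing to do in San Francisco?"

prompt = (
    f"{HUMAN_PROMPT} <context>{long_document}</context>\n\n{question}"
    # Pre-fill the start of Claude's answer so it jumps straight to retrieval
    f"{AI_PROMPT} Here is the most relevant sentence in the context:"
)

response = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=300,
    prompt=prompt,
)
print(response.completion)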

# 6th December 2023, 11:44 pm / ai, prompt-engineering, generative-ai, llms, anthropic, claude, long-context

Ice Cubes GPT-4 prompts. The Ice Cubes open source Mastodon app recently grew a very good "describe this image" feature to help people add alt text to their images. I had a dig around in their repo and it turns out they're using GPT-4 Vision for this (and regular GPT-4 for other features), passing the image with this prompt:

What’s in this image? Be brief, it's for image alt description on a social network. Don't write in the first person.
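
Ice Cubes itself is a Swift app, but here's an approximate Python equivalent of that call using the OpenAI SDK. The model name and parameters are my guesses; only the prompt text comes from their repo:

# Approximate Python equivalent of the Ice Cubes alt-text feature - the model
# name and parameters are guesses, only the prompt text comes from their repo.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=150,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "What's in this image? Be brief, it's for image alt "
                        "description on a social network. Don't write in the "
                        "first person."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)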

# 6th December 2023, 7:38 pm / accessibility, alt-text, ai, prompt-engineering, generative-ai, mastodon, gpt-4, llms, vision-llms

AI and Trust. Barnstormer of an essay by Bruce Schneier about AI and trust. It’s worth spending some time with this—it’s hard to extract the highlights since there are so many of them.

A key idea is that we are predisposed to trust AI chat interfaces because they imitate humans, which means we are highly susceptible to profit-seeking biases baked into them.

Bruce suggests that what’s needed is public models, backed by government funds: “A public model is a model built by the public for the public. It requires political accountability, not just market accountability.”

# 5th December 2023, 9:43 pm / bruce-schneier, trust, ai, generative-ai, llms

Simon Willison (Part Two): How Datasette Helps With Investigative Reporting. The second part of my Newsroom Robots podcast conversation with Nikita Roy. This episode includes my best audio answer yet to the “what is Datasette?” question, plus notes on how to use LLMs in journalism despite their propensity to make things up.

# 5th December 2023, 8:27 pm / data-journalism, journalism, podcasts, datasette, podcast-appearances

Spider-Man: Across the Spider-Verse screenplay (PDF) (via) Phil Lord shared this on Twitter yesterday—the final screenplay for Spider-Man: Across the Spider-Verse. It’s a really fun read.

# 5th December 2023, 7:42 pm / movies, screen-writing, spiderverse

LLM Visualization. Brendan Bycroft’s beautifully crafted interactive explanation of the transformers architecture—that universal but confusing model diagram, only here you can step through and see a representation of the flurry of matrix algebra that occurs every time you get a Large Language Model to generate the next token.
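
If you want a feel for one slice of that matrix algebra in code, here's a tiny NumPy sketch of single-head self-attention with a causal mask. The shapes are arbitrary, purely for illustration:

# Tiny NumPy sketch of single-head self-attention - one slice of the matrix
# algebra the visualization animates. Shapes are arbitrary illustrations.
import numpy as np

seq_len, d_model = 8, 64          # 8 tokens, 64-dimensional embeddings
rng = np.random.default_rng(0)

x = rng.normal(size=(seq_len, d_model))        # token embeddings
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

q, k, v = x @ w_q, x @ w_k, x @ w_v            # queries, keys, values

scores = q @ k.T / np.sqrt(d_model)            # scaled dot-product scores
scores += np.triu(np.full((seq_len, seq_len), -np.inf), k=1)  # causal mask
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
output = weights @ v                           # each token's weighted mix
print(output.shape)                            # (8, 64)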

# 4th December 2023, 10:24 pm / ai, explorables, generative-ai, llms

Write shaders for the Vegas sphere (via) Alexandre Devaux built this phenomenal three.js / WebGL demo, which displays a rotating flyover of the Vegas Sphere and lets you directly edit shader code to render your own animations on it and see what they would look like. The via Hacker News thread includes dozens of examples of scripts you can paste in.

# 1st December 2023, 6:45 pm / 3d, graphics, webgl

Seamless Communication (via) A new “family of AI research models” from Meta AI for speech and text translation. The live demo is particularly worth trying—you can record a short webcam video of yourself speaking and get back the same video with your speech translated into another language.

The key to it is the new SeamlessM4T v2 model, which supports 101 languages for speech input, 96 languages for text input/output and 35 languages for speech output. SeamlessM4T-Large v2 is a 9GB file, available on Hugging Face.

Also in this release: SeamlessExpressive, which “captures certain underexplored aspects of prosody such as speech rate and pauses”—effectively maintaining things like expressed enthusiasm across languages.

Plus SeamlessStreaming, “a model that can deliver speech and text translations with around two seconds of latency”.
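
Here's roughly what running SeamlessM4T v2 looks like through Hugging Face Transformers. The class names, keyword arguments and language codes are from memory and may well be wrong in the details, so treat this as a sketch and check the model card:

# Rough sketch only: class names, keyword arguments and language codes are
# assumptions from memory, not verified against the model card.
import torch
from transformers import AutoProcessor, SeamlessM4Tv2Model

model_id = "facebook/seamless-m4t-v2-large"
processor = AutoProcessor.from_pretrained(model_id)
model = SeamlessM4Tv2Model.from_pretrained(model_id)

# English text in, French speech out (language codes are three-letter)
inputs = processor(text="The weather is lovely today.", src_lang="eng", return_tensors="pt")
with torch.no_grad():
    audio = model.generate(**inputs, tgt_lang="fra")[0]
print(audio.shape)  # a waveform tensor you could write out as a .wav file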

# 1st December 2023, 5:01 pm / facebook, transformers, translation, ai, llms

Annotate and explore your data with datasette-comments. New plugin for Datasette and Datasette Cloud: datasette-comments, providing tools for collaborating on data exploration with a team through posting comments on individual rows of data.

Alex Garcia built this for Datasette Cloud but as with almost all of our work there it’s also available as an open source Python package.

# 30th November 2023, 9:59 pm / collaboration, projects, datasette, datasette-cloud, alex-garcia

ChatGPT is one year old. Here’s how it changed the world. I’m quoted in this piece by Benj Edwards about ChatGPT’s one year birthday:

“Imagine if every human being could automate the tedious, repetitive information tasks in their lives, without needing to first get a computer science degree,” AI researcher Simon Willison told Ars in an interview about ChatGPT’s impact. “I’m seeing glimpses that LLMs might help make a huge step in that direction.”

# 30th November 2023, 6:07 pm / ai, openai, generative-ai, chatgpt, llms, benj-edwards, press-quotes

Announcing Deno Cron. Scheduling tasks in deployed applications is surprisingly difficult. Deno clearly understand this, and they’ve added a new Deno.cron(name, cron_definition, callback) mechanism for running a JavaScript function every X minutes/hours/etc.

As with several other recent Deno features, there are two versions of the implementation. The first is an in-memory implementation in the Deno open source binary, while the second is a much more robust closed-source implementation that runs in Deno Deploy:

“When a new production deployment of your project is created, an ephemeral V8 isolate is used to evaluate your project’s top-level scope and to discover any Deno.cron definitions. A global cron scheduler is then updated with your project’s latest cron definitions, which includes updates to your existing crons, new crons, and deleted crons.”

Two interesting features: unlike regular cron, the Deno version prevents cron tasks that take too long from ever overlapping each other, and a backoffSchedule: [1000, 5000, 10000] option can be used to schedule attempts to re-run functions if they raise an exception.

# 29th November 2023, 5:49 pm / cron, deno

MonadGPT (via) “What would have happened if ChatGPT was invented in the 17th century? MonadGPT is a possible answer.

MonadGPT is a finetune of Mistral-Hermes 2 on 11,000 early modern texts in English, French and Latin, mostly coming from EEBO and Gallica.

Like the original Mistral-Hermes, MonadGPT can be used in conversation mode. It will not only answer in an historical language and style but will use historical and dated references.”

# 27th November 2023, 4:01 am / ai, generative-ai, llms, mistral

The 6 Types of Conversations with Generative AI. I’ve been hoping for a while to see more user research on how users interact with LLMs. Here’s a study from Nielsen Norman Group, who conducted a 2-week diary study involving 18 participants, then interviewed 14 of them.

They identified six categories of conversation, and made some resulting design recommendations.

A key observation is that “search style” queries (just a few keywords) often indicate users who are new to LLMs, and should be identified as a sign that the user needs more inline education on how to best harness the tool.

Suggested follow-up prompts are valuable for most of the types of conversation identified.

# 23rd November 2023, 5:37 pm / usability, ux, userresearch, ai, generative-ai, llms

YouTube: Intro to Large Language Models. Andrej Karpathy is an outstanding educator, and this one hour video offers an excellent technical introduction to LLMs.

At 42m Andrej expands on his idea of LLMs as the center of a new style of operating system, tying together tools and a filesystem and multimodal I/O.

There’s a comprehensive section on LLM security—jailbreaking, prompt injection, data poisoning—at the 45m mark.

I also appreciated his note on how parameter size maps to file size: Llama 70B is 140GB, because each of those 70 billion parameters is a 2-byte (16-bit) floating point number on disk.
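
To spell out that arithmetic:

# Parameter count x bytes per parameter ~= file size on disk
params = 70_000_000_000      # Llama 70B
bytes_per_param = 2          # 16-bit (2-byte) floats
print(params * bytes_per_param / 10**9)  # 140.0 (GB)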

# 23rd November 2023, 5:02 pm / jailbreaking, ai, andrej-karpathy, prompt-injection, generative-ai, llms

Claude: How to use system prompts. Documentation for the new system prompt support added in Claude 2.1. The design surprises me a little: the system prompt is just the text that comes before the first instance of the text “Human: ...”—but Anthropic promise that instructions in that section of the prompt will be treated differently and followed more closely than any instructions that follow.
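
Based on that description, assembling a prompt presumably looks something like this. This is my sketch of the convention, not code from Anthropic's documentation:

# Sketch of the described convention: the "system prompt" is simply whatever
# text comes before the first "Human:" turn in the concatenated prompt.
system_prompt = "You are a helpful assistant. Only answer questions about SQLite."
user_input = "How do I create a full-text search index?"

prompt = (
    system_prompt
    + "\n\nHuman: " + user_input
    + "\n\nAssistant:"
)
print(prompt)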

This whole page of documentation is giving me some pretty serious prompt injection red flags to be honest. Anthropic’s recommended way of using their models is entirely based around concatenating together strings of text using special delimiter phrases.

I’ll give it points for honesty though. OpenAI use JSON fields for the different parts of the prompt, but under the hood they’re all concatenated together with special tokens into a single token stream.

# 22nd November 2023, 4:31 am / ai, prompt-injection, generative-ai, llms, anthropic, claude, system-prompts

Introducing Claude 2.1. Anthropic’s Claude used to have the longest token context of any of the major models: 100,000 tokens, which is about 300 pages. Then GPT-4 Turbo came out with 128,000 tokens and Claude lost one of its key differentiators.

Claude is back! Version 2.1, announced today, bumps the token limit up to 200,000—and also adds support for OpenAI-style system prompts, a feature I’ve been really missing.

They also announced tool use, but that’s only available for a very limited set of partners to preview at the moment.

# 22nd November 2023, 4:28 am / ai, generative-ai, llms, anthropic, claude, llm-tool-use

Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding. This is the first piece of reporting I've seen on the OpenAI situation which has offered a glimmer of an explanation as to what happened.

It sounds like the board had been fighting about things for over a year - notably including who should replace departed members, which is how they'd shrunk down to just six people.

There's also an interesting detail in here about the formation of Anthropic:

Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form the company Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed, according to three people familiar with the attempt to push Mr. Altman out.

# 22nd November 2023, 12:31 am / ai, openai, anthropic, sam-altman

An Interactive Guide to CSS Grid (via) Josh Comeau’s extremely clear guide to CSS grid, with interactive examples for all of the core properties.

# 21st November 2023, 4:25 pm / css, josh-comeau

Cloudflare does not consider vary values in caching decisions. Here’s the spot in Cloudflare’s documentation where they hide a crucially important detail:

“Cloudflare does not consider vary values in caching decisions. Nevertheless, vary values are respected when Vary for images is configured and when the vary header is vary: accept-encoding.”

This means you can’t deploy an application that uses content negotiation via the Accept header behind the Cloudflare CDN—for example serving JSON or HTML for the same URL depending on the incoming Accept header. If you do, Cloudflare may serve cached JSON to an HTML client or vice-versa.
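
As an illustration, a hypothetical Flask view like this would be unsafe to run behind Cloudflare's cache without splitting the formats across separate URLs:

# Hypothetical Flask view doing Accept-header content negotiation: both
# representations share one URL, and Cloudflare ignores the Vary: Accept
# response header, so cached responses can leak to the wrong client.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/report")
def report():
    data = {"rows": 42, "status": "ok"}
    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
    if best == "application/json":
        return jsonify(data)
    response = app.make_response(
        f"<h1>Report</h1><p>{data['rows']} rows, status {data['status']}</p>"
    )
    response.headers["Vary"] = "Accept"  # respected by most caches, but not here
    return response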

There’s an exception for image files, which Cloudflare added support for in September 2021 (for Pro accounts only) in order to support formats such as WebP which may not have full support across all browsers.

# 20th November 2023, 5:08 am / caching, http, cloudflare

Inside the Chaos at OpenAI (via) Outstanding reporting on the current situation at OpenAI from Karen Hao and Charlie Warzel, informed by Karen’s research for a book she is currently writing. There are all sorts of fascinating details in here that I haven’t seen reported anywhere, and it strongly supports the theory that this entire situation (Sam Altman being fired by the board of the OpenAI non-profit) resulted from deep disagreements within OpenAI concerning speed to market and commercialization of their technology vs. safety research and cautious progress towards AGI.

# 20th November 2023, 4:35 am / ai, openai, chatgpt, sam-altman

Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI. The board of the non-profit in control of OpenAI fired CEO Sam Altman yesterday, which is sending seismic waves around the AI technology industry. This overview by Benj Edwards is the best condensed summary I’ve seen yet of everything that’s known so far.

# 18th November 2023, 8:14 pm / ai, openai, benj-edwards, sam-altman

It’s Time For A Change: datetime.utcnow() Is Now Deprecated (via) Miguel Grinberg explains the deprecation of datetime.utcnow() and utcfromtimestamp() in Python 3.12, since they return naive datetime objects which cause all sorts of follow-on problems.

The replacement idiom is datetime.datetime.now(datetime.timezone.utc)
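
A quick before-and-after:

import datetime

# Deprecated in Python 3.12 - returns a naive datetime with no tzinfo attached
naive = datetime.datetime.utcnow()
print(naive.tzinfo)  # None

# The replacement idiom - an aware datetime explicitly pinned to UTC
aware = datetime.datetime.now(datetime.timezone.utc)
print(aware.tzinfo)  # UTC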

# 18th November 2023, 7:24 pm / python, timezones

HTML Web Components: An Example (via) Jim Nielsen provides a clear example illustrating the idea of the recently coined "HTML Web Components" pattern. It's Web Components as progressive enhancement: in this example a <user-avatar> custom element wraps a regular image, then JavaScript defines a Web Component that enhances that image. If the JavaScript fails to load, the image still displays.

# 17th November 2023, 4:33 pm / javascript, progressive-enhancement, web-components

tldraw/draw-a-ui (via) Absolutely spectacular GPT-4 Vision API demo. Sketch out a rough UI prototype using the open source tldraw drawing app, then select a set of components and click "Make Real" (after giving it an OpenAI API key). It generates a PNG snapshot of your selection and sends that to GPT-4 with instructions to turn it into a Tailwind HTML+JavaScript prototype, then adds the result as an iframe next to your mockup.

You can then make changes to your mockup, select it and the previous mockup and click "Make Real" again to ask for an updated version that takes your new changes into account.

This is such a great example of innovation at the UI layer, and everything is open source. Check app/lib/getHtmlFromOpenAI.ts for the system prompt that makes it work.

# 16th November 2023, 4:42 pm / open-source, prototyping, openai, generative-ai, gpt-4, llms, system-prompts
