Simon Willison’s Weblog

October 2024

165 posts: 12 entries, 82 links, 22 quotes, 49 beats

Oct. 28, 2024

Release llm-gemini 0.3a0 — LLM plugin to access Google's Gemini family of models

Oct. 29, 2024

Release llm-claude-3 0.6a0 — LLM plugin for interacting with the Claude 3 family of models
Release llm 0.17 — Access large language models from the command-line
Release llm-gemini 0.3 — LLM plugin to access Google's Gemini family of models
Release llm-claude-3 0.6 — LLM plugin for interacting with the Claude 3 family of models
Release llm-mistral 0.7 — LLM plugin providing access to Mistral models using the Mistral API

Matt Webb’s Colophon. I love a good colophon (here's mine, I should really expand it). Matt Webb has been publishing his thoughts online for 24 years, so his colophon is a delightful accumulation of ideas and principles.

So following the principles of web longevity, what matters is the data, i.e. the posts, and simplicity. I want to minimise maintenance, not panic if a post gets popular, and be able to add new features without thinking too hard. [...]

I don’t deliberately choose boring technology but I think a lot about longevity on the web (that’s me writing about it in 2017) and boring technology is a consequence.

I'm tempted to adopt Matt's XSL template that he uses to style his RSS feed for my own sites.

# 4:59 am / blogging, matt-webb, rss, xslt, boring-technology

You can now run prompts against images, audio and video in your terminal using LLM

I released LLM 0.17 last night, the latest version of my combined CLI tool and Python library for interacting with hundreds of different Large Language Models such as GPT-4o, Llama, Claude and Gemini.

[... 1,399 words]

Generating Descriptive Weather Reports with LLMs. Drew Breunig produces the first example I've seen in the wild of the new LLM attachments Python API. Drew's Downtown San Francisco Weather Vibes project combines output from a JSON weather API with the latest image from a webcam pointed at downtown San Francisco to produce a weather report "with a style somewhere between Jack Kerouac and J. Peterman".

Here's the Python code that constructs and executes the prompt. The code runs in GitHub Actions.
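
For a flavour of that new API, here's a minimal sketch of a multi-modal prompt using LLM's attachments mechanism - the model, prompt and filename are illustrative placeholders, not Drew's actual code:

import llm

# Attach an image alongside the text prompt (requires a vision-capable model)
model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Describe the weather conditions visible in this image",
    attachments=[llm.Attachment(path="webcam.jpg")],
)
print(response.text())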

# 11:12 pm / ai, github-actions, prompt-engineering, generative-ai, llms, llm, drew-breunig, vision-llms

Oct. 30, 2024

Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview. The big announcement from GitHub Universe: Copilot is growing support for alternative models.

GitHub Copilot predated the release of ChatGPT by more than a year, and was the first widely used LLM-powered tool. This announcement includes a brief history lesson:

The first public version of Copilot was launched using Codex, an early version of OpenAI GPT-3, specifically fine-tuned for coding tasks. Copilot Chat was launched in 2023 with GPT-3.5 and later GPT-4. Since then, we have updated the base model versions multiple times, using a range from GPT 3.5-turbo to GPT 4o and 4o-mini models for different latency and quality requirements.

It's increasingly clear that any strategy that ties you to models from exclusively one provider is short-sighted. The best available model for a task can change every few months, and for something like AI code assistance model quality matters a lot. Getting stuck with a model that's no longer best in class could be a serious competitive disadvantage.

The other big announcement from the keynote was GitHub Spark, described like this:

Sparks are fully functional micro apps that can integrate AI features and external data sources without requiring any management of cloud resources.

I got to play with this at the event. It's effectively a cross between Claude Artifacts and GitHub Gists, with some very neat UI details. The features that really differentiate it from Artifacts are that Spark apps gain access to a server-side key/value store which they can use to persist JSON - and they can also access an API against which they can execute their own prompts.

The prompt integration is particularly neat because prompts used by the Spark apps are extracted into a separate UI so users can view and modify them without having to dig into the (editable) React JavaScript code.

# 1:23 am / github, javascript, ai, react, openai, github-copilot, llms, ai-assisted-programming, anthropic, gemini, claude-artifacts, prompt-to-app

W̶e̶e̶k̶n̶o̶t̶e̶s̶ Monthnotes for October

I try to publish weeknotes at least once every two weeks. It's been four weeks since the last entry, so I guess this one counts as monthnotes instead.

[... 797 words]

docs.jina.ai—the Jina meta-prompt. From Jina AI on Twitter:

curl docs.jina.ai - This is our Meta-Prompt. It allows LLMs to understand our Reader, Embeddings, Reranker, and Classifier APIs for improved codegen. Using the meta-prompt is straightforward. Just copy the prompt into your preferred LLM interface like ChatGPT, Claude, or whatever works for you, add your instructions, and you're set.

The page is served using content negotiation. If you hit it with curl you get plain text, but a browser with text/html in the accept: header gets an explanation along with a convenient copy to clipboard button.
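
You can watch the negotiation happen from Python too - a quick sketch using requests, with header values assumed from the description above:

import requests

# curl sends Accept: */* by default, so it gets the raw meta-prompt as text/plain
plain = requests.get("https://docs.jina.ai/", headers={"Accept": "*/*"})
print(plain.headers["content-type"])

# A browser sends text/html in its Accept header and gets the HTML explanation
html = requests.get("https://docs.jina.ai/", headers={"Accept": "text/html"})
print(html.headers["content-type"])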

Screenshot of the docs.jina.ai documentation page: a warning notes that the content is designed for LLMs rather than human readers, with instructions to fetch docs.jina.ai/v2 programmatically via curl, wget or fetch, followed by the start of the meta-prompt itself: "You are an AI engineer designed to help users use Jina AI Search Foundation API's for their specific use case. # Core principles..."

# 5:07 pm / documentation, ai, generative-ai, llms, llm, jina

Creating a LLM-as-a-Judge that drives business results (via) Hamel Husain's sequel to Your AI product needs evals. This is packed with hard-won actionable advice.

Hamel warns against using scores on a 1-5 scale, instead promoting an alternative he calls "Critique Shadowing". Find a domain expert (one is better than many, because you want to keep their scores consistent) and have them answer the yes/no question "Did the AI achieve the desired outcome?" - providing a critique explaining their reasoning for each of their answers.

This gives you a reliable score to optimize against, and the critiques mean you can capture nuance and improve the system based on that captured knowledge.

Most importantly, the critique should be detailed enough so that you can use it in a few-shot prompt for a LLM judge. In other words, it should be detailed enough that a new employee could understand it.

Once you've gathered this expert data you can switch to using an LLM-as-a-judge. You can then iterate on the prompt you use for it in order to converge its "opinions" with those of your domain expert.
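
Here's a rough sketch of what that judge might look like using the LLM Python library - the prompt structure and the worked example inside it are invented for illustration, not taken from Hamel's post:

import llm

# Few-shot judge prompt seeded with an expert critique (abbreviated to one example)
JUDGE_PROMPT = """You are judging whether an AI assistant achieved the desired outcome.

Example:
User query: How do I reset my password?
AI response: Click "Forgot password" on the login page.
Critique: Correct and complete; matches our documented flow.
Verdict: YES

Now judge this interaction.
User query: {query}
AI response: {answer}
Write a short critique, then a final line: Verdict: YES or NO."""

model = llm.get_model("gpt-4o")
result = model.prompt(JUDGE_PROMPT.format(
    query="How do I delete my account?",
    answer="Email support and they will delete it for you.",
))
print(result.text())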

Hamel concludes:

The real value of this process is looking at your data and doing careful analysis. Even though an AI judge can be a helpful tool, going through this process is what drives results. I would go as far as saying that creating a LLM judge is a nice “hack” I use to trick people into carefully looking at their data!

# 6:08 pm / ai, generative-ai, llms, hamel-husain, evals

Oct. 31, 2024

Australia/Lord_Howe is the weirdest timezone (via) Lord Howe Island - part of Australia, population 382 - is unique in that the island's standard time zone is UTC+10:30, shifting to UTC+11 when daylight saving time applies. It's the only time zone where DST represents a 30-minute offset.
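
Python's zoneinfo module (in the standard library since 3.9) confirms it:

from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("Australia/Lord_Howe")
# July is winter in the Southern Hemisphere: standard time, UTC+10:30
print(datetime(2024, 7, 1, tzinfo=tz).utcoffset())
# January is summer: daylight saving time, UTC+11:00
print(datetime(2024, 1, 1, tzinfo=tz).utcoffset())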

# 10:03 pm / timezones

Cerebras Coder (via) Val Town founder Steve Krouse has been building demos on top of the Cerebras API that runs Llama3.1-70b at 2,000 tokens/second.
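
Cerebras offers an OpenAI-compatible API, so trying the same model yourself is a few lines of Python - the base URL and model ID here are my assumptions from the Cerebras documentation, and you need your own API key:

import os
from openai import OpenAI

# Cerebras inference via its OpenAI-compatible endpoint (endpoint and model
# ID assumed from the Cerebras docs, not taken from Steve's demo)
client = OpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
)
response = client.chat.completions.create(
    model="llama3.1-70b",
    messages=[{"role": "user", "content": "Write a tiny JavaScript clock app"}],
)
print(response.choices[0].message.content)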

Having a capable LLM with that kind of performance turns out to be really interesting. Cerebras Coder is a demo that implements Claude Artifacts-style on-demand JavaScript apps, and having it run at that speed means changes you request are visible in less than a second.

Steve's implementation (created with the help of Townie, the Val Town code assistant) demonstrates the simplest possible version of an iframe sandbox:

<iframe
    srcDoc={code}
    sandbox="allow-scripts allow-modals allow-forms allow-popups allow-same-origin allow-top-navigation allow-downloads allow-presentation allow-pointer-lock"
/>

Where code is populated by a setCode(...) call inside a React component.

The most interesting applications of LLMs continue to be the ones where they operate in a tight loop with a human - inference at this speed can make those review loops dramatically faster and more productive.

# 10:39 pm / iframes, sandboxing, ai, react, generative-ai, llama, llms, ai-assisted-programming, val-town, steve-krouse, cerebras, inference-speed