Simon Willison’s Weblog

17 items tagged “matt-webb”

2024

Matt Webb’s Colophon. I love a good colophon (here's mine, I should really expand it). Matt Webb has been publishing his thoughts online for 24 years, so his colophon is a delightful accumulation of ideas and principles.

So following the principles of web longevity, what matters is the data, i.e. the posts, and simplicity. I want to minimise maintenance, not panic if a post gets popular, and be able to add new features without thinking too hard. [...]

I don’t deliberately choose boring technology but I think a lot about longevity on the web (that’s me writing about it in 2017) and boring technology is a consequence.

I'm tempted to adopt the XSL template Matt uses to style his RSS feed for my own sites.

# 29th October 2024, 4:59 am / blogging, matt-webb, rss, boring-technology

Grandma’s secret cake recipe, passed down generation to generation, could be literally passed down: a flat slab of beige ooze kept in a battered pan, DNA-spliced and perfected by guided evolution by her own deft and ancient hands, a roiling wet mass of engineered microbes that slowly scabs over with delicious sponge cake, a delectable crust to be sliced once a week and enjoyed still warm with creme and spoons of pirated jam.

Matt Webb

# 24th October 2024, 12:12 pm / matt-webb

Braggoscope Prompts. Matt Webb's Braggoscope (previously) is an alternative way to browse the archives of the BBC's long-running radio series In Our Time, including the ability to browse by Dewey Decimal library classification, view related episodes and more.

Matt used an LLM to generate the structured data for the site, based on the episode synopsis on the BBC's episode pages like this one.

The prompts he used for this are now described on this new page on the site.

Of particular interest is the way the Dewey Decimal classifications are derived. Quoting an extract from the prompt:

- Provide a Dewey Decimal Classification code, label, and reason for the classification.

- Reason: summarise your deduction process for the Dewey code, for example considering the topic and era of history by referencing lines in the episode description. Bias towards the main topic of the episode which is at the beginning of the description.

- Code: be as specific as possible with the code, aiming to give a second level code (e.g. "510") or even lower level (e.g. "510.1"). If you cannot be more specific than the first level (e.g. "500"), then use that.

Return valid JSON conforming to the following Typescript type definition:

{
    "dewey_decimal": {"reason": string, "code": string, "label": string}
}

That "reason" key is essential, even though it's not actually used in the resulting project. Matt explains why:

It gives the AI a chance to generate tokens to narrow down the possibility space of the code and label that follow (the reasoning has to appear before the Dewey code itself is generated).

Here's a relevant note from OpenAI's new structured outputs documentation:

When using Structured Outputs, outputs will be produced in the same order as the ordering of keys in the schema.

That's despite JSON usually treating key order as undefined. I think OpenAI designed the feature to work this way precisely to support the kind of trick Matt is using for his Dewey Decimal extraction process.
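To make that concrete, here's a rough sketch (my own illustration, not Matt's actual code) of how the ordering trick can be expressed with OpenAI's structured outputs in Python. The schema lists "reason" before "code" and "label", so the reasoning tokens are generated first; the model name and the placeholder synopsis are assumptions:

import json
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()

episode_synopsis = "..."  # placeholder: the synopsis text from the BBC episode page

# Property order is deliberate: "reason" comes before "code" and "label", so the
# model narrows down the classification before it has to commit to a code.
schema = {
    "type": "object",
    "properties": {
        "dewey_decimal": {
            "type": "object",
            "properties": {
                "reason": {"type": "string"},
                "code": {"type": "string"},
                "label": {"type": "string"},
            },
            "required": ["reason", "code", "label"],
            "additionalProperties": False,
        }
    },
    "required": ["dewey_decimal"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # assumption: any model that supports structured outputs
    messages=[
        {"role": "system", "content": "Provide a Dewey Decimal Classification code, label, and reason."},
        {"role": "user", "content": episode_synopsis},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "dewey_decimal", "strict": True, "schema": schema},
    },
)

print(json.loads(response.choices[0].message.content))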

# 7th August 2024, 11:23 pm / matt-webb, ai, prompt-engineering, generative-ai, llms

Mapping the landscape of gen-AI product user experience. Matt Webb attempts to map out the different user experience approaches to building on top of generative AI. I like the way he categorizes these potential experiences:

  • Tools. Users control AI to generate something.
  • Copilots. The AI works alongside the user in an app in multiple ways.
  • Agents. The AI has some autonomy over how it approaches a task.
  • Chat. The user talks to the AI as a peer in real-time.

# 20th July 2024, 4:40 am / matt-webb, ux, ai, generative-ai, llms, ai-agents

Apple’s terminology distinguishes between “personal intelligence,” on-device and under their control, and “world knowledge,” which is prone to hallucinations – but is also what consumers expect when they use AI, and it’s what may replace Google search as the “point of first intent” one day soon.

It’s wise for them to keep world knowledge separate, behind a very clear gate, but still engage with it. Protects the brand and hedges their bets.

Matt Webb

# 11th June 2024, 5:26 pm / apple, llms, ai, generative-ai, matt-webb

2023

It feels pretty likely that prompting or chatting with AI agents is going to be a major way that we interact with computers into the future, and whereas there’s not a huge spread in the ability between people who are not super good at tapping on icons on their smartphones and people who are, when it comes to working with AI it seems like we’ll have a high dynamic range. Prompting opens the door for non-technical virtuosos in a way that we haven’t seen with modern computers, outside of maybe Excel.

Matt Webb

# 9th July 2023, 3:29 pm / matt-webb, prompt-engineering, generative-ai, ai, llms, ai-agents

If I were an AI sommelier I would say that gpt-3.5-turbo is smooth and agreeable with a long finish, though perhaps lacking depth. text-davinci-003 is spicy and tight, sophisticated even.

Matt Webb

# 31st May 2023, 2:52 pm / matt-webb, llms, ai, generative-ai

The surprising ease and effectiveness of AI in a loop (via) Matt Webb on the langchain Python library and the ReAct design pattern, where you plug additional tools into a language model by teaching it to work in a “Thought... Act... Observation” loop where the Act specifies an action it wishes to take (like searching Wikipedia) and an extra layer of software that carries out that action and feeds back the result as the Observation. Matt points out that the ChatGPT 1/10th price drop makes this kind of model usage enormously more cost effective than it was before.
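Here's a minimal sketch of that loop in Python (my own illustration rather than Matt's or langchain's code; the Wikipedia tool is a stub and the prompt wording is an assumption):

import re
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()

def search_wikipedia(query: str) -> str:
    # Stand-in for a real tool; a real version would call the Wikipedia API.
    return f"(stub) summary of Wikipedia results for {query!r}"

SYSTEM_PROMPT = """Answer the question by looping through Thought, Action and Observation steps.
The only available action is search_wikipedia[<query>].
When you are confident, reply with: Final Answer: <answer>"""

def react(question: str, max_turns: int = 5) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
    for _ in range(max_turns):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the model whose price drop Matt mentions
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        # If the model asked for an Action, carry it out and feed the result
        # back in as the Observation for the next turn.
        match = re.search(r"search_wikipedia\[(.+?)\]", reply)
        if match:
            observation = search_wikipedia(match.group(1))
            messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "No final answer within the turn limit."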

# 17th March 2023, 12:04 am / matt-webb, ai, openai, generative-ai, chatgpt, llms, llm-tool-use

Browse the BBC In Our Time archive by Dewey decimal code. Matt Webb built Braggoscope, an alternative interface for browsing the 1,000 episodes of the BBC’s In Our Time dating back to 1998, organized by Dewey decimal system and with related episodes calculated using OpenAI embeddings and guests and reading lists extracted using GPT-3. “Using GitHub Copilot to write code and calling out to GPT-3 programmatically to dodge days of graft actually brought tears to my eyes.”
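The related-episodes feature is the classic embeddings pattern: embed every synopsis once, then rank by cosine similarity. A rough sketch of how that could work (my own illustration, not Braggoscope's code; the embedding model name is an assumption, since Matt was working with GPT-3-era models):

import numpy as np
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    # One embedding vector per episode synopsis.
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def related_episodes(synopses: list[str], index: int, top_n: int = 5) -> list[int]:
    vectors = embed(synopses)
    # Normalise, then a dot product gives cosine similarity to the chosen episode.
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = vectors @ vectors[index]
    ranked = np.argsort(-scores)
    return [int(i) for i in ranked if i != index][:top_n]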

# 13th February 2023, 4:03 pm / matt-webb, gpt-3, openai, generative-ai, llms, embeddings

2020

15 rules for blogging, and my current streak (via) Matt Webb is on a 24-week streak of blogging multiple posts a week and shares his rules on how he’s doing this. These are really good rules. A rule of thumb that has helped me a lot is to fight back against the temptation to make a post as good as I can before I publish it, because that way lies a giant drafts folder and no actual published content. “Perfect is the enemy of shipped”.

# 10th September 2020, 6:09 pm / blogging, matt-webb

2010

Popular Science+. Matt Webb’s write-up of the Mag+ project, the platform behind the highly praised Popular Science+ iPad application.

# 12th April 2010, 1:06 pm / berg, design, ipad, magplus, matt-webb

2009

Scope. Matt Webb’s opening keynote at this year’s reboot11. You owe it to yourself to read it.

# 8th July 2009, 8:15 pm / matt-webb, reboot, scope, talks

2008

The technological future of the Web is in micro and macro structure. The approach to the micro is akin to proteins and surface binding--or, to put it another way, phenotropics and pattern matching. Massively parallel agents need to be evolved to discover how to bind onto something that looks like a blog post; a crumb-trail; a right-hand nav; a top 10 list; a review; an event description; search boxes.

Matt Webb

# 1st January 2008, 12:13 pm / matt-webb, web, microformats, phenotropics, patternmatching, agents

2007

BBC Olinda digital radio: Social hardware. Schulze and Webb made a social radio prototype for the BBC; the IPR will be under an attribution license so manufacturers can run with it without asking for permission first.

# 20th August 2007, 9:47 pm / attribution, bbc, digitalradio, hardware, jack-schultz, matt-webb, olinda, radio, schulzeandwebb, socialradio

robotlab juke bots (via) Decommissioned industrial robot arms reprogrammed to act as super precise DJs.

# 30th March 2007, 11:13 am / jukebots, matt-webb, robotarms, robotlab, robots

From Pixels to Plastic. Awesome talk given by Matt Webb at ETech, on the emerging culture of Generation C, cheap hardware prototyping and physical extensions to the online world.

# 30th March 2007, 11:09 am / etech, generationc, hardware, hardware-hacking, matt-webb

2006

about making things (via) Matt Webb on thinking through making.

# 31st July 2006, 3:09 pm / matt-webb