Simon Willison’s Weblog

My AI/LLM predictions for the next 1, 3 and 6 years, for Oxide and Friends

10th January 2025

The Oxide and Friends podcast has an annual tradition of asking guests to share their predictions for the next 1, 3 and 6 years. Here’s 2022, 2023 and 2024. This year they invited me to participate. I’ve never been brave enough to share any public predictions before, so this was a great opportunity to get outside my comfort zone!

We recorded the episode live using Discord on Monday. It’s now available on YouTube, and should be out in podcast form shortly.

Here are my predictions, written up here in a little more detail than the stream of consciousness I shared on the podcast.

I should emphasize that I find the very idea of trying to predict AI/LLMs over a multi-year period to be completely absurd! I can’t predict what’s going to happen a week from now; six years is a different universe.

With that disclaimer out of the way, here’s an expanded version of what I said.

One year: Agents fail to happen, again

I wrote about how “Agents” still haven’t really happened yet in my review of Large Language Model developments in 2024.

I think we are going to see a lot more froth about agents in 2025, but I expect the results will be a great disappointment to most of the people who are excited about this term. I expect a lot of money will be lost chasing after several different poorly defined dreams that share that name.

What are agents anyway? Ask a dozen people and you’ll get a dozen slightly different answers—I collected and then AI-summarized a bunch of those here.

For the sake of argument, let’s pick a definition that I can predict won’t come to fruition: the idea of an AI assistant that can go out into the world and semi-autonomously act on your behalf. I think of this as the travel agent definition of agents, because for some reason everyone always jumps straight to flight and hotel booking and itinerary planning when they describe this particular dream.

Having the current generation of LLMs make material decisions on your behalf—like what to spend money on—is a really bad idea. They’re too unreliable, but more importantly they are too gullible.

If you’re going to arm your AI assistant with a credit card and set it loose on the world, you need to be confident that it’s not going to hit “buy” on the first website that claims to offer the best bargains!

I’m confident that reliability is the reason LLM-powered agents haven’t taken off yet, despite the idea attracting a huge amount of buzz ever since ChatGPT first came out.

I would be very surprised if any of the models released over the next twelve months had enough of a reliability improvement to make this work. Solving gullibility is an astonishingly difficult problem.

(I had a particularly spicy rant about how stupid the idea of sending a “digital twin” to a meeting on your behalf is.)

One year: ... except for code and research assistants

There are two categories of “agent” that I do believe in, because they’re proven to work already.

The first is coding assistants—where an LLM writes, executes and then refines computer code in a loop.

I first saw this pattern demonstrated by OpenAI with their Code Interpreter feature for ChatGPT, released back in March/April of 2023.

You can ask ChatGPT to solve a problem that can be tackled with Python code and it will write that Python, execute it in a secure sandbox (I think it’s Kubernetes) and then use the output—or any error messages—to determine whether the goal has been achieved.

It’s a beautiful pattern that worked great with early 2023 models (I believe it first shipped using the original GPT-4), and continues to work today.

Claude added their own version in October (Claude analysis, using JavaScript that runs in the browser), Mistral have it, Gemini has a version and there are dozens of other implementations of the same pattern.
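
Stripped of the product polish, the loop behind all of these is pretty simple. Here’s a minimal sketch of it in Python: the call_llm() function is a placeholder for whatever model API you prefer, and plain exec() stands in for the secure sandbox, so treat this as an illustration of the shape of the pattern rather than something you would actually deploy.

  # A minimal sketch of the coding-assistant loop: the model writes code,
  # we execute it, and any error gets fed back in for another attempt.
  import io
  import traceback
  from contextlib import redirect_stdout

  def call_llm(messages):
      # Placeholder: swap in whichever model API you use
      raise NotImplementedError("plug in your model API of choice here")

  def run_in_sandbox(code):
      """Execute generated Python, capturing stdout or the traceback."""
      buffer = io.StringIO()
      try:
          with redirect_stdout(buffer):
              exec(code, {"__name__": "__sandbox__"})
          return True, buffer.getvalue()
      except Exception:
          return False, traceback.format_exc()

  def solve(task, max_attempts=5):
      messages = [{"role": "user", "content": f"Write Python to solve: {task}"}]
      for _ in range(max_attempts):
          code = call_llm(messages)
          ok, output = run_in_sandbox(code)
          if ok:
              return code, output
          # Feed the failure back so the model can refine its own code
          messages.append({"role": "assistant", "content": code})
          messages.append({"role": "user", "content": f"That failed with:\n{output}\nPlease fix it."})
      raise RuntimeError("no working solution within the attempt budget")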

The second category of agents that I believe in is research assistants—where an LLM can run multiple searches, gather information and aggregate that into an answer to a question or write a report.

Perplexity and ChatGPT Search have both been operating in this space for a while, but by far the most impressive implementation I’ve seen is Google Gemini’s Deep Research tool, which I’ve had access to for a few weeks.

With Deep Research I can pose a question like this one:

Pillar Point Harbor is one of the largest communal brown pelican roosts on the west coast of North America.

find others

And Gemini will draft a plan, consult dozens of different websites via Google Search and then assemble a report (with all-important citations) describing what it found.

Here’s the plan it came up with:

Pillar Point Harbor is one of the largest communal brown pelican roosts on the west coast of North America. Find other large communal brown pelican roosts on the west coast of North America.
(1) Find a list of brown pelican roosts on the west coast of North America.
(2) Find research papers or articles about brown pelican roosts and their size.
(3) Find information from birdwatching organizations or government agencies about brown pelican roosts.
(4) Compare the size of the roosts found in (3) to the size of the Pillar Point Harbor roost.
(5) Find any news articles or recent reports about brown pelican roosts and their populations.

It dug up a whole bunch of details, but the one I cared most about was this PDF of results from the 2016-2019 Pacific Brown Pelican Survey conducted by the West Coast Audubon network and partners—a PDF that included this delightful list:

Top 10 Megaroosts (sites that traditionally host >500 pelicans) with average fall count numbers:

  • Alameda Breakwater, CA (3,183)
  • Pillar Point Harbor, CA (1,481)
  • East Sand Island, OR (1,121)
  • Ano Nuevo State Park, CA (1,068)
  • Salinas River mouth, CA (762)
  • Bolinas Lagoon, CA (755)
  • Morro Rock, CA (725)
  • Moss landing, CA (570)
  • Crescent City Harbor, CA (514)
  • Bird Rock Tomales, CA (514)

My local harbor is the second biggest megaroost!

It makes intuitive sense to me that this kind of research assistant can be built on our current generation of LLMs. They’re competent at driving tools, they’re capable of coming up with a relatively obvious research plan (look for newspaper articles and research papers) and they can synthesize sensible answers given the right collection of context gathered through search.
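
Boiled right down, that pattern is a plan-search-synthesize loop. Here’s a rough sketch of it, where call_llm() and web_search() are placeholders for a model API and a search API respectively, not references to any particular product:

  # A rough sketch of the research-assistant pattern: draft search queries,
  # gather results, then synthesize a cited answer from that context.
  def call_llm(prompt):
      raise NotImplementedError("plug in a model API here")

  def web_search(query):
      # Placeholder: should return a list of {"url": ..., "snippet": ...} dicts
      raise NotImplementedError("plug in a search API here")

  def research(question):
      # 1. Have the model draft a handful of search queries
      plan = call_llm(
          f"Suggest up to 5 web search queries, one per line, to help answer: {question}"
      )
      queries = [line.strip() for line in plan.splitlines() if line.strip()]

      # 2. Run the searches and collect snippets along with their URLs
      sources = []
      for query in queries:
          for result in web_search(query):
              sources.append(result["url"] + "\n" + result["snippet"])

      # 3. Ask the model to write an answer grounded in the gathered sources
      context = "\n\n".join(sources)
      return call_llm(
          "Using only these sources, answer the question and cite the URLs you relied on.\n\n"
          f"Question: {question}\n\nSources:\n{context}"
      )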

Google are particularly well suited to solving this problem: they have the world’s largest search index and their Gemini model has a 2 million token context. I expect Deep Research to get a whole lot better, and I expect it to attract plenty of competition.

Three years: Someone wins a Pulitzer for AI-assisted investigative reporting

I went for a bit of a self-serving prediction here: I think within three years someone is going to win a Pulitzer prize for a piece of investigative reporting that was aided by generative AI tools.

I do not mean that an LLM will write the article! I continue to think that having LLMs write on your behalf is one of the least interesting applications of these tools.

I called this prediction self-serving because I want to help make this happen! My Datasette suite of open source tools for data journalism has been growing AI features, like LLM-powered data enrichments and extracting structured data into tables from unstructured text.

My dream is for those tools—or tools like them—to be used for an award-winning piece of investigative reporting.

I picked three years for this because I think that’s how long it will take for knowledge of how to responsibly and effectively use these tools to become widespread enough for that to happen.

LLMs are not an obvious fit for journalism: journalists look for the truth, and LLMs are notoriously prone to hallucination and making things up. But journalists are also really good at extracting useful information from potentially untrusted sources—that’s a lot of what the craft of journalism is about.

The two areas I think LLMs are particularly relevant to journalism are:

  • Structured data extraction. If you have 10,000 PDFs from a successful Freedom of Information Act request, someone or something needs to kick off the process of reading through them to find the stories. LLMs are a fantastic way to take a vast amount of information and start making some element of sense from it. They can act as lead generators, helping identify the places to start looking more closely. (There’s a rough sketch of this pattern just after this list.)
  • Coding assistance. Writing code to help analyze data is a huge part of modern data journalism—from SQL queries through data cleanup scripts, custom web scrapers or visualizations to help find signal among the noise. Most newspapers don’t have a team of programmers on staff: I think within three years we’ll have robust enough tools built around this pattern that non-programmer journalists will be able to use them as part of their reporting process.
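
To make that structured data extraction idea concrete, here’s a toy sketch that pulls the same fields out of a pile of documents and loads them into SQLite, where a tool like Datasette can take over. The field names and the call_llm() function are invented placeholders; a real pipeline needs far more care around validation, provenance and human review.

  # A toy sketch of structured data extraction: ask the model for the same
  # JSON fields from each document and load the results into SQLite.
  import json
  import sqlite3

  def call_llm(prompt):
      raise NotImplementedError("plug in a model API here")

  FIELDS = ["agency", "date", "amount", "summary"]  # hypothetical example fields

  def extract_records(documents, db_path="leads.db"):
      db = sqlite3.connect(db_path)
      db.execute(
          "CREATE TABLE IF NOT EXISTS leads (agency TEXT, date TEXT, amount TEXT, summary TEXT)"
      )
      for text in documents:
          response = call_llm(
              f"Return a JSON object with the keys {FIELDS}, extracted from this document:\n\n{text}"
          )
          record = json.loads(response)
          db.execute(
              "INSERT INTO leads VALUES (:agency, :date, :amount, :summary)",
              {field: record.get(field) for field in FIELDS},
          )
      db.commit()
      return db_path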

I hope to build some of these tools myself!

So my concrete prediction for three years is that someone wins a Pulitzer with a small amount of assistance from LLMs.

My more general prediction: within three years it won’t be surprising at all to see most information professionals use LLMs as part of their daily workflow, in increasingly sophisticated ways. We’ll know exactly what patterns work and how best to explain them to people. These skills will become widespread.

Three years part two: privacy laws with teeth

My other three year prediction concerned privacy legislation.

The level of (often justified) paranoia around both targeted advertising and what happens to the data people paste into these models is a constantly growing problem.

I wrote recently about the inexterminable conspiracy theory that Apple target ads by spying through your phone’s microphone. I’ve written in the past about the AI trust crisis, where people refuse to believe that models are not being trained on their inputs no matter how emphatically the companies behind them deny it.

I think the AI industry itself would benefit enormously from legislation that helps clarify what’s going on with training on user-submitted data, and the wider tech industry could really do with harder rules around things like data retention and targeted advertising.

I don’t expect the next four years of US federal government to be effective at passing legislation, but I expect we’ll see privacy legislation with sharper teeth emerging at the state level or internationally. Let’s just hope we don’t end up with a new generation of cookie-consent banners as a result!

Six years utopian: amazing art

For six years I decided to go with two rival predictions, one optimistic and one pessimistic.

I think six years is long enough that we’ll figure out how to harness this stuff to make some really great art.

I don’t think generative AI for art—images, video and music—deserves nearly the same level of respect as a useful tool as text-based LLMs. Generative art tools are a lot of fun to try out, but the lack of fine-grained control over the output greatly limits their utility outside of personal amusement or generating slop.

More importantly, they lack social acceptability. The vibes aren’t good. Many talented artists have loudly rejected the idea of these tools, to the point that the very term “AI” is developing a distasteful connotation in society at large.

Image and video models are also ground zero for the AI training data ethics debate, and for good reason: no artist wants to see a model trained on their work without their permission that then directly competes with them!

I think six years is long enough for this whole thing to shake out—for society to figure out acceptable ways of using these tools to truly elevate human expression. What excites me is the idea of truly talented, visionary creative artists using whatever these tools have evolved into in six years to make meaningful art that could never have been achieved without them.

On the podcast I talked about Everything Everywhere All at Once, a film that deserved every one of its seven Oscars. The core visual effects team on that film was just five people. Imagine what a team like that could do with the generative AI tools we’ll have in six years’ time!

Since recording the podcast I learned from Swyx that Everything Everywhere All at Once used Runway ML as part of their toolset already:

Evan Halleck was on this team, and he used Runway’s AI tools to save time and automate tedious aspects of editing. Specifically in the film’s rock scene, he used Runway’s rotoscoping tool to get a quick, clean cut of the rocks as sand and dust were moving around the shot. This translated days of work to a matter of minutes.

I said I thought a film that had used generative AI tools would win an Oscar within six years. Looks like I was eight years late on that one already!

Six years dystopian: AGI/ASI causes mass civil unrest

My pessimistic alternative take for 2031 concerns “AGI”—a term which, like “agents”, is constantly being redefined. The Information recently reported (see also The Verge) that Microsoft and OpenAI are now defining AGI as a system capable of generating $100bn in profit!

If we assume AGI is the point at which AI systems are capable of performing almost any job currently reserved for a human being, it’s hard not to see potentially negative consequences.

Sam Altman may have experimented with Universal Basic Income, but the USA is a country that can’t even figure out universal healthcare! I have huge trouble imagining a future economy that works for the majority of people when the majority of jobs are being done by machines.

So my dystopian prediction for 2031 is that if that form of AGI has come to pass it will be accompanied by extraordinarily bad economic outcomes and mass civil unrest.

My version of an AI utopia is tools that augment existing humans. That’s what we’ve had with LLMs so far, and my ideal is that those tools continue to improve, and that humans subsequently become able to take on more ambitious work.

If there’s a version of AGI that results in that kind of utopia, I’m all for it.

My total lack of conviction

There’s a reason I haven’t made predictions like this before: my confidence in my ability to predict the future is almost non-existent. At least one of my predictions here already proved to be eight years late!

These predictions are in the public record now (I even submitted a pull request).

It’s going to be interesting looking back at these in one, three and six years to see how I did.