I’m on the Newsroom Robots podcast, with thoughts on the OpenAI board

25th November 2023

Newsroom Robots is a weekly podcast exploring the intersection of AI and journalism, hosted by Nikita Roy.

I’m the guest for the latest episode, recorded on Wednesday and published today:

Newsroom Robots: Simon Willison: Breaking Down OpenAI’s New Features & Security Risks of Large Language Models

We ended up splitting our conversation into two episodes.

This first episode covers the recent huge news around OpenAI’s board dispute, plus an exploration of the new features they released at DevDay and other topics: applications for Large Language Models in data journalism, prompt injection and LLM security, and the exciting potential of smaller models that journalists can run on their own hardware.

You can read the full transcript on the Newsroom Robots site.

I decided to extract and annotate one portion of the transcript, where we talk about the recent OpenAI news.

Nikita asked for my thoughts on the OpenAI board situation, starting at 4m55s (there’s a link to that section on Overcast).

The fundamental issue here is that OpenAI is a weirdly shaped organization, because they are structured as a non-profit, and the non-profit owns the for-profit arm.

The for-profit arm was only spun up in 2019; before that they were purely a non-profit.

They spun up a for-profit arm so they could accept investment to spend on all of the computing power that they needed to do everything, and they raised like 13 billion dollars or something, mostly from Microsoft. [Correction: $11 billion total from Microsoft to date.]

But the non-profit stayed in complete control. They had a charter, they had an independent board, and the whole point was that, if they build this mystical AGI, they were trying to serve humanity and keep it out of the control of a single corporation.

That was kind of what they were supposed to be going for. But it all completely fell apart.

I spent the first three days of this completely confused—I did not understand why the board had fired Sam Altman.

And then it became apparent that this is all rooted in long-running board dysfunction.

The board of directors for OpenAI had been having massive fights with each other for years, but the stakes involved in those fights weren’t really that important prior to November last year, when ChatGPT came out.

You know, before ChatGPT, OpenAI was an AI research organization that had some interesting results, but it wasn’t setting the world on fire.

And then ChatGPT happens, and suddenly this board of directors of this non-profit is responsible for a product that has hundreds of millions of users, that is upending the entire technology industry, and that was at one point worth, on paper, $80 billion.

And yet the board continued. It was still pretty much the board from a year ago, which had shrunk down to six people. I think that’s one of the most interesting things about it.

The reason it shrank to six people is that they had not been able to agree on who to add to the board as people were leaving it.

So that’s your first sign that the board was not in healthy shape. Those disagreements left them with just six board members, which meant it took a majority of only four for all of this stuff to kick off.

And so now what’s happened is the board has reset down to three people, where the job of those three is to grow the board to nine. That’s effectively what they are for, to start growing that board out again.

But meanwhile, it’s pretty clear that Sam has been made the king.

They tried firing Sam. If you’re going to fire Sam and he comes back four days later, that’s never going to work again.

So the whole internal debate over whether they are a research organization, or an organization that’s building products and a developer platform and growing as fast as it can, seems to have been resolved very much in Sam’s direction.

Nikita asked what this means for them in terms of reputational risk.

Honestly, their biggest reputational risk in the last few days was around their stability as a platform.

They are trying to provide a platform for developers, for startups to build enormously complicated and important things on top of.

There were people out there saying, “Oh my God, my startup, I built it on top of this platform. Is it going to not exist next week?”

To OpenAI’s credit, their developer relations team were very vocal about saying, “No, we’re keeping the lights on. We’re keeping it running.”

They did manage to ship the new ChatGPT voice feature, but then they had an outage, which did not look good!

You know, from their status board, the APIs were out for I think a few hours.

[The status board shows a partial outage with “Elevated Errors on API and ChatGPT” for 3 hours and 16 minutes.]
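To make the developer anxiety concrete, here’s a minimal sketch of the kind of defensive wrapper people building on the API reach for, assuming the openai Python package (the 1.x release that shipped around DevDay). The model name, retry count and backoff schedule here are my own illustrative assumptions, not anything OpenAI prescribes.

```python
import time

from openai import APIConnectionError, APIStatusError, OpenAI, RateLimitError

client = OpenAI()  # Reads OPENAI_API_KEY from the environment


def chat_with_retries(messages, model="gpt-4-1106-preview", attempts=3):
    """Call the Chat Completions API, backing off on transient errors."""
    for attempt in range(attempts):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=messages,
            )
            return response.choices[0].message.content
        # A real version would distinguish retryable 5xx/rate-limit errors
        # from permanent 4xx failures before retrying
        except (RateLimitError, APIConnectionError, APIStatusError):
            if attempt == attempts - 1:
                raise  # Out of retries: surface the error to the caller
            time.sleep(2**attempt)  # 1s, then 2s: simple exponential backoff


print(chat_with_retries([{"role": "user", "content": "Say hello."}]))
```

Retries like that can paper over a brief blip, but no backoff schedule survives a multi-hour outage, which brings it back to organizational stability.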

So I think one of the things that people who build on top of OpenAI will look for is stability at the board level, such that they can trust the organization to stick around.

But I feel like the biggest reputational hit they’ve taken is to this idea that they were set up differently: a non-profit that existed to serve humanity and make sure that the powerful thing they were building wouldn’t fall under the control of a single corporation.

And then 700 of the staff members signed a letter saying, “Hey, we will go and work for Microsoft tomorrow under Sam to keep on building this stuff if the board don’t resign.”

I feel like that dents this idea of them as plucky independents who are building for humanity first and keeping this out of the hands of corporate control!

The episode with the second half of our conversation, talking about some of my AI and data journalism adjacent projects, should be out next week.