It’s OK to call it Artificial Intelligence
7th January 2024
Update 9th January 2024: This post was clumsily written and failed to make the point I wanted it to make. I’ve published a follow-up, What I should have said about the term Artificial Intelligence which you should read instead.
My original post follows.
We need to be having high quality conversations about AI: what it can and can’t do, its many risks and pitfalls and how to integrate it into society in the most beneficial ways possible.
Any time I write anything that mentions AI it’s inevitable that someone will object to the very usage of the term.
Strawman: “Don’t call it AI! It’s not actually intelligent—it’s just spicy autocomplete.”
That strawman is right: it’s not “intelligent” in the same way that humans are. And “spicy autocomplete” is actually a pretty good analogy for how a lot of these things work. But I still don’t think this argument is a helpful contribution to the discussion.
We need an agreed term for this class of technology, in order to have conversations about it. I think it’s time to accept that “AI” is good enough, and is already widely understood.
I’ve fallen into this trap myself. Every time I write a headline about AI I find myself reaching for terms like “LLMs” or “Generative AI”, because I worry that the term “Artificial Intelligence” over-promises and implies a false mental model of a sci-fi system like Data from Star Trek, not the predict-next-token technology we are building with today.
I’ve decided to cut that out. I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet.
The term “Artificial Intelligence” has been in use by academia since 1956, with the Dartmouth Summer Research Project on Artificial Intelligence—this field of research is nearly 70 years old now!
John McCarthy’s 1955 proposal for that workshop included this:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
I think this is a very strong definition, which fits well with the AI models and use-cases we are talking about today. Let’s use it.
Why not LLMs?
I’ve spent the past year mainly talking about LLMs—Large Language Models—often as an alternative to the wider term “AI”.
While this term is accurate, it comes with a very significant downside: most people still don’t know what it means.
I find myself starting every article with “Large Language Models (LLMs), the technology behind ChatGPT and Google Bard...”
I don’t think this is helpful. Why use a relatively obscure term I have to define every single time, just because the word “intelligence” in the common acronym AI might be rejected by some readers?
The term LLM is already starting to splinter as language models go multi-modal. I’ve seen the term “LMMs” for Large Multimodal Models start to circulate, which risks introducing yet another piece of jargon that people need to understand in order to comprehend my writing!
The argument against using the term AI
My link to this post on Mastodon has attracted thoughtful commentary that goes well beyond the strawman argument I posed above.
The thing that’s different with the current wave of AI tools, most notably LLMs, is that at first glance they really do look like the AI from science fiction. You can have conversations with them, they exhibit knowledge of a vast array of things, and it’s easy to form an initial impression of them that puts them at the same level of “intelligence” as Jarvis, or Data, or a T-800.
The more time you spend with them, and the more you understand about how they work, the more this illusion falls apart as their many flaws start to become apparent.
Where this gets actively harmful is when people start to deploy systems under the assumption that these tools really are trustworthy, intelligent systems—capable of making decisions that have a real impact on people’s lives.
This is the thing we have to fight back against: we need to help people overcome their science fiction priors, understand exactly what modern AI systems are capable of, how they can be used responsibly and what their limitations are.
I don’t think refusing to use the term AI is an effective tool to help us do that.
Let’s tell people it’s “not AGI” instead
If we’re going to use “Artificial Intelligence” to describe the entire field of machine learning, generative models, deep learning, computer vision and so on... what should we do about the science fiction definition of AI that’s already lodged in people’s heads?
Our goal here is clear: we want people to understand that the LLM-powered tools they are interacting with today aren’t actually anything like the omniscient AIs they’ve seen in science fiction for the past ~150 years.
Thankfully there’s a term that’s a good fit for this goal already: AGI, for “Artificial General Intelligence”. This is generally understood to mean AI that matches or exceeds human intelligence.
AGI itself is vague and infuriatingly hard to define, but in this case I think that’s a feature. “ChatGPT isn’t AGI” is an easy statement to make, and I don’t think its accuracy is even up for debate.
The term is right there for the taking. “You’re thinking about science fiction there: ChatGPT isn’t AGI, like in the movies. It’s just an AI language model that can predict next tokens to generate text.”
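That “predict next tokens” framing can be made concrete with a toy sketch. The probability table below is entirely made up (a real model scores tens of thousands of tokens using learned weights, not a lookup table), but the generation loop it drives is the same basic shape: score candidate next tokens given the context, pick one, append it, repeat.

```python
# Hypothetical next-token probabilities, standing in for a real model's
# output distribution. All values here are invented for illustration.
NEXT_TOKEN_PROBS = {
    ("The", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt: str, max_tokens: int = 4) -> str:
    """Generate text one token at a time using greedy decoding."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])  # condition on the last two tokens
        candidates = NEXT_TOKEN_PROBS.get(context)
        if not candidates:
            break  # the toy table has no continuation for this context
        # Greedy decoding: always take the highest-probability token.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("The cat"))  # "The cat sat on the mat"
```

Real systems sample from the distribution rather than always taking the top token, which is why the same prompt can produce different outputs, but the loop itself is this simple: no goals, no understanding, just one token after another.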
Miscellaneous additional thoughts
There’s so much good stuff in the conversation about this post. I already added the new sections Why not LLMs?, The argument against using the term AI and Let’s tell people it’s “not AGI” instead based on those comments.
I’ll collect a few more miscellaneous thoughts in this section, which I may continue to grow in the future.
- I’ve seen a few reactions to this post that appear to interpret me as saying “Everyone should be calling it AI. Stop calling it something else.” That really wasn’t my intention. If you want to use more accurate language in your conversations, go ahead! What I’m asking for here is that people try to resist the temptation to jump into every AI discussion with “well actually, AI is a bad name for it because...”, in place of more productive conversations.
- Academia really did go all-in on Artificial Intelligence, across many decades. The Stanford Artificial Intelligence Lab (SAIL) was founded by John McCarthy (who coined the term for the Dartmouth workshop) in 1963. The University of Texas Laboratory for Artificial Intelligence opened in 1983. Bernard Meltzer at the University of Edinburgh founded the Artificial Intelligence Journal in 1970.
- Industry has behaved slightly differently. Just 5-10 years ago my experience was that people building with these technologies tended to avoid the term “AI”, instead talking about “machine learning” or “deep learning”. That’s changed in the past few years, as the rise of LLMs and generative AI has produced systems that feel a little bit closer to the sci-fi version.
- The term AI winter was coined in 1984 to describe a period of reduced funding for AI research. There have been two major winters so far, and people are already predicting a third.
- The most influential organizations building Large Language Models today are OpenAI, Mistral AI, Meta AI, Google AI and Anthropic. All but Anthropic have AI in the title; Anthropic call themselves “an AI safety and research company”. Could it be that rejecting the term “AI” goes hand in hand with a disbelief in the value or integrity of this whole space?
- One of the problems with saying “it’s not actually intelligent” is that it raises the question of what intelligence truly is, and what capabilities a system would need in order to match that definition. This is a rabbit hole which I think can only act as a distraction from discussing the concrete problems with the systems we have today.
- John McCarthy later offered this explanation for why he picked the term in the first place:
[O]ne of the reasons for inventing the term “artificial intelligence” was to escape association with “cybernetics”. Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him.
So part of the reason he coined the term AI was kind of petty! This is also a neat excuse for me to link to one of my favourite podcast episodes, 99% Invisible on Project Cybersyn (only tangentially related but it’s so good!)
- Jim Gardner pointed out that the term AI "is polysemic. It means X to researchers, but Y to laypeople who only know of ChatGPT". I think this observation may be crucial to understanding why this topic is so hotly debated!
- Aside from confusion with science-fiction, one of the strongest reasons for people to reject the term AI is due to its association with marketing and hype. Slapping the label “AI” on something is seen as a cheap trick that any company can use to attract attention and raise money, to the point that some people have a visceral aversion to the term.
Here’s what Glyph Lefkowitz had to say about this last point:
A lot of insiders—not practitioners as such, but marketers & executives—use “AI” as the label not in spite of its confusion with the layperson’s definition, but because of it. Investors who vaguely associate it with machine-god hegemony assume that it will be very profitable. Users assume it will solve their problems. It’s a term whose primary purpose has become deceptive.
At the same time, a lot of the deception is unintentional. When you exist in a sector of the industry that the public knows as “AI”, that the media calls “AI”, that industry publications refer to as “AI”, that other products identify as “AI”, going out on a limb and trying to build a brand identity around pedantic hairsplitting around “LLMs” and “machine learning” is a massive uphill battle which you are disincentivized at every possible turn to avoid.
Glyph’s closing thought here reflects my own experience: I tried to avoid leaning too hard into the term “AI” myself, but eventually it felt like an uphill battle that was resulting in little to no positive impact.