Talking AI and jobs with Natasha Zouves for News Nation
30th May 2025
I was interviewed by News Nation’s Natasha Zouves about the very complicated topic of how we should think about AI in terms of threatening our jobs and careers. I previously talked with Natasha two years ago about Microsoft Bing.
I’ll be honest: I was nervous about this one. I’m not an economist and I didn’t feel confident talking about this topic!
I do find the challenge of making recent advances in AI and LLMs accessible to a general audience absolutely fascinating though, so I took the risk and agreed to the interview.
I think it came out very well. The full hour-long video is now available on the News Nation YouTube channel, or as an audio podcast on iTunes or Spotify.
I made my own transcript of the video (using MacWhisper) and fed it into the new Claude Opus 4 model to see if it could do a good job of turning that into an outline of the episode, with links to segments, short summaries and illustrative quotes. It did such a good job that I’m including it here on my blog—I very rarely publish AI-produced text of this length, but in this case I think it’s justified—especially since most of it is direct quotes from things I said (and have confirmed I said) during the episode.
I ran this command (using my LLM tool):
llm -m claude-4-opus -f transcript.md -s 'Create a markdown outline list of topics covered by this talk. For each topic have a title that links to that point in the video and a single sentence paragraph summary of that section and two or three of the best illustrative quotes. The YouTube video URL is https://www.youtube.com/watch?v=RIvIpILrNXE - use that to link to the exact moments in the video.'
It cost me 23,942 input tokens and 2,973 output tokens, which for Claude Opus 4 adds up to 58 cents.
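That 58 cent figure can be sanity-checked with a couple of lines of Python, assuming Anthropic's published Claude Opus 4 rates at the time of $15 per million input tokens and $75 per million output tokens (the rates here are my assumption, not something stated in the post):

```python
# Rough cost check for the Claude Opus 4 call.
# Assumed rates: $15 / million input tokens, $75 / million output tokens.
INPUT_RATE = 15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 75 / 1_000_000  # dollars per output token

cost = 23_942 * INPUT_RATE + 2_973 * OUTPUT_RATE
print(f"${cost:.2f}")  # prints $0.58
```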
Claude included the relevant timestamps from the transcript. I ended up tweaking those a little to ensure they included the introductory context to the session.
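The deep links themselves are just the video URL with a `&t=` parameter in seconds. A minimal helper for converting a `m:ss` or `h:mm:ss` timestamp into that form might look like this (a sketch of the general technique, not the exact code used here):

```python
def youtube_link(video_url: str, timestamp: str) -> str:
    """Turn a m:ss or h:mm:ss timestamp into a YouTube deep link."""
    seconds = 0
    for part in timestamp.split(":"):
        seconds = seconds * 60 + int(part)
    return f"{video_url}&t={seconds}s"

print(youtube_link("https://www.youtube.com/watch?v=RIvIpILrNXE", "0:46"))
# https://www.youtube.com/watch?v=RIvIpILrNXE&t=46s
```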
The economic disruption nightmare scenario (0:46)
Simon discusses his primary concern about AI’s impact on employment and the economy. He explains that while skeptical of AGI claims, he sees real job impacts already happening, particularly for information workers and programmers.
- “The biggest nightmare scenario for me, or the more realistic one is the economic disruption this causes”
- “If you have a job that primarily deals with handling information, this stuff is a very powerful tool to help with that. And maybe that results in job losses”
- “This stuff is incredibly good at writing software, which was a huge surprise to everyone”
Jobs most vulnerable to AI: translation and information processing (2:12)
The conversation explores how jobs involving information transformation are already being affected, with translation services as a prime example. Simon explains how translators have shifted from doing translations to reviewing AI-generated work.
- “Something we’ve seen already is jobs that are purely about transforming information from one shape to another are already being affected quite heavily”
- “It’s not so much that they’re put out of work. It’s that their job has changed from doing the translation to reviewing translations created by machines”
- “Paralegals, who are assisting lawyers in going through contracts and so forth, a lot of what they do is beginning to be impacted by these tools as well”
The jagged frontier: what AI can and cannot do (3:33)
Simon introduces the concept of AI’s “jagged frontier”—the unpredictable boundary between tasks AI excels at and those it fails at. He emphasizes that discovering these boundaries requires constant experimentation.
- “There are things that AI is really good at and there’s things that AI is terrible at, but those things are very non-obvious”
- “The only way to find out if AI can do a task is to sort of push it through the AI, try it lots of different times”
- “People are still finding things that it can’t do, finding things that it can do, and trying to explore those edges”
AI’s strength: processing and synthesizing large documents (4:16)
Simon details how AI excels at answering questions about information you provide it, making it valuable for document analysis and synthesis. He particularly highlights its surprising capability in code generation.
- “You can paste in a hundred-page document and ask it questions about the information in that document”
- “AI is shockingly good at writing code for computers”
- “If you can describe what you need, the AI can churn out hundreds of lines of code that do exactly that”
The hallucination problem: AI’s critical weakness (5:28)
A detailed discussion of AI hallucination—when models confidently state false information. Simon provides examples including lawyers citing non-existent cases and explains why this is such a fundamental limitation.
- “AI makes mistakes a lot... it feels like it’s a science fiction AI that knows everything and answers instantly and always gets everything right. And it turns out that’s not what they are at all”
- “Really what these things are doing is they’re trying to give you something that sounds convincing. They’ve been trained to output convincing texts, but convincing isn’t the same thing as truth”
- “A bunch of lawyers have got caught out where they’ll in their lawsuits, they’ll say, and in the case, so-and-so versus so-and-so this thing happened. And then somebody looks it up and the case didn’t exist”
Customer service AI: the failed revolution (8:32)
Simon discusses Klarna’s reversal on AI customer service, explaining why human customers resist AI support and the ethical concerns around disclosure.
- “They announced a reversal of that. They said they’re hiring humans back again... because it turns out human beings hate talking to an AI as customer support”
- “I think it’s deeply unethical to present a customer with an AI support bot without letting them know that it’s AI”
- “If you’re talking to customer support, sometimes it’s because you’ve hit an edge case... which is that the thing that you’re trying to do just isn’t one of those normal things that the AI have been trained on”
The trucking industry and self-driving vehicles (10:58)
A sobering discussion about the future of trucking jobs in light of advances in self-driving technology, particularly Waymo’s success in San Francisco.
- “I’m more nervous about that now than I was a year ago, because like self driving cars have been coming soon in the future for like over a decade”
- “We now have these self driving taxis, which actually do work... They’ve been operating on the roads of San Francisco for a couple of years now. And they’re good”
- “Given how well Waymo is now working, it does feel to me like we might see functional self driving trucks at some point within the next five to 10 years”
Journalism and financial analysis: why human judgment matters (15:44)
Simon strongly defends journalism against AI replacement, explaining why human judgment and verification skills remain crucial in fields dealing with truth and trust.
- “The single biggest flaw of AI is that it is gullible... they have absolutely no instincts for telling if something is true or not”
- “Journalism is the art of absorbing information from a huge array of untrustworthy sources and figuring out what is the truth in amongst all of this”
- “If you want to analyze 10,000 police reports and figure out what the overall trends are... If the AI can read those 10,000 things and give you leads on which ones look most interesting, it almost doesn’t matter if it makes mistakes”
AI’s telltale signs: the “delve” phenomenon (17:49)
An fascinating (note: Claude used “an fascinating” rather than “a fascinating”, what a weird mistake!) explanation of how to spot AI-generated text, including the surprising linguistic influence of Nigerian English on AI models.
- “There’s this magical thing where the word delve is surprisingly common in AI generated text. If something says that it’s going to delve into something, that’s an instant red flag”
- “A lot of that work was outsourced to people in Nigeria a couple of years ago... Nigerian English is slightly different from American English. They use the word delve a whole lot more”
- “One of the thrilling things about this field is the people building this stuff don’t really understand how it works”
Voice cloning and scams: the dark side of AI (21:47)
Simon discusses the serious threat of voice cloning technology and romance scams, explaining how AI makes these scams cheaper and more scalable.
- “There are a lot of systems now that can clone your voice to a very high degree based on 10 to 15 seconds of samples”
- “When you hear somebody on the phone with a voice, you can no longer be at all sure that that person is the person that they sound like”
- “Romance scams... were being run by human beings... Now you don’t even need that. The AI models are extremely good at convincing messages”
AI-proofing your career: learning and adaptation (26:52)
Simon provides practical advice for workers concerned about AI, emphasizing how AI can actually help people learn new skills more easily.
- “One of the most exciting things to me personally about AI is that it reduces the barrier to entry on so many different things”
- “There’s never been a better time to learn to program. Because that frustration, that learning curve has been shaved down so much”
- “If you’re AI literate, if you can understand what these tools can do and how to apply them and you have literacy in some other field, that makes you incredibly valuable”
Safe sectors: the trades and human touch (30:01)
Discussion of jobs that are more resistant to AI disruption, particularly skilled trades and roles requiring physical presence.
- “The classic example is things like plumbing. Like plumbing and HVAC... it’s going to be a very long time until we have an AI plumber”
- “I don’t think AI eliminates many jobs. I think it greatly changes how they work”
- “You could be the AI-enabled botanist who helps all of the companies that run nurseries and so forth upgrade their processes”
Creative industries: the human advantage (34:37)
Simon explains why human creativity remains valuable despite AI’s capabilities, using examples from film and art.
- “Novelty is the one thing that AI can’t do because it’s imitating the examples that it’s seen already”
- “If a human being with taste filtered that, if it got the AI to write 20 stories and it said, okay, this is the most interesting and then added that human flavor on top, that’s the point where the thing starts to get interesting”
- “I love the idea that creative people can take on more ambitious projects, can tell even better stories”
AI security and the gullibility problem (46:51)
A deep dive into the unsolved security challenges of AI systems, particularly their susceptibility to manipulation.
- “We’re building these systems that you can talk to and they can do things for you... And we have no idea how to make this secure”
- “The AI security problem comes down to gullibility”
- “They don’t yet have a way of telling the difference between stuff that you tell them to do and stuff that other people tell them to do”
The global AI race and competition (52:14)
Simon discusses concerns about international AI competition and how it affects safety considerations.
- “The thing that frightens me most is the competition... In the past 12 months, there are, I counted, 18 organizations that are putting out these ChatGPT style models”
- “They’re all competing against each other, which means they’re taking shortcuts. The safety research they’re paying less attention to”
- “Chinese AI lab called DeepSeek came up with more optimized methods... they managed to produce a model that was as good as the OpenAI ones for like a 20th of the price”
Getting started with AI: practical tips (57:34)
Simon provides concrete advice for beginners wanting to explore AI tools safely and productively.
- “The best way to learn about this stuff is to play with it, is to try and do ridiculous things with it”
- “A friend of mine says you should always bring AI to the table. Like any challenge that you have, try it against the AI, even if you think it’s not going to work”
- “One exercise I really recommend is try and get an AI to make a mistake as early as possible... the first time you see it very confidently tell you something that’s blatantly not true, it sort of inoculates you”