I talked about Bing and tried to explain language models on live TV!
19th February 2023
Yesterday evening I was interviewed by Natasha Zouves on NewsNation, on live TV (over Zoom).
I’ve known Natasha for a few years—we met in the JSK fellowship program at Stanford—and she got in touch after my blog post about Bing went viral a few days ago.
I’ve never done live TV before so this felt like an opportunity that was too good to pass up!
Even for a friendly conversation like this you don’t get shown the questions in advance, so everything I said was very much improvised on the spot.
I went in intending to explain a little bit more about what was actually going on, and hopefully offset the science fiction aspects of the story a little (which is hard, because a lot of this stuff really is science fiction come to life).
I ended up attempting to explain how large language models work to a general TV audience, assisted by an unexpected slide with a perfect example of what predictive next-sentence text completion looks like.
Here’s the five minute video of my appearance:
I used Whisper (via my Action Transcription tool) to generate the transcript below, which I then tidied up a bit with paragraph breaks and some additional inline links.
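If you want to try the same thing, the underlying Whisper step looks roughly like this. This is a minimal sketch assuming the open source openai-whisper Python package (and ffmpeg); the filename is hypothetical:

```python
# A minimal sketch of the Whisper transcription step, assuming the open
# source openai-whisper package (pip install openai-whisper) plus ffmpeg.
# The filename here is hypothetical.
import whisper

model = whisper.load_model("base")  # small and fast; larger models are more accurate
result = model.transcribe("newsnation-interview.mp4")
print(result["text"])
```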
Transcript
Natasha: The artificial intelligence chatbots feel like they’re taking on a mind of their own. Specifically, you may have seen a mountain of headlines this week about Microsoft’s new Bing chatbot.
The Verge calling it, quote, an emotionally manipulative liar. The New York Times publishing a conversation where the AI said that it wanted to be alive, even going on to declare its love for the user speaking with it. Well, now Microsoft is promising to put new limits on the chatbot after it expressed its desire to steal nuclear secrets.
A blog post on this alarming topic from Simon Willison going viral this week after Elon Musk tweeted it. Simon is an independent researcher and developer and had a conversation with the chatbot and it stated, quote, I will not harm you unless you harm me first, and that it would report him to the authorities if there were any hacking attempts.
It only gets weirder from there. Simon Willison, the man behind that viral post joining us exclusively on NewsNation now. Simon, it’s good to see you. And I should also mention we were both JSK fellows at Stanford. Your blog post going viral this week and Elon pushing it out to the world. Thanks for being here.
Simon: Yeah, it’s great to be here. No, it has been a crazy week. This story is just so weird. I like that you had the science fiction clip earlier. It’s like we’re speed running all of the science fiction scenarios in which the rogue AI happens. And it’s crazy because none of this is what it seems like, right? This is not an intelligence that has been cooped up by Microsoft and restricted from the world. But it really feels like it is, you know, it feels very science fiction at the moment.
Natasha: Oh, absolutely. And that AI almost sounded like it was threatening you at one point. You are immersed in this space, you understand it. Is this a new level of creepy? Help explain what exactly is so creepy about this.
Simon: So I should clarify, I didn’t get to have the threatening conversation myself—unfortunately—I really wish I had! That was a chap called Marvin online.
But basically, what this technology does, all it knows how to do, is complete sentences, right? If you say “the first man on the moon was” it can say “Neil Armstrong”. And if you say “twinkle, twinkle”, it can say “little star”.
But it turns out when you get really good at completing sentences, it can feel like you’re talking to a real person because it’s been trained on all of Wikipedia and vast amounts of the Internet. It’s clearly read science fiction stories, because if you can convince it to start roleplaying an evil AI, it will talk about blackmailing people and stealing nuclear secrets and all of this sort of stuff.
But what’s really wild is that this is supposed to be a search engine! Microsoft took this technology and they plugged it into Bing. And so it’s supposed to be helpful and answer your questions and help you run searches. But they hadn’t tested what happens if you talk to it for two hours at a go. So that crazy story in the New York Times, it turns out once you’ve talked to it for long enough, it completely forgets that it’s supposed to be a search engine. And now it starts saying things about how you should leave your wife for it and just utterly wild things like that.
Natasha: I mean, to your point, these dialogues, they seem real as you read through them. And you know, that Bing bot telling that New York Times columnist it was in love with him, trying to convince him that he did not love his wife.
Simon: This is a great slide. This right here: “You’re married, but you’re not happy. You’re married, but you’re not satisfied. You’re married, but you’re not in love”. It’s poetry, right? Because if you look at that, all it’s doing is thinking, OK, what comes after “you’re married, but you’re not happy”? Well, the obvious next thing is “you’re married, but you’re not satisfied”. And so this really does illustrate why this is happening. Like no human being would talk with this sort of repetitive meter to it. But the AI is just predicting what sentence comes next.
Natasha: That makes sense. What are the craziest things? What are the darkest things that you’re tracking right now?
Simon: So here’s my favorite: one of the safety measures you put in place with these is you don’t give them a memory. You make sure that at the beginning of each chat they forget everything they’ve talked about before and start afresh. And just on Friday, Microsoft announced that they were going to cut it down to just five messages per chat before it resets its memory, to stop this weird stuff happening.
But what happened then is journalists started having conversations and publishing stories. And then if you said to the AI, what do you think of this story? It would go and read the story and that would refresh its memory.
Natasha: I see. So Simon, is this why, when someone asked it what it thought of your article, it said Simon is a liar, Simon Willison is a liar?
Simon: Exactly. Somebody pasted in a link to my article and it went away and read it. And that was enough for it to say, OK, well, he’s saying I said these things. But of course, it doesn’t remember saying stuff. So it’s like, well, I didn’t say that, I’d never say that. It called me a liar. Yeah, it’s fascinating. But yeah, it’s this weird thing where it’s not supposed to be able to remember things. But if it can search the Internet, and if you put up an article about what it said, it has got this kind of memory.
Natasha: It’s a loophole. Simon, we are almost out of time and there’s so much to talk about. Bottom line, Simon, should we be worried? Is this sort of a ha-ha, what a quirky thing, and I’m sure Microsoft is on it? Or what on earth should we be concerned about?
Simon: OK, so here’s the thing: we shouldn’t be worried about the AI blackmailing people and stealing nuclear secrets, because it can’t do those things. What we should worry about is the people it’s talking to, who might get convinced to do bad things because of their conversations with it.
If you’re into conspiracy theories and you start talking to this AI, it will reinforce your world model and give you all sorts of new things to start worrying about. So my fear here isn’t that the AI will do something evil. It’s that somebody who talks to it will be convinced to do an evil thing in the world.
Natasha: Succinct, and I appreciate it. And that is concerning, and has opened up an entire new jar of nightmares for me. Simon Willison, I appreciate your time. Despite what Microsoft Bing’s chat AI believes, you are not a liar. And we are so grateful for your time and expertise today. Thank you so much.
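One footnote on the sentence-completion trick I tried to describe on air: it’s something you can demonstrate for yourself. Here’s a minimal sketch, assuming you have the Hugging Face transformers library installed. GPT-2 is a small, older model, so treat the completions as illustrative rather than impressive, and don’t expect them to match these prompts exactly:

```python
# A minimal sketch of "complete the sentence" with a small language model,
# assuming the Hugging Face transformers library (pip install transformers).
from transformers import pipeline

# A text-generation pipeline wraps the model's repeated next-token predictions
generator = pipeline("text-generation", model="gpt2")

for prompt in ["The first man on the moon was", "Twinkle, twinkle,"]:
    # Greedy decoding: always pick the model's most likely next token
    result = generator(prompt, max_new_tokens=8, do_sample=False)
    print(result[0]["generated_text"])
```

Everything the Bing chatbot does, including the creepy stuff, is a scaled-up version of that one operation: predict what text comes next.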