Weeknotes: NICAR, and an appearance on KQED Forum
7th March 2023
I spent most of this week at NICAR 2023, the data journalism conference hosted this year in Nashville, Tennessee.
This was my third in-person NICAR and it’s an absolute delight: NICAR is one of my favourite conferences to go to. It brings together around a thousand journalists who work with data, from all over the country and quite a few from the rest of the world.
People have very different backgrounds and experiences, but everyone has one thing in common: a nerdy obsession with using data to find and tell stories.
I came away with at least a year’s worth of new ideas for things I want to build.
I also presented a session: an hour-long workshop titled “Datasette: An ecosystem of tools for exploring data and collaborating on data projects”.
I demonstrated the scope of the project, took people through some hands-on exercises derived from the Datasette tutorials Cleaning data with sqlite-utils and Datasette and Using Datasette in GitHub Codespaces, and invited everyone in the room to join the Datasette Cloud preview and try using datasette-socrata to import and explore some data from the San Francisco open data portal.
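If you want to try the sqlite-utils portion of that at home, the core workflow looks something like this (a minimal sketch using the sqlite-utils Python API; the database name and rows here are invented for illustration):

```python
import sqlite_utils

# Open (or create) a SQLite database file
db = sqlite_utils.Database("sf_trees.db")

# Insert rows - in the workshop these came from a CSV export
# of the San Francisco open data portal (rows invented here)
db["trees"].insert_all([
    {"id": 1, "species": "Coast Live Oak", "planted": "2019-04-01"},
    {"id": 2, "species": "Monterey Pine", "planted": "2020-06-15"},
], pk="id")

# A typical cleanup step: pull a repeated text column out
# into a separate lookup table
db["trees"].extract("species")

print(db.schema)
```

Then `datasette sf_trees.db` opens the result in your browser for exploration.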
My goal for this year’s NICAR was to set up some direct collaborations with working newsrooms. Datasette is ready for this now, and I’m willing to invest significant time and effort in onboarding newsrooms, helping them start using the tools and learning what I need to do to help them be more effective in that environment.
If your newsroom is interested in that, please drop me an email at swillison@ Google’s email service.
KQED Forum
My post about Bing attracted attention from the production team at KQED Forum, a long-running and influential Bay Area news discussion radio show.
They invited me to join a live panel discussion on Thursday morning with science-fiction author Ted Chiang and Claire Leibowicz from the Partnership on AI.
I’ve never done live radio before, so this was an opportunity that was too exciting to miss. I ducked out of the conference for an hour to join the conversation via Zoom.
Aside from a call with a producer a few days earlier I didn’t have much of an idea what to expect (similar to my shorter live TV appearance). You really have to be able to think on your feet!
A recording is available on the KQED site, and on Apple Podcasts.
I’m happy with most of it, but I did have one offensive and embarrassing slip-up. I was talking about the Kevin Roose Bing conversation from the New York Times, where Bing declared its love for him. I said (05:30):
So I love this particular example because it actually accidentally illustrates exactly how these things work.
All of these chatbots, all of these language models as they’re called, all they can do is predict sentences.
They predict the next word that statistically makes sense given what’s come before.
And if you look at the way it talks to Kevin Roose, I’ve got a quote.
It says, “You’re married, but you’re not happy. You’re married, but you’re not satisfied. You’re married, but you’re not in love.”
No human being would talk like that. That’s practically a kind of weird poetry, right?
But if you’re thinking about it in terms of, OK, what sentence should logically come after this sentence?
“You’re not happy, and then you’re not satisfied”, and then “you’re not in love”—those just work. So Kevin managed to get himself into the situation where this bot was way off the reservation.
This is one of the most monumental software bugs of all time.
This was Microsoft’s Bing search engine. They had a bug in their search engine where it would try and get a user to break up with their wife!
That’s absolutely absurd.
But really, all it’s doing is it had got itself to a point in the conversation where it’s like: Okay, well, I’m in the mode of trying to talk about why a marriage isn’t working.
What comes next? What comes next? What comes next?
In talking about Bing’s behaviour I’ve been trying to avoid words like “crazy” and “psycho”, because those stigmatize mental illness. I try to use terms like “wild” and “inappropriate” and “absurd” instead.
But saying something is “off the reservation” is much worse!
The term is deeply offensive, based on a dark history of forced relocation of Native Americans. I used it here thoughtlessly. If you asked me to think for a moment about whether it was an appropriate phrase I would have identified that it wasn’t. I’m really sorry to have said this, and I will be avoiding this language in the future.
I’ll share a few more annotated highlights from the transcript, thankfully without any more offensive language.
Here’s my response to a question about how I’ve developed my own understanding of how these models actually work (19:47):
I’m a software engineer. So I’ve played around with training my own models on my laptop. I found an example where you can train one just on the complete works of Shakespeare and then have it spit out garbage Shakespeare, which has “thee” and “thus” and so forth.
And it looks like Shakespeare until you read a whole sentence and you realize it’s total nonsense.
I did the same thing with my blog. I’ve got like 20 years of writing that I piped into it and it started producing sentences which were clearly in my tone even though they meant nothing.
It’s so interesting seeing it generate these sequences of words in kind of a style but with no actual meaning to them.
And really that’s exactly the same thing as ChatGPT. It’s just that ChatGPT was fed terabytes of data and trained for months and months and months, whereas I fed in a few megabytes of data and trained it for 15 minutes.
So that really helps me start to get a feel for how these things work. The most interesting thing about these models is that there’s this sort of inflection point in size: as you train bigger and bigger models they don’t really get better, up until a certain point where suddenly they start gaining these capabilities.
They start being able to summarize text and generate poems and extract things into bullet pointed lists. And the impression I’ve got from the AI research community is people aren’t entirely sure that they understand why that happens at a certain point.
A lot of AI research these days is just, let’s build it bigger and bigger and bigger and play around with it. And oh look, now it can do this thing. I just saw this morning that someone’s got it playing chess. It shouldn’t be able to play chess, but it turns out the Bing one can play chess and like nine out of ten of the moves it generates are valid moves and one out of ten are rubbish because it doesn’t have a chess model baked into it.
So this is one of the great mysteries of these things, is that as you train them more, they gain these capabilities that no one was quite expecting them to gain.
Another example of that: these models are really good at writing code, like writing actual code for software, and nobody really expected that to be the case, right? They weren’t designed as things that would replace programmers, but actually the results you can get out of them if you know how to use them in terms of generating code can be really sophisticated.
One of the most important lessons I think is that these things are actually deceptively difficult to use, right? It’s a chatbot. How hard can it be? You just type things and it says things back to you.
But if you want to use it effectively, you have to understand pretty deeply what its capabilities and limitations are. If you try and give it mathematical puzzles, it will fail miserably because despite being a computer—and computers should be good at maths!—that’s not something that language models are designed to handle.
And it’ll make things up left, right, and center, which is something you need to figure out pretty quickly. Otherwise, you’re gonna start believing just garbage that it throws out at you.
So there’s actually a lot of depth to this. I think it’s worth investing a lot of time just playing games with these things and trying out different stuff, because it’s very easy to use them incorrectly. And there’s very little guidance out there about what they’re good at and what they’re bad at. It takes a lot of learning.
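If you want to get a feel for that “garbage Shakespeare” experiment without training a neural network at all, a character-level Markov chain captures the same predict-the-next-token idea in a few lines of Python. To be clear, this is a much simpler stand-in for the small neural model I actually trained, but the spirit is the same (shakespeare.txt is a placeholder for any large text file):

```python
import random
from collections import defaultdict

def train(text, order=6):
    # Map each six-character context to every character that followed it
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=300):
    order = len(seed)
    out = seed
    for _ in range(length):
        context = out[-order:]
        choices = model.get(context)
        if not choices:
            break  # this context never appeared in the training text
        out += random.choice(choices)  # "predict" the next character
    return out

text = open("shakespeare.txt").read()  # placeholder: any large text file
model = train(text)
print(generate(model, seed=text[:6]))
```

Point it at 20 years of your own blog posts instead and you’ll get sentences in something like your own tone that mean nothing at all.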
I was happy with my comparison of writing cliches to programming. A caller had mentioned that they had seen it produce an answer to a coding question that invented an API that didn’t exist, causing them to lose trust in it as a programming tool (23:11):
I can push back slightly on this example. That’s absolutely right. It will often invent API methods that don’t exist. But as somebody who creates APIs, I find that really useful because sometimes it invents an API that doesn’t exist, and I’ll be like, well, that’s actually a good idea.
Because the thing it’s really good at is consistency. And when you’re designing APIs, consistency is what you’re aiming for. So, you know, in writing, you want to avoid cliches. In programming, cliches are your friend. So, yeah, I actually use it as a design assistant where it’ll invent something that doesn’t exist. And I’ll be like, okay, well, maybe that’s the thing that I should build next.
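Here’s a hypothetical illustration of that pattern, with entirely invented method names:

```python
class ProjectClient:
    """An imaginary API client, purely to illustrate the point."""

    def fetch_item(self, item_id): ...
    def fetch_items(self, item_ids): ...
    def delete_item(self, item_id): ...

# Ask a language model to write code against this client and sooner
# or later it will confidently call client.delete_items(item_ids) -
# a method that doesn't exist, but which the consistency of the API
# practically demands. That's the kind of "hallucination" I treat as
# a design hint.
```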
A caller asked “Are human beings not also statistically created language models?”. My answer to that (at 35:40):
So I’m not a neurologist, so I’m not qualified to answer this question in depth, but this does come up a lot in AI circles. In the discourse, yeah.
Yes, so my personal feeling on this is there is a very small part of our brain that kind of maybe works a little bit like a language model. You know, when you’re talking, it’s pretty natural to think what word’s going to come next in that sentence.
But I’m very confident that that’s only a small fraction of how our brains actually work. When you look at these language models like ChatGPT today, it’s very clear that if you want to reach this mythical AGI, this general intelligence, it’s going to have to be a heck of a lot more than just a language model, right?
You need to tack on models that can tell truth from fiction and that can do sophisticated planning and do logical analysis and so forth. So yeah, my take on this is, sure, there might be a very small part of how our brains work that looks a little bit like a language model if you squint at it, but I think there’s a huge amount more to cognition than just the tricks that these language models are doing.
These transcripts were all edited together from an initial attempt created using OpenAI Whisper, running directly on my Mac using MacWhisper.
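MacWhisper is a nice GUI wrapper; if you would rather script the same job, the underlying openai-whisper Python package works like this (a minimal sketch, with a placeholder filename for the recording):

```python
import whisper  # pip install openai-whisper

# "base" is a small, fast model; larger ones ("medium", "large")
# are noticeably more accurate but slower
model = whisper.load_model("base")

result = model.transcribe("kqed-forum.mp3")  # placeholder filename
print(result["text"])
```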
Releases this week
- datasette-simple-html: 0.1—2023-03-01
  Datasette SQL functions for very simple HTML operations
- datasette-app: 0.2.3—(5 releases total)—2023-02-27
  The Datasette macOS application