What AI can do for you on the Theory of Change podcast
Matthew Sheffield invited me on his show Theory of Change to talk about how AI models like ChatGPT, Bing and Bard work and practical applications of things you can do with them.
The episode is available on SoundCloud and various podcast platforms (here’s Apple Podcasts), or you can watch it on YouTube. I’ve also embedded the video below.
Our full conversation is nearly an hour and twenty minutes long! There’s a transcript on the site which includes additional links.
I’ll quote one portion from towards the end of the interview, about ways to learn more about how to use these models:
WILLISON: Websites pop up every day that claim to help you with AI, to be honest, at a rate that’s too fast to even evaluate them and figure out which ones are good and which ones are snake oil. The thing that matters is actually interacting with these systems. You should be playing with Google Bard, and ChatGPT, and Microsoft Bing, and trying things out with a very skeptical approach.
Always assume that anything it does, it could be making things up. It could be tricking you into thinking that it’s capable of something that it’s not. That’s why you have to learn to experiment. You have to try different things—give it a URL, then give it a broken URL and see how it behaves differently.
Because that really is the most reliable way to get stuff done here: to build that crucial mental model of what these things can do and what they can’t. And it’s full of pitfalls. It’s so easy to fall into traps. So you do need to read around this stuff and find communities of people who are experimenting with it alongside you.
Unfortunately, I don’t think there’s an easy answer to the question yet of how to learn to use these effectively, partly because ChatGPT isn’t even four months old yet. Its four-month birthday is on the 30th of March. All of this stuff is so new, we’re all figuring it out together. The key thing is, because it’s all so new, you need to hang out with other people.
You need to get involved with communities who are figuring this out. Share what you learn, see what other people learn, and basically try and help society as a whole come to terms with what these things even are and what we can do with them.
So that’s, I think, one of my big personal ethical concerns: you should share your prompts. There are websites where you can sell prompts to people. No, no, no, no. Don’t do that. Share your prompts with other people and get them to share their prompts back. We are all in this together. Sharing the prompts that work for you and the prompts that don’t is the fastest way you can learn, and the fastest way you can help other people learn as well.
A shorter version of the above: share your prompts! We’re all in this together. We have so much that we still need to figure out.