Simon Willison’s Weblog

Stepping back, though, the very speed with which ChatGPT went from a science project to 100m users might have been a trap (a little as NLP was for Alexa). LLMs look like they work, and they look generalised, and they look like a product - the science of them delivers a chatbot and a chatbot looks like a product. You type something in and you get magic back! But the magic might not be useful, in that form, and it might be wrong. It looks like product, but it isn’t. [...]

LLMs look like better databases, and they look like search, but, as we’ve seen since, they’re ‘wrong’ enough, and the ‘wrong’ is hard enough to manage, that you can’t just give the user a raw prompt and a raw output - you need to build a lot of dedicated product around that, and even then it’s not clear how useful this is.

Benedict Evans