25th April 2024
I’ve been at OpenAI for almost a year now. In that time, I’ve trained a lot of generative models. [...] It’s becoming awfully clear to me that these models are truly approximating their datasets to an incredible degree. [...] What this manifests as is: trained on the same dataset for long enough, pretty much every model with enough weights and training time converges to the same point. [...] This is a surprising observation! It implies that model behavior is not determined by architecture, hyperparameters, or optimizer choices. It’s determined by your dataset, nothing else. Everything else is a means to an end in efficiently delivering compute to approximate that dataset.
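The convergence claim can be seen in miniature. Here's a small sketch (mine, not from the post; it assumes scikit-learn and NumPy, and the dataset and architectures are invented for illustration): two MLPs with deliberately different shapes, trained on the same noisy data, end up making nearly interchangeable predictions.

```python
# Toy illustration of dataset-determined behavior (a sketch, not the
# post's experiment): two different architectures, same dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=5000)  # noisy sin(x)

# Same data, two deliberately different architectures.
wide = MLPRegressor(hidden_layer_sizes=(256,), max_iter=2000,
                    random_state=0).fit(X, y)
deep = MLPRegressor(hidden_layer_sizes=(32, 32, 32), max_iter=2000,
                    random_state=1).fit(X, y)

# Both models should approximate the same function of the data.
X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
gap = np.max(np.abs(wide.predict(X_test) - deep.predict(X_test)))
print(f"max disagreement between the two models: {gap:.3f}")
```

The intuition: under squared-error loss the optimal predictor is E[y|x], which depends only on the data distribution. Any sufficiently expressive, well-trained model approximates that same target, so the architecture largely washes out.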