There’s been a lot of strange reporting recently about how ‘scaling is hitting a wall’. In a very narrow sense this is true: the largest new models were showing smaller score improvements on challenging benchmarks than their predecessors. In the larger sense it is false. Techniques like those which power o3 mean scaling is continuing, and if anything the curve has steepened; you now need to account for scaling both within the training of the model and in the compute you spend on it once it has been trained.
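
As a rough illustration of that last point, here is a toy sketch of how a benchmark score might depend on both train-time and test-time compute. The functional form, constants and exponents are entirely made-up assumptions for demonstration, not measured values:

```python
import math

def toy_score(train_flops: float, test_time_flops: float,
              a: float = 0.05, b: float = 0.03) -> float:
    """Hypothetical benchmark score in [0, 1) that improves with both
    train-time compute and compute spent at inference time.
    The power-law form and the constants a, b are illustrative assumptions."""
    return 1.0 - math.exp(
        -(a * math.log10(train_flops) + b * math.log10(test_time_flops))
    )

# Holding training compute fixed, spending more compute once the model is
# trained (longer reasoning chains, more sampled candidates) still lifts the score.
for test_time_flops in (1e9, 1e12, 1e15):
    print(f"{test_time_flops:.0e} test-time FLOPs -> {toy_score(1e25, test_time_flops):.3f}")
```

The point of the sketch is simply that the curve now has two inputs: improvements can come from scaling either axis, which is why flat scores from training alone don’t mean scaling overall has stopped.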