Scaling laws allow us to precisely predict some coarse-but-useful measures of how capable future models will be as we scale them up along three dimensions: the amount of data they are fed, their size (measured in parameters), and the amount of computation used to train them (measured in FLOPs). [...] Our ability to make this kind of precise prediction is unusual in the history of software and unusual even in the history of modern AI research. It is also a powerful tool for driving investment since it allows R&D teams to propose model-training projects costing many millions of dollars, with reasonable confidence that these projects will succeed at producing economically valuable systems.
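The excerpt doesn't spell out the functional form, but a well-known published instance is the Chinchilla scaling law from Hoffmann et al. (2022), which predicts pretraining loss from parameter count and training tokens. Here's a minimal sketch using that paper's fitted constants; the function names and the worked example are illustrative, and a real project would refit the constants to its own training runs:

```python
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style loss estimate: L(N, D) = E + A / N^alpha + B / D^beta."""
    E, A, B = 1.69, 406.4, 410.7  # irreducible loss plus fitted scale terms
    alpha, beta = 0.34, 0.28      # fitted exponents from Hoffmann et al. (2022)
    return E + A / n_params**alpha + B / n_tokens**beta

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard approximation: roughly 6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# Illustrative example: a 70B-parameter model trained on 1.4T tokens
# (roughly Chinchilla's own scale).
N, D = 70e9, 1.4e12
print(f"predicted loss: {predicted_loss(N, D):.3f}")       # ~1.94
print(f"training compute: {training_flops(N, D):.2e} FLOPs")  # ~5.88e+23
```

The point of a formula like this is exactly the one the excerpt makes: once the constants are fitted from small, cheap training runs, the loss of a much larger run can be estimated before committing the millions of dollars of compute it requires.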