After publishing this piece, I was contacted by Anthropic, who told me that Sonnet 3.7 would not be considered a 10^26 FLOP model and that it cost a few tens of millions of dollars to train, though future models will be much bigger.