In my experience with AI coding, very large context windows aren't useful in practice. Every model seems to get confused when you feed it more than ~25-30k tokens: it stops obeying its system prompt, can't correctly find or transcribe pieces of code from the context, etc.
While developing aider, I've seen this problem with gpt-4o, Sonnet, DeepSeek, etc., and many aider users report it too. It's perhaps the #1 problem users have, so I created a dedicated help page.
Very large context may be useful for certain tasks with lots of "low value" context. But for coding, it seems to lure users into a problematic regime.
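To make the practical takeaway concrete, here is a minimal sketch (not aider's actual implementation) of how you might keep an eye on the amount of context you're about to send, using the tiktoken library and the ~25k-token figure from my experience above. The function names, the fallback encoding, and the warning behavior are all my own illustration.

```python
import tiktoken

CONTEXT_WARN_TOKENS = 25_000  # rough threshold from experience, not a hard limit


def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens roughly the way the target model's tokenizer would."""
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model name: fall back to a recent OpenAI encoding.
        enc = tiktoken.get_encoding("o200k_base")
    return len(enc.encode(text))


def check_context(chunks: list[str]) -> int:
    """Total the tokens across all context chunks and warn past the threshold."""
    total = sum(count_tokens(chunk) for chunk in chunks)
    if total > CONTEXT_WARN_TOKENS:
        print(f"warning: {total} tokens of context; "
              f"models tend to degrade past ~{CONTEXT_WARN_TOKENS}")
    return total
```

Calling something like `check_context([system_prompt, file_a, file_b])` before making the request at least surfaces when you've drifted into the regime where responses tend to degrade.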