1st June 2025
I've been having some good results recently asking reasoning LLMs for "implementation plans" for features I'm working on.
I dump in either the whole codebase or the most relevant sections, followed by my issue thread with its several comments describing the planned feature, then ask the model to provide an implementation plan for what I've outlined so far.
I'm finding this really valuable, because the model will often spot corners of the codebase that will need to be changed that I haven't thought about yet.
My two preferred models for this at the moment are Gemini 2.5 Flash and o4-mini: they both have reasoning abilities, long context support (1 million tokens for Flash, 200,000 for o4-mini) and they're both really cheap. Most of the time the prompt costs me 15 cents or less, depending on the amount of code I feed in.
They rarely get the implementation plan exactly right, but that doesn't matter: what I'm looking for with these prompts is hints that tip me off to parts of the codebase I might not have considered yet.
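The prompt-assembly step can be sketched in a few lines of Python. This is a minimal illustration of the workflow described above, not the tooling I actually use: the function name, file-extension filter, and prompt wording are all my own invention, and the resulting string would then be fed to whatever model API or CLI you prefer.

```python
from pathlib import Path

def build_plan_prompt(code_dir, issue_comments, extensions=(".py", ".md")):
    """Concatenate relevant source files and issue comments into a single
    prompt asking a model for an implementation plan.

    Hypothetical helper for illustration - names and structure are made up.
    """
    parts = []
    # Include every matching file, each prefixed with its path so the
    # model can refer back to specific files in its plan
    for path in sorted(Path(code_dir).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            parts.append(f"--- {path} ---\n{path.read_text()}")
    # Append the issue thread comments describing the planned feature
    parts.append("Issue thread:")
    parts.extend(f"Comment: {comment}" for comment in issue_comments)
    # Finish with the actual request
    parts.append(
        "Based on the code and the issue thread above, provide an "
        "implementation plan for the feature described."
    )
    return "\n\n".join(parts)
```

The output string could then be piped to a long-context model via an API call or a command-line tool.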
(I wrote this as a draft in June 2025 but only noticed it and hit publish in February 2026. I think it's an interesting time capsule of how I was using the models at the time.)