I wrote this recently in a conversation about whether coding agents can replace human programmers.
The "agentic" coding tools we have right now work like this:
- A skilled individual, with a deep understanding of both the domain and the agent's capabilities (including which tools are available to it), poses a clearly defined task to the agent.
- The agent writes some code relating to that task. It runs a tool to execute and test that code. It inspects the result, and if there are errors it edits the code and tries again.
- It may call other tools as well, for example a search tool to find related code or even to look up API documentation elsewhere (including via web search).
- It continues like this until it hits a loosely defined “done” state or gets stuck.
- The skilled individual then reviews what it has done and almost always finds that it has not solved the problem to their satisfaction... so they apply their expertise and domain understanding to prompt it again, trying to get it to that desired state.
Without the skilled individual, the “agent” is useless. It may as well not exist.
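That loop is simple enough to sketch in code. Here's a minimal, hypothetical Python version: `call_model`, `apply_edits` and `run_tests` are toy stand-ins for the real model and tool calls, not any actual agent's API, and the scripted "model" exists only so the example runs end-to-end.

```python
"""A toy sketch of the agent loop described above.

Nothing here corresponds to a real agent's API: call_model,
apply_edits and run_tests are stand-ins so the shape of the
loop is runnable.
"""

MAX_STEPS = 20  # a hard cap: without one the loop could run forever

_current_code = ""

# --- toy stand-ins for the real model and tools ---------------------

_script = iter([
    {"kind": "edit", "edits": "def add(a, b): return a - b"},  # buggy
    {"kind": "edit", "edits": "def add(a, b): return a + b"},  # fixed
    {"kind": "done", "summary": "add() implemented and tested"},
])

def call_model(history):
    """Stand-in for the LLM call: replays a fixed script of actions."""
    return next(_script)

def apply_edits(edits):
    """Stand-in for a file-editing tool: 'applies' the proposed code."""
    global _current_code
    _current_code = edits

def run_tests():
    """Stand-in for a test runner: exec the code, check one case."""
    namespace = {}
    exec(_current_code, namespace)
    passed = namespace["add"](2, 3) == 5
    return passed, "ok" if passed else "FAIL: add(2, 3) != 5"

# --- the loop itself -------------------------------------------------

def agent_loop(task):
    history = [{"role": "user", "content": task}]
    for _ in range(MAX_STEPS):
        action = call_model(history)          # what should I do next?
        if action["kind"] == "edit":
            apply_edits(action["edits"])      # write the proposed code
            passed, output = run_tests()      # execute and test it
            history.append({"role": "tool", "content": output})
            # On failure the loop simply continues: the model sees
            # the error output in its history and tries again.
        elif action["kind"] == "done":
            return action["summary"]          # the loosely defined "done"
    return "stuck"                            # cap hit: a human takes over

if __name__ == "__main__":
    print(agent_loop("Write an add() function"))
```

Real agents add far more: sandboxing, search tools, context management. But the shape is exactly this: act, observe, retry until done or stuck.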