Agentic manual testing
The defining characteristic of a coding agent is that it can execute the code that it writes. This is what makes coding agents so much more useful than LLMs that simply spit out code without any way to verify it.
Never assume that code generated by an LLM works until that code has been executed.
Coding agents have the ability to confirm that the code they have produced works as intended, or iterate further on that code until it does.
Getting agents to write unit tests, especially using test-first TDD, is a powerful way to ensure they have exercised the code they are writing.
That's not the only worthwhile approach, though.
Just because code passes tests doesn't mean it works as intended. Anyone who's worked with automated tests will have seen cases where the tests all pass but the code itself fails in some obvious way - it might crash the server on startup, fail to display a crucial UI element, or miss some detail that the tests failed to cover.
Automated tests are no replacement for manual testing. I like to see a feature working with my own eyes before I land it in a release.
I've found that getting agents to manually test code is valuable as well, frequently revealing issues that weren't spotted by the automated tests.
Mechanisms for agentic manual testing
How an agent should "manually" test a piece of code varies depending on what that code is.
For Python libraries a useful pattern is python -c "... code ...". You can pass a string (or multiline string) of Python code directly to the Python interpreter, including code that imports other modules.
The coding agents are all familiar with this trick and will sometimes use it without prompting. Reminding them to test using python -c can often be effective, though.
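As a sketch of the pattern, a multi-line string passed to python -c can import and exercise a module directly - here using the standard library's sqlite3 so the example is self-contained (it stands in for whatever library the agent is actually working on):

```shell
# Exercise a module straight from the shell - no test file needed
python -c "
import sqlite3
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE items (id INTEGER, name TEXT)')
conn.execute('INSERT INTO items VALUES (?, ?)', (1, 'demo'))
print(conn.execute('SELECT name FROM items').fetchone()[0])
"
```

The agent sees the printed result immediately, which is exactly the feedback loop this pattern is designed to create.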
Other languages may have similar mechanisms, and if they don't it's still quick for an agent to write out a demo file and then compile and run it. I sometimes encourage it to use /tmp purely to avoid those files being accidentally committed to the repository later on.
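The demo-file pattern might look something like this - the file name and contents here are purely illustrative:

```shell
# Write a throwaway demo script to /tmp so it can't be committed by accident
cat > /tmp/demo_feature.py <<'EOF'
import json
# hypothetical: exercise whatever feature was just built
print(json.dumps({"feature_works": True}))
EOF
python /tmp/demo_feature.py
```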
Many of my projects involve building web applications with JSON APIs. For these I tell the agent to exercise them using curl.
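To make the idea concrete, here's a self-contained sketch: a throwaway JSON endpoint served by the Python standard library, exercised with curl the way an agent might. The port and route are arbitrary choices for the example:

```shell
# Serve a minimal JSON API in the background, then hit it with curl
python -c "
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({'status': 'ok'}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

HTTPServer(('127.0.0.1', 8123), Handler).serve_forever()
" &
SERVER_PID=$!
sleep 1
curl -s http://127.0.0.1:8123/api/status
kill $SERVER_PID
```

Each curl call gives the agent a concrete response body to inspect, rather than an assumption about what the API returns.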
Telling an agent to "explore" often results in it trying out a bunch of different aspects of a new API, which can quickly cover a whole lot of ground.
If an agent finds something that doesn't work during its manual testing, I like to tell it to fix the problem with red/green TDD. This ensures the new case ends up covered by the permanent automated tests.
Using browser automation for web UIs
Having a manual testing procedure in place becomes even more valuable if a project involves an interactive web UI.
Historically these have been difficult to test from code, but the past decade has seen notable improvements in systems for automating real web browsers. Running a real Chrome or Firefox or Safari browser against an application can uncover all sorts of interesting problems in a realistic setting.
Coding agents know how to use these tools extremely well.
The most powerful of these today is Playwright, an open source library developed by Microsoft. Playwright offers a full-featured API with bindings in multiple popular programming languages and can automate any of the popular browser engines.
Simply telling your agent to "test that with Playwright" may be enough. The agent can then select the language binding that makes the most sense, or use Playwright's default CLI tool.
Coding agents work really well with dedicated CLIs. agent-browser by Vercel is a comprehensive CLI wrapper around Playwright specially designed for coding agents to use.
My own project Rodney serves a similar purpose, albeit using the Chrome DevTools Protocol to directly control an instance of Chrome.
Here's how an example prompt I use to test things with Rodney breaks down:
- Opening the prompt with "uvx rodney --help" causes the agent to run rodney --help via the uvx package management tool, which automatically installs Rodney the first time it is called.
- The rodney --help command is specifically designed to give agents everything they need to know to both understand and use the tool. Here's that help text.
- Saying "look at screenshots" hints to the agent that it should use the rodney screenshot command and reminds it that it can use its own vision abilities against the resulting image files to evaluate the visual appearance of the page.
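Putting those pieces together, the prompt looks something like this (the wording here is illustrative, not the exact original):

```
Run "uvx rodney --help" to learn how to use Rodney, then use it to
manually test the new feature, looking at screenshots to confirm the
page renders as expected.
```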
That's a whole lot of manual testing baked into a short prompt!
Rodney and tools like it offer a wide array of capabilities, from running JavaScript on the loaded site to scrolling, clicking, typing, and even reading the accessibility tree of the page.
As with other forms of manual tests, issues found and fixed via browser automation can then be added to permanent automated tests as well.
Many developers have historically avoided writing too many automated browser tests due to their reputation for flakiness - the smallest tweak to the HTML of a page can result in frustrating waves of test breakage.
Having coding agents maintain those tests over time greatly reduces the friction involved in keeping them up-to-date in the face of design changes to the web interfaces.
Have them take notes with Showboat
Having agents manually test code can catch extra problems, but the process can also produce artifacts that help document the code and demonstrate how it has been tested.
I'm fascinated by the challenge of having agents show their work. Being able to see demos or documented experiments is a really useful way of confirming that the agent has comprehensively solved the challenge it was given.
I built Showboat to facilitate building documents that capture the agentic manual testing flow.
The prompt I frequently use starts by running showboat --help, which teaches the agent what Showboat is and how to use it. Here's that help text in full.
The three key Showboat commands are note, exec, and image:
- note appends a Markdown note to the Showboat document.
- exec records a command, then runs that command and records its output.
- image adds an image to the document - useful for screenshots of web applications taken using Rodney.
The exec command is the most important of these, because it captures a command along with its resulting output. This shows you both what the agent did and what the result was, and is designed to discourage the agent from cheating by writing what it hoped had happened into the document.
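The exec idea can be sketched generically in a few lines of shell - this is an illustration of the record-command-and-output pattern, not Showboat's actual interface:

```shell
# Append a command and its real output to a Markdown document, so the
# record reflects what actually happened rather than a summary of it
DOC=/tmp/demo.md
CMD='echo hello from the demo'
{
  echo '```'
  echo "\$ $CMD"
  eval "$CMD"
  echo '```'
} >> "$DOC"
cat "$DOC"
```

Because the output is captured at execution time, the document can only contain results that genuinely occurred.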
I've found the Showboat pattern works really well for documenting the work achieved during my agent sessions. I'm hoping to see similar patterns adopted across a wider set of tools.