Building files-to-prompt entirely using Claude 3 Opus
8th April 2024
files-to-prompt is a new tool I built to help me pipe several files at once into prompts to LLMs such as Claude and GPT-4.
When combined with my LLM command-line tool it lets you do things like this:
files-to-prompt README.md files_to_prompt | llm -m opus \
--system 'Update this README to reflect this functionality'
I wrote files-to-prompt
almost entirely using Claude 3 Opus, llm-claude-3 and files-to-prompt
itself, once it was functional enough to be useful.
Building the initial tool
I started with my click-app cookiecutter template. This can quickly spin up a skeleton of a new Python command-line tool using the Click library:
cookiecutter gh:simonw/click-app
[1/6] app_name (): files-to-prompt
[2/6] description (): Concatenate a directory full of files into a single prompt for use with LLMs
[3/6] hyphenated (files-to-prompt):
[4/6] underscored (files_to_prompt):
[5/6] github_username (): simonw
[6/6] author_name (): Simon Willison
I opened the files_to_prompt/cli.py
file it created for me and used it to spec out how the initial version should work:
import click


@click.command()
@click.argument(
    "path",
    type=click.Path
)
@click.option(
    "--include-hidden",
    is_flag=True,
    help="Include files and folders starting with .",
)
@click.version_option()
def cli(path, include_hidden):
    """
    Takes a path to a folder and outputs every file in that folder,
    recursively, each one preceeded with its filename like this:

    path/to/file.py
    ----
    Contents of file.py goes here
    ---
    path/to/file2.py
    ---
    ...
    """
This code defines a tool that takes a positional path
argument and an optional --include-hidden
flag, along with a Python docstring describing what it should do, which will be displayed as the --help
output.
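Click builds that --help output from the decorators plus the function docstring. Here is a tiny self-contained illustration of the mechanism, using Click's test runner (my own snippet, not code from the project):

import click
from click.testing import CliRunner


@click.command()
@click.argument("path", type=click.Path(exists=True))
@click.option("--include-hidden", is_flag=True, help="Include files and folders starting with .")
def demo(path, include_hidden):
    """
    Takes a path to a folder and outputs every file in that folder.
    """


# The docstring above becomes the description in the generated help text:
print(CliRunner().invoke(demo, ["--help"]).output)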
I’d originally intended to actually build the tool... but once I got to this point I realized that this might be enough information for Claude 3 Opus to do the rest of the work for me.
So I ran this command:
cat files_to_prompt/cli.py | llm -m opus \
--system 'finish this code for me'
You can see the full transcript here, but from this single prompt Claude 3 Opus produced the exact implementation I needed:
for root, dirs, files in os.walk(path):
    if not include_hidden:
        dirs[:] = [d for d in dirs if not d.startswith(".")]
        files = [f for f in files if not f.startswith(".")]

    for file in files:
        file_path = os.path.join(root, file)
        with open(file_path, "r") as f:
            file_contents = f.read()

        click.echo(file_path)
        click.echo("---")
        click.echo(file_contents)
        click.echo()
        click.echo("---")
I pasted that into my cli.py
module and tried running it... and it worked first time.
I tried a more challenging prompt next:
cat files_to_prompt/cli.py | llm -m opus '
modify this code to add a --ignore-gitignore flag which defaults to False.
Without this flag the code should read any .gitignore files and filter
content based on the rules within them. With the flag it should not do
that.'
I forgot to use the --system
option here, so the argument prompt I passed to llm
was concatenated directly onto the end of the code rather than being treated as a system/instruction prompt. This turned out not to matter.
.gitignore
is a tricky format to handle. Claude 3 Opus didn’t do a comprehensive implementation, but it did support the subset that I cared about for this step of the development process.
Here’s what it gave me (full transcript here):
import os
import click
from pathlib import Path
from fnmatch import fnmatch


def should_ignore(path, gitignore_rules):
    for rule in gitignore_rules:
        if fnmatch(path, rule):
            return True
    return False


def read_gitignore(path):
    gitignore_path = os.path.join(path, ".gitignore")
    if os.path.isfile(gitignore_path):
        with open(gitignore_path, "r") as f:
            return [
                line.strip()
                for line in f
                if line.strip() and not line.startswith("#")
            ]
    return []


@click.command()
@click.argument(
    "path",
    type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True),
)
@click.option(
    "--include-hidden",
    is_flag=True,
    help="Include files and folders starting with .",
)
@click.option(
    "--ignore-gitignore",
    is_flag=True,
    help="Ignore .gitignore files and include all files",
)
@click.version_option()
def cli(path, include_hidden, ignore_gitignore):
    """
    Takes a path to a folder and outputs every file in that folder,
    recursively, each one preceded with its filename like this:

    path/to/file.py
    ----
    Contents of file.py goes here
    ---
    path/to/file2.py
    ---
    ...
    """
    gitignore_rules = [] if ignore_gitignore else read_gitignore(path)

    for root, dirs, files in os.walk(path):
        if not include_hidden:
            dirs[:] = [d for d in dirs if not d.startswith(".")]
            files = [f for f in files if not f.startswith(".")]

        if not ignore_gitignore:
            gitignore_rules.extend(read_gitignore(root))
            dirs[:] = [
                d for d in dirs
                if not should_ignore(os.path.join(root, d), gitignore_rules)
            ]
            files = [
                f for f in files
                if not should_ignore(os.path.join(root, f), gitignore_rules)
            ]

        for file in files:
            file_path = os.path.join(root, file)
            with open(file_path, "r") as f:
                file_contents = f.read()

            click.echo(file_path)
            click.echo("---")
            click.echo(file_contents)
            click.echo()
            click.echo("---")
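The fnmatch-based matching only understands simple shell-style patterns, which happened to be the subset I cared about. A quick illustration of where it diverges from real gitignore semantics (my own snippet, not from the transcript):

from fnmatch import fnmatch

# Simple filename and wildcard rules behave the way you'd expect:
print(fnmatch("ignored.txt", "ignored.txt"))   # True
print(fnmatch("debug.log", "*.log"))           # True

# But fnmatch knows nothing about gitignore-specific syntax such as
# negation rules - the "!" is treated as a literal character:
print(fnmatch("keep.log", "!keep.log"))        # False

# And "*" in fnmatch happily crosses "/" boundaries, unlike gitignore:
print(fnmatch("src/deep/nested/debug.log", "src/*.log"))  # True - gitignore would say no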
From that point on I was hooked: could I use the new files-to-prompt
tool to build the rest of itself?
Since I’d already bootstrapped it to a point where I could use it, I ran it for real for the first time:
files-to-prompt . | llm -m opus --system '
fill out the test_files_to_prompt.py with tests, use tmpdir fixture
to create example directories to run the command against - tests
should cover both of the optional flags'
Here I’m sending it quite a bit of code—the implementation, the README and the Apache 2 License are all included in that prompt.
It wrote me these tests:
import os

from click.testing import CliRunner

from files_to_prompt.cli import cli


def test_basic_functionality(tmpdir):
    runner = CliRunner()
    with tmpdir.as_cwd():
        os.makedirs("test_dir")
        with open("test_dir/file1.txt", "w") as f:
            f.write("Contents of file1")
        with open("test_dir/file2.txt", "w") as f:
            f.write("Contents of file2")

        result = runner.invoke(cli, ["test_dir"])
        assert result.exit_code == 0
        assert "test_dir/file1.txt" in result.output
        assert "Contents of file1" in result.output
        assert "test_dir/file2.txt" in result.output
        assert "Contents of file2" in result.output


def test_include_hidden(tmpdir):
    runner = CliRunner()
    with tmpdir.as_cwd():
        os.makedirs("test_dir")
        with open("test_dir/.hidden.txt", "w") as f:
            f.write("Contents of hidden file")

        result = runner.invoke(cli, ["test_dir"])
        assert result.exit_code == 0
        assert "test_dir/.hidden.txt" not in result.output

        result = runner.invoke(cli, ["test_dir", "--include-hidden"])
        assert result.exit_code == 0
        assert "test_dir/.hidden.txt" in result.output
        assert "Contents of hidden file" in result.output


def test_ignore_gitignore(tmpdir):
    runner = CliRunner()
    with tmpdir.as_cwd():
        os.makedirs("test_dir")
        with open("test_dir/.gitignore", "w") as f:
            f.write("ignored.txt")
        with open("test_dir/ignored.txt", "w") as f:
            f.write("This file should be ignored")
        with open("test_dir/included.txt", "w") as f:
            f.write("This file should be included")

        result = runner.invoke(cli, ["test_dir"])
        assert result.exit_code == 0
        assert "test_dir/ignored.txt" not in result.output
        assert "test_dir/included.txt" in result.output

        result = runner.invoke(cli, ["test_dir", "--ignore-gitignore"])
        assert result.exit_code == 0
        assert "test_dir/ignored.txt" in result.output
        assert "This file should be ignored" in result.output
        assert "test_dir/included.txt" in result.output
Then I ran them... and they failed. Here’s the error message:
    def test_ignore_gitignore(tmpdir):
        runner = CliRunner()
        with tmpdir.as_cwd():
            os.makedirs("test_dir")
            with open("test_dir/.gitignore", "w") as f:
                f.write("ignored.txt")
            with open("test_dir/ignored.txt", "w") as f:
                f.write("This file should be ignored")
            with open("test_dir/included.txt", "w") as f:
                f.write("This file should be included")

            result = runner.invoke(cli, ["test_dir"])
            assert result.exit_code == 0
>           assert "test_dir/ignored.txt" not in result.output
E           AssertionError: assert 'test_dir/ignored.txt' not in 'test_dir/in...red\n\n---\n'
E
E             'test_dir/ignored.txt' is contained here:
E               uded
E
E               ---
E               test_dir/ignored.txt
E               ---...
E
E             ...Full output truncated (3 lines hidden), use '-vv' to show
I saved that message to error.txt and ran this command to send it back to Claude 3 Opus:
cat error.txt | llm -c
The llm -c
option stands for “continue”—it will take any input or prompt and pass that back to the model along with the previous logged prompts and responses, equivalent to asking a follow-up question in an interface like ChatGPT.
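The same follow-up pattern exists in llm's Python API, which makes it easier to see what “continue” means: a conversation object sends its previous prompts and responses along with each new prompt. A rough sketch (the model ID is illustrative and assumes the llm-claude-3 plugin and an Anthropic API key are configured):

import llm

# Illustrative model ID - resolving it assumes the llm-claude-3 plugin is installed.
model = llm.get_model("claude-3-opus")

# A conversation accumulates every prompt and response...
conversation = model.conversation()
print(conversation.prompt("finish this code for me").text())

# ...so a follow-up is sent together with that history, which is what
# llm -c does on the command line using the most recent logged conversation.
print(conversation.prompt("Here is the error output: ...").text())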
Claude gave me back a corrected version of the should_ignore()
function:
def should_ignore(path, gitignore_rules):
    for rule in gitignore_rules:
        if fnmatch(os.path.basename(path), rule):
            return True
        if os.path.isdir(path) and fnmatch(os.path.basename(path) + "/", rule):
            return True
    return False
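The test failure makes sense once you see that the first version matched the whole relative path against each rule, so a bare filename pattern from .gitignore never matched. A quick illustration (mine, not from the transcript):

import os
from fnmatch import fnmatch

# The original version matched the full relative path against the rule:
print(fnmatch("test_dir/ignored.txt", "ignored.txt"))                    # False

# The corrected version matches just the basename, so the rule applies:
print(fnmatch(os.path.basename("test_dir/ignored.txt"), "ignored.txt"))  # True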
Sending follow-on prompts with llm -c
seemed to be working well, so I ran this:
llm -c 'Fill out the Usage section of the README for this project - just return that section as markdown'
And it gave me back a pretty decent README:
Usage

To use files-to-prompt, simply provide the path to the directory you want to process:

files-to-prompt path/to/directory

This will output the contents of every file in the directory and its subdirectories, with each file preceded by its relative path and separated by ---.

Options

- --include-hidden: Include files and folders starting with . (hidden files and directories).

  files-to-prompt path/to/directory --include-hidden

- --ignore-gitignore: Ignore .gitignore files and include all files.

  files-to-prompt path/to/directory --ignore-gitignore

Example

Suppose you have a directory structure like this:

my_directory/
├── file1.txt
├── file2.txt
├── .hidden_file.txt
└── subdirectory/
    └── file3.txt

Running files-to-prompt my_directory will output:

my_directory/file1.txt
---
Contents of file1.txt
---
my_directory/file2.txt
---
Contents of file2.txt
---
my_directory/subdirectory/file3.txt
---
Contents of file3.txt
---

If you run files-to-prompt my_directory --include-hidden, the output will also include .hidden_file.txt:

my_directory/.hidden_file.txt
---
Contents of .hidden_file.txt
---
...
I particularly liked the directory structure diagram.
Here’s the full transcript including my follow-ups.
I committed and pushed everything I had so far to GitHub.
After one last review of the README I noticed it had used the phrase “simply provide the path to the directory”. I don’t like using words like simply in documentation, so I fixed that.
And I shipped version 0.1 of the software! Almost every line of code, tests and documentation written by Claude 3 Opus.
Iterating on the project
I’ve added several features since that initial implementation, almost all of which were primarily written by prompting Claude 3 Opus.
Issue #2: Take multiple arguments for files and directories to include changed the tool such that files-to-prompt README.md tests/
would include both the README.md
file and all files in the tests/
directory.
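The core of that change is replacing the single path argument with a variadic paths argument. Roughly this shape in Click (my own sketch of the approach, not the code Claude produced):

import click


@click.command()
@click.argument("paths", nargs=-1, required=True, type=click.Path(exists=True))
def cli(paths):
    # Each argument may be a file or a directory; exists=True makes Click
    # raise a usage error for anything that is neither.
    for path in paths:
        click.echo(path)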
The sequence of prompts to get there was as follows:
cat files_to_prompt/cli.py | llm -m opus --system '
Modify this file. It should take multiple arguments in a variable called paths.
Each of those arguments might be a path to a file or it might be a path to a
directory - if any of the arguments do not correspond to a file or directory
it should raise a click error.
It should then do what it does already but for all files
files-recursively-contained-within-folders that are passed to it.
It should still obey the gitignore logic.'
Then these to update the tests:
files-to-prompt files_to_prompt tests | llm -m opus --system '
rewrite the tests to cover the ability to pass multiple files and
folders to the tool'
files-to-prompt files_to_prompt tests | llm -m opus --system '
add one last test which tests .gitignore and include_hidden against
an example that mixes single files and directories of files together
in one invocation'
I didn’t like the filenames it was using in that last test, so I used symbex to extract just the implementation of that test and told it to rewrite it:
symbex test_mixed_paths_with_options | llm -m opus --system '
rewrite this test so the filenames are more obvious, things like
ignored_in_gitignore.txt'
And this to add one last test that combined all of the options:
llm -c 'add a last bit to that test for
["test_dir", "single_file.txt", "--ignore-gitignore", "--include-hidden"]'
The issue includes links to the full transcripts for the above.
Updating a diff from a pull request
I quietly released files-to-prompt
two weeks ago. Dipam Vasani had spotted it and opened a pull request adding the ability to ignore specific files, by passing --ignore-patterns '*.md'
as an option.
The problem was... I’d landed some of my own changes before I got around to reviewing his PR—so it would no longer cleanly apply.
It turns out I could resolve that problem using Claude 3 Opus as well, by asking it to figure out the change from Dipam’s diff.
I pulled a copy of his PR as a diff like this:
wget 'https://github.com/simonw/files-to-prompt/pull/4.diff'
Then I fed both the diff and the relevant files from the project into Claude:
files-to-prompt 4.diff files_to_prompt/cli.py tests/test_files_to_prompt.py | \
llm -m opus --system \
'Apply the change described in the diff to the project - return updated cli.py and tests'
It didn’t quite work—it reverted one of my earlier changes. So I prompted:
llm -c 'you undid the change where it could handle multiple paths -
I want to keep that, I only want to add the new --ignore-patterns option'
And that time it worked! Transcript here.
I merged Claude’s work into the existing PR to ensure Dipam got credit for his work, then landed it and pushed it out in a release.
Was this worthwhile?
As an exercise in testing the limits of what’s possible with command-line LLM access and the current most powerful available LLM, this was absolutely worthwhile. I got working software with comprehensive tests and documentation, and had a lot of fun experimenting with prompts along the way.
It’s worth noting that this project was incredibly low stakes. files-to-prompt
is a tiny tool that does something very simple. Any bugs or design flaws really don’t matter. It’s perfect for trying out this alternative approach to development.
I also got the software built a whole lot faster than if I’d written it myself, and with features like .gitignore
support (albeit rudimentary) that I may not have bothered with if I were working alone. That’s a good example of a feature that’s just fiddly enough that I might decide not to invest the time needed to get it to work.
Is this the best possible version of this software? Definitely not. But with comprehensive documentation and automated tests it’s high enough quality that I’m not ashamed to release it with my name on it.
A year ago I might have felt guilty about using LLMs to write code for me in this way. I’m over that now: I’m still doing the work, but I now have a powerful tool that can help accelerate the process.
Using this pattern for real work
I’ve since used the same pattern for some smaller modifications to some of my more significant projects. This morning I used it to upgrade my datasette-cors plugin to add support for new features I had added to the underlying asgi-cors library. Here’s the prompt sequence I used:
files-to-prompt ../asgi-cors/asgi_cors.py datasette_cors.py | llm -m opus -s \
'Output a new datasette_cors.py plugin that adds headers and methods and max_age config options'
files-to-prompt test_datasette_cors.py | llm -c \
'Update these tests to exercise the new options as well'
cat README.md | llm -c \
'Update the README to document the new config options'
And the full transcript.
I reviewed this code very carefully before landing it. It’s absolutely what I would have written myself without assistance from Claude.
Time elapsed for this change? The first prompt was logged at 16:42:11 and the last at 16:44:24, so just over two minutes followed by a couple more minutes for the review. The associated issue was open for five minutes total.