Weeknotes: A new llm CLI tool, plus automating my weeknotes and newsletter
4th April 2023
I started publishing weeknotes in 2019 partly as a way to hold myself accountable but mainly as a way to encourage myself to write more.
Now that I’m writing multiple posts a week (mainly about AI)—and sending them out as a newsletter—my weeknotes are feeling a little less necessary. Here’s everything I’ve written here since my last weeknotes on 22nd March:
- I built a ChatGPT plugin to answer questions about data hosted in Datasette
- AI-enhanced development makes me more ambitious with my projects—and for another illustrative example of that effect, see my TIL Reading thermometer temperatures over time from a video
- What AI can do for you on the Theory of Change podcast
- Think of language models like ChatGPT as a “calculator for words”
- Semi-automating a Substack newsletter with an Observable notebook
(That list was created using this SQL query.)
I’m going to keep them going though: I’ve had so much value out of the habit that I don’t feel it’s time to stop.
The llm CLI tool
This is one new piece of software I’ve released in the past few weeks that I haven’t written about yet.
I built the first version of llm, a command-line tool for running prompts against large language models (currently just ChatGPT and GPT-4), getting the results back on the command line and storing the prompt and response in a SQLite database.
It’s still pretty experimental, but it’s already looking like it will be a fun playground for trying out new things.
Here’s the 30-second version of how to start using it:
# Install the tool
pipx install llm
# Put an OpenAI API key somewhere it can find it
echo 'your-OpenAI-API-key' > ~/.openai-api-key.txt
# Or you can set it as an environment variable:
# export OPENAI_API_KEY='...'
# Run a prompt
llm 'Ten names for cheesecakes'
This will output the response to that prompt directly to the terminal.
Add the --stream option to stream the results a chunk at a time instead of waiting for the full response.
Prompts are run against ChatGPT’s inexpensive gpt-3.5-turbo model by default. You can use -4 to run against the GPT-4 model instead (if you have access to it), or --model X to run against another named OpenAI model.
If a SQLite database file exists at ~/.llm/log.db, any prompts you run will be automatically recorded to that database, which you can then explore using Datasette.
The llm init-db command will create that database if it does not yet exist.
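Once prompts are being logged, plain sqlite3 is enough for a quick look at what has been recorded. This is a minimal sketch: the log table name and its prompt/response/timestamp columns are assumptions for illustration (inspect the real schema with sqlite3 ~/.llm/log.db '.schema' before relying on them), and it builds a throwaway demo database rather than touching the real one:

```shell
# Demo database standing in for ~/.llm/log.db. The "log" table and its
# columns below are assumed, purely for illustration.
DB=$(mktemp /tmp/llm-demo-XXXXXX.db)
sqlite3 "$DB" "CREATE TABLE log (prompt TEXT, response TEXT, timestamp TEXT);"
sqlite3 "$DB" "INSERT INTO log VALUES
  ('Ten names for cheesecakes', '1. New York Classic ...', '2023-04-04');"

# Show the most recent prompts
sqlite3 "$DB" "SELECT timestamp, prompt FROM log ORDER BY timestamp DESC LIMIT 5;"
```

Datasette works on the same file: running datasette ~/.llm/log.db opens a browsable interface over the log.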
There’s more in the README.
There are plenty of other tools for running LLM prompts on your own machine, including some that work on the command line and some that record your results.
llm is probably less useful than those alternatives, but it’s a fun space for me to try out new ideas.
Automating my weeknotes
I wrote at length about how I automated most of my newsletter using an Observable notebook and some Datasette tricks.
I realized the same trick could work for my weeknotes as well. The “releases this week” and “TILs this week” sections have previously been generated by hand, so I applied the same technique from the newsletter notebook to automate them as well.
It also fetches the full text of my most recent weeknotes post from my blog’s Datasette backup so it can calculate which releases and TILs are new since last time.
It uses various regular expression and array tricks to filter that content down to just the new items, then assembles a Markdown string I can use as the basis for my new post.
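The core of that filtering step is simple: drop anything whose name already appears in the previous post. Here is a rough shell equivalent of the idea (the file names and contents are invented for illustration; the real version lives in the Observable notebook and works on JSON from Datasette):

```shell
# previous.md stands in for the text of the last weeknotes post;
# releases.txt holds one candidate release name per line
printf 'Last time I wrote about the llm tool\n' > /tmp/previous.md
printf 'datasette-explain\nllm\n' > /tmp/releases.txt

# Keep only names that do not already appear in the previous post
while read -r name; do
  grep -qF "$name" /tmp/previous.md || echo "$name"
done < /tmp/releases.txt
# prints: datasette-explain
```

Anything mentioned last time (here, llm) is filtered out, leaving only the new items to paste into the draft.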
Here’s what that generated for me this week:
Releases since last time
- Explain and validate SQL queries as you type them into Datasette
- Access large language models from the command-line
- Datasette plugin providing an automatic GraphQL API for your SQLite databases
TIL since last time
- Copy tables between SQLite databases—2023-04-03
- Reading thermometer temperatures over time from a video—2023-04-02
- Using the ChatGPT streaming API from Python—2023-04-01
- Interactive row selection prototype with Datasette—2023-03-30
- Using jq in an Observable notebook—2023-03-26
- Convert git log output to JSON using jq—2023-03-25