Weeknotes: Datasette Cloud and zero downtime deployments

21st January 2020

Yesterday’s piece on Tracking FARA by deploying a data API using GitHub Actions and Cloud Run was originally intended to be my weeknotes, but ended up getting a bit too involved.

Aside from playing with GitHub Actions and Cloud Run, my focus over the past week has been working on Datasette Cloud. Datasette Cloud is the current name I’m using for my hosted Datasette product—the idea being that I’ll find it a lot easier to get feedback on Datasette from journalists if they can use it without having to install anything!

My MVP for Datasette Cloud is that I can use it to instantly provision a new, private Datasette instance for a journalist (or team of journalists) that they can then sign into, start playing with and start uploading their data to (initially as CSV files).

I have to solve quite a few problems to get there:

  • Secure, isolated instances of Datasette. A team or user should only be able to see their own files. I plan to solve this using Docker containers with volume mounts configured so that each container can only see its own team’s dedicated data.
  • The ability to provision new instances as easily as possible—and give each one its own HTTPS subdomain.
  • Authentication: users need to be able to register and sign in to accounts. I could use datasette-auth-github for this but I’d like to be able to support regular email/password accounts too.
  • Users need to be able to upload CSV files and have them converted into a SQLite database compatible with Datasette.

Zero downtime deployments

I have a stretch goal which I’m taking pretty seriously: I want to have a mechanism in place for zero-downtime deployments of new versions of the software.

Arguably this is an unnecessary complication for an MVP. I may not fully implement it, but I do want to at least know that the path I’ve taken is compatible with zero downtime deployments.

Why do zero downtime deployments matter so much to me? Because they are desirable for rapid iteration, and crucial for setting up continuous deployment. Even a couple of seconds of downtime during a deployment creates a psychological pressure not to deploy too often. I’ve seen the productivity boost that deploying fearlessly multiple times a day brings, and I want it.

So I’ve been doing a bunch of research into zero downtime deployment options (thanks to some great help on Twitter) and I think I have something that’s going to work for me.

The first ingredient is Traefik—a new-to-me edge router (similar to nginx) which has a delightful focus on runtime configuration based on automatic discovery.

It works with a bunch of different technology stacks, but I’m going to be using it with regular Docker. Traefik watches for new Docker containers, reads their labels and uses them to route traffic to those containers.

So I can launch a new Docker container, apply the Docker label "traefik.frontend.rule": "Host:subdomain.mydomain.com" and Traefik will start proxying traffic to that subdomain directly to that container.

Traefik also has extremely robust built-in support for Let’s Encrypt to issue certificates. I managed to issue a wildcard TLS certificate for my entire domain, so new subdomains are encrypted straight away. This did require me to give Traefik API access to modify DNS entries—I’m running DNS for this project on Digital Ocean and thankfully Traefik knows how to do this by talking to their API.

That solves provisioning: when I create a new account I can call the Docker API (from Python) to start up a new, labelled container on a subdomain protected by a TLS certificate.
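Here’s a minimal sketch of what that provisioning call could look like using the Docker SDK for Python. The image, command, port, volume path and network name are all placeholder assumptions; the Traefik label syntax is the v1 style quoted above.

```python
import docker

client = docker.from_env()

def provision_instance(team_slug):
    # Hypothetical naming scheme: one subdomain and one data volume per team
    hostname = f"{team_slug}.mydomain.com"
    return client.containers.run(
        "datasetteproject/datasette",  # image name is an assumption
        ["datasette", "-p", "8001", "-h", "0.0.0.0", "/data/data.db"],
        name=f"datasette-{team_slug}",
        detach=True,
        labels={
            "traefik.enable": "true",
            # Traefik picks this up and routes the subdomain to the container
            "traefik.frontend.rule": f"Host:{hostname}",
            "traefik.port": "8001",
        },
        volumes={
            # Each team's container can only see its own dedicated volume
            f"/srv/datasette-cloud/{team_slug}": {"bind": "/data", "mode": "rw"},
        },
        network="traefik-net",  # assumed shared Docker network with Traefik
    )
```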

I still needed a way to run a zero-downtime deployment of a new container (for example when I release a new version of Datasette and want to upgrade everyone). After quite a bit of research (during which I discovered you can’t modify the labels on a Docker container without restarting it) I settled on the approach described in this article.

Essentially you configure Traefik to retry failed requests, start a new, updated container with the same routing information as the existing one (causing Traefik to load balance HTTP requests across both), then shut down the old container and trust Traefik to retry in-flight requests against the one that’s still running.
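A rough sketch of that rollout with the Docker SDK for Python, assuming Traefik is configured to retry failed requests and the naming scheme from the earlier sketch; the fixed sleep is a stand-in for a proper health check:

```python
import time
import docker

client = docker.from_env()

def rolling_upgrade(team_slug, new_image):
    """Start a replacement container, then retire the old one."""
    old = client.containers.get(f"datasette-{team_slug}")

    # Same Traefik labels as the old container, so Traefik temporarily
    # load balances requests across both containers.
    new = client.containers.run(
        new_image,
        ["datasette", "-p", "8001", "-h", "0.0.0.0", "/data/data.db"],
        name=f"datasette-{team_slug}-next",
        detach=True,
        labels=old.labels,
        volumes={
            f"/srv/datasette-cloud/{team_slug}": {"bind": "/data", "mode": "rw"},
        },
        network="traefik-net",
    )

    # Stand-in for a health check: give Traefik's Docker provider a moment
    # to notice the new container before retiring the old one.
    time.sleep(5)

    # Traefik retries any in-flight requests against the surviving container.
    old.stop()
    old.remove()
    new.rename(f"datasette-{team_slug}")
    return new
```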

Rudimentary testing with ab (ApacheBench) suggested that this is working as desired.

One remaining problem: if Traefik is running in a Docker container and proxying all of my traffic, how can I upgrade Traefik itself without any downtime?

Consensus on Twitter seems to be that Docker on its own doesn’t have a great mechanism for this (I was hoping I could atomically re-route the host’s port 80 traffic to a different container). But... iptables has mechanisms that can re-route traffic from one port to another—so I should be able to run a new Traefik container on a different port and re-route to it at the operating system level.
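As an untested sketch of that cutover (Docker writes its own iptables rules, so the exact chain may need adjusting): run the replacement Traefik container on alternative host ports, then add NAT redirects at the OS level.

```python
import subprocess

def redirect_port(from_port, to_port):
    # Redirect incoming TCP traffic on from_port to to_port on the same host.
    # Requires root, and may need reconciling with Docker's own NAT rules.
    subprocess.run(
        [
            "iptables", "-t", "nat", "-A", "PREROUTING",
            "-p", "tcp", "--dport", str(from_port),
            "-j", "REDIRECT", "--to-ports", str(to_port),
        ],
        check=True,
    )

# Hypothetical cutover: new Traefik container listening on 8080/8443
redirect_port(80, 8080)
redirect_port(443, 8443)
```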

That’s quite enough yak shaving around zero downtime deployments for now!

datasette-upload-csvs

A big problem I’m seeing with the current Datasette ecosystem is that while Datasette offers a web-based user interface for querying and accessing data, the tools I’ve written for actually creating those databases are decidedly command-line only.

Telling journalists they have to learn to install and run software on the command-line is way too high a barrier to entry.

I’ve always intended to have Datasette plugins that can handle uploading and converting data. It’s time to actually build one!

datasette-upload-csvs is what I’ve got so far. The README has a big warning not to use it yet—it’s very alpha software at the moment—but it does prove that the concept can work.

It uses the asgi_wrapper plugin hook to intercept requests to the path /-/upload-csv and forward them on to another ASGI app, written using Starlette, which provides a basic upload form and then handles the upload.
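The shape of that hook is roughly as follows. This is a simplified sketch (the real plugin also parses the multipart upload), and the view here is just a placeholder:

```python
from datasette import hookimpl
from starlette.applications import Starlette
from starlette.responses import HTMLResponse
from starlette.routing import Route

async def upload_csv(request):
    # Placeholder: the real view renders an upload form on GET and
    # processes the uploaded CSV file on POST.
    return HTMLResponse("<form method='post' enctype='multipart/form-data'>...</form>")

upload_app = Starlette(
    routes=[Route("/-/upload-csv", upload_csv, methods=["GET", "POST"])]
)

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def wrapped(scope, receive, send):
            if scope["type"] == "http" and scope["path"] == "/-/upload-csv":
                # Hand this request off to the Starlette app
                await upload_app(scope, receive, send)
            else:
                await app(scope, receive, send)
        return wrapped
    return wrap
```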

Uploaded CSVs are converted to SQLite using sqlite-utils and written to the first mutable database attached to Datasette.
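The conversion itself only takes a few lines thanks to sqlite-utils; something along these lines, where the database path and table name are placeholders:

```python
import csv
import io

import sqlite_utils

def csv_to_sqlite(csv_bytes, db_path, table_name):
    """Load an uploaded CSV (as bytes) into a SQLite table Datasette can serve."""
    rows = csv.DictReader(io.StringIO(csv_bytes.decode("utf-8")))
    db = sqlite_utils.Database(db_path)
    # alter=True lets later uploads add new columns to an existing table
    db[table_name].insert_all(rows, alter=True)
    return db[table_name].count
```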

It needs a bunch more work (and tests) before I’m comfortable telling people to use it, but it does at least exist as a proof of concept for me to iterate on.

datasette-auth-django-cookies

No code for this yet, but I’m beginning to flesh it out as a concept.

I don’t particularly want to implement user registration and authentication and cookies and password hashing. I know how to do it, which means I know it’s not something you should hand-roll for every project.

Django has a really well designed, robust authentication system. Can’t I just use that?

Since all of my applications will be running on subdomains of a single domain, my current plan is to have a regular Django application which handles registration and logins. Each subdomain will then run a custom piece of Datasette ASGI middleware which knows how to read and validate the Django authentication cookie.
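Since there’s no code yet, this is purely a thumbnail of what that middleware might look like as ASGI. How the cookie actually gets validated (a shared session store, a signed-cookie session backend, or an API call back to the Django app) is the open design question, so validate_django_session() below is hypothetical:

```python
from http.cookies import SimpleCookie

async def validate_django_session(session_key):
    # Hypothetical: check the session key against the Django app's session
    # store and return the corresponding user, or None.
    raise NotImplementedError

def django_cookie_auth(app):
    """ASGI middleware sketch: only allow requests with a valid Django session."""
    async def wrapped(scope, receive, send):
        if scope["type"] == "http":
            headers = dict(scope.get("headers") or [])
            cookie = SimpleCookie()
            cookie.load(headers.get(b"cookie", b"").decode("latin-1"))
            morsel = cookie.get("sessionid")  # Django's default session cookie
            user = await validate_django_session(morsel.value) if morsel else None
            if user is None:
                # Redirect to the central Django login page (placeholder URL)
                await send({
                    "type": "http.response.start",
                    "status": 302,
                    "headers": [[b"location", b"https://mydomain.com/login/"]],
                })
                await send({"type": "http.response.body", "body": b""})
                return
            scope = dict(scope, auth_user=user)
        await app(scope, receive, send)
    return wrapped
```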

This should give me single sign-on with a single, audited codebase for registration and login with (hopefully) the least amount of work needed to integrate it with Datasette.

Code for this will hopefully follow over the next week.

Niche Museums—now publishing weekly

I hit a milestone with my Niche Museums project: the site now lists details of 100 museums!

For the 100th entry I decided to celebrate with by far the most rewarding (and exclusive) niche museum experience I’ve ever had: Ray Bandar’s Bone Palace.

You should read the entry. The short version is that Ray Bandar collected 7,000 animal skulls over a sixty-year period, and Natalie managed to score us a tour of his incredible basement mere weeks before the collection was donated to the California Academy of Sciences.

[Photo: the basement full of skulls]

Posting one museum a day was taking up more and more of my time, as I had to delve deeper into my backlog of museums I’ve visited and do increasing amounts of research. Now that I’ve hit 100 I’m going to switch to publishing one a week, which should also help me visit new ones quickly enough to keep the backlog full!

So I only posted four this week:

I also built a simple JavaScript image gallery to better display the 54 photos I published from our trip to Ray Bandar’s basement.