Recent entries
Animated choropleth of vaccinations by US county six days ago
Last week I mentioned that I’ve recently started scraping and storing the CDC’s per-county vaccination numbers in my cdc-vaccination-history GitHub repository. This week I used an Observable notebook and d3’s TopoJSON support to render those numbers on an animated choropleth map.
The full code is available at https://observablehq.com/@simonw/us-county-vaccinations-choropleth-map
From scraper to Datasette
My scraper for this data is a single line in a GitHub Actions workflow:
curl https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_county_condensed_data \
| jq . > counties.json
I pipe the data through jq to pretty-print it, just to get nicer diffs.
My build_database.py script then iterates over the accumulated git history of that counties.json file and uses sqlite-utils to build a SQLite table:
for i, (when, hash, content) in enumerate(
    iterate_file_versions(".", ("counties.json",))
):
    try:
        counties = json.loads(content)[
            "vaccination_county_condensed_data"
        ]
    except ValueError:
        # Bad JSON
        continue
    for county in counties:
        id = county["FIPS"] + "-" + county["Date"]
        db["daily_reports_counties"].insert(
            dict(county, id=id), pk="id", alter=True, replace=True
        )
The resulting table can be seen at cdc/daily_reports_counties.
From Datasette to Observable
Observable notebooks are my absolute favourite tool for prototyping new visualizations. There are examples of pretty much anything you could possibly want to create, and the Observable ecosystem actively encourages forking and sharing new patterns.
Loading data from Datasette into Observable is easy, using Datasette’s various HTTP APIs. For this visualization I needed to pull two separate things from Datasette.
Firstly, for any given date I need the full per-county vaccination data. Here’s the full table filtered for April 2nd for example.
Since that’s 3,221 rows, Datasette’s JSON export would need to be paginated... but Datasette’s CSV export can stream all 3,000+ rows in a single request. So I’m using that, fetched using the d3.csv() function:
county_data = await d3.csv(
`https://cdc-vaccination-history.datasette.io/cdc/daily_reports_counties.csv?_stream=on&Date=${county_date}&_size=max`
);
In order to animate the different dates, I need a list of available dates. I can get those with a SQL query:
select distinct Date
from daily_reports_counties
order by Date
Datasette’s JSON API has a ?_shape=arrayfirst option which returns a single JSON array of the first value from each row. Fetching that query with ?_shape=arrayfirst gets back just the dates as an array:
[
"2021-03-26",
"2021-03-27",
"2021-03-28",
"2021-03-29",
"2021-03-30",
"2021-03-31",
"2021-04-01",
"2021-04-02",
"2021-04-03"
]
Mike Bostock has a handy Scrubber implementation which can provide a slider with the ability to play and stop iterating through values. In the notebook that can be used like so:
viewof county_date = Scrubber(county_dates, {
delay: 500,
autoplay: false
})
county_dates = (await fetch(
"https://cdc-vaccination-history.datasette.io/cdc.json?sql=select%20distinct%20Date%20from%20daily_reports_counties%20order%20by%20Date&_shape=arrayfirst"
)).json()
import { Scrubber } from "@mbostock/scrubber"
Drawing the map
The map itself is rendered using TopoJSON, an extension to GeoJSON that efficiently encodes topology.
Consider the map of 3,200 counties in the USA: since counties border each other, most of those border polygons end up duplicating each other to a certain extent.
TopoJSON only stores each shared boundary once, but still knows how they relate to each other which means the data can be used to draw shapes filled with colours.
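The idea can be illustrated with a toy sketch (this is not the real TopoJSON encoding, just the shape of the trick): each polygon is a list of references into a shared table of arcs, so the boundary two counties have in common is stored exactly once.

```python
# Toy illustration of shared-arc encoding (not the real TopoJSON format):
# each shape is a list of indexes into a shared table of arcs, so the
# boundary that two counties have in common is stored exactly once.
arcs = [
    [(0, 0), (0, 1)],                    # arc 0: boundary shared by A and B
    [(0, 1), (1, 1), (1, 0), (0, 0)],    # arc 1: the rest of county A
    [(0, 1), (-1, 1), (-1, 0), (0, 0)],  # arc 2: the rest of county B
]
county_a = [0, 1]
county_b = [0, 2]

def ring(shape):
    # Rebuild a shape's full boundary from its arc references
    points = []
    for index in shape:
        points.extend(arcs[index])
    return points
```

Both counties still resolve to complete polygons via `ring()`, but arc 0 exists only once in the data, which is where the space saving comes from.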
I’m using the https://d3js.org/us-10m.v1.json TopoJSON file built and published with d3. Here’s my JavaScript for rendering that into an SVG map:
{
const svg = d3
.create("svg")
.attr("viewBox", [0, 0, width, 700])
.style("width", "100%")
.style("height", "auto");
svg
.append("g")
.selectAll("path")
.data(
topojson.feature(topojson_data, topojson_data.objects.counties).features
)
.enter()
.append("path")
.attr("fill", function(d) {
if (!county_data[d.id]) {
return 'white';
}
let v = county_data[d.id].Series_Complete_65PlusPop_Pct;
return d3.interpolate("white", "green")(v / 100);
})
.attr("d", path)
.append("title") // Tooltip
.text(function(d) {
if (!county_data[d.id]) {
return '';
}
return `${
county_data[d.id].Series_Complete_65PlusPop_Pct
}% of the 65+ population in ${county_data[d.id].County}, ${county_data[d.id].StateAbbr.trim()} have had the complete vaccination`;
});
return svg.node();
}
Next step: a plugin
Now that I have a working map, my next goal is to package this up as a Datasette plugin. I’m hoping to create a generic choropleth plugin which bundles TopoJSON for some common maps—probably world countries, US states and US counties to start off with—but also allows custom maps to be supported as easily as possible.
Datasette 0.56
Also this week, I shipped Datasette 0.56. It’s a relatively small release—mostly documentation improvements and bug fixes, but I’ve also bundled SpatiaLite 5 with the official Datasette Docker image.
TIL this week
Releases this week
- airtable-export: 0.6—(8 total releases)—2021-04-02: Export Airtable data to YAML, JSON or SQLite files on disk
- datasette: 0.56—(85 total releases)—2021-03-29: An open source multi-tool for exploring and publishing data
Weeknotes: SpatiaLite 5, Datasette on Azure, more CDC vaccination history 13 days ago
This week I got SpatiaLite 5 working in the Datasette Docker image, improved the CDC vaccination history git scraper, figured out Datasette on Azure and we closed on a new home!
SpatiaLite 5 for Datasette
SpatiaLite 5 came out earlier this year with a bunch of exciting improvements, most notably an implementation of KNN (K-nearest neighbours)—a way to efficiently answer the question “what are the 10 closest rows to this latitude/longitude point”.
I love building X near me websites so I expect I’ll be using this a lot in the future.
I spent a bunch of time this week figuring out how best to install it into a Docker container for use with Datasette. I finally cracked it in issue 1249 and the Dockerfile in the Datasette repository now builds with the SpatiaLite 5.0 extension, using a pattern I figured out for installing Debian unstable packages into a Debian stable base container.
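The pattern looks roughly like this (an illustrative sketch, not the exact Dockerfile from the Datasette repo): temporarily add the Debian unstable apt source, install the one package you need from it, then remove the source again so everything else stays on stable.

```dockerfile
FROM python:3.9-slim-buster

# Illustrative sketch: temporarily enable the Debian unstable (sid)
# repository, install SpatiaLite 5 from it, then disable it again
RUN echo "deb http://deb.debian.org/debian sid main" \
        > /etc/apt/sources.list.d/sid.list \
    && apt-get update \
    && apt-get install -y libsqlite3-mod-spatialite \
    && rm /etc/apt/sources.list.d/sid.list \
    && apt-get update \
    && rm -rf /var/lib/apt/lists/*
```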
When Datasette 0.56 is released the official Datasette Docker image will bundle SpatiaLite 5.0.
CDC vaccination history in Datasette
I’m tracking the CDC’s per-state vaccination numbers in my cdc-vaccination-history repository, as described in my Git scraping lightning talk.
Scraping data into a git repository to track changes to it over time is easy. What’s harder is extracting that data back out of the commit history in order to analyze and visualize it later.
To demonstrate how this can work I added a build_database.py script to that repository which iterates through the git history and uses it to build a SQLite database containing daily state reports. I also added steps to the GitHub Actions workflow to publish that SQLite database using Datasette and Vercel.
I installed the datasette-vega visualization plugin there too. Here’s a chart showing the number of doses administered over time in California.
This morning I started capturing the CDC’s per-county data too, but I’ve not yet written code to load that into Datasette. [UPDATE: that table is now available: cdc/daily_reports_counties]
Datasette on Azure
I’m keen to make Datasette easy to deploy in as many places as possible. I already have mechanisms for publishing to Heroku, Cloud Run, Vercel and Fly.io—today I worked out the recipe needed for Azure Functions.
I haven’t bundled it into a datasette-publish-azure plugin yet but that’s the next step. In the meantime the azure-functions-datasette repo has a working example with instructions on how to deploy it.
Thanks go to Anthony Shaw for building out the ASGI wrapper needed to run ASGI applications like Datasette on Azure Functions.
iam-to-sqlite
I spend way too much time whinging about IAM on Twitter. I’m certain that properly learning IAM will unlock the entire world of AWS, but I have so far been unable to overcome my discomfort with it long enough to actually figure it out.
After yet another unproductive whinge this week I guilted myself into putting in some effort, and it’s already started to pay off: I figured out how to dump out all existing IAM data (users, groups, roles and policies) as JSON using the aws iam get-account-authorization-details command, and got so excited about it that I built iam-to-sqlite as a wrapper around that command that writes the results into SQLite so I can browse them using Datasette!
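The loading step itself is tiny. Here’s a simplified, stdlib-only sketch of the idea—the real tool uses sqlite-utils, and the "UserDetailList" key is the one the aws CLI uses for users in that JSON output:

```python
import json
import sqlite3

def load_iam_users(conn, details):
    # details is the parsed JSON from:
    #   aws iam get-account-authorization-details
    # Its "UserDetailList" key holds one object per IAM user.
    conn.execute(
        "create table if not exists users (name text, arn text, raw text)"
    )
    for user in details.get("UserDetailList", []):
        conn.execute(
            "insert into users values (?, ?, ?)",
            (user["UserName"], user["Arn"], json.dumps(user)),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
load_iam_users(conn, {
    "UserDetailList": [
        {"UserName": "alice", "Arn": "arn:aws:iam::123456789012:user/alice"}
    ]
})
rows = conn.execute("select name, arn from users").fetchall()
```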
I’m increasingly realizing that the key to me understanding how pretty much any service works is to pull their JSON into a SQLite database so I can explore it as relational tables.
A useful trick for writing weeknotes
When writing weeknotes like these, it’s really useful to be able to see all of the commits from the past week across many different projects.
Today I realized you can use GitHub search for this. Run a search for author:simonw created:>2021-03-20 and filter to commits, ordered by “Recently committed”.
Django pull request accepted!
I had a pull request accepted to Django this week! It was a documentation fix for the RawSQL query expression—I found a pattern for using it as part of a .filter(id__in=RawSQL(...)) query that wasn’t covered by the documentation.
And we found a new home
One other project this week: Natalie and I closed on a new home! We’re moving to El Granada, a tiny town just north of Half Moon Bay, on the coast 40 minutes south of San Francisco. We’ll be ten minutes from the ocean, with plenty of pinnipeds and pelicans. Exciting!
TIL this week
- Running gdb against a Python process in a running Docker container
- Tracing every executed Python statement
- Installing packages from Debian unstable in a Docker image based on stable
- Closest locations to a point
- Redirecting all paths on a Vercel instance
- Writing an Azure Function that serves all traffic to a subdomain
Releases this week
- datasette-publish-vercel: 0.9.3—(15 releases total)—2021-03-26: Datasette plugin for publishing data using Vercel
- sqlite-transform: 0.5—(6 releases total)—2021-03-24: Tool for running transformations on columns in a SQLite database
- django-sql-dashboard: 0.5a0—(12 releases total)—2021-03-24: Django app for building dashboards using raw SQL queries
- iam-to-sqlite: 0.1—2021-03-24: Load Amazon IAM data into a SQLite database
- tableau-to-sqlite: 0.2.1—(4 releases total)—2021-03-22: Fetch data from Tableau into a SQLite database
- c64: 0.1a0—2021-03-21: Experimental package of ASGI utilities extracted from Datasette
Weeknotes: django-sql-dashboard widgets 20 days ago
A few small releases this week, for django-sql-dashboard
, datasette-auth-passwords
and datasette-publish-vercel
.
django-sql-dashboard widgets and permissions
django-sql-dashboard, my subset-of-Datasette-for-Django-and-PostgreSQL continues to come together.
New this week: widgets and permissions.
To recap: this Django app borrows some ideas from Datasette: it encourages you to create a read-only PostgreSQL user and grant authenticated users the ability to run one or more raw SQL queries directly against your database.
You can execute more than one SQL query and combine them into a saved dashboard, which will then show multiple tables containing the results.
This week I added support for dashboard widgets. You can construct SQL queries to return specific column patterns which will then be rendered on the page in different ways.
There are four widgets at the moment: “big number”, bar chart, HTML and Markdown.
Big number is the simplest: define a SQL query that returns two columns called label and big_number and the dashboard will display that result as a big number:
select 'Entries' as label, count(*) as big_number from blog_entry;
Bar chart is more sophisticated: return columns named bar_label and bar_quantity to display a bar chart of the results:
select
to_char(date_trunc('month', created), 'YYYY-MM') as bar_label,
count(*) as bar_quantity
from
blog_entry
group by
bar_label
order by
count(*) desc
HTML and Markdown are simpler: they display the rendered HTML or Markdown, after filtering it through the Bleach library to strip any harmful elements or scripts.
select
'## Ten most recent blogmarks (of '
|| count(*) || ' total)'
as markdown from blog_blogmark;
I’m running the dashboard application on this blog, and I’ve set up an example dashboard here that illustrates the different types of widget.
Defining custom widgets is easy: take the column names you would like to respond to, sort them alphabetically, join them with hyphens and create a custom widget in a template file with that name.
So if you wanted to build a widget that looks for label and geojson columns and renders that data on a Leaflet map, you would create a geojson-label.html template and drop it into your Django templates/django-sql-dashboard/widgets folder. See the custom widgets documentation for details.
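That naming convention is simple enough to sketch in a couple of lines of Python (a hypothetical helper for illustration, not part of the actual package):

```python
def widget_template_name(columns):
    # Sort the column names alphabetically and join them with
    # hyphens to get the template filename the dashboard looks for
    return "-".join(sorted(columns)) + ".html"
```

So `widget_template_name(["label", "geojson"])` gives `"geojson-label.html"`, matching the example above.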
Which reminds me: I decided a README wasn’t quite enough space for documentation here, so I started a Read The Docs documentation site for the project.
Datasette and sqlite-utils both use Sphinx and reStructuredText for their documentation.
For django-sql-dashboard I’ve decided to try out Sphinx and Markdown instead, using MyST—a Markdown flavour and parser for Sphinx.
I picked this because I want to add inline help to django-sql-dashboard, and since it ships with Markdown as a dependency already (to power the Markdown widget) my hope is that using Markdown for the documentation will allow me to ship some of the user-facing docs as part of the application itself. But it’s also a fun excuse to try out MyST, which so far is working exactly as advertised.
I’ve seen people in the past avoid Sphinx entirely because they preferred Markdown to reStructuredText, so MyST feels like an important addition to the Python documentation ecosystem.
HTTP Basic authentication
datasette-auth-passwords implements password-based authentication to Datasette. The plugin defaults to providing a username and password login form which sets a signed cookie identifying the current user.
Version 0.4 introduces optional support for HTTP Basic authentication instead—where the user’s browser handles the authentication prompt.
Basic auth has some disadvantages—most notably that it doesn’t support logout without the user entirely closing down their browser. But it’s useful for a number of reasons:
- It’s easy to protect every resource on a website with it—including static assets. Adding "http_basic_auth": true to your plugin configuration adds this protection, covering all of Datasette’s resources.
- It’s much easier to authenticate with from automated scripts. curl, requests and httpx all have simple built-in support for passing basic authentication usernames and passwords, which makes it a useful target for scripting—without having to install an additional authentication plugin such as datasette-auth-tokens.
I’m continuing to flesh out authentication options for Datasette, and adding this to datasette-auth-passwords is one of those small improvements that should pay off long into the future.
A fix for datasette-publish-vercel
Datasette instances published to Vercel using the datasette-publish-vercel plugin have previously been affected by an obscure Vercel bug: characters such as + in the query string were being lost, due to Vercel unescaping encoded characters before the request got to the Python application server.
Vercel fixed this earlier this month, and the latest release of datasette-publish-vercel includes their fix by switching to the new @vercel/python builder. Thanks @styfle from Vercel for shepherding this fix through!
New photos on Niche Museums
My Niche Museums project has been in hibernation since the start of the pandemic. Now that vaccines are rolling out it feels like there might be an end to this thing, so I’ve started thinking about my museum hobby again.
I added some new photos to the site today—on the entries for Novelty Automation, DEVIL-ish Little Things, Evergreen Aviation & Space Museum and California State Capitol Dioramas.
Hopefully someday soon I’ll get to visit and add an entirely new museum!
Releases this week
- django-sql-dashboard: 0.4a1—(10 releases total)—2021-03-21: Django app for building dashboards using raw SQL queries
- datasette-publish-vercel: 0.9.2—(14 releases total)—2021-03-20: Datasette plugin for publishing data using Vercel
- datasette-auth-passwords: 0.4—(9 releases total)—2021-03-19: Datasette plugin for authentication using passwords
Weeknotes: tableau-to-sqlite, django-sql-dashboard 27 days ago
This week I started a limited production run of my new backend for Vaccinate CA calling, built a tableau-to-sqlite import tool and started working on a subset of Datasette for PostgreSQL and Django called django-sql-dashboard.
Vaccinate CA backend progress
My key project at the moment is building out a new Django-powered backend for the Vaccinate CA call reporting application—where real human beings constantly call pharmacies and medical sites around California to build a comprehensive guide to where the Covid vaccine is available.
As of this week, the new backend is running for a subset of the overall call volume. It’s exciting! It’s also a reminder that the single hardest piece of logic in any crowdsourcing-style application is the logic that gives a human being their next task. I’m continuing to evolve that logic, which is somewhat harder when the system I’m modifying is actively being used.
tableau-to-sqlite
The Vaccinate CA project is constantly on the lookout for new sources of data that might indicate locations that have the vaccine. Some of this data is locked up in Tableau dashboards, which are notoriously tricky to scrape.
When faced with problems like this, I frequently turn to GitHub code search: I’ll find a unique looking token in the data I’m trying to wrangle and run searches to see if anyone on GitHub has written code to handle it.
In doing so, I came across Tableau Scraper—an open source Python library by Bertrand Martel which does a fantastic job of turning a Tableau dashboard into a Pandas DataFrame.
Writing a Pandas DataFrame to a SQLite database is a one-liner: df.to_sql("table-name", sqlite3.connect(db_path)). So I spun up a quick command-line wrapper around the TableauScraper class called tableau-to-sqlite which lets you do the following:
% tableau-to-sqlite tableau.db https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations
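Under the hood that conversion really is just the df.to_sql() call. Here’s a minimal self-contained sketch of the same round trip, with made-up data standing in for a scraped dashboard:

```python
import sqlite3
import pandas as pd

# Made-up data standing in for a scraped Tableau worksheet
df = pd.DataFrame({"county": ["Boone", "Clay"], "doses": [1234, 567]})

# Write the DataFrame to a SQLite table in one call
conn = sqlite3.connect(":memory:")
df.to_sql("vaccinations", conn)

rows = conn.execute(
    "select county, doses from vaccinations order by doses desc"
).fetchall()
```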
Considering how much valuable data is trapped in government Tableau dashboards I’m really excited to point this tool at more sources. The README includes tips on combining this with sqlite-utils to get a CSV or JSON export which can then be tracked using Git scraping.
django-sql-dashboard
I’m continuing to ponder the idea of getting Datasette to talk to PostgreSQL in addition to SQLite, but in the meantime I have a growing Django application that runs against PostgreSQL and a desire to build some quick dashboards against it.
One of Datasette’s key features is the ability to bookmark a read-only SQL query and share that link with other people. It’s SQL injection attacks repurposed as a feature, and it’s proved to be incredibly useful over the past few years.
Here’s an example from earlier this week where I wanted to see how many GitHub issues I had opened and then closed within 60 seconds. The answer is 17!
django-sql-dashboard is my highly experimental exploration of what that idea looks like against a PostgreSQL database, wrapped inside a Django application.
The key idea is to support executing read-only PostgreSQL statements with a strict time limit (set using PostgreSQL’s statement_timeout setting, described here). Users can execute SQL directly, bookmark and share queries and save them to a database table in order to construct persistent dashboards.
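For example, in a PostgreSQL session (the value is in milliseconds; the exact limit used here is just an illustration):

```sql
-- Abort any statement in this session that runs longer than one second
set statement_timeout = 1000;

select count(*) from blog_entry;  -- killed if it exceeds the limit
```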
It’s very early days for the project yet, and I’m still not 100% convinced it’s a good idea, but early signs are very promising.
A fun feature is that it lets you have more than one SQL query on the same page. Here’s what it looks like running against my blog’s database, showing a count query and the months in which I wrote the most blog entries:
Releases this week
- hacker-news-to-sqlite: 0.4—(5 releases total)—2021-03-13: Create a SQLite database containing data pulled from Hacker News
- django-sql-dashboard: 0.1a3—(3 releases total)—2021-03-13: Django app for building dashboards using raw SQL queries
- datasette-ripgrep: 0.7—(11 releases total)—2021-03-11: Web interface for searching your code using ripgrep, built as a Datasette plugin
- tableau-to-sqlite: 0.2—(3 releases total)—2021-03-11: Fetch data from Tableau into a SQLite database
TIL this week
- Pretty-printing all read-only JSON in the Django admin
- Flattening nested JSON objects with jq
- Converting no-decimal-point latitudes and longitudes using jq
- How to almost get facet counts in the Django admin
- Querying for GitHub issues open for less than 60 seconds
- Querying for items stored in UTC that were created on a Thursday in PST
Weeknotes: Datasette and Git scraping at NICAR, VaccinateCA one month ago
This week I virtually attended the NICAR data journalism conference and made a ton of progress on the Django backend for VaccinateCA (see last week).
NICAR 2021
NICAR stands for the National Institute for Computer Assisted Reporting—an acronym that reflects the age of the organization, which started teaching journalists data-driven reporting back in 1989, long before the term “data journalism” became commonplace.
This was my third NICAR and it has now firmly established itself at the top of the list of my favourite conferences. Every year it attracts over 1,000 of the highest quality data nerds—from data journalism veterans who’ve been breaking stories for decades to journalists who are just getting started with data and want to start learning Python or polish up their skills with Excel.
I presented an hour long workshop on Datasette, which I’m planning to turn into the first official Datasette tutorial. I also got to pre-record a five minute lightning talk about Git scraping.
I published the video and notes for that yesterday. It really seemed to strike a nerve at the conference: I showed how you can set up a scheduled scraper using GitHub Actions with just a few lines of YAML configuration, and do so entirely through the GitHub web interface without even opening a text editor.
Pretty much every data journalist wants to run scrapers, and understands the friction involved in maintaining your own dedicated server and crontabs and storage and backups for running them. Being able to do this for free on GitHub’s infrastructure drops that friction down to almost nothing.
The lightning talk led to a last-minute GitHub Actions and Git scraping office hours session being added to the schedule, and I was delighted to have Ryan Murphy from the LA Times join that session to demonstrate the incredible things the LA Times have been doing with scrapers and GitHub Actions. You can see some of their scrapers in the datadesk/california-coronavirus-scrapers repo.
VaccinateCA
The race continues to build out a Django backend for the VaccinateCA project, to collect data on vaccine availability from people making calls on that organization’s behalf.
The new backend is getting perilously close to launch. I’m leaning heavily on the Django admin for this, refreshing my knowledge of how to customize it with things like admin actions and custom filters.
It’s been quite a while since I’ve done anything sophisticated with the Django admin and it has evolved a LOT. In the past I’ve advised people to drop the admin for custom view functions the moment they want to do anything out-of-the-ordinary—I don’t think that advice holds any more. It’s got really good over the years!
A very smart thing the team at VaccinateCA did a month ago is to start logging the full incoming POST bodies for every API request handled by their existing Netlify functions (which then write to Airtable).
This has given me an invaluable tool for testing out the new replacement API: I wrote a script which replays those API logs against my new implementation—allowing me to test that every one of several thousand previously recorded API requests will run without errors against my new code.
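The replay script boils down to something like this hypothetical sketch, where post is whatever callable sends a captured body to the new implementation:

```python
import json

def replay(log_lines, post):
    # Replay previously captured POST bodies against a new
    # implementation, collecting any that raise an error
    failures = []
    for line in log_lines:
        body = json.loads(line)
        try:
            post(body)
        except Exception as exc:
            failures.append((body, exc))
    return failures

# Example: a fake endpoint that rejects reports missing a "location" key
def fake_post(body):
    if "location" not in body:
        raise ValueError("missing location")

logs = ['{"location": "sf-001", "report": "yes"}', '{"report": "no"}']
failures = replay(logs, fake_post)
```

Running the real thing against several thousand logged requests and checking that failures comes back empty is what gives confidence the new API is a drop-in replacement.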
Since this is so valuable, I’ve written code that will log API requests to the new stack directly to the database. Normally I’d shy away from a database table for logging data like this, but the expected traffic is the low thousands of API requests a day—and a few thousand extra database rows per day is a tiny price to pay for having such a high level of visibility into how the API is being used.
(I’m also logging the API requests to PostgreSQL using Django’s JSONField, which means I can analyze them in depth later on using PostgreSQL’s JSON functionality!)
YouTube subtitles
I decided to add proper subtitles to my lightning talk video, and was delighted to learn that the YouTube subtitle editor pre-populates with an automatically generated transcript, which you can then edit in place to fix up spelling, grammar and remove the various “um” and “so” filler words.
This makes creating high quality captions extremely productive. I’ve also added them to the 17 minute Introduction to Datasette and sqlite-utils video that’s embedded on the datasette.io homepage—editing the transcript for that only took about half an hour.
TIL this week
Git scraping, the five minute lightning talk one month ago
I prepared a lightning talk about Git scraping for the NICAR 2021 data journalism conference. In the talk I explain the idea of running scheduled scrapers in GitHub Actions, show some examples and then live code a new scraper for the CDC’s vaccination data using the GitHub web interface. Here’s the video.
Notes from the talk
Here’s the PG&E outage map that I scraped. The trick here is to open the browser developer tools network tab, then order resources by size and see if you can find the JSON resource that contains the most interesting data.
I scraped that outage data into simonw/pge-outages—here’s the commit history (over 40,000 commits now!)
The scraper code itself is here. I wrote about the project in detail in Tracking PG&E outages by scraping to a git repo—my database of outages is at pge-outages.simonwillison.net and the animation I made of outages over time is attached to this tweet.
Here’s a video animation of PG&E’s outages from October 5th up until just a few minutes ago pic.twitter.com/50K3BrROZR
- Simon Willison (@simonw) October 28, 2019
The much simpler scraper for the www.fire.ca.gov/incidents website is at simonw/ca-fires-history.
In the video I used that as the template to create a new scraper for CDC vaccination data—their website is https://covid.cdc.gov/covid-data-tracker/#vaccinations and the API I found using the browser developer tools is https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_data.
The new CDC scraper and the data it has scraped lives in simonw/cdc-vaccination-history.
You can find more examples of Git scraping in the git-scraping GitHub topic.