Simon Willison’s Weblog

On talks, scaling, opensource, certificates, jupyter, ...

 

Recent entries

Automatically playing science communication games with transfer learning and fastai 18 days ago

This weekend was the 9th annual Science Hack Day San Francisco, which was also the 100th Science Hack Day held worldwide.

Natalie and I decided to combine our interests and build something fun.

I’m currently enrolled in Jeremy Howard’s Deep Learning course so I figured this was a great opportunity to try out some computer vision.

Natalie runs the SciComm Games calendar and accompanying @SciCommGames bot to promote and catalogue science communication hashtag games on Twitter.

Hashtag games? Natalie explains them here—essentially they are games run by scientists on Twitter to foster public engagement around an animal or topic by challenging people to identify if a photo is a #cougarOrNot or participate in a #TrickyBirdID or identify #CrowOrNo or many others.

Combining the two… we decided to build a bot that automatically plays these games using computer vision. So far it’s just trying #cougarOrNot—you can see the bot in action at @critter_vision.

Training data from iNaturalist

In order to build a machine learning model, you need to start out with some training data.

I’m a big fan of iNaturalist, a citizen science project that encourages users to upload photographs of wildlife (and plants) they have seen and have their observations verified by a community. Natalie and I used it to build owlsnearme.com earlier this year—the API in particular is fantastic.

iNaturalist has over 5,000 verified sightings of felines (cougars, bobcats, domestic cats and more) in the USA.

The raw data is available as a paginated JSON API. The medium sized photos are just the right size for training a neural network.

I started by grabbing 5,000 images and saving them to disk with a filename that reflected their identified species:

Bobcat_9005106.jpg
Domestic-Cat_10068710.jpg
Bobcat_15713672.jpg
Domestic-Cat_6755280.jpg
Mountain-Lion_9075705.jpg
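
The download script itself was simple. Here’s a rough sketch of what it looked like, written against the iNaturalist v1 observations API; the field names follow my reading of the API docs rather than the exact code we ran, and the taxon and place IDs are placeholders you would need to look up on inaturalist.org:

import requests

API_URL = "https://api.inaturalist.org/v1/observations"

def download_feline_images(taxon_id, place_id, pages=25, per_page=200):
    for page in range(1, pages + 1):
        response = requests.get(API_URL, params={
            "taxon_id": taxon_id,          # placeholder: the Felidae taxon ID
            "place_id": place_id,          # placeholder: the United States place ID
            "quality_grade": "research",   # community-verified observations only
            "per_page": per_page,
            "page": page,
        })
        response.raise_for_status()
        for observation in response.json()["results"]:
            photos = observation.get("photos") or []
            if not photos:
                continue
            species = observation["taxon"].get(
                "preferred_common_name", "Unknown"
            ).replace(" ", "-")
            # The API returns a square thumbnail URL; swap in the medium size
            image_url = photos[0]["url"].replace("square", "medium")
            filename = f"{species}_{observation['id']}.jpg"
            with open(filename, "wb") as f:
                f.write(requests.get(image_url).content)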

Building a model

I’m only one week into the fast.ai course so this really isn’t particularly sophisticated yet, but it was just about good enough to power our hack.

The main technique we are learning in the course is called transfer learning, and it really is shockingly effective. Instead of training a model from scratch you start out with a pre-trained model and use some extra labelled images to train a small number of extra layers.

The initial model we are using is ResNet-34, a 34-layer neural network trained on 1,000 labelled categories in the ImageNet corpus.

In class, we learned to use this technique to get 94% accuracy against the Oxford-IIIT Pet Dataset—around 7,000 images covering 12 cat breeds and 25 dog breeds. In 2012 the researchers at Oxford were able to get 59.21% using a sophisticated model—in 2018 we can get 94% with transfer learning and just a few lines of code.

I started with an example provided in class, which loads and trains images from files on disk using a regular expression that extracts the labels from the filenames.

My full Jupyter notebook is inaturalist-cats.ipynb—the key training code is as follows:

from fastai import *
from fastai.vision import *
cat_images_path = Path('/home/jupyter/.fastai/data/inaturalist-usa-cats/images')
cat_fnames = get_image_files(cat_images_path)
cat_data = ImageDataBunch.from_name_re(
    cat_images_path,
    cat_fnames,
    r'/([^/]+)_\d+.jpg$',
    ds_tfms=get_transforms(),
    size=224
)
cat_data.normalize(imagenet_stats)
cat_learn = ConvLearner(cat_data, models.resnet34, metrics=error_rate)
cat_learn.fit_one_cycle(4)
# Save the generated model to disk
cat_learn.save("usa-inaturalist-cats")

Calling cat_learn.save("usa-inaturalist-cats") created an 84MB file on disk at /home/jupyter/.fastai/data/inaturalist-usa-cats/images/models/usa-inaturalist-cats.pth—I used scp to copy that model down to my laptop.

This model gave me a 24% error rate which is pretty terrible—others on the course have been getting error rates less than 10% for all kinds of interesting problems. My focus was to get a model deployed as an API though so I haven’t spent any additional time fine-tuning things yet.

Deploying the model as an API

The fastai library strongly encourages training on a GPU, using PyTorch and CUDA. I’ve been using an n1-highmem-8 Google Cloud Platform instance with an attached Tesla P4, then running everything in a Jupyter notebook there. This costs around $0.38 an hour—fine for a few hours of training, but way too expensive to permanently host a model.

Thankfully, while a GPU is essential for productively training models it’s not nearly as important for evaluating them against new data. PyTorch can run in CPU mode for that just fine on standard hardware, and the fastai README includes instructions on installing it for a CPU using pip.

I started out by ensuring I could execute my generated model on my own laptop (since PyTorch doesn’t yet work with the GPU built into the MacBook Pro). Once I had that working, I used the resulting code to write a tiny Starlette-powered API server. The code for that can be found in cougar.py.

fastai is under very heavy development and the latest version doesn’t quite have a clean way of loading a model from disk without also including the initial training images, so I had to hack around quite a bit to get this working using clues from the fastai forums. I expect this to get much easier over the next few weeks as the library continues to evolve based on feedback from the current course.
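
Here’s a rough sketch of the shape of that server. It is not the actual cougar.py: it assumes the fastai v1 style open_image()/predict() API from the course, and the load_cat_learner() function is a hypothetical placeholder standing in for the model-loading workaround described above.

from io import BytesIO

from fastai.vision import open_image
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route


def load_cat_learner():
    # Placeholder: rebuild the ConvLearner exactly as in the training notebook,
    # then call .load("usa-inaturalist-cats") to pull in the saved weights
    raise NotImplementedError


cat_learn = load_cat_learner()


async def classify(request):
    # Expects a multipart form upload with a "file" field containing the image
    form = await request.form()
    image_bytes = await form["file"].read()
    img = open_image(BytesIO(image_bytes))
    pred_class, pred_idx, probabilities = cat_learn.predict(img)
    return JSONResponse({
        "prediction": str(pred_class),
        "probabilities": [float(p) for p in probabilities],
    })


app = Starlette(routes=[Route("/upload", classify, methods=["POST"])])

Something like uvicorn cougar:app will serve it locally; the real version also accepts an image URL, not just an upload.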

To deploy the API I wrote a Dockerfile and shipped it to Zeit Now. Now remains my go-to choice for this kind of project, though unfortunately their new (and brilliant) v2 platform imposes a 100MB image size limit—not nearly enough when the model file itself weighs in at 83 MB. Thankfully it’s still possible to specify their v1 cloud, which is more forgiving of larger applications.

Here’s the result: an API which can accept either the URL to an image or an uploaded image file: https://cougar-or-not.now.sh/—try it out with a cougar and a bobcat.

The Twitter Bot

Natalie built the Twitter bot. It runs as a scheduled task on Heroku and works by checking for new #cougarOrNot tweets from Dr. Michelle LaRue, extracting any images, passing them to my API and replying with a tweet that summarizes the results. Take a look at its recent replies to get a feel for how well it is doing.

Amusingly, Dr. LaRue frequently tweets memes to promote upcoming competitions and marks them with the same hashtag. The bot appears to think that most of the memes are bobcats! I should definitely spend some time tuning that model.

Science Hack Day was great fun. A big thanks to the organizing team, and congrats to all of the other participants. I’m really looking forward to the next one.

Plus… we won a medal!

How to Instantly Publish Data to the Internet with Datasette 22 days ago

I spoke about my Datasette project at PyBay in August and they’ve just posted the video of my talk.

I’ve also published the annotated slides from the talk, using a similar format to my Redis tutorial from back in 2010.

How I moderated the State of Django panel at DjangoCon US. 25 days ago

On Wednesday last week I moderated the State of Django panel as the closing session for DjangoCon US 2018.

I think it went well, so I’m writing some notes on exactly how we did it. In my experience it’s worth doing this for things like public speaking: in six months’ time I might moderate another panel and I’ll be desperately trying to remember what went right last time.

Panels are hard. Bad panels are way too common, to the point that some good conferences actively avoid having panels at all.

In my opinion, a good panel has a number of important attributes:

  • The panel needs a coherent theme. It shouldn’t just be several independent speakers that happen to be sharing the same time slot.
  • Panels need to be balanced. Having just one or two of the speakers monopolize the conversation is bad for the audience and bad for the panelists themselves.
  • The moderator is there to facilitate the conversation, not to be the center of attention. I love public speaking so I feel the need to be particularly cautious here.
  • Panelists need to have diverse perspectives on the topics under discussion. A panel where everyone agrees with each other and makes the same points is a very boring panel indeed.

I originally pitched the panel to the DjangoCon US organizing committee as a “State of Django” conversation where core maintainers would talk about the current state of the project.

They countered with a much better idea: a panel that encompassed both the state of the Django framework and the community and ecosystem that it exists within. Since DjangoCon is primarily about bringing that community together this was a much better fit for the conference, and would make for a much more interesting and relevant discussion.

I worked together with the conference organizers to find our panelists. Nicholle James in particular was the driving force behind assembling the panelists and ensuring everything was in place for the panel to succeed.

We ended up with a panel representing a comprehensive cross-section of the organizations that make the Django community work:

  • Andrew Godwin, representing Django Core
  • Anna Makarudze, representing the DSF, Django Girls and Python Africa
  • Frank Wiles, President of the DSF
  • Jeff Triplett, President of DEFNA, member of the Board of Directors for the PSF
  • Josue Balandrano Coronel, representing DEFNA
  • Katherine Michel, representing DEFNA and DjangoCon US Website Chair
  • Kojo Idrissa, DEFNA North American Ambassador
  • Rachell Calhoun, representing Django Girls and PyLadies

As it was the closing session for the conference, I wanted the panel to both celebrate the progress of the community and project and to explore what we need to do next: what should we be working on to improve Django and its community in the future?

I had some initial thoughts on topics, but since the panel was scheduled for the last session of the conference I decided to spend the conference itself firming up the topics that would be discussed. This was a really good call: we got to create an agenda for the panel that was almost entirely informed by the other conference sessions combined with hot topics from the hallway track. We also asked conference attendees for their suggestions via an online form, and used those suggestions to further inform the topics that were discussed.

I made sure to have a 10-15 minute conversation one-on-one with each of the panelists during the conference. We then got together for an hour at lunch before the panel to sync up with the topics and themes we would be discussing.

Topic and themes

Our pre-panel conversations highlighted a powerful theme for the panel itself, which I ended up summarizing as “What can the Django project learn from the Django community?” This formed the framework for the other themes of the panel.

The themes themselves were:

One of the hardest parts for me was figuring out the order in which we would tackle these themes. I ended up settling on the above order about half an hour before the panel started.

Opening and closing

With eight panelists, ensuring that introductions didn’t drag on too long was particularly important. I asked each panelist to introduce themselves with a couple of sentences that highlighted the organizations they were affiliated with that were relevant to the panel. For our chosen group of panelists this was definitely the right framing.

I then asked each panelist to be prepared to close the panel with a call to action: something an audience member could actively do that would support the community going forward. This worked really well: it provided a great, actionable note to end both the panel and the conference.

Preparing and running the panel

We used our panel lunch together to check that no one would have calls to action that overlapped too much, and to provide a rough indication of who had things to say about each topic we planned to discuss.

This turned out to be essential: I’ve been on smaller panels where the panelists have been able to riff easily on each other’s points, but with eight panelists it turned out not everyone could even see each other, so as panel moderator it fell on me to direct questions to individuals and then prompt others for follow-ups. Thankfully the panel lunch combined with the one-to-one conversations gave me the information I needed for this.

I had written down a selection of questions for each of the themes. Having a selection turned out to be crucial: a few times the panelists talked about material that I had planned to cover in a later section, so I had to adapt as we went along. In the future I’ll spend more time on this: these written ideas were a crucial component in keeping the panel flowing in the right direction.

With everything in place the panel itself was a case of concentrating on what everyone was saying and using the selection of the next questions (plus careful ad-libbing) to guide the conversation along the preselected themes. I also tried to keep mental track of who had done the most speaking so I could ensure the conversation stayed as balanced as possible by inviting other panelists into the conversation.

The video of the panel should be out in around three weeks time, at which point you can evaluate for yourself if we managed to do a good job of it. I’m really happy with the feedback we got after the panel, and I plan to use a similar process for panels I’m involved with in the future.

The interesting ideas in Datasette one month ago

Datasette (previously) is my open source tool for exploring and publishing structured data. There are a lot of ideas embedded in Datasette. I realized that I haven’t put many of them into writing.

Publishing read-only data
Bundling the data with the code
SQLite as the underlying data engine
Far-future cache expiration
Publishing as a core feature
License and source metadata
Facet everything
Respect for CSV
SQL as an API language
Optimistic query execution with time limits
Keyset pagination
Interactive demos based on the unit tests
Documentation unit tests

Publishing read-only data

Datasette provides a read-only API to your data. It makes no attempt to deal with writes. Avoiding writes entirely is fundamental to a plethora of interesting properties, many of which are expanded on further below. In brief:

  • Hosting web applications with no read/write persistence requirements is incredibly cheap in 2018—often free (both ZEIT Now and Heroku have generous free tiers). This is a big deal: even having to pay a few dollars a month is enough to disincentivise sharing data, since now you have to figure out who will pay and ensure the payments don’t expire in the future.
  • Being read-only makes it trivial to scale: just add more instances, each with their own copy of the data. All of the hard problems in scaling web applications that relate to writable data stores can be skipped entirely.
  • Since the database file is opened using SQLite’s immutable mode, we can accept arbitrary SQL queries with no risk of them corrupting the data.

Any time your data changes, you need to publish a brand new copy of the whole database. With the right hosting this is easy: deploy a brand new copy of your data and application in parallel to your existing live deployment, then switch over incoming HTTP traffic to your API at the load balancer level. Heroku and Zeit Now both support this strategy out of the box.

Bundling the data with the code

Since the data is read-only and is encapsulated in a single binary SQLite database file, we can bundle the data as part of the app. This means we can trivially create and publish Docker images that provide both the data and the API and UI for accessing it. We can also publish to any hosting provider that will allow us to run a Python application, without also needing to provision a mutable database.

The datasette package command takes one or more SQLite databases and bundles them together with the Datasette application in a single Docker image, ready to be deployed anywhere that can run Docker containers.

SQLite as the underlying data engine

Datasette encourages people to use SQLite as a standard format for publishing data.

Relational databases are great: once you know how to use them, you can represent any data you can imagine using a carefully designed schema.

What about data that’s too unstructured to fit a relational schema? SQLite includes excellent support for JSON data—so if you can’t shape your data to fit a table schema you can instead store it as text blobs of JSON—and use SQLite’s JSON functions to filter by or extract specific fields.
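
For example, using Python’s built-in sqlite3 module (most SQLite builds ship with the JSON1 functions these days):

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table documents (id integer primary key, body text)")
conn.execute(
    "insert into documents (body) values (?)",
    (json.dumps({"species": "Bobcat", "location": {"state": "CA"}}),),
)

# json_extract() pulls individual fields back out of the stored JSON text
rows = conn.execute("""
    select json_extract(body, '$.species') as species,
           json_extract(body, '$.location.state') as state
    from documents
    where json_extract(body, '$.species') = 'Bobcat'
""").fetchall()
print(rows)  # [('Bobcat', 'CA')]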

What about binary data? Even that’s covered: SQLite will happily store binary blobs. My datasette-render-images plugin (live demo here) is one example of a tool that works with binary image data stored in SQLite blobs.

What if my data is too big? Datasette is not a “big data” tool, but if your definition of big data is something that won’t fit in RAM that threshold is growing all the time (2TB of RAM on a single AWS instance now costs less than $4/hour).

I’ve personally had great results from multiple GB SQLite databases and Datasette. The theoretical maximum size of a single SQLite database is around 140TB.

SQLite also has built-in support for surprisingly good full-text search, and thanks to being extensible via modules has excellent geospatial functionality in the form of the SpatiaLite extension. Datasette benefits enormously from this wider ecosystem.

The reason most developers avoid SQLite for production web applications is that it doesn’t deal brilliantly with large volumes of concurrent writes. Since Datasette is read-only we can entirely ignore this limitation.

Far-future cache expiration

Since the data in a Datasette instance never changes, why not cache calls to it forever?

Datasette sends a far future HTTP cache expiry header with every API response. This means that browsers will only ever fetch data the first time a specific URL is accessed, and if you host Datasette behind a CDN such as Fastly or Cloudflare each unique API call will hit Datasette just once and then be cached essentially forever by the CDN.

This means it’s safe to deploy a JavaScript app using an inexpensively hosted Datasette-backed API to the front page of even a high traffic site—the CDN will easily take the load.

Zeit added Cloudflare to every deployment (even their free tier) back in July, so if you are hosted there you get this CDN benefit for free.

What if you re-publish an updated copy of your data? Datasette has that covered too. You may have noticed that every Datasette database gets a hashed suffix automatically when it is deployed:

https://fivethirtyeight.datasettes.com/fivethirtyeight-c9e67c4

This suffix is based on the SHA256 hash of the entire database file contents—so any change to the data will result in new URLs. If you query a previous suffix Datasette will notice and redirect you to the new one.
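
The idea is easy to illustrate: hash the file, keep a short prefix of the digest and use that in the URL. The seven character truncation here is a guess based on the example URL above, and Datasette’s actual implementation may differ in its details.

import hashlib

def database_suffix(path, length=7):
    # Any change to any row in the file produces a different digest,
    # and therefore a brand new set of URLs
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return digest[:length]

print(database_suffix("fivethirtyeight.db"))  # something like "c9e67c4"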

If you know you’ll be changing your data, you can build your application against the non-suffixed URL. This will not be cached and will always 302 redirect to the correct version (and these redirects are extremely fast).

https://fivethirtyeight.datasettes.com/fivethirtyeight/alcohol-consumption%2Fdrinks.json

The redirect sends an HTTP/2 push header such that if you are running behind a CDN that understands push (such as Cloudflare) your browser won’t have to make two requests to follow the redirect. You can use the Chrome DevTools to see this in action:

Chrome DevTools showing a redirect initiated by an HTTP/2 push

And finally, if you need to opt out of HTTP caching for some reason you can disable it on a per-request basis by including ?_ttl=0 in the URL query string—for example, if you want to return a random member of the Avengers it doesn’t make sense to cache the response:

https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+[avengers%2Favengers]+order+by+random()+limit+1&_ttl=0

Publishing as a core feature

Datasette aims to reduce the friction for publishing interesting data online as much as possible.

To this end, Datasette includes a “publish” subcommand:

# deploy to Heroku
datasette publish heroku mydatabase.db
# Or deploy to Zeit Now
datasette publish now mydatabase.db

These commands take one or more SQLite databases, upload them to a hosting provider, configure a Datasette instance to serve them and return the public URL of the newly deployed application.

Out of the box, Datasette can publish to either Heroku or to Zeit Now. The publish_subcommand plugin hook means other providers can be supported by writing plugins.
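
To give a feel for the hook, here’s a sketch of what a third-party publishing plugin could look like. It assumes the publish_subcommand hook hands your plugin the click command group behind datasette publish; check the plugin documentation for the exact signature and the common options a real provider is expected to support. The myhost provider here is entirely hypothetical.

import click
from datasette import hookimpl


@hookimpl
def publish_subcommand(publish):
    # "publish" is the click group behind "datasette publish"; adding a
    # command to it registers a new provider
    @publish.command()
    @click.argument("files", type=click.Path(exists=True), nargs=-1)
    def myhost(files):
        "Deploy the given SQLite databases to the hypothetical myhost provider"
        for path in files:
            click.echo(f"Deploying {path} to myhost...")
            # ...provider-specific upload and configuration would go here...

With a plugin like that installed, datasette publish myhost mydatabase.db becomes available alongside the built-in providers.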

License and source metadata

Datasette believes that data should be accompanied by source information and a license, whenever possible. The metadata.json file that can be bundled with your data supports these. You can also provide source and license information when you run datasette publish:

datasette publish now fivethirtyeight.db \
    --source="FiveThirtyEight" \
    --source_url="https://github.com/fivethirtyeight/data" \
    --license="CC BY 4.0" \
    --license_url="https://creativecommons.org/licenses/by/4.0/"

When you use these options Datasette will create the corresponding metadata.json file for you as part of the deployment.

Facet everything

I really love faceted search: it’s the first tool I turn to whenever I want to start understanding a collection of data. I’ve built faceted search engines on top of Solr, Elasticsearch and PostgreSQL and many of my favourite tools (like Splunk and Datadog) have it as a core feature.

Datasette automatically attempts to calculate facets against every table. You can read more about the Datasette Facets feature here—as a huge faceted search fan it’s one of my all-time favourite features of the project. Now I can add SQLite to the list of technologies I’ve used to build faceted search!

Respect for CSV

CSV is by far the most common format for sharing and publishing data online. Almost every useful data tool has the ability to export to it, and it remains the lingua franca of spreadsheet import and export.

It has many flaws: it can’t easily represent nested data structures, escaping rules for values containing commas are inconsistently implemented and it doesn’t have a standard way of representing character encoding.

Datasette aims to promote SQLite as a much better default format for publishing data. I would much rather download a .db file full of pre-structured data than download a .csv and then have to re-structure it as a separate piece of work.

But interacting well with the enormous CSV ecosystem is essential. Datasette has deep CSV export functionality: any data you can see, you can export—including the results of arbitrary SQL queries. If your query can be paginated Datasette can stream down every page in a single CSV file for you.

Datasette’s sister-tool csvs-to-sqlite handles the other side of the equation: importing data from CSV into SQLite tables. And the Datasette Publish web application allows users to upload their CSVs and have them deployed directly to their own fresh Datasette instance—no command line required.

SQL as an API language

A lot of people these days are excited about GraphQL, because it allows API clients to request exactly the data they need, including traversing into related objects in a single query.

Guess what? SQL has been able to do that since the 1970s!

There are a number of reasons most APIs don’t allow people to pass them arbitrary SQL queries:

  • Security: we don’t want people messing up our data
  • Performance: what if someone sends an accidental (or deliberate) expensive query that exhausts our resources?
  • Hiding implementation details: if people write SQL against our API we can never change the structure of our database tables

Datasette has answers to all three.

On security: the data is read-only, using SQLite’s immutable mode. You can’t damage it with a query—INSERTs and UPDATEs will simply throw harmless errors.
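
You can see this behaviour for yourself with Python’s sqlite3 module and a SQLite URI filename. This is just a quick illustration of immutable mode, not Datasette’s own code:

import sqlite3

# Build a tiny database file first, so the example is self-contained
setup = sqlite3.connect("demo.db")
setup.execute("create table if not exists t (id integer)")
setup.commit()
setup.close()

# Re-open it using SQLite's immutable mode via a URI filename
conn = sqlite3.connect("file:demo.db?immutable=1", uri=True)
conn.execute("select count(*) from t")          # reads work fine
try:
    conn.execute("insert into t values (1)")    # writes do not
except sqlite3.OperationalError as e:
    print("Write rejected:", e)  # "attempt to write a readonly database"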

On performance: SQLite has a mechanism for canceling queries that take longer than a certain threshold. Datasette sets this to one second by default, though you can alter that configuration if you need to (I often bump it up to ten seconds when exploring multi-GB data on my laptop).
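
The mechanism in question is SQLite’s progress handler, which can abort a query partway through. Here’s a simplified sketch of the idea using Python’s sqlite3 module; Datasette’s actual implementation differs in its details.

import sqlite3
import time

def execute_with_time_limit(conn, sql, params=None, time_limit_ms=1000):
    deadline = time.monotonic() + time_limit_ms / 1000
    # The handler runs every N SQLite virtual machine instructions; returning
    # a non-zero value aborts the current query with an OperationalError
    conn.set_progress_handler(
        lambda: 1 if time.monotonic() > deadline else 0, 1000
    )
    try:
        return conn.execute(sql, params or []).fetchall()
    finally:
        conn.set_progress_handler(None, 1000)

conn = sqlite3.connect(":memory:")
try:
    execute_with_time_limit(
        conn,
        # A deliberately slow query: count ten million generated rows
        "with recursive c(x) as (select 1 union all select x + 1 from c limit 10000000) select count(*) from c",
        time_limit_ms=50,
    )
except sqlite3.OperationalError as error:
    print("Query cancelled:", error)  # "interrupted"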

On hidden implementation details: since we are publishing static data rather than maintaining an evolving API, we can mostly ignore this issue. If you are really worried about it you can take advantage of canned queries and SQL view definitions to expose a carefully selected forward-compatible view into your data.

Optimistic query execution with time limits

I mentioned Datasette’s SQL time limits above. These aren’t just there to avoid malicious queries: the idea of “optimistic SQL evaluation” is baked into some of Datasette’s core features.

Consider suggested facets—where Datasette inspects any table you view and tries to suggest columns that are worth faceting against.

The way this works is Datasette loops over every column in the table and runs a query to see if there are less than 20 unique values for that column. On a large table this could take a prohibitive amount of time, so Datasette sets an aggressive timeout on those queries: just 50ms. If the query fails to run in that time it is silently dropped and the column is not listed as a suggested facet.
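
Sticking with the execute_with_time_limit() helper from the previous sketch, the suggestion logic boils down to something like this. It is a simplification: the query Datasette actually runs and the exact thresholds differ.

import sqlite3

def suggest_facets(conn, table, columns, max_values=20, budget_ms=50):
    suggested = []
    for column in columns:
        sql = f'select count(distinct "{column}") from "{table}"'
        try:
            (distinct_count,) = execute_with_time_limit(
                conn, sql, time_limit_ms=budget_ms
            )[0]
        except sqlite3.OperationalError:
            continue  # the check ran out of time, so skip this column
        if distinct_count < max_values:
            suggested.append(column)
    return suggested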

Datasette’s JSON API provides a mechanism for JavaScript applications to use that same pattern. If you add ?_timelimit=20 to any Datasette API call, the underlying query will only get 20ms to run. If it goes over you’ll get a very fast error response from the API. This means you can design your own features that attempt to optimistically run expensive queries without damaging the performance of your app.

Keyset pagination

SQL pagination using OFFSET/LIMIT has a fatal flaw: if you request page number 300 at 20 per page the underlying SQL engine needs to calculate and sort all 6,000 preceding rows before it can return the 20 you have requested.

This does not scale at all well.

Keyset pagination (often known by other names, including cursor-based pagination) is a far more efficient way to paginate through data. It works against ordered data. Each page is returned with a token representing the last record you saw, then when you request the next page the engine merely has to filter for records that are greater than that tokenized value and scan through the next 20 of them.

(Actually, it scans through 21. By requesting one more record than you intend to display you can detect if another page of results exists—if you ask for 21 but get back 20 or less you know you are on the last page.)

Datasette’s table view includes a sophisticated implementation of keyset pagination.

Datasette defaults to sorting by primary key (or SQLite rowid). This is perfect for efficient pagination: running a select against the primary key column for values greater than X is one of the fastest range scan queries any database can support. This allows users to paginate as deep as they like without paying the offset/limit performance penalty.

This is also how the “export all rows as CSV” option works: when you select that option, Datasette opens a stream to your browser and internally starts keyset-pagination over the entire table. This keeps resource usage in check even while streaming back millions of rows.

Here’s where Datasette gets fancy: it handles keyset pagination for any other sort order as well. If you sort by any column and click “next” you’ll be requesting the next set of rows after the last value you saw. And this even works for columns containing duplicate values: If you sort by such a column, Datasette actually sorts by that column combined with the primary key. The “next” pagination token it generates encodes both the sorted value and the primary key, allowing it to correctly serve you the next page when you click the link.
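
Here’s a self-contained sketch of that compound-key trick against a toy SQLite table. It is my own simplified version rather than Datasette’s implementation, and the row-value comparison requires SQLite 3.15 or later.

import sqlite3

PAGE_SIZE = 20

def fetch_page(conn, after=None):
    # "after" is the (species, id) pair from the last row of the previous page
    where = ""
    params = []
    if after is not None:
        where = "where (species, id) > (?, ?)"
        params = list(after)
    rows = conn.execute(
        f"select id, species from sightings {where} "
        "order by species, id limit ?",
        params + [PAGE_SIZE + 1],  # one extra row tells us if a next page exists
    ).fetchall()
    has_next = len(rows) > PAGE_SIZE
    rows = rows[:PAGE_SIZE]
    next_token = (rows[-1][1], rows[-1][0]) if has_next else None
    return rows, next_token

conn = sqlite3.connect(":memory:")
conn.execute("create table sightings (id integer primary key, species text)")
conn.executemany(
    "insert into sightings (species) values (?)",
    [("Bobcat",), ("Mountain-Lion",), ("Domestic-Cat",)] * 30,
)

token = None
while True:
    page, token = fetch_page(conn, after=token)
    print(f"Fetched {len(page)} rows, next token: {token}")
    if token is None:
        break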

Try clicking “next” on this page to see keyset pagination against a sorted column in action.

Interactive demos based on the unit tests

I love interactive demos. I decided it would be useful if every single release of Datasette had a permanent interactive demo illustrating its features.

Thanks to Zeit Now, this was pretty easy to set up. I’ve actually taken it a step further: every successful push to master on GitHub is also deployed to a permanent URL.

Some examples:

The database that is used for this demo is the exact same database that is created by Datasette’s unit test fixtures. The unit tests are already designed to exercise every feature, so reusing them for a live demo makes a lot of sense.

You can view this test database on your own machine by checking out the full Datasette repository from GitHub and running the following:

python tests/fixtures.py fixtures.db metadata.json
datasette fixtures.db -m metadata.json

Here’s the code in the Datasette Travis CI configuration that deploys a live demo for every commit and every released tag.

Documentation unit tests

I wrote about the Documentation unit tests pattern back in July.

Datasette’s unit tests include some assertions that ensure that every plugin hook, configuration setting and underlying view class is mentioned in the documentation. A commit or pull request that adds or modifies these without also updating the documentation (or at least ensuring there is a corresponding heading in the docs) will fail its tests.
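
The pattern is simple enough to sketch in a few lines of pytest. Datasette gathers the names to check by introspection; the hard-coded list and docs path here are just to keep the example short.

from pathlib import Path

import pytest

DOCS_DIR = Path(__file__).parent.parent / "docs"  # adjust to your project layout

MUST_BE_DOCUMENTED = ["publish_subcommand", "prepare_connection", "extra_css_urls"]

@pytest.fixture(scope="session")
def documentation_text():
    return "\n".join(path.read_text() for path in DOCS_DIR.glob("*.rst"))

@pytest.mark.parametrize("name", MUST_BE_DOCUMENTED)
def test_is_documented(documentation_text, name):
    assert name in documentation_text, f"{name} is not mentioned in the docs"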

Learning more

Datasette’s documentation is in pretty good shape now, and the changelog provides a detailed overview of new features that I’ve added to the project. I presented Datasette at the PyBay conference in August and I’ve published my annotated slides from that talk. I was interviewed about Datasette for the Changelog podcast in May and my notes from that conversation include some of my favourite demos.

Datasette now has an official Twitter account—you can follow @datasetteproj there for updates about the project.

Letterboxing on Lundy one month ago

Last week Natalie and I spent a delightful two days with our friends Hannah and Adam on the beautiful island of Lundy in the Bristol Channel, 12 miles off the coast of North Devon.

I’ve been wanting to visit Lundy for years. The island is managed by the Landmark Trust, a UK charity who look after historic buildings and make them available as holiday rentals.

Our first experience with the Landmark Trust was the original /dev/fort back in 2008 when we rented a Napoleonic Sea Fortress on Alderney in the Channel Islands. Ever since then I’ve been keeping an eye out for opportunities to try out more of their properties: just two weeks ago we stayed in Wortham Manor and used it as a staging ground to help prepare a family wedding.

I cannot recommend the Landmark Trust experience strongly enough: each property is unique and fascinating, they are kept in great condition and if you split the cost of a larger rental among a group of friends the price can be comparable to a youth hostel.

Lundy is their Crown Jewels: they’ve been looking after the island since the 1960s and now offer 23 self-catering properties there.

We took the ferry out on Tuesday morning (a truly horrific two hour voyage) and back again on Thursday evening (thankfully much calmer). Once on Lundy we stayed in Castle Keep South, a two bedroom house in the keep of a castle built in the 13th century by Henry III, after he retook the island from the apparently traitorous William de Marisco (who was then hanged, drawn and quartered for good measure—apparently one of the first ever uses of that punishment). Lundy has some very interesting history attached to it.

The island itself is utterly spectacular. Three miles long, half a mile wide, surrounded by craggy cliffs and mostly topped with ferns and bracken. Not a lot of trees except for the more sheltered eastern side. A charming population of sheep, goats, Lundy Ponies and some highland cattle with extremely intimidating horns.

A highland cow that looks like Boris Johnson

(“They’re complete softies. We call that one Boris because he looks like Boris Johnson”—a lady who works in the Tavern)

Lundy has three lighthouses (two operational, one retired), the aforementioned castle, a charming little village, a church and numerous fascinating ruins and isolated buildings, many of which you can stay in. It has the remains of two crashed WWII German Heinkel He 111 bombers (which we eventually tracked down).

It also hosts what is quite possibly the world’s best Letterboxing trail.

Letterboxing?

Letterboxing is an outdoor activity that is primarily pursued in the UK. It consists of weatherproof boxes hidden in remote locations, usually under a pile of rocks, containing a notebook and a custom stamp. The location of the boxes is provided by a set of clues. Given the clues, your challenge is to find all of the boxes and collect their stamps in your notebook.

On Lundy the clues can be purchased from the village shop.

I had dabbled with Letterboxing a tiny bit in the past but it hadn’t really clicked with me until Natalie (a keen letterboxer) encouraged us to give it a go on Lundy.

It ended up occupying almost every waking moment of our time there, and taking us to every far-flung corner of the island.

There are 28 letterboxes on Lundy. We managed to get 27 of them—and we would have got them all, if the last one hadn’t been located on a beach that was shut off from the public due to grey seals using it to raise their newly born pups! The pups were cute enough that we forgave them.

To give you an idea for how it works, here’s the clue for letterbox 27, “The Ugly”:

The Ugly: From the lookout hut near the flagpole on the east side of Millcombe, walk in the direction of the South Light until it is on a bearing of 130°, Bramble Villa 210° and Millcombe due west. The letterbox is beneath you.

There were letterboxes in lighthouses, letterboxes in ruins, letterboxes perilously close to cliff-faces, letterboxes in church pews, letterboxes in quarries, letterboxes in caves. If you thought that letterboxing was for kids, after scrabbling down more perilous cliff paths than I can count I can assure you it isn’t!

On Thursday I clocked up 24,000 steps walking 11 miles and burned 1,643 calories. For comparison, when I ran the half marathon last year I only burned 1,222. These GPS tracks from my Apple Watch give a good impression of how far we ended up walking on our second day of searching.

When we checked the letterboxing log book in the Tavern on Wednesday evening we found most people who attempt to hit all 28 letterboxes spread it out over a much more sensible timeframe. I’m not sure that I would recommend trying to fit it in to just two days, but it’s hard to imagine a better way of adding extra purpose to an exploration of the island.

Should you attempt letterboxing on Lundy (and if you can get out there you really should consider it), a few tips:

  • If in doubt, look for the paths. Most of the harder to find letterboxes were at least located near an obvious worn path.
  • “Earthquake” is a nightmare. The clue really didn’t help us—we ended up performing a vigorous search of most of the area next to (not inside) the earthquake fault.
  • The iPhone compass app is really useful for finding bearings. We didn’t use a regular compass at all.
  • If you get stuck, check for extra clues in the letterboxing log book in the tavern. This helped us crack Earthquake.
  • There’s more than one pond. The quarry pond is very obvious once you find it.
  • Take as many different maps as you can find—many of the clues reference named landmarks that may not appear on the letterboxing clue map. We forgot to grab an offline copy of Lundy in the Google Maps app and regretted it.

If you find yourself in Ilfracombe on the way to or from Lundy, the Ilfracombe Museum is well worth your time. It’s a classic example in the genre of “eccentric collects a wide variety of things, builds a museum for them”. Highlights include a cupboard full of pickled bats and a drawer full of 100-year-old wedding cake samples.

The subset of reStructuredText worth committing to memory two months ago

reStructuredText is the standard for documentation in the Python world.

It’s a bit weird. It’s like Markdown but older, more feature-filled and in my experience significantly harder to remember.

There are plenty of guides and cheatsheets out there, but when writing simple documentation for software projects I think there’s a subset that is worth committing to memory. I’ll describe that subset here.

First though: when writing reStructuredText having a live preview render is extremely useful. I use rst.ninjs.org for this. If you don’t trust that hosted version (it round-trips your documentation through the server in order to render it) you can run a local copy instead using the underlying source code.

Paragraphs

Paragraphs work the same way as Markdown and plain text. They are nice and easy.

This is the first paragraph. No need to wrap the text (though you can wrap at e.g. 80 characters without affecting rendering).

This is the second paragraph.

Headings

reStructuredText section headings are a little surprising.

Markdown has multiple levels of heading, each with a different number of prefix hashes:

# Markdown heading level 1
## Markdown heading level 2
..
###### Markdown heading level 6

In reStructuredText there is no single format for these different levels. Instead, the format you use first will be treated as an H1, the next format as an H2 and so on. Here’s the description from the official documentation:

Sections are identified through their titles, which are marked up with adornment: “underlines” below the title text, or underlines and matching “overlines” above the title. An underline/overline is a single repeated punctuation character that begins in column 1 and forms a line extending at least as far as the right edge of the title text. Specifically, an underline/overline character may be any non-alphanumeric printable 7-bit ASCII character. […] There may be any number of levels of section titles, although some output formats may have limits (HTML has 6 levels).

This is deeply confusing. I suggest instead standardizing on the following:

=====================
 This is a heading 1
=====================

This heading has = signs both above and below, and they extend past the text by a single character in each direction.

This is a heading 2
===================

This is a heading 3
-------------------

This is a heading 4
~~~~~~~~~~~~~~~~~~~

If you need more levels, you can invent them using whatever character you like—but try to stay consistent within your project.

Bulleted lists

As with headings, you can use a variety of characters for these. I suggest sticking with asterisks.

A blank line is required before starting a bulleted list.

* A bullet point
* Another bullet point

If you decide to wrap your text (I tend not to) you must maintain the indentation on the wrapped lines:

* A bulleted list item. Since the text is wrapped each subsequent
  line of text must be indented by two spaces.
* Second list item.

Nested lists are supported, but you MUST leave a blank line above the first inner list bullet point or they won’t work:

* This is the first bullet list item. Here comes a sub-list:

  * Hello sublist
  * Sublist two

* Back to the parent list.

Inline markup

I only use three inline markup features: bold, italic and code.

**Bold text** is surrounded by two asterisks.

*Italic text* is one asterisk.

``inline code`` uses two backticks at either side of the code.

Links

Links are my least favorite feature of reStructuredText. There are several different ways of including them, but the one I use most often (and hence have committed to memory) is this one:

`a link, note the trailing underscores <http://example.com>`__

So that’s a backtick at the start, then the link text, then the URL contained in greater than / less than symbols, then another backtick and then TWO underscores to finish it off.

Why two underscores? Because if you only use one, the text part of the link is remembered and can be used to duplicate your link later on—see example below. In my experience this is more trouble than it’s worth.

A more complex link syntax example (documented here) looks like this:

See the `Python home page`_ for info.

This link_ is an alias to the link above.

.. _Python home page: http://www.python.org
.. _link: `Python home page`_

I can’t remember this at all, so I stick with the anonymous hyperlink syntax instead.

Code blocks

The easiest way to embed a block of code is like this:

::

    # This is a code example
    print("It needs to be indented")

The :: indicates that a code block is coming up. The blank line after the :: before the indentation starts is required.

Most renderers have the ability to apply syntax highlighting. To specify that a block should have syntax highlighting for a specific language, replace the :: in the above example with one of the following:

.. code-block:: sql

.. code-block:: javascript

.. code-block:: python

Images

There are plenty of options for embedding images, but the most basic syntax (worth remembering) looks like this:

.. image:: full_text_search.png

This will embed an image of that filename that sits in the same directory as the document itself.

Internal references

In my opinion this is the key feature that makes reStructuredText more powerful than Markdown for larger documentation projects.

Again, there is a vast and complex array of options around this, but the key thing to remember is how to add a reference name to a specific section and how to link to that section later on.

Names are applied to section headings by adding some magic text before the heading itself. For example:

.. _full_text_search:

Full-text search
================

Note the format: two periods, then a space, then an underscore, then the label, then a colon at the end.

The label full_text_search is now associated with that heading. I can link to it from any page in my documentation project like so:

:ref:`full_text_search`

Note that the leading underscore isn’t included in this reference.

The link text displayed will be the text of the heading, in this case “Full-text search”. If I want to replace that link text with something custom, I can do so like this:

Learn about the :ref:`search feature <full_text_search>`.

This syntax is similar to the inline hyperlink syntax described above.

Learning more

I extracted the patterns I describe in this post from the Datasette documentation—I encourage you to dig around in the source code to see how it all works.

The definitive guide to reStructuredText is the reStructuredText Markup Specification. My favourite of the various quick references is the Restructured Text (reST) and Sphinx CheatSheet by Thomas Cokelaer.

I’m a huge fan of Read the Docs for hosting documentation—it’s the key reason I use reStructuredText in my projects. Unsurprisingly, they offer extensive documentation to help you make the most of their platform.

Elsewhere

12th November 2018

  • Squoosh. This is by far the most useful example of web assembly I’ve seen so far: Squoosh is a progressive web app for image optimization (JPEG, PNG, GIF, SVG and more) which uses emscripten-compiled versions of best in breed image codec implementations to provide a browser interface for applying and previewing those optimizations. #

10th November 2018

  • The premise of “The Good Place” is absurdly high concept. It sounds less like the basis of a prime-time sitcom than an experimental puppet show conducted, without a permit, on the woodsy edge of a large public park.

    Sam Anderson #

6th November 2018

  • A Netflix Web Performance Case Study (via) Fascinating description of how Netflix knocked the 3G loading times of their homepage in half for logged-out users by rendering the React templates on the server-side and using the bare amount of vanilla JavaScript necessary to get the homepage interactive—then XHR prefetching the full React code needed to power the subsequent signup flow. Via Alex Russell, who tweets “I’m increasingly optimistic that we can cap JS emissions by quarantining legacy frameworks to the server side.” #
  • Optimizing Django Admin Paginator. The Django admin paginator uses a count(*) to calculate the total number of rows, so it knows how many pages to display. This makes it unpleasantly slow over large datasets. Haki Benita has an ingenious solution: drop in a custom paginator which uses the PostgreSQL “SET LOCAL statement_timeout TO 200” statement first, then if a timeout error is raised returns 9999999999 as the count instead. This means small tables get accurate page counts and giant tables display in the admin within a reasonable time. A rough sketch of the idea is below. #
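
    Here’s that sketch; see Haki’s post for the real implementation and the caveats that go with it.

    from django.core.paginator import Paginator
    from django.db import OperationalError, connection, transaction
    from django.utils.functional import cached_property

    class TimeLimitedPaginator(Paginator):
        @cached_property
        def count(self):
            # SET LOCAL only lasts until the end of the current transaction
            with transaction.atomic(), connection.cursor() as cursor:
                cursor.execute("SET LOCAL statement_timeout TO 200;")
                try:
                    return super().count
                except OperationalError:
                    return 9999999999

    Point a ModelAdmin’s paginator attribute at a class like this and slow count queries stop blocking the changelist page.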

5th November 2018

  • 11 barriers to coding in the open and how to overcome them (via) “Terence Eden, open standards lead at GDS, also gave a talk about overcoming barriers to coding in the open”—an intriguing recap of that talk revealing exactly how the UK government have been encouraging a culture of coding in the open and going open source first. #
  • Twitter conversation about long-term pre-paid archival storage. I kicked off a conversation on Twitter yesterday about long-time archival storage of web content: “Anyone know of a web hosting provider where I can pay a lump sum of money to host a file at a reliable URL essentially forever? Is this even remotely feasible?”. The thread is really interesting—this is definitely an unsolved problem, and it’s clear that the challenge is more organizational (how do you create an entity that can keep this kind of promise—does it need to be some kind of foundation or trust?) than technical. #

3rd November 2018

  • If you stop thinking in terms of MVC you might notice that at its core, React is a runtime for effectful functions that don’t execute “once”, but run continuously while being anchored to a call tree.

    Dan Abramov #

  • Apple’s New Map (via) Map nerds rejoice! Justin O’Beirne has written another spectacularly illustrated essay about web cartography, this time examining the iOS 12 upgrade to Apple Maps in most of California and a little bit of Nevada. #
  • Every bitcoin proof-of-work mined is an incremental addition to a vast distributed summoning ritual powering the demon-soul at the heart of the maze, the computational equivalent of a Buddhist prayer wheel spinning in a Himalayan breeze.

    Charles Stross #

  • Svelte RFC 0001: Reactive assignments (via) Svelte is a really interesting JavaScript framework: it offers a similar component-based developer experience to React but does it while delivering a tiny amount of code-to-the-browser thanks to being built entirely around a compiler that generates the minimum code necessary. In this RFC the Svelte team propose taking this approach even further, by generating code to accompany every relevant variable assignment that can trigger the corresponding view update. The document also has a very clear explanation of how React, Vue and current Svelte differ in their solutions to the challenge of updating the visible HTML view when the corresponding state changes. #

2nd November 2018

  • jantic/DeOldify (via) “A Deep Learning based project for colorizing and restoring old images”. Delightful (and well documented) project that uses a Self-Attention Generative Adversarial Network to colorize old black and white photos, with extremely impressive results. Built on an older version of the fastai library, and trained by running for several days on a 1080TI graphics card. #

1st November 2018

  • Among other things at Netflix the Mantis Query Language (MQL an SQL for streaming data) which ferries around approximately 2 trillion events every day for operational analysis (SPS alerting, quality of experience metrics, debugging production, etc) is written entirely in Clojure.

    diab0lic on Hacker News #

31st October 2018

  • Reinforcement Learning with Prediction-Based Rewards (via) Fascinating result: by teaching a reinforcement learning agent that plays video games to optimize for “unfamiliar states”—states where it cannot predict what will happen next—the agent does a much better job of playing some games. “... for the first time exceeds average human performance on Montezuma’s Revenge. RND achieves state-of-the-art performance, periodically finds all 24 rooms and solves the first level without using demonstrations or having access to the underlying state of the game.” #
  • October 21 post-incident analysis (via) Legitimately fascinating post-mortem by GitHub. They run database masters in multiple data centers with Raft for leader election... but when they had an unexpected network split between east and west coast they ended up with several seconds of writes that had not been correctly replicated. Cleaning up the resulting mess took the best part of 24 hours! Distributed systems are hard. #
  • Making Sense of React Hooks (via) Dan Abramov provides the most comprehensive justification I’ve seen so far for the new React hooks API. #
  • This Is How We Radicalized The World (via) Don’t be put off by the click-baity title: this article by Ryan Broderick is absolutely worth your time. Ryan has been traveling the world covering the global rise of populism, which has been driven in a great part by new patterns of social media usage and distrust of the news media. Ryan ties together stories from a bunch of different countries over the last few years and make a compelling case that we need to come to terms with social media radicalization as a global problem and figure out how to respond to it and deal with the fallout. #
  • matthewp/haunted: React's Hooks API implemented for web components (via) It’s been fascinating over the past few days watching various frontend web stacks start playing with the new ideas introduced by the proposed React hooks API. lit-html is one of my favourite React alternatives—it’s built on web components and makes really clever use of ES6 template literals (in place of React’s JSX, which requires an additional compilation step). With Haunted Matthew Phillips explores the combination of lit-html, web components and hooks-style state management. #

28th October 2018

  • python-twitter/get_access_token.py. Creating an OAuth token for accessing a specific Twitter account is way harder than it needs to be. I was about to write my own command-line script for doing this using PIN-based authentication (where you pop open a browser showing the Twitter login flow, then get a PIN number at the end which you paste back into your script) when I discovered that the python-twitter library already ships with a script to do exactly that. Just run “python get_access_token.py”, paste in your app’s consumer key and secret, follow a link, enter the resulting PIN and the script will spit out the consumer_key / consumer_secret / access_token_key / access_token_secret combo you need to start using the Twitter API. #