Automatically playing science communication games with transfer learning and fastai
29th October 2018
This weekend was the 9th annual Science Hack Day San Francisco, which was also the 100th Science Hack Day held worldwide.
Natalie and I decided to combine our interests and build something fun.
I’m currently enrolled in Jeremy Howard’s Deep Learning course so I figured this was a great opportunity to try out some computer vision.
Natalie runs the SciComm Games calendar and accompanying @SciCommGames bot to promote and catalogue science communication hashtag games on Twitter.
Hashtag games? Natalie explains them here—essentially they are games run by scientists on Twitter to foster public engagement around an animal or topic, challenging people to identify whether a photo is a #cougarOrNot, take part in a #TrickyBirdID, decide #CrowOrNo, and more.
Combining the two… we decided to build a bot that automatically plays these games using computer vision. So far it’s just trying #cougarOrNot—you can see the bot in action at @critter_vision.
Training data from iNaturalist
In order to build a machine learning model, you need to start out with some training data.
I’m a big fan of iNaturalist, a citizen science project that encourages users to upload photographs of wildlife (and plants) they have seen and have their observations verified by a community. Natalie and I used it to build owlsnearme.com earlier this year—the API in particular is fantastic.
iNaturalist has over 5,000 verified sightings of felines (cougars, bobcats, domestic cats and more) in the USA.
The raw data is available as a paginated JSON API. The medium-sized photos are just the right size for training a neural network.
I started by grabbing 5,000 images and saving them to disk with a filename that reflected their identified species:
Bobcat_9005106.jpg
Domestic-Cat_10068710.jpg
Bobcat_15713672.jpg
Domestic-Cat_6755280.jpg
Mountain-Lion_9075705.jpg
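The download step looked roughly like this. Treat it as a sketch against the iNaturalist v1 observations API: the taxon and place IDs are illustrative placeholders (look up real values via the API’s /v1/taxa and /v1/places endpoints), not the exact values from my script.

import requests
from pathlib import Path

FELIDAE_TAXON_ID = 41944  # placeholder: confirm via /v1/taxa?q=Felidae
USA_PLACE_ID = 1          # placeholder: confirm via /v1/places

def fetch_cat_images(dest=Path('images'), pages=25):
    # Save medium-sized photos as <Common-Name>_<observation id>.jpg
    dest.mkdir(exist_ok=True)
    for page in range(1, pages + 1):
        response = requests.get(
            'https://api.inaturalist.org/v1/observations',
            params={
                'taxon_id': FELIDAE_TAXON_ID,
                'place_id': USA_PLACE_ID,
                'quality_grade': 'research',  # community-verified only
                'per_page': 200,
                'page': page,
            },
        )
        response.raise_for_status()
        for obs in response.json()['results']:
            if not obs.get('photos'):
                continue
            species = obs['taxon']['preferred_common_name'].replace(' ', '-')
            # Observation photos come back as small "square" URLs; swap
            # in the "medium" size for training
            url = obs['photos'][0]['url'].replace('square', 'medium')
            (dest / f"{species}_{obs['id']}.jpg").write_bytes(
                requests.get(url).content
            )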
Building a model
I’m only one week into the fast.ai course so this really isn’t particularly sophisticated yet, but it was just about good enough to power our hack.
The main technique we are learning in the course is called transfer learning, and it really is shockingly effective. Instead of training a model from scratch you start out with a pre-trained model and use some extra labelled images to train a small number of extra layers.
The initial model we are using is ResNet-34, a 34-layer neural network trained on 1,000 labelled categories in the ImageNet corpus.
In class, we learned to use this technique to get 94% accuracy against the Oxford-IIIT Pet Dataset—around 7,000 images covering 12 cat breeds and 25 dog breeds. In 2012 the researchers at Oxford were able to get 59.21% using a sophisticated model; in 2018 we can get 94% with transfer learning and just a few lines of code.
I started with an example provided in class, which loads and trains images from files on disk using a regular expression that extracts the labels from the filenames.
My full Jupyter notebook is inaturalist-cats.ipynb—the key training code is as follows:
from fastai import *
from fastai.vision import *

# Images are named <Species>_<observation id>.jpg, e.g. Bobcat_9005106.jpg
cat_images_path = Path('/home/jupyter/.fastai/data/inaturalist-usa-cats/images')
cat_fnames = get_image_files(cat_images_path)

# Build a DataBunch, extracting each image's label from its filename
cat_data = ImageDataBunch.from_name_re(
    cat_images_path,
    cat_fnames,
    r'/([^/]+)_\d+\.jpg$',
    ds_tfms=get_transforms(),
    size=224
)
cat_data.normalize(imagenet_stats)

# Fine-tune a pre-trained ResNet-34 on the labelled cat images
cat_learn = ConvLearner(cat_data, models.resnet34, metrics=error_rate)
cat_learn.fit_one_cycle(4)

# Save the generated model to disk
cat_learn.save("usa-inaturalist-cats")
Calling cat_learn.save("usa-inaturalist-cats") created an 84MB file on disk at /home/jupyter/.fastai/data/inaturalist-usa-cats/images/models/usa-inaturalist-cats.pth—I used scp to copy that model down to my laptop.
This model gave me a 24% error rate, which is pretty terrible—others on the course have been getting error rates below 10% for all kinds of interesting problems. My focus was on getting a model deployed as an API though, so I haven’t spent any additional time fine-tuning things yet.
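When I do get back to it, the first step will be fastai’s interpretation tools, to see exactly where the model is confused. Something like this sketch, which assumes the ClassificationInterpretation API as it exists today (names may shift as the library evolves):

# Inspect the trained model's mistakes on the validation set
interp = ClassificationInterpretation.from_learner(cat_learn)

# Which pairs of species get mixed up most often?
interp.plot_confusion_matrix()
interp.most_confused(min_val=5)

# The images the model got most confidently wrong
interp.plot_top_losses(9, figsize=(12, 12))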
Deploying the model as an API
The fastai library strongly encourages training on a GPU using PyTorch and CUDA. I’ve been using an n1-highmem-8 Google Cloud Platform instance with an attached Tesla P4, running everything in a Jupyter notebook there. This costs around $0.38 an hour—fine for a few hours of training, but far too expensive for permanently hosting a model.
Thankfully, while a GPU is essential for training models productively, it’s far less important for evaluating them against new data: PyTorch runs in CPU mode just fine on standard hardware, and the fastai README includes instructions for installing a CPU-only build with pip.
I started out by ensuring I could execute my generated model on my own laptop (since PyTorch doesn’t yet work with the GPU built into the MacBook Pro). Once I had that working, I used the resulting code to write a tiny Starlette-powered API server. The code for that can be found in cougar.py.
fastai is under very heavy development and the latest version doesn’t quite have a clean way of loading a model from disk without also including the initial training images, so I had to hack around quite a bit to get this working using clues from the fastai forums. I expect this to get much easier over the next few weeks as the library continues to evolve based on feedback from the current course.
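For reference, the rough shape of the workaround is this, assembled from those forum clues. It’s a sketch rather than the exact contents of cougar.py, and the single_from_classes helper in particular may vary between fastai versions:

from fastai import *
from fastai.vision import *

# The classes the model was trained on, in training order (abbreviated)
classes = ['Bobcat', 'Domestic-Cat', 'Mountain-Lion']

# An 'empty' DataBunch that knows the classes and transforms but
# carries no training images
empty_data = ImageDataBunch.single_from_classes(
    Path('.'), classes, tfms=get_transforms(), size=224
).normalize(imagenet_stats)

cat_learn = ConvLearner(empty_data, models.resnet34)
cat_learn.load('usa-inaturalist-cats')  # reads models/usa-inaturalist-cats.pth

# Classify a single image, on the CPU
img = open_image('photo-to-classify.jpg')
pred_class, pred_idx, probabilities = cat_learn.predict(img)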
To deploy the API I wrote a Dockerfile and shipped it to Zeit Now. Now remains my go-to choice for this kind of project, though unfortunately their new (and brilliant) v2 platform imposes a 100MB image size limit—not nearly enough when the model file itself weighs in at 83MB. Thankfully it’s still possible to target their v1 cloud, which is more forgiving of larger applications.
Here’s the result: an API which can accept either the URL to an image or an uploaded image file: https://cougar-or-not.now.sh/—try it out with a cougar and a bobcat.
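Calling it from Python looks something like this. The classify-url endpoint name is illustrative, so check the page itself for the actual form fields:

import requests

# Hypothetical endpoint: the real path and field names are defined
# in cougar.py
response = requests.post(
    'https://cougar-or-not.now.sh/classify-url',
    data={'url': 'https://example.com/possible-cougar.jpg'},
)
print(response.json())  # predicted class plus per-class probabilities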
The Twitter Bot
Natalie built the Twitter bot. It runs as a scheduled task on Heroku and works by checking for new #cougarOrNot tweets from Dr. Michelle LaRue, extracting any images, passing them to my API and replying with a tweet that summarizes the results. Take a look at its recent replies to get a feel for how well it is doing.
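The core loop is pleasingly small. Here’s a sketch of the idea, not Natalie’s actual code: the tweepy calls reflect the tweepy 3.x API, and the classify-url endpoint is the same assumption as above.

import requests
import tweepy

def check_and_reply(api, seen_ids):
    # Find recent game tweets that have attached photos
    for tweet in api.search(q='#cougarOrNot', include_entities=True):
        if tweet.id in seen_ids:
            continue
        for media in tweet.entities.get('media', []):
            # Ask the classifier API about each attached image
            result = requests.post(
                'https://cougar-or-not.now.sh/classify-url',
                data={'url': media['media_url_https']},
            ).json()
            # Reply with a summary of the classification
            api.update_status(
                status=f'@{tweet.user.screen_name} {result}',
                in_reply_to_status_id=tweet.id,
            )
        seen_ids.add(tweet.id)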
Amusingly, Dr. LaRue frequently tweets memes to promote upcoming competitions and marks them with the same hashtag. The bot appears to think that most of the memes are bobcats! I should definitely spend some time tuning that model.
Science Hack Day was great fun. A big thanks to the organizing team, and congrats to all of the other participants. I’m really looking forward to the next one.
Plus… we won a medal!
Enjoyed #scienceHackday this weekend, made & launched a cool machine learning hack to process images & work out if they have a cougar in them or not! #CougarOrNot @critter_vision
... we won a medal!
Bot code: https://t.co/W2jZcGCnFr
Machine learning API: https://t.co/swNiKlcTp0 pic.twitter.com/dcdIhNZy63
— Natbat (@Natbat) October 29, 2018