What are optimal workflows for deploying one’s web application?
14th January 2012
My answer to What are optimal workflows for deploying one’s web application? on Quora
The absolute first step is to automate your deployments: it's crucial that deploying the site is a single command. I've found Fabric (an automation tool written in Python) works extremely well for this; Capistrano is a popular alternative that uses Ruby instead.
A simple policy that can help is this: never run a command on your remote server without putting it in a Fabric file first. This way your deploys and setups will always be repeatable, with almost no additional effort on your part.
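Here's a minimal sketch of what that can look like with Fabric 1.x. The host name, paths and reload mechanism are placeholders for illustration, not a real configuration:

```python
# fabfile.py -- a minimal sketch of the single-command idea
from fabric.api import cd, env, run, task

env.hosts = ["app1.example.com"]        # hypothetical application server


@task
def deploy():
    """Deploy the site with one command: fab deploy"""
    with cd("/srv/myapp/current"):      # hypothetical checkout location
        run("git pull origin master")
        run("pip install -r requirements.txt")
        run("touch app.wsgi")           # hypothetical: reload the application


@task
def tail_logs():
    """Even one-off commands live in the fabfile, so they stay repeatable."""
    run("tail -n 100 /var/log/myapp/error.log")  # hypothetical log path
```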
At Lanyrd our deployment process has evolved significantly over the past 18 months—our most recent setup, designed by our engineer Tom Insam, is pretty much my ideal situation. Here’s (roughly) how it works.
Our servers run on EC2 and are configured with Puppet. The Puppet roles are assigned as EC2 tags, which means we can start a new EC2 instance, assign tags to it (e.g. application-server, redis, cron-runner) and Puppet will do the rest of the setup.
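One way to wire that up (a sketch of the idea, not the actual script) is a small Python external node classifier that looks up an instance's EC2 tags with boto and hands them to Puppet as classes. The region, filter and tag-to-class convention below are assumptions:

```python
#!/usr/bin/env python
# Sketch of a Puppet external node classifier (ENC) driven by EC2 tags.
import sys

import boto.ec2
import yaml

REGION = "us-east-1"  # hypothetical region


def classes_for_node(node_name):
    """Return Puppet classes for a node, derived from its EC2 tags."""
    conn = boto.ec2.connect_to_region(REGION)
    reservations = conn.get_all_instances(
        filters={"private-dns-name": node_name}
    )
    classes = []
    for reservation in reservations:
        for instance in reservation.instances:
            # Assumed convention: every tag key except "Name" is a role,
            # e.g. application-server, redis, cron-runner.
            classes.extend(key for key in instance.tags if key != "Name")
    return classes


if __name__ == "__main__":
    # Puppet calls the ENC with the node's certname as the first argument.
    print(yaml.dump({"classes": classes_for_node(sys.argv[1])}))
```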
When we deploy, we specify an environment (e.g. development, staging or live) and a git branch or tag (the default is master). Setting up our staging environment is trivial because of our Puppet-powered server configs; this also makes it a no-brainer to turn it off when we're not using it (e.g. over the weekend) to save on EC2 server costs.
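A hedged sketch of how environment and git ref selection can look in a fabfile; the environment names, hosts and paths here are illustrative:

```python
# Sketch of parameterised deploys: fab environment:staging deploy:ref=v1.2
from fabric.api import cd, env, run, task

ENVIRONMENTS = {
    "staging": ["staging.example.com"],
    "live": ["app1.example.com", "app2.example.com"],
}


@task
def environment(name):
    """Select the target environment before other tasks run."""
    env.hosts = ENVIRONMENTS[name]


@task
def deploy(ref="master"):
    """Deploy a git branch or tag, defaulting to master."""
    with cd("/srv/myapp/current"):      # hypothetical checkout location
        run("git fetch origin --tags")
        # checkout works for either a branch or a tag; a production script
        # would also fast-forward branches to their origin counterparts
        run("git checkout %s" % ref)
        run("touch app.wsgi")           # hypothetical: reload the application
```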
We also have Jenkins set up to run our test suite every time someone commits to our git repository. Finally, we have a convenient “deploy master to live” button (we try to keep master shippable at all times, developing potentially site-breaking features in feature branches).
There’s a fair bit more to it than the above: we have the ability to roll back to a previous version by flipping a symlink should we break the site, and there’s some extra stuff to handle pushing static assets to S3 and running database migrations. Overall it’s an enormous improvement on my previous hacked-together Fabric scripts, which were themselves a huge improvement over ad-hoc deploys via mucking around with SSH.
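A rough sketch of the symlink-flip idea, assuming one directory per release (the layout and paths are placeholders, not the actual setup):

```python
# Sketch of symlink-based releases and rollback
from fabric.api import cd, run, task

RELEASES_DIR = "/srv/myapp/releases"    # hypothetical: one directory per deploy
CURRENT_LINK = "/srv/myapp/current"     # the app serves from this symlink


@task
def rollback():
    """Point the 'current' symlink back at the previous release."""
    with cd(RELEASES_DIR):
        # second-newest release directory by modification time
        previous = run("ls -1t | sed -n 2p").strip()
        run("ln -sfn %s/%s %s" % (RELEASES_DIR, previous, CURRENT_LINK))
        run("touch %s/app.wsgi" % CURRENT_LINK)   # hypothetical: reload the app
```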