Why did Twitter move away from being a single-page application?
3rd January 2014
My answer to Why did Twitter move away from being a single-page application? on Quora
Twitter is still a single page application, it’s just built properly now (one result of which is that you can’t easily tell).
When Twitter first went single page, the only way to do so while still maintaining functioning hyperlinks and a working back button was to use a vile hack involving fragment URLs—things like https://twitter.com/#!simonw/sta...
This works, but comes with a whole bag of downsides:
- Anyone who clicks a link to Twitter has to download several megabytes of JavaScript before the site can figure out what information it should be displaying. In the case of a tweet permalink that’s several megabytes downloaded just to display a 140-character tweet!
- Pages don’t work at all without JavaScript.
- A single JavaScript bug in an obscure mobile phone browser only available on Chinese handsets can prevent the site from loading any content at all.
- Screen readers may fail to understand what’s going on.
- Only some search engines can index the pages, again using a nasty hack.
- These aren’t “real” hypertext URLs. If you perform a GET operation against them you don’t get the content the URL refers to, you just get the Twitter homepage. This is bad for the health of the Web.
- One relevant example: Quora’s feature that automatically converts a pasted URL into the title of that page can’t work if the URL points at a homepage and a big chunk of JavaScript rather than real HTML.
- These links start to spread. Other sites have to use them in links to Twitter, which means that even if Twitter gets rid of them they’ll need to keep some JavaScript on their homepage that knows how to handle them forever.
These hashbang URLs are nasty, nasty hacks.
Thankfully the hashbang hack isn’t necessary any more, thanks to a neat part of HTML5 called the History API. This allows JavaScript to update the visible URL in the browser’s URL bar without actually navigating to a completely new page—essentially, it allows a single page application to pretend that it’s a real website.
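The core of the History API is `history.pushState()`, which updates the address bar without a page load, and the `popstate` event, which fires when the user navigates back or forward. Here’s a minimal illustrative sketch of that pattern—not Twitter’s actual code; `renderTweet` and the `timeline` element are made up for the example:

```javascript
// Illustrative sketch of the History API pattern. renderTweet and the
// "timeline" element are hypothetical, not Twitter's real code.

function renderTweet(tweetId) {
  // Stand-in for real rendering: draw the tweet into the page.
  document.getElementById('timeline').textContent = 'Tweet ' + tweetId;
}

function showTweet(tweetId) {
  renderTweet(tweetId);
  // Update the visible URL without navigating, remembering enough
  // state to re-render this page later.
  history.pushState({ tweetId: tweetId }, '', '/simonw/status/' + tweetId);
}

function wireBackButton() {
  // When the user presses the back button, restore whatever page the
  // saved state describes instead of reloading from the server.
  window.addEventListener('popstate', function (event) {
    if (event.state && event.state.tweetId) {
      renderTweet(event.state.tweetId);
    }
  });
}
```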
How does this fix the problems above? It means that Twitter can start serving real webpages again. Try running this command:
```
curl "https://twitter.com/simonw/status/419261896957505536"
```
It returns HTML! In fact, it returns the exact HTML needed to display a page that shows the content of that tweet. Then, at the very bottom of that HTML it includes a single script tag (loaded after the page has been displayed in the browser) which loads a huge bunch of JavaScript and converts that beautiful, light HTML page into a giant single page web app.
Now, when you click on a link on the page, Twitter can fetch the new page contents using JavaScript and display it to you without doing a complete reload of the whole UI—but they can use the HTML5 history API to update the URL in the page so that, as far as you can tell, you’ve navigated somewhere else. It’s a single page web application hiding in plain sight.
But what about browsers that don’t support HTML5 history, like older versions of IE? Here’s the smart part: they don’t get the fancy new JavaScript, they just get links which, when clicked, navigate them to a brand new page. This means that even browsers that don’t support JavaScript at all can access and navigate Twitter, but browsers with JavaScript get a much enhanced experience.
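That progressive-enhancement pattern can be sketched like this—an illustrative sketch, not Twitter’s code; `loadPage` is an assumed app function that fetches and renders the new content:

```javascript
// Illustrative sketch of progressive enhancement. loadPage is an
// assumed application function, not part of any real Twitter API.

function supportsHistoryApi() {
  return typeof window !== 'undefined' &&
    !!window.history && typeof window.history.pushState === 'function';
}

function enhanceLinks(loadPage) {
  // Older browsers (or no JavaScript at all) simply follow the plain
  // <a href> links and get full page loads, which always work.
  if (!supportsHistoryApi()) return;

  document.addEventListener('click', function (event) {
    var link = event.target.closest ? event.target.closest('a') : null;
    if (!link) return;
    var href = link.getAttribute('href');
    event.preventDefault();                    // cancel the full reload
    loadPage(href);                            // fetch and render in place
    window.history.pushState(null, '', href);  // keep the URL in sync
  });
}
```

The key design choice is that the links are real `<a href>` elements to real server-rendered pages; JavaScript is an upgrade layered on top, never a requirement.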
Dan Webb, an engineer at Twitter, wrote some very insightful pieces about this issue. In It’s About The Hashbangs (May 2011) he explained why he believed Twitter’s hashbang implementation was the wrong way to go. A year later in May 2012 he published Improving performance on twitter.com on the official Twitter blog explaining the new history technique they were using and how it had dramatically improved the performance of the site.
In fairness to Twitter, I should point out that when they launched their hashbang single page app back in 2010 the HTML5 history API wasn’t widely available—IE didn’t support it at all, and I seem to remember Safari 2.0 had a bug which made their implementation unusable. Thankfully this is no longer the case today.