Simon Willison’s Weblog

Items tagged facebook, scaling

Migrating Messenger storage to optimize performance (via) Fascinating case study of a truly gargantuan migration. Messenger has over a billion users, and Facebook successfully migrated its backend storage from HBase to their MyRocks database (a fork of MySQL with a storage engine built on their SSD-optimized RocksDB key/value library) without any user-visible downtime. They ended up using two migration paths: one for the 99.9% of regular accounts, and a separate path for extremely high volume accounts (businesses with very active chat bots or support systems). # 27th June 2018, 3:05 pm

Did Mark Zuckerberg have any knowledge on building scalable social networks prior to starting work on Facebook?

I’m going to bet he didn’t have this knowledge, simply because back when he launched Facebook in 2004 almost NO ONE had this knowledge—there simply weren’t enough “web scale” products around for the patterns needed to run them to be widely discussed.

[... 143 words]

Up and running with Cassandra. Twitter are beginning to use Cassandra, the BigTable-inspired non-relational database that Facebook developed and open sourced. Evan Weaver explains how to get started with it, but warns that it’s not yet a good idea to trust data to it without having a full backup in an unrelated storage engine. # 7th July 2009, 11:18 am
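
That 2009 post predates CQL entirely, so take this purely as a feel for what “up and running” looks like today rather than what Evan describes: a minimal sketch using the DataStax cassandra-driver package against a single local node, with an invented demo keyspace and users table.

```python
# pip install cassandra-driver
from cassandra.cluster import Cluster

# Connect to one local node; the driver discovers the rest of the cluster.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.users (user_id text PRIMARY KEY, name text)
""")

session.execute(
    "INSERT INTO demo.users (user_id, name) VALUES (%s, %s)",
    ("evan", "Evan Weaver"),
)
print(session.execute(
    "SELECT * FROM demo.users WHERE user_id = %s", ("evan",)
).one())

cluster.shutdown()
```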

Streams, affordances, Facebook, and rounding errors. I asked Kellan about scaling activity streams the other day. Here he suggests the best technique is not to promise a perfect stream (like Twitter does)—Facebook used to get away with 80% loss of update messages, but their new redesign has changed the contract with their users. # 19th March 2009, 2:02 pm

Scaling memcached at Facebook. Fascinating techie details on how Facebook forked memcached to use UDP and increase performance from 50,000 requests a second to 200,000. Now running on 800 servers with 28 TB of memory, and their code is on GitHub. (They may scale like crazy, but they can’t put their blog entry title in the title element?) # 13th December 2008, 10:08 am
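
The UDP part is the interesting bit. Stock memcached also speaks a simple UDP framing (an 8-byte header in front of the usual ASCII protocol), so here’s a rough sketch of what a get over UDP looks like on the wire. This is the standard protocol rather than Facebook’s fork, it only handles single-datagram responses, and it assumes a local memcached started with UDP enabled (e.g. -U 11211).

```python
import socket
import struct


def memcached_udp_get(key, host="127.0.0.1", port=11211):
    """Issue a single memcached get over UDP.

    Each UDP datagram starts with an 8-byte frame header: request id,
    sequence number, total datagrams in the message, and a reserved field.
    """
    request_id = 0x1234
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    command = f"get {key}\r\n".encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    sock.sendto(header + command, (host, port))

    datagram, _ = sock.recvfrom(1400)
    resp_id, seq, total, _reserved = struct.unpack("!HHHH", datagram[:8])
    assert resp_id == request_id, "response belongs to a different request"
    return datagram[8:]  # the usual "VALUE <key> ... END" ASCII response


print(memcached_udp_get("some_key"))
```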

Facebook engineering notes on Scaling Out. Jason Sobel explains a couple of tricks Facebook use to deal with consistency between their California and Virginia data centres. The first is to hijack the MySQL replication stream to include information about memcached records to invalidate; the second is to use Layer 7 load balancers which inspect a “last modification time” cookie and send users to the masters in California if they have updated their profile in the past 20 seconds. # 20th August 2008, 11:51 pm
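
Facebook’s version involved patching MySQL itself so the replication stream carried the cache keys to invalidate, but the general pattern (tail the replication log on the remote side and delete the matching memcached entries) is easy to sketch. A rough illustration using the python-mysql-replication and pymemcache libraries; the users table and the profile:<user_id> key scheme are invented for the example.

```python
# pip install mysql-replication pymemcache
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
)
from pymemcache.client.base import Client

MYSQL = {"host": "127.0.0.1", "port": 3306, "user": "repl", "passwd": "secret"}
cache = Client(("127.0.0.1", 11211))

# Pose as a replica and tail the row-based binlog; server_id just has to be
# unique among the replicas connected to this master.
stream = BinLogStreamReader(
    connection_settings=MYSQL,
    server_id=100,
    only_tables=["users"],  # hypothetical table holding profile data
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    blocking=True,
    resume_stream=True,
)

for event in stream:
    for row in event.rows:
        values = row.get("after_values") or row.get("values")
        # Invented key scheme: one cache entry per profile row.
        cache.delete(f"profile:{values['user_id']}")
```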

Dark Launches, Gradual Ramps and Isolation: Testing the Scalability of New Features on your Web Site. Smart advice from Dare Obasanjo that extends the “dark launch” idea illustrated by Facebook Chat a few weeks ago. # 29th June 2008, 2:22 pm
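
The core of a dark launch is simple: exercise the new code path for some deterministic slice of real traffic, then throw the result away instead of showing it. A toy sketch of that gate follows; fetch_chat_presence and render_without_chat are stand-ins I made up, not anything from the post.

```python
import hashlib

DARK_LAUNCH_PERCENT = 10  # ramp this up gradually as the new backend holds


def in_dark_launch(user_id: str, percent: int = DARK_LAUNCH_PERCENT) -> bool:
    """Bucket users deterministically so the same users stay in the ramp."""
    bucket = int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent


def fetch_chat_presence(user_id: str) -> dict:
    # Stand-in for the new, not-yet-launched backend call.
    return {"user": user_id, "online_friends": []}


def render_without_chat(user_id: str) -> str:
    # Stand-in for the existing page render.
    return f"<html>profile page for {user_id}</html>"


def render_page(user_id: str) -> str:
    if in_dark_launch(user_id):
        # Generate real load on the new system, but discard the result:
        # the user sees exactly the same page as everyone else.
        fetch_chat_presence(user_id)
    return render_without_chat(user_id)


print(render_page("alice"))
```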

Engineering @ Facebook: Facebook Chat. The new Facebook Chat uses Comet (long polling with a hidden iframe) against a custom web / chat server written in Erlang, designed to handle a launch to all 70 million users at once. It was tested using a “dark launch” period where live pages simulated chat request traffic without showing any visible UI. # 15th May 2008, 7:55 am
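
Facebook’s server side was Erlang; purely to make the long-polling half of the design concrete, here’s a toy version in Python using aiohttp. One in-memory queue per user, no auth, no hidden-iframe transport, and nothing like production scale, just the hold-the-request-open-until-a-message-arrives shape.

```python
# pip install aiohttp
import asyncio
from aiohttp import web

# One in-memory queue per user; a real deployment would shard this state.
queues: dict[str, asyncio.Queue] = {}


async def poll(request):
    """Long-poll endpoint: hold the request open until a message arrives,
    or give up after ~25 seconds so the client can immediately re-poll."""
    user = request.query.get("user", "anonymous")
    queue = queues.setdefault(user, asyncio.Queue())
    try:
        message = await asyncio.wait_for(queue.get(), timeout=25)
        return web.json_response({"messages": [message]})
    except asyncio.TimeoutError:
        return web.json_response({"messages": []})


async def send(request):
    """Drop a message into the recipient's queue, waking their open poll."""
    body = await request.json()
    queues.setdefault(body["to"], asyncio.Queue()).put_nowait(body["text"])
    return web.json_response({"ok": True})


app = web.Application()
app.add_routes([web.get("/poll", poll), web.post("/send", send)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```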

The GigaOM Interview: Mark Zuckerberg. Some interesting titbits on Facebook’s architecture. # 11th March 2008, 5:41 am

iLike: Holy cow... 6mm users and growing 300k/day! (via) Facebook platform offers a viral distribution mechanism for free. Downside: you have to double your capacity every few days. # 13th June 2007, 9:02 am

... Facebook has roughly 200 dedicated memcached servers in its production environment, plus a small number of others for development and so on. A few of those 200 are hot spares. They are all 16GB 4-core AMD64 boxes, just because that’s where the price/performance sweet spot is for us right now.

Steve Grimm # 3rd May 2007, 10:36 pm