The tech stack

Since launching faavorite a couple of weeks ago Harry and I have had quite a few questions about the technology which powers the app, which I hope to answer as fully as possible in this article. It's a little self-indulgent and rather long but bear with me—I wanted to explain the rationale behind each part of the stack and share some of the things learned and experiences gained along the way.

TL;DR Linux, Apache, MySQL, PHP, NodeJS, ZeroMQ, Redis, WebSockets.

Software architecture

A few years ago this would have been a no-brainer for me: PHP 5.x, MySQL 5.x running on Linux and served through Apache. Job done. Like many other web developers I've been working with fairly typical LAMP stacks for years. They're tried and tested. They have a wealth of tools available. They're robust. They're predictable. They're comfortable.

Nowadays life isn't so simple. From the outset the only absolute certainty was that Linux would be the OS of choice. NoSQL databases are many and varied. Numerous programming languages exist which demand the attention of even the most avid PHP developer. Even Apache doesn't have it easy, with rising stars like nginx adding yet more complexity to an already dizzying array of decisions to be made.

In the end familiarity and predictability won the day; faavorite would indeed be built on a traditional LAMP stack. That was the plan, anyway. I was perfectly happy to consider other technologies where they made the most sense, but as far as possible the site would be built on the same technologies which have been powering some of the biggest websites in the world for years; the same technologies I know how to work with in the real world and the same technologies I know I can rapidly prototype, test and deploy with. Learning new stuff is fantastic, but there's a lot to be said for staying within your comfort zone when you've got a mountain to climb and not a lot of time to do it in.

Application architecture

With the core software decided upon it was time to think about how the various parts of the application itself would work together. I'm not talking about MVC here—let's take that as a given—I'm talking about how distinct tiers of the overall app communicate with each other. In many websites this conceptual separation isn't needed—request comes in, request is processed, database might get prodded, response is returned. Rinse & repeat. However, from the outset I knew we'd hit serious bottlenecks if we tried to accomplish everything within this tight, on demand lifecycle. I initially broke the app into (very roughly) the following mental pieces:

  • frontend web layer (e.g. request a URL, get some HTML back)
  • favorite import process (one off process on signup—long running daemon)
  • favorite sync process (repeated process—cron job)
  • URL resolution (shortened URL -> destination URL)
  • URL spidering (embedded content)

I knew that all but the first of these tasks were best served by a background process. Clearly I haven't paid enough attention during my career because I'm not entirely sure what the correct terminology is here, but that doesn't particularly matter because I quickly invented my own: the Iceberg Index. I think a diagram is probably best here, but as I can't draw you'll have to settle for a photo of an Iceberg and some imagination instead:

The analogy goes like this: the tip of the iceberg—the only bit visible above the water—is what the user gets when they request a page. It's fairly simple, limited in scope and easily manageable. That's our request-response cycle. All the background tasks make up the looming, unwieldy mass hidden from view beneath the water's surface. This summed up fairly accurately how I felt about the project—most of the hard work lay out-of-sight, down in the murky depths of the various background tasks. Plus, it let me invent my own terminology about "sub surface" issues and make jokes about "choppy waters ahead" and stuff. Which, if anyone had been present to hear them, would have been hilarious.

So then: how are these various parts of the Iceberg going to talk to each other? (The Iceberg analogy is a little fragile here since I don't think Icebergs communicate much at all, but bear with me). I already knew the answer to this one: ZeroMQ.

I won't go into huge detail about why I chose ØMQ here—I just needed a simple messaging system to connect the various parts of the app. There are plenty of other message queuing systems out there and plenty of other ways I could have got my Iceberg talking, but I've used ØMQ for various experiments over the past year and always enjoyed it. It's performant, quick to set up and simple to use, with bindings available in tons of languages.
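
To give a flavour of just how little glue is involved, here's a minimal push/pull pair sketched with the classic `zmq` npm binding. In the real app the producing side is PHP (via its ZeroMQ extension); everything is shown in Node here purely to keep these sketches in one language, and the address and message format are illustrative:

```javascript
// worker.js -- the long-running daemon end: pull messages off a queue and process them
var zmq = require('zmq');   // the classic binding; newer 'zeromq' releases expose a different API

var pull = zmq.socket('pull');
pull.bindSync('tcp://127.0.0.1:5555');   // the stable, long-running side binds

pull.on('message', function (msg) {
  var job = JSON.parse(msg.toString());
  console.log('got a job for user', job.userId);
});

// producer.js -- the fire-and-forget end: push a job onto the queue and move on
var push = zmq.socket('push');
push.connect('tcp://127.0.0.1:5555');
push.send(JSON.stringify({ userId: 123 }));
```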

Basic communication: importing a user's favorites

The first natural area of iceberg intercommunication to look at was importing a user's favorites from Twitter upon initial signup. It seemed easy enough—greet a new user, bung a message on a queue and process that message with a long running PHP daemon. The daemon itself would simply fetch the first few hundred of the user's favorites via Twitter's REST API.

And so it was… for a while at least. However, with a maximum of 200 favorites returned per API request and a target of 600 per initial import (coincidentally the number of favorites I had at the time) this meant 3 potential HTTP requests to begin to import a user's favorites. 3 requests in PHP meant 3 sequential, synchronous requests, which on a bad day could take anywhere up to 6 seconds (the Twitter API is seriously overworked).

Let's think about that for a moment. One user. Six seconds. Just to fetch a few hundred initial favorites from Twitter (not including any actual processing we needed to do in the app). All the while blocking the daemon from receiving any more messages. This was not going to scale well at all.


At this stage it was time for the current darling of the web development world to make its first appearance. This isn't the place for arguing the pros and cons of node.js, but for me this was an obvious use-case: lots of I/O (the HTTP requests) and not a huge amount of CPU required, plus the huge benefit of being able to make each HTTP request asynchronously and allow the daemon to continue receiving messages whilst others were still being processed. Node's asynchronicity makes stuff like this an absolute breeze—you get a natural performance boost by performing multiple requests simultaneously, whilst the evented architecture means you aren't just chewing up CPU while you wait for various responses.
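
To sketch the difference, here's roughly what the parallel fetch looks like. `fetchFavoritesPage(page, callback)` is a hypothetical helper wrapping an authenticated call to Twitter's REST API (the endpoint and OAuth plumbing are omitted); the point is that all three requests are in flight at once, so the total time is roughly that of the slowest request rather than the sum of all three:

```javascript
var PAGES = 3;   // Twitter returns at most 200 favorites per request; a 600 target means 3 pages

function importInitialFavorites(done) {
  var results = [];
  var pending = PAGES;
  var failed  = false;

  for (var page = 1; page <= PAGES; page++) {
    (function (p) {
      // fetchFavoritesPage is a hypothetical helper wrapping the authenticated Twitter API call
      fetchFavoritesPage(p, function (err, favorites) {
        if (failed) return;
        if (err) { failed = true; return done(err); }
        results[p - 1] = favorites;          // keep pages in their original order
        if (--pending === 0) {
          // every request was in flight at once, so we've only waited for the slowest one,
          // not the sum of all three
          done(null, [].concat.apply([], results));
        }
      });
    })(page);
  }
}
```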

If you cast your eye back to the rough application breakdown you'll see that in fact all of the sub surface tasks are incredibly well suited to node.js. Consequently, that's exactly what I wrote them in. I'm not personally convinced about node's merit as a web server, so I didn't use it as one. I was never convinced about PHP's merit as a URL resolution tool, so I didn't use it as one. The takeaway: use what works when it makes sense.

NodeJS detour—CoffeeScript

I finally started using CoffeeScript at some point when building faavorite. I loved it so much I got thoroughly carried away and rewrote all the underwater tasks in it. I even stopped compiling the CoffeeScript into JS before running it (the coffee executable lets you do that on-the-fly) to avoid having the source and compiled versions of each script in the git repository. For some of the offline tasks this was a mistake. Every time they started up (usually via a cron job), the coffee executable would spend a short amount of time (usually < 1s) compiling them. In human terms that's not a lot, but it's a lifetime of CPU cycles, and completely unnecessary. Eventually I settled for compiling my CoffeeScript for cron tasks and I felt better for it. I still just run *.coffee files for the long running daemons since the startup time is less crucial.

Testing things in the murky depths

I've already touched on some of the rationale for sticking to a LAMP stack where possible: comfort. I've developed enough sites in PHP over the years to get a reasonable feel for not only how it performs, but why it performs in a given way. More importantly, in recent years I've learned how to test it too. I'm reasonably good at estimating how long something will take, how I'll do it, and how I'll test it. These things make me feel comfortable.

My experience with node.js, particularly in the form of long running daemons or offline cron tasks, is significantly limited when compared to my PHP knowledge. Writing testable PHP comes naturally to me. Writing testable JavaScript slightly less so. Writing testable offline node.js daemons was almost entirely alien. So I committed the cardinal sin: I wrote hardly any tests for my node.js code.

Consequently, every bit of code which slipped below the surface simultaneously disappeared from the testing radar, out of sight and out of my comfort zone. All of a sudden I had huge (icy) chunks of code performing hugely important tasks covered by some threadbare tests. I knew I had to do something about it, but I lacked the domain specific expertise to do it. What testing framework would I use? Would it properly integrate with my existing test reports in Jenkins? How could I measure code coverage? How do you write testable, asynchronous node.js code, some of which interacts with a third party service (Twitter's REST API)?

As I said: comfort. You can be an awesome programmer, but don't underestimate the difficulty of applying your skills in a new domain. Eventually I got the testing situation a little more under control using vows.js and used its xunit output mode so Jenkins could aggregate my node.js test results with those from PHPUnit. I'm still refactoring things to make my node.js logic more testable and improve its code coverage, but it's getting there (a good day's focus would go a long way to sorting it out completely).
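
For anyone facing the same learning curve, a vows test ends up looking roughly like this. It's only a sketch: `resolveUrl` is a hypothetical module under test, and the suite is exported so the vows runner (and hence its xunit reporter) can pick it up for Jenkins:

```javascript
// test/resolve-test.js
var vows   = require('vows');
var assert = require('assert');

var resolveUrl = require('../lib/resolve').resolveUrl;   // hypothetical module under test

vows.describe('URL resolution').addBatch({
  'resolving a shortened URL': {
    topic: function () {
      // async topic: this.callback gathers (err, result) and hands them to each vow below
      resolveUrl('http://example.com/abc123', this.callback);
    },
    'yields no error': function (err, finalUrl) {
      assert.strictEqual(err, null);
    },
    'yields an absolute destination URL': function (err, finalUrl) {
      assert.ok(/^https?:\/\//.test(finalUrl));
    }
  }
}).export(module);   // lets the vows CLI (and its xunit reporter) discover and run the suite
```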

Premature optimisation?

You may be wondering why I caused myself all this unnecessary bother in the first place—I was optimising against as yet unproven performance concerns. The mantra of "premature optimization is the root of all evil" is well worn and was not lost on me. Similarly, I could see the spectres of Agile methodology (something I loosely subscribe to) nagging me that I should be burning (down) towards that MVP, focussing on "working software" and being prepared to "respond to change" when necessary and not before.

That said, one has to be pragmatic. Dogma alone is not enough to justify a lack of forethought and planning. I knew PHP wasn't going to cut the mustard for what I wanted to do in the way I wanted to do it, so it was better to bite the bullet sooner rather than later. I'm not sure wheeling out some well-trodden soundbites would have got me off the hook if the whole thing had come crashing down when people started using the app at launch. I'm no expert but I had a rough idea where the bottlenecks would be, so diligence compelled me to tackle those areas head on before they overwhelmed the project.

Hooking things together

So, how does all this stuff work? The vague answer is a combination of long running daemons, regular cron jobs and ØMQ flavoured glue. The detailed answer is, well, very detailed and would make an already long article significantly longer; watch out for subsequent writeups focussing on discrete application chunks instead. I'll go through a high level explanation of the initial import and URL resolution processes but the rest will have to wait.

Initial import

  • New user registers
  • PHP process (tip of the iceberg) places message on ØMQ 'import' queue, shows user a welcome page
  • Node.js import daemon (murky depths) retrieves message, fetches up to 3 pages of favorites from Twitter API, processes favorites (this is usually complete by the time a user has read the welcome page)
  • If user has > 600 favorites, place them on a slow running drip import process
  • Import daemon places message on ØMQ 'resolve_url' queue (this hand-off is sketched below)
  • Node.js URL resolution daemon retrieves message, examines all URLs in new favorites and begins resolving them asynchronously. User is never made aware of this step—they never need to know
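
A very rough sketch of the daemon side of that flow, again assuming the classic `zmq` binding; `importFavorites` and `logError` are hypothetical helpers, and the addresses, ports and message shape are illustrative:

```javascript
var zmq = require('zmq');

var importQueue  = zmq.socket('pull');
var resolveQueue = zmq.socket('push');
importQueue.connect('tcp://127.0.0.1:5551');    // which side binds vs connects is a deployment detail
resolveQueue.connect('tcp://127.0.0.1:5552');

importQueue.on('message', function (msg) {
  var job = JSON.parse(msg.toString());         // e.g. { userId: 123 }, pushed by the PHP layer
  importFavorites(job.userId, function (err, urls) {   // hypothetical: fetches up to 3 pages of favorites
    if (err) return logError(err);              // hypothetical logging helper
    resolveQueue.send(JSON.stringify({ userId: job.userId, urls: urls }));
  });
});
```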

Let's take a look at this process in a rather haphazardly drawn diagram:

URL resolution

In addition to the description above, visualising the URL resolution process is pretty straightforward, albeit no more attractive due to my lamentable drawing talents:

There are two important things to note here. First of all, we naturally have to follow redirects. For starters, the vast majority of URLs we resolve are actually Twitter's own t.co links, which guarantee we're in for a redirect. Of course, other URL shorteners are still in widespread use, meaning we could be redirected a good few times before we uncover the actual destination URL (there is a maximum redirect threshold not shown in the diagram, to prevent things getting out of control).

Secondly—and crucially—we only ever issue HEAD requests. This process isn't interested in the content of each URL; its sole purpose is to work out what the destination URL is and categorise it according to any 'known' URL patterns whose content we're interested in embedding when a user views a faavorite.
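
A stripped-down sketch of that resolution loop, using modern Node core APIs, a redirect cap picked out of thin air and only the most basic error handling (the real daemon does rather more bookkeeping):

```javascript
var http  = require('http');
var https = require('https');

var MAX_REDIRECTS = 10;   // illustrative cap; stops shortener chains spiralling out of control

function resolveUrl(url, cb, depth) {
  depth = depth || 0;
  if (depth >= MAX_REDIRECTS) return cb(new Error('Too many redirects: ' + url));

  var lib = url.indexOf('https:') === 0 ? https : http;
  var req = lib.request(url, { method: 'HEAD' }, function (res) {
    res.resume();   // we never want the body, just the status and headers
    if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
      // Follow the redirect; Location may be relative, so resolve it against the current URL
      var next = new URL(res.headers.location, url).toString();
      return resolveUrl(next, cb, depth + 1);
    }
    cb(null, url);  // no redirect: this is the destination URL
  });
  req.on('error', cb);
  req.end();
}

// usage
resolveUrl('http://t.co/example', function (err, destination) {
  if (err) return console.error(err);
  console.log('resolved to', destination);
});
```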

Embedded content

If you imagine for a moment that the process to sync an existing user's favorites from Twitter is similar to the above (it is) then one of the few missing pieces of the puzzle is the question of how we spider any 'known' URLs turned up by the URL resolution process in order to embed content when viewing a faavorite. The answer to this is reasonably simple: a regular cron job which checks for any new 'known' URLs and queries the relevant API.

For example, if we have a newly identified URL which looks like a Github repository, we'd take the relevant parts (the username and repository name) out of the URL and feed them into Github's developer API. If we get a successful response, we update a metadata field against the URL with the contents of that response: et voilà.
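
A hedged sketch of that Github case; the repos endpoint is real, but the URL parsing, the User-Agent string and the idea of handing the parsed JSON straight back are simplifications:

```javascript
var https = require('https');

// Pull "owner/repo" out of a URL that looks like a Github repository page
function parseGithubRepo(url) {
  var match = url.match(/^https?:\/\/github\.com\/([^\/]+)\/([^\/#?]+)/);
  return match ? { owner: match[1], repo: match[2] } : null;
}

function spiderGithubRepo(url, cb) {
  var repo = parseGithubRepo(url);
  if (!repo) return cb(new Error('Not a Github repository URL'));

  var options = {
    hostname: 'api.github.com',
    path: '/repos/' + repo.owner + '/' + repo.repo,
    headers: { 'User-Agent': 'faavorite-spider' }   // Github's API rejects requests without a User-Agent
  };

  https.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      if (res.statusCode !== 200) return cb(new Error('Github returned ' + res.statusCode));
      // On success the parsed JSON becomes the metadata we store against the URL
      cb(null, JSON.parse(body));
    });
  }).on('error', cb);
}
```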

Each type of 'known' URL contains additional data including a TTL value, meaning that some are only ever spidered once (e.g. Instagram photos, Spotify albums) whereas some are spidered semi-regularly (Github repositories, Stackoverflow questions). This prevents content from becoming out of date in some circumstances.
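
In rough terms the check looks something like this; the types and TTL values are purely illustrative, not faavorite's actual configuration:

```javascript
// Deciding whether a 'known' URL needs re-spidering, based on a per-type TTL (in milliseconds)
var SPIDER_TTL = {
  instagram_photo: null,              // spider once, never again
  spotify_album:   null,
  github_repo:     24 * 3600 * 1000,  // re-spider roughly daily
  stackoverflow_q: 6 * 3600 * 1000
};

function needsSpidering(url) {
  if (!url.lastSpideredAt) return true;           // never spidered yet
  var ttl = SPIDER_TTL[url.type];
  if (ttl == null) return false;                  // one-shot types are done for good
  return Date.now() - url.lastSpideredAt > ttl;   // stale content gets refreshed
}
```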

Getting serious: approaching ‘beta’ and creating a user whitelist

I hate the concept of a perpetual beta—so much so that initially I was against the idea of any sort of phased rollout. Either things were going to work, or they weren't. However, as d-day approached I started to lose faith in this rather gung-ho attitude and eventually realised that some sort of beta mode—where I had control over user numbers and access—was essential. I still didn't want to call it a beta though, so we set up a separate 'trial' domain instead. Much better.

I was loath to write too much throwaway code to create a whitelisted environment but in the end found a satisfactory solution at the expense of very little codebase pollution. The basic outline was this:

  • load an additional settings file if the request came in on the trial domain (jaoss lets you do this easily). Bear in mind that by this point we had a 'live' site insofar as we had a holding page up on the main domain, hence the necessity for a different domain and an override file
  • in this settings file, set 'whitelist_mode' to true
  • in this settings file have a hard-coded list of usernames who had access to the beta trial
  • when authenticating the user, perform a simple PHP in_array check to see if their username is in the whitelist
  • if it is, continue as normal
  • if not, redirect to the holding domain with a special GET parameter which triggers some blurb about the site not being ready for everyone yet.

I maintained this settings file on the server, outside of our git repository and outside of the release process, to allow quick and easy additions of users on an ad-hoc basis. It worked extremely well and took very little time to write. The trial period proved absolutely invaluable from both Harry's and my own perspective, so I would strongly encourage anyone to consider their own mechanism before launch. No developer likes temporary code graffitied all over their codebase, but with a bit of thought a solution might present itself which turns out to be small, self-contained and hugely effective—and ultimately well worth doing. Note that in our case the typical 'invite only' model didn't really work—we don't take email addresses so the signup form would have had to take a Twitter username instead, and I really didn't fancy writing a form which I'd have to test and validate. The solution I went with felt like the best compromise.

Extreme seriousness: Release 1.0.0 and controlling user signups

At some point in February when it felt like ages away, Harry and I agreed to launch faavorite on Sunday 18th March. We got the beta out on Wednesday 14th (later than we'd hoped) which meant we were either in for a very long weekend or a missed deadline. In the end, we got there a few hours late: we launched around 2pm Monday 19th. I wasn't happy unleashing a complex system we hadn't performance tested without some sort of escape route, so at some point I reworked the beta logic a little, such that when we flicked the switch and made the full site live a 'capped registrations' mode kicked in. In many ways this was even simpler than the whitelist logic: I could control the absolute number of permitted rows in the users table via an unversioned settings file which simply contained a key and an integer value. If you tried to register and the number of users had hit this value, you saw a 'sorry' page. If it was less than the value, in you came.

So, Monday lunchtime rolled around and we snuck out release 1.0.0. I set up a redirect to point the trial domain at the main site, set the cap to allow 50 new registrations, and Harry, myself and the @faavoriteapp twitter account started spreading the word.

We filled the 50 slots in under 10 minutes, imported roughly 10,000 tweets and resolved & spidered about 5,000 URLs. Everything worked. So we went to the pub.

Capped users: the good

Having control over the absolute number of users permitted in the database had two huge benefits:

  1. I could take emergency action if the registrations looked like they were swamping the system. Registration is pretty much our most intensive time due to the initial import and URL resolution logic, so I was a bit anxious about it
  2. Almost more importantly, I could measure the background impact that users and their imported favorites had on various parts of the site (mainly the database). Faavorite is unusual in that each 'new' user brings with them a boat load of data (sometimes thousands of tweets and URLs) so I wanted to analyse the ongoing performance of the site in a controlled manner.

Capped users: the bad

I should have seen this one coming. Capped registrations coupled with Harry's strong following on Twitter created an almost siege-like impact every time we rolled out another 50 or so registrations. I'd tweet, he'd tweet, @faavoriteapp would tweet, and we'd get a slew of people trying to get a space as quickly as possible. I'd unwittingly created a mini DDoS situation all of my own doing, which meant that each time more registrations were rolled out I sat anxiously watching the server to make sure everything went smoothly. This pattern was particularly painful during the early days of launch when I was unsure how the system would scale, so it's something to be mindful of.

On-site performance: database stuff

I've talked a lot about what happens below the surface but not yet really touched on on-site performance—what users actually experience when browsing round the site. There are a few areas of interest here.

Generating a user's feed

When starting the project I had no idea quite what a complex, widely discussed area of interest feed generation was. How hard can it be, right? The basic idea is to simply take a subset of all tweets (well, favorites in our case) authored by users whom the current user is following. Hmm. Sounding a bit more complex already. The MySQL query itself isn't too bad at all, but it doesn't take much to imagine how performance can degrade significantly over time as user and tweet numbers increase. During development I set aside some time to add dummy data to the site so I could continually test the performance of this query, and, coupled with MySQL's EXPLAIN functionality, refined it such that it remained performant up to an acceptably high number of users and tweets. I'm no DBA so I must admit this was a protracted period of trial and error coupled with much googling, though as is often the case with this sort of thing I didn't really know what to google for. I eventually stumbled on a Quora topic which gave me the terminology I was after: fan out on read and fan out on write (I strongly encourage you to read the topic for the full lowdown).

Generating a user's feed: fan out on read vs fan out on write

As I said—it turns out feed generation is a huge area of focus (not surprisingly given that Facebook, Twitter et al depend on it). Just knowing of the terms 'fan out on read/write' gave me a huge amount of reading to do and helped steer my research.

One of the most enlightening things I found was this talk from 2010: “Big Data in Real-Time at Twitter”. It touches upon the difficulty of generating a user's feed on demand (fan out on read) and outlines what Twitter did about it (hint: switched to fan out on write). But the biggest source of comfort was these two points from this slide:

  • All engineering solutions are transient
  • Nothing's perfect but some solutions are good enough for a while

That was good enough for me. Twitter ran with fan out on read for a while until it stopped working for them. It was unlikely to stop working for faavorite for a very long time, so I stopped worrying about it and stuck with my approach. The user feed generation you currently see is all generated on demand: fan out on read.
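
The real query lives in the PHP layer and I'm not reproducing it here, but the fan out on read shape looks roughly like this, sketched via the Node mysql client (with hypothetical table and column names) just to keep all these examples in one language:

```javascript
var mysql = require('mysql');

// Connection details are placeholders; the mysql module connects lazily on first query
var db = mysql.createConnection({ host: 'localhost', user: 'faavorite', database: 'faavorite' });

function loadFeed(userId, limit, cb) {
  // Fan out on read: gather recent favorites authored by anyone the user follows, at request time
  db.query(
    'SELECT f.* FROM favorites f ' +
    'JOIN follows fo ON fo.followed_id = f.user_id ' +
    'WHERE fo.follower_id = ? ' +
    'ORDER BY f.favorited_at DESC LIMIT ?',
    [userId, limit],
    cb
  );
}
```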

Real-time mini events feed: fan out on write. And Redis. And WebSockets.

With a brain stuffed full of information I could barely remember, I had in mind the perfect excuse to use fan out on write: the real-time mini events feed present on various pages throughout the site. I'd been eyeing up Redis for a while too, so I decided to use that as the data store for it.

So, why a completely different implementation for a very similar looking piece of functionality? The answer isn't particularly professional: the mini feed simply isn't as important as the main feed so I could afford to have a little fun with it. Secondly, a user's mini feed is always clipped to the last few events, meaning the fan out on write model won't swamp Redis with too much data over time. Thirdly, the events feed is allowed to be more transient—it contains comments, notifications when friends join faavorite, notifications when people favorite things, etc. It doesn't have to be 100% accurate—I don't ever delete an event from the mini feed (if someone faavorites then unfaavorites something, for example), so fan out on write just felt like a good pattern to try out.
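
Roughly, the write path looks like this; it's a sketch assuming the classic callback-style node_redis API, a made-up key scheme and an arbitrary feed length:

```javascript
var redis  = require('redis');
var client = redis.createClient();

var FEED_LENGTH = 20;   // keep only the last few events per user (the real cap is an assumption here)

// Fan out on write: when an event happens, push it onto each follower's capped mini-feed list
function fanOutEvent(followerIds, event, done) {
  var multi = client.multi();
  followerIds.forEach(function (id) {
    var key = 'minifeed:' + id;                 // hypothetical key scheme
    multi.lpush(key, JSON.stringify(event));    // newest event goes to the front of the list
    multi.ltrim(key, 0, FEED_LENGTH - 1);       // clip so Redis never accumulates unbounded data
  });
  multi.exec(done);
}
```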

The full details of how the events feed works, including the real-time websocket notification system, are complex. I'll write them up in full in a later article.

On-site performance: page load speed

One comment we've received numerous times since launch concerns the fast page load speed. Hearing this is part flattering and part embarrassing, because the truth is we're not doing anything particularly clever to cause it. There's some rudimentary query caching and both Harry and I always try to write performant code in our respective disciplines, but there's not a lot to it except for one key thing: asynchronous page loads.

Asynchronous page loads: PJAX

I'm not going to write too much about PJAX—you can read about how this site uses it instead. It's worth being clear here: the speed benefit comes from asynchronously loading less page content. The 'P' in PJAX takes care of the rest, namely pushState and popstate (to update your browser's address bar), but it's not going to instantly give you a massive performance boost unless your backend architecture knows how to interpret and respond to a request containing the X-PJAX header. I spent a while tweaking this specifically to faavorite's needs—we use a custom version of the PJAX library and do some clever things on the backend to try and return as little data as possible per page load—a good example of this is that the events feed is not regenerated for a PJAX request if it was already present on the page.
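
To be concrete about the idea (though emphatically not faavorite's custom PJAX build), the core pattern is just fetching a fragment with the X-PJAX header and updating the history; the container id and header value here are assumptions:

```javascript
// Load a page fragment asynchronously and swap it into the existing layout
function pjaxLoad(url, addToHistory) {
  fetch(url, { headers: { 'X-PJAX': 'true' } })   // the backend sees X-PJAX and returns only the inner content
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.getElementById('content').innerHTML = html;          // assumed container id
      if (addToHistory) history.pushState({ url: url }, '', url);   // update the address bar without a full reload
    });
}

// Back/forward buttons re-fetch the fragment for the stored URL
window.addEventListener('popstate', function (e) {
  if (e.state && e.state.url) pjaxLoad(e.state.url, false);
});
```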

Integrating asynchronous page loads was not as easy as I'd have liked, and I must admit I thought about pulling it at various points, but in the end the reward is absolutely worth it. I think a large part of the responsiveness of the site actually boils down to it feeling responsive due to the lack of a full page refresh. The rest is down to ongoing hard work elsewhere to make sure the codebase is as performant as possible.

The physical stack

I suppose the last area of relevance (for now) is the hosting which powers the site. There's not a great deal to say other than that it currently runs on a Rackspace Cloud server with 2GB RAM (Rackspace don't publish clock speeds, but I know the VM has four cores). I've got a rough mental roadmap of how to scale the site across multiple machines as and when the need arises, and if cost wasn't an issue we'd already be running separate MySQL and Redis servers, but it's keeping us afloat for now.

Summary: key technologies

Let's have a rundown of all the technologies used on the site, in rough as-you-meet-them order (I've left out Linux):

  • Apache 2.2
  • PHP 5.3 (I tried 5.4, got segfaults, got scared, ran away)
  • JAOSS (PHP MVC framework)
  • MySQL 5.5 (Ubuntu users: the repo only contains 5.0. Upgrade!)
  • NodeJS 0.6.12
  • Countless awesome node.js modules
  • Redis 2.2 (needs upgrading)
  • 0.9.4 (not a technology as such, but important to highlight)
  • ZeroMQ 2.1

Future articles

If you made it this far then thanks—I appreciate you sticking with it. Believe it or not this is actually the short version—there's too much fun stuff to write about in one article. Keep an eye out for future articles detailing:

  • ZeroMQ architecture—how does the messaging layer all fit together?
  • Events processing—how do we generate events and push them out in real time to any interested, active users?
  • Keeping Twitter favorites synced for 1,000+ users using their REST API
  • Monitoring—making sure things don't go wrong, and being alert when they do.

The easiest way to keep up to date is to follow me or @faavoriteapp. If you aren't already, get following Harry and keep an eye out for his writeup about the mobile-first approach taken when building the frontend architecture too. In the meantime, if you haven't already then please sign up for faavorite!

