The async programming style promoted by vanilla node.js development is called continuation-passing style. If you’ve ever used node.js, you’re no doubt familiar with it:
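Something along these lines — `readConfig` and `queryDb` are made-up stand-ins, not real APIs, and they call their callbacks synchronously so the sketch is self-contained:

```javascript
// Hypothetical stand-ins for two async steps, in continuation-passing style.
function readConfig(path, callback) {
  callback(null, { db: 'main' });
}
function queryDb(config, callback) {
  callback(null, ['row1', 'row2']);
}

var finalRows;
readConfig('app.json', function (err, config) {   // the continuation of readConfig
  if (err) { return console.error(err); }
  queryDb(config, function (err, rows) {          // the continuation of queryDb
    if (err) { return console.error(err); }
    finalRows = rows;
  });
});
```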
The continuation being passed (how the program should continue) is the function(err, result) that needs to be executed when the current function completes. This is not massively difficult to work with, and with judicious refactoring it’s even possible to avoid what is known as ‘callback hell’ by grouping functions into coherent building blocks.
However, visually, it’s not very intuitive. You’re indicating something that should happen after the current function as an argument of that function, almost inside the function call.
What really happens is that the async function completes, and then you go on to the continuation. Making that sequence visible is exactly what promises allow you to do.
What are promises
I was first made aware of the possibility of using promises in this context by James Coglan, who has vocally advocated this style over standard continuation passing. He’s compared promises to the monads of the asynchronous world, which makes sense if you consider the pipelining behavior I’ll explain a little further down.
Promises are concurrency constructs that act as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.
This fits very well with async programming, where the control flow is waiting (and potentially yielded to other requests) while mostly non-blocking IO actions complete.
I’ve been using the Q library for promises, and the code equivalent to the one I gave above looks like this:
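A minimal sketch of the same two steps as a then-chain. Q’s `then` behaves like what later became the standard Promise API, so this uses native promises to stay runnable without any library; the function names are my own stand-ins:

```javascript
function readConfig(path) {                    // now returns a promise
  return Promise.resolve({ db: 'main' });      // instead of taking a callback
}
function queryDb(config) {
  return Promise.resolve(['row1', 'row2']);
}

readConfig('app.json')
  .then(queryDb)                    // "do this, and THEN do the next thing"
  .then(function (rows) {           // success handler
    console.log(rows);
  }, function (err) {               // error handler
    console.error(err);
  });
```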
As you can see, there is a very clear sequence of events – “do this, and THEN do the next thing”.
The Q library essentially plans for two outcomes for every action: error and success. When a function has failed, the promise is ‘rejected’; when a function has succeeded, the promise is ‘resolved’ (fulfilled). ‘then’ can take two functions, one for each of those outcomes (success first, error second).
Chaining ‘then’ calls allows you to do ‘promise pipelining’: each step takes the result of the previous function and carries on. Errors also bubble down the promise pipeline, and the first error callback provided will catch them.
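A sketch of that bubbling behavior, again with native promises and stand-in functions (`action1` is rigged to fail):

```javascript
function action1() {
  return Promise.reject(new Error('action1 failed')); // simulate a failure
}
function action2(result) {
  return Promise.resolve(result + ' -> action2');
}
var caught;
function errorHandling(err) {
  caught = err.message;   // the first error callback in the chain catches it
}

action1()
  .then(action2)          // skipped: the rejection bubbles past this step
  .then(function (result) {
    console.log(result);
  }, errorHandling);      // ...and lands here
```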
In this example, an error occurring in action1 or action2 will be handled in the errorHandling function in the last ‘then’.
It takes some getting used to working with promises, since every function you want to chain in this manner has to return a promise itself. Another change is in the signature of functions: we no longer expect a callback as an argument. Fortunately, the Q library also provides a convenient helper function which will transform any standard node async function (expecting an error-first callback) into a promise-resolving function. For example, you can go from:
To the equivalent promise-returning version:
defer.makeNodeResolver() rejects the promise if there is an error, and resolves the promise if the function succeeds.
More complex flow with qx
qx is a library that adds some familiar constructs to the promise pipeline, allowing you to execute arrays of promises in various ways. map:
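Qx.map applies a promise-returning function to every element of an array; the same shape can be sketched with Array.map plus the all-combinator from native promises (`double` is a stand-in for a real async operation):

```javascript
function double(n) {                 // stand-in async operation
  return Promise.resolve(n * 2);
}

Promise.all([1, 2, 3].map(double))
  .then(function (allResults) {
    console.log(allResults);         // results keep the input order
  });
```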
The ‘then’ is only executed when all promises are fulfilled, and allResults is an array containing the results of all the promises, in the order given in the program. q also offers a convenience function ‘spread’, so you can name the results individually.
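Q’s ‘spread’ passes the array elements on as separate, individually named arguments; with native promises the equivalent is unpacking the results array yourself (`getUser` and `getOrders` are hypothetical):

```javascript
function getUser(id)   { return Promise.resolve({ id: id, name: 'Ada' }); }
function getOrders(id) { return Promise.resolve([101, 102]); }

Promise.all([getUser(1), getOrders(1)])
  .then(function (results) {
    var user = results[0];    // what Q's spread would do for you:
    var orders = results[1];  // each result under its own name
    console.log(user.name, orders.length);
  });
```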
Another convenient use of map is when you have an unknown number of items that you all want to process in the same way:
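For instance (with a hypothetical `fetchTitle` as the per-item async step):

```javascript
function fetchTitle(url) {           // hypothetical per-item async step
  return Promise.resolve('title of ' + url);
}

function fetchAllTitles(urls) {      // urls can have any length
  return Promise.all(urls.map(fetchTitle));
}
```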
something which is usually a bit harder to carry out in continuation-passing style.
qx has other interesting functions, like every() or any(), which will execute the next step only if all, or any, of the promises resolve to true. It doesn’t have reduce (yet), but by digging around in the code, I found a reduce in the q library itself. In short, you can combine promises in ways that should be familiar from standard map-reduce-style operations.
This allows the promise pipeline to take on a more complex flow. Combining parallel (in the sense of concurrent non-blocking IO – node.js remains single-threaded) and sequential actions becomes easy.
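A sketch of that combination, with made-up steps: two loads run concurrently, then a sequential step consumes both results:

```javascript
function loadUsers()    { return Promise.resolve(['ada', 'bob']); }
function loadProducts() { return Promise.resolve(['pen']); }
function buildReport(users, products) {        // sequential follow-up step
  return Promise.resolve(users.length + ' users, ' + products.length + ' products');
}

Promise.all([loadUsers(), loadProducts()])     // concurrent non-blocking IO
  .then(function (results) {
    return buildReport(results[0], results[1]);
  })
  .then(function (report) { console.log(report); });
```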
The most important gotcha in working with q is to always, always make sure there is error handling at the end of the pipe. I would urge you to use tests to make sure of that. What happens when there is no error handling? Well, that’s the problem: nothing. If you are writing a web application, like I was, the request just hangs, no explanation given.
On my wishlist for q would be a solution for this: a default fallback error handler that throws a neat stack trace whenever the chain ends without an explicit error handler.
The Q documentation advises adding a ‘done’ function at the end of your chain as a stopgap.
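Q’s done() rethrows any rejection that nothing handled, so the error crashes loudly instead of hanging the request. With native promises, the same safety net is a terminal catch that rethrows outside the chain (`pipeline` is a stand-in that happens to succeed here):

```javascript
function crashLoudly(err) {
  // Rethrow on the next tick, outside the promise chain, so nothing can
  // swallow it -- essentially what Q's done() does for unhandled errors.
  setTimeout(function () { throw err; }, 0);
}

function pipeline() {               // stand-in chain; succeeds in this run
  return Promise.resolve('ok');
}

pipeline()
  .then(function (result) { console.log(result); })
  .catch(crashLoudly);              // terminal handler: hangs become crashes
```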
After developing a fairly complex node.js application using Q and Qx, I must say that I’m hooked. I find it much clearer to have a sequential path of events, with two clear possible outcomes, than to have to wrangle the usual passing around of callbacks. I’m not completely sure why continuation passing was chosen over promises by node.js core.
There are other promise libraries around (see the Promises/A page), however, after a bit of initial wrestling, I’ve found Q to be a good fit.