
How to Create a Node.js Cluster for Speeding Up Your Apps

By Behrooz Kamali

Node.js is becoming more and more popular as a server-side runtime environment, especially for high-traffic websites, as usage statistics show. The availability of several mature frameworks also makes it a good environment for rapid prototyping. Node.js has an event-driven architecture and a non-blocking I/O API that allows requests to be processed asynchronously.

One of the important and often less highlighted features of Node.js is its scalability. In fact, this is the main reason why some large companies with heavy traffic are integrating Node.js in their platform (e.g., Microsoft, Yahoo, Uber, and Walmart) or even completely moving their server-side operations to Node.js (e.g., PayPal, eBay, and Groupon).

Each Node.js process runs in a single thread, and by default it has a memory limit of 512MB on 32-bit systems and 1GB on 64-bit systems. Although the limit can be bumped to ~1GB on 32-bit systems and ~1.7GB on 64-bit systems (for example via V8's --max-old-space-size flag), both memory and processing power can still become bottlenecks for various processes.

The elegant solution Node.js provides for scaling up an application is to split a single process into multiple processes, or workers in Node.js terminology. This can be achieved through the cluster module, which allows you to create child processes (workers) that share all the server ports with the main Node process (the master).

In this article you’ll see how to create a Node.js cluster for speeding up your applications.

Node.js Cluster Module: What It Is and How It Works

A cluster is a pool of similar workers running under a parent Node process. Workers are spawned using the fork() method of the child_process module. This means workers can share server handles and use IPC (inter-process communication) to talk to the parent Node process.

The master process is in charge of initiating workers and controlling them. You can create an arbitrary number of workers in your master process. Remember that, by default, incoming connections are distributed among the workers in a round-robin fashion (except on Windows). There is another approach, which I won't discuss in detail here, that hands the assignment over to the OS (the default on Windows). The Node.js documentation suggests sticking with round-robin as the scheduling policy.

Although using the cluster module sounds complex in theory, it is very straightforward to implement. To start using it, include it in your Node.js application:

var cluster = require('cluster');

The cluster module executes the same Node.js process multiple times. Therefore, the first thing you need to do is identify which portion of the code is for the master process and which is for the workers. The cluster module allows you to identify the master process as follows:

if(cluster.isMaster) { ... }

The master process is the process you initiate, which in turn initializes the workers. To start a worker process inside the master process, we'll use the fork() method:

cluster.fork();

This method returns a worker object that contains some methods and properties about the forked worker. We’ll see some examples in the following section.

The cluster module emits several events. Two common ones, related to the start and termination of workers, are online and exit. online is emitted when a worker is forked and has sent the online message. exit is emitted when a worker process dies. Later, we'll see how we can use these two events to control the lifetime of the workers.

Let’s now put together everything we’ve seen so far and show a complete working example.

Examples

This section features two examples. The first is a simple application showing how the cluster module is used in a Node.js application. The second is an Express server taking advantage of the cluster module, based on production code I generally use in large-scale projects. Both examples can be downloaded from GitHub.

How the Cluster Module Is Used in a Node.js App

In this first example, we set up a simple server that responds to all incoming requests with a message containing the ID of the worker process that handled the request. The master process forks four workers, each of which listens on port 8000 for incoming requests.

The code that implements what I've just described is shown below:

var cluster = require('cluster');
var http = require('http');
var numCPUs = 4;

if (cluster.isMaster) {
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    http.createServer(function(req, res) {
        res.writeHead(200);
        res.end('process ' + process.pid + ' says hello!');
    }).listen(8000);
}

You can test this server on your machine by starting it (run the command node simple.js) and accessing the URL http://127.0.0.1:8000/. Incoming requests are distributed one at a time to each worker. If a worker is available, it immediately starts processing the request; otherwise the request is added to a queue.

There are a few inefficiencies in the above example. For instance, imagine a worker dies for some reason: you lose one of your workers, and if the same thing happens again, you eventually end up with a master process that has no workers to handle incoming requests. Another issue is the number of workers: the systems you deploy to have different numbers of cores/threads. In the example above, to use all of a system's resources you'd have to manually check each deployment server's specifications, find out how many threads are available, and hard-code that number. In the next example, we'll see how to make the code more flexible using an Express server.

How to Develop a Highly Scalable Express Server

Express is one of the most popular web application frameworks for Node.js (if not the most popular). We have covered it on SitePoint a few times. If you're interested in knowing more about it, I suggest you read the articles Creating RESTful APIs with Express 4 and Build a Node.js-powered Chatroom Web App: Express and Azure.

This second example shows how we can develop a highly scalable Express server. It also demonstrates how to migrate a single-process server to take advantage of the cluster module with just a few lines of code.

var cluster = require('cluster');

if(cluster.isMaster) {
    var numWorkers = require('os').cpus().length;

    console.log('Master cluster setting up ' + numWorkers + ' workers...');

    for(var i = 0; i < numWorkers; i++) {
        cluster.fork();
    }

    cluster.on('online', function(worker) {
        console.log('Worker ' + worker.process.pid + ' is online');
    });

    cluster.on('exit', function(worker, code, signal) {
        console.log('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
        console.log('Starting a new worker');
        cluster.fork();
    });
} else {
    var app = require('express')();
    app.all('/*', function(req, res) {
        res.send('process ' + process.pid + ' says hello!');
    });

    var server = app.listen(8000, function() {
        console.log('Process ' + process.pid + ' is listening to all incoming requests');
    });
}

The first addition in this example is getting the number of CPU cores using the Node.js os module. Its cpus() function returns an array with one entry per logical core. This way, the number of workers to fork is determined dynamically, based on the server's specification, to maximize utilization.

A second and more important addition is handling a worker’s death. When a worker dies, the cluster module emits an exit event. It can be handled by listening for the event and executing a callback function when it’s emitted. You can do that by writing a statement like cluster.on('exit', callback);. In the callback, we fork a new worker in order to maintain the intended number of workers. This allows us to keep the application running, even if there are some unhandled exceptions.

In this example, I also set a listener for the online event, which is emitted whenever a worker is forked and ready to receive incoming requests. This can be used for logging or other operations.

Performance Comparison

There are several tools for benchmarking APIs; here I use the Apache Benchmark (ab) tool to analyze how the cluster module can affect the performance of your application.

To set up the test, I developed an Express server that has one route and one callback for the route. In the callback, a dummy operation is performed and then a short message is returned. There are two versions of the server: one with no workers, in which everything happens in the master process, and the other with 8 workers (as my machine has 8 cores). The table below shows how incorporating a cluster module can increase the number of processed requests per second.

Concurrent Connections      1      2      4      8     16
Single Process            654    711    783    776    754
8 Workers                 594   1198   2110   3010   3024

(Requests processed per second)

Advanced Operations

While using the cluster module is relatively straightforward, there are other operations you can perform with workers. For instance, you can achieve (almost!) zero downtime in your application. We'll see how to perform some of these operations shortly.

Communication Between Master and Workers

Occasionally you may need to send messages from the master to a worker to assign a task or perform other operations. In return, workers may need to inform the master that the task is completed. To listen for messages, an event listener for the message event should be set up in both master and workers:

worker.on('message', function(message) {
    console.log(message);
});

The worker object is the reference returned by the fork() method. To listen for messages from the master in a worker:

process.on('message', function(message) {
    console.log(message);
});

Messages can be strings or JSON objects. To send a message from the master to a specific worker, you can write code like the one shown below:

worker.send('hello from the master');

Similarly, to send a message from a worker to the master you can write:

process.send('hello from worker with pid: ' + process.pid);

In Node.js, messages are generic and do not have a specific type. Therefore, it is a good practice to send messages as JSON objects with some information about the message type, sender, and the content itself. For example:

worker.send({
    type: 'task 1',
    from: 'master',
    data: {
        // the data that you want to transfer
    }
});

An important point to note here is that message event callbacks are handled asynchronously. There isn’t a defined order of execution. You can find a complete example of communication between the master and workers on GitHub.

Zero Down-time

One important result that can be achieved using workers is (almost) zero-downtime servers. Within the master process, you can terminate and restart the workers one at a time after you make changes to your application. This allows you to keep the older version running while loading the new one.

To be able to restart your application while it's running, you have to keep two points in mind. First, the master process runs the whole time and only workers are terminated and restarted, so it's important to keep your master process small and in charge of nothing but managing workers.

Second, you need to notify the master process somehow that it needs to restart the workers. There are several methods for doing this, including waiting for user input or watching files for changes. The latter is more efficient, but you need to identify the files to watch in the master process.

My suggestion for restarting workers is to first try to shut them down safely, and only force-kill them if they don't terminate. You can do the former by sending a shutdown message to a worker as follows:

workers[wid].send({type: 'shutdown', from: 'master'});

And start the safe shutdown in the worker message event handler:

process.on('message', function(message) {
    if(message.type === 'shutdown') {
        process.exit(0);
    }
});

To do this for all the workers, you can use the workers property of the cluster module, which keeps a reference to all the running workers. We can also wrap all these tasks in a function in the master process, to be called whenever we want to restart all the workers.

function restartWorkers() {
    var wid, workerIds = [];

    for(wid in cluster.workers) {
        workerIds.push(wid);
    }

    workerIds.forEach(function(wid) {
        // The shutdown handler above checks message.type,
        // so the message must carry a 'type' field
        cluster.workers[wid].send({
            type: 'shutdown',
            from: 'master'
        });

        setTimeout(function() {
            if(cluster.workers[wid]) {
                cluster.workers[wid].kill('SIGKILL');
            }
        }, 5000);
    });
}

We can get the ID of all the running workers from the workers object in the cluster module. This object keeps a reference to all the running workers and is dynamically updated when workers are terminated and restarted. First we store the ID of all the running workers in a workerIds array. This way, we avoid restarting newly forked workers.

Then, we request a safe shutdown from each worker. If after 5 seconds a worker is still running and still exists in the workers object, we call the kill function on it to force it to shut down. You can find a practical example on GitHub.
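An alternative worth knowing about: the cluster module also provides worker.disconnect(), which closes the worker's servers, waits for existing connections to finish, and then closes the IPC channel. A hedged sketch combining it with the same 5-second force-kill timeout (gracefulRestart is an illustrative name, not part of the cluster API):

```javascript
function gracefulRestart(worker) {
    // Ask the worker to stop accepting new connections and exit
    // once its existing connections have closed.
    worker.disconnect();

    // Force-kill the worker if it hasn't exited within 5 seconds.
    var timeout = setTimeout(function() {
        worker.kill('SIGKILL');
    }, 5000);

    worker.on('exit', function() {
        clearTimeout(timeout);
    });
}
```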

Conclusions

Node.js applications can be parallelized using the cluster module in order to use the system more efficiently. Running multiple processes at the same time takes only a few lines of code, which makes the migration relatively easy, as Node.js handles the hard part.

As I showed in the performance comparison, there is potential for noticeable improvement in the application performance by utilizing system resources in a more efficient way. In addition to performance, you can increase your application’s reliability and uptime by restarting workers while your application is running.

That being said, you need to be careful when considering the use of the cluster module in your application. Its main recommended use is for web servers. For other cases, you need to study carefully how to distribute tasks between workers, and how to efficiently communicate progress between the workers and the master. Even for web servers, make sure a single Node.js process really is a bottleneck (memory or CPU) before making any changes to your application, as you might introduce bugs with your change.

One last thing: the Node.js website has great documentation for the cluster module, so be sure to check it out!

Behrooz Kamali
Meet the author
Behrooz is a full stack developer specializing in the MEAN stack. He holds a PhD in industrial and systems engineering, with expertise in the areas of operations research, data analytics, algorithm design, and efficiency. When not developing, he enjoys teaching and learning new things.
  • Green Cloud

    Thank you for sharing!

    • Behrooz Kamali

      You’re welcome!

  • http://bugpedia.ir/ Mahdi Dibaiee

    Great tutorial, thanks!

    • Behrooz Kamali

      Glad you liked it!

  • http://www.manueru.mx Manueru.mx

    Excelent tutorial Behrooz, thanks for sharing. :)

    • Behrooz Kamali

      You are welcome!

  • http://jordankasper.com/ Jordan Kasper

    Ooooor you could use one of the many process managers ( http://strong-pm.io/compare/ ) out there and reliably be able to scale and monitor your application without injecting devops code into source code.

  • Sam Roberts

    This is a nice overview of how node cluster works, but the sample code above has a number of problems. It sucks 100% cpu busy looping on restarting exited workers if they are dieing on startup, it does “zero-downtime” restart by assuming the new code is deployed ON TOP of the old code… do you really want to do that? Can your app withstand having all its files and resources (css, etc.) re-written underneath its feet? And what if you want to rollback? It also seems to misunderstand some aspects of cluster: sending a message to a worker so that the worker can do `process.exit()` is a poor way of doing a clean shutdown, cluster has a disconnect that will wait for open connections to the worker to be done before closing. Also, cluster.kill(‘SIGKILL’) doesn’t send SIGKILL to a worker as the author thinks it does, it waits for all open connections in the worker to be done, then it sends the signal (see the node cluster docs for more info).

    All this possible nitpicking aside, don’t build cluster into your app. Use an external runner to run your app in a clustered mode, there are a dozen. My company has strong-supervisor, there is also Isaac Schluter’s old cluster-master, naught, cluster2, pm2, etc. All of which will apply clustering better and more robustly than the gists here do, without you having to learn about the pitfalls and gotchas of the node cluster API with your production apps.

    • Behrooz Kamali

      @disqus_ZB8VsaU2Y4:disqus, thanks for your thorough comment. I don’t see how a failing worker here can cause an issue different from any other process management. A failing worker keeps restarting, unless you force it to stop after a certain number of failed attempts. Also, I don’t see how there is an issue with updating or rolling back your running node application. It is very simple using Git or any other revision control system.

      Regarding your second point, I am not arguing that you should use the sample code here in your production code. I am not comparing modules with more than thousand lines of code to less than 50 lines of code. The sample here is just for demonstrating the concepts and how cluster module can be used for prototyping or small projects. I would definitely like to cover a process management module such as PM2 in future for more advanced operations.

      • Sam Roberts

        Process managers put a delay, sometimes exponential, before restarting, this sample code doesn’t. Thus the busy loop, it will suck 100% cpu if it can.

        Pushing code with git/scp/what-have-you is easy, but I explained above why its not safe (for all apps, a pure-js app with no futher fs interactions is fine). Usual upgrade procedure is to put code in new location, change a ‘current’ symlink to point to it, then restart.

        You don’t do a clean restart, sending a message to the worker that it has to handle explicitly by exiting is an unusual strategy. Cluster does this already: see disconnect() docs. Moreover, it does it cleaner: it refuses new connections, waits for existing connections to close, then exits.

        Nice to see a clarification on the intended use of your sample code, but I was responding to the blog, and it reads as a how-to, I did not get the impression it was intended just as a tutorial on cluster mechanics. And fwiw, https://github.com/isaacs/cluster-master/blob/master/cluster-master.js is 507 lines.

        • El Sioufy

          @disqus_ZB8VsaU2Y4:disqus … Man I neither know you nor I know the guy who wrote the post, but your manner holds a lot of negativity while I personally believe this is one of the very nice posts I’ve read this year. I think that you need to learn, that If u have additions to post them gently and in a nice manner; at the end the guy took off his time to educate many of the people (I personally benefited a lot), not for you to come promote your companies products; which from your manner I can tell ‘it sucks and no one is using it’ … cheers !

  • PaulTopping

    A very useful post!

    I have a question. Why only relate the number of workers to the number of processor cores? Aren’t worker processes also useful when requests may be blocked for other reasons such as file i/o? All four of your workers could be blocked waiting for disk access. Cores would still be available to process incoming requests.

    • Behrooz Kamali

      @PaulTopping:disqus, that’s a good point. Relating workers to the number of cores is the best practice, it is even suggested by the Node.js manual. But, as you mentioned, based on different use cases, you can manually adjust number of the workers. I would suggest sticking to the best practice, as all your workers (even if you have twice the number of cores) could be blocked by I/O operations with no noticeable CPU load.

    • Sam Roberts

      Paul, node does most emphatically NOT block on disk access, its entire purpose can be summed up by “non-blocking i/o”.

      • PaulTopping

        I realize that node.js doesn’t block but my comment was mostly about thinking things through rather than just taking “best practices” for granted. “Best practices” are often wrong in certain contexts or even in the main context.

        Multiple node.js processes could still all be idle at some point because, from each process’ point of view, there is nothing to do until some file i/o completes. I guess they are still available to process a new request, assuming there’s an available core to run on so I can see the thinking behind the best practice.

        Of course, the non-blocking of node.js relies on cooperation. It is still possible for some task to be compute-bound or simply not make use of asynchronicity. Node.js programmers are advised to break compute-bound tasks into small chunks but that is not always easy to do.

        It also depends on the project. If the system can’t count on tasks being cooperative this way, then node.js processes will be busy and not available to take a new incoming request. However, even in that scenario where all the node.js processes are compute-bound, we can’t do any better than a process per core. If a new request comes in and all node.js processes are not available, there is still no core available to run a new, non-busy node.js process.

        Sorry for taking the time to think this through.

        • evanplaice

          @PaulTopping:disqus Putting each instance on one core reduces context switching (ie using more instance than # of cores) and cache thrashing (ie using fewer instances than the number of cores).

          The code you run on the main context is single-threaded and asynchronous but there are a number of background workers that handle I/O requests. By firing one instance per core, you’re essentially isolating the instance and attached threads to one core, minimizing cache invalidation that happens when memory is read across multiple cores/processors.

          File I/O will always be inherently slow and context switching will always come with additional overhead, that’s why it’s important to limit the number of instances you run. There are other solutions to reduce/eliminate File I/O altogether (ex request caching, file caching in memory, etc), but that topic is too broad for a comment.

          For tasks that are truly compute-bound (ex media encoding, data analysis, etc) it’s probably better to build a separate service with a producer/consumer queue and specialized instances to handle processing.

          The concept I see a lot of people from an OOP background fight with is separating concerns at the network level. You can create a mega monolithic application that includes a HTTP server, data persistence, view generation, etc… all running on a huge Rube Goldberg machine of threads signaling/synchronizing complex data structures. Plus, fine grained API access controls down to the member level.

          Or you can split things up into multiple services, ditch access control at the language level, design your API to be exposed as REST endpoints, and control public access via authentication/authorization.

      • evanplaice

        @disqus_ZB8VsaU2Y4:disqus Node does block on I/O (ie all I/O requests are inherently synchronous), just not on the main context. I/O requests are handled internally via background worker threads and fire a callback on the main context when they finish.

  • Peyman Afraz

    Thank you Behrooz. It will be excellent if you go further on the problems of scaling Node.js Apps which are using bi-directional technologies like Websockets. Specially using libraries such as Socket.io. There are lots of unresolved issues with scaling these apps using Node Clusters, e.g. using a proper load balancer, sharing sessions between workers, enabling sticky sessions, horizontal scaling and vertical scaling.

    Thanks in advance

  • Damian Płaza

    Great article. What do you think about combining cluster, node.js and IoT stuff? Thanks for your opinion :-)

  • Behrooz Kamali

    @jakerella:disqus, this tutorial is not advocating replacing the use of advanced process management modules, especially in large scale projects (although I have used cluster module manually in a similar fashion in some projects with no problem). This article mainly serves as an introduction to the Node.js cluster module and how it works under the hood.

    By the way, thanks for the great link.

  • Behrooz Kamali

    Sure, those are great ideas for follow-up articles. I’ll definitely keep them in mind.

  • Llewellyn Lovell

    So many use cases, thanks for the article

  • SWR

    This is a great article! One issue and question I have is:

    When console.logging data inside the forks, it actually outputs that console logged data the same amount of times as how many cpu cores (or threads running), is there a way to just console log something once?

    • Behrooz Kamali

      If I understand the question correctly, each time you assign a task to a worker, only that worker will log data. Except for messages such as set up and exit, typically no other messages should be logged by all the workers. In case your use case involves multiple assignment of the same task to different workers, you can use a shared file or a data store such as Redis, to track if some action needs to be logged or not. In my experience, Redis is the fastest and most reliable data structure for communication between different workers and in general, different node applications.

  • Manny

    Thanks for the great insight on using clusters. I have been looking for performance enhancements and have looked at clusters and even load balanced node apps running on different servers.

    I tried running some larger benchmarks using Apache Benchmark on a simple hello world with cluster (4 workers used) and without clustering and came up with these results:

    1. Concurrent 100 total requests – Non cluster (2834 reqs/second – Median response 3ms)
    Cluster (1245 reqs/second – Median response 4ms)

    2. Concurrent 10000 total requests – Non cluster(5647.71 reqs/second – Median response 43ms)
    Cluster(4233.92 reqs/second – Median response 220ms)

    3. Concurrent 100000 total requests – Non Cluster( 6309 reqs/second – Median response 32ms)
    Cluster(4841 reqs/second – Median response 204ms)

    Overall I do see that clustered performance lagged a bit. I am assuming that context switching scenario might be to blame here or other programs using other cores on my computer. As I read some where, clusters might perform much better if there is some CPU intensive work to be done as part of workers though, that is not the generally preferred use case for Node.

    It would be great to hear ways to optimize cluster based performance

  • Blackhat

    Hello Mr. Kamali. I wanted to ask whether you also write articles in Persian, or only in English. If you have a website or blog on this topic, I'd appreciate it if you could share it so we can pass it on to other friends and they can read it too. I'd be grateful for your help, so we can benefit from your valuable experience with Node. I've recently started working with this platform and would really like to make progress in this area. If you have any experience or tips to share, I'd be happy to hear them. Thank you.

    • Behrooz Kamali

      Hello dear friend. I haven't written any Persian articles so far, but I'd be happy to help if I can.

  • Nasa Nguyen

    Great tuts, thanks u man!

    • Behrooz Kamali

      You’re very welcome.

  • El Sioufy

    Great Post … thanks a lot for sharing :)

    • Behrooz Kamali

      Glad you liked it.

  • Behrooz Kamali

    Good catch, thanks. Will fix it.

  • Systemparanoia

    Thank you for this post Behrooz, This helped me to maximise the performance out of the node server on my Raspberry pi 3
