How a Node.js web server works, with reference to the Apache server

When we write front-end code, we might not care much about performance and memory, but when we write back-end code we must take great care of both, because the back-end is the server: it serves millions of users (clients) across the globe. So, to write a performant and memory-efficient back-end application, we must know the internal workings and limitations of the underlying platform.

Node.js is one such platform that we use to build high-performance, scalable back-end applications. In this article, we will understand how Node.js works and what its best use cases are.

Comparison with Apache

Sometimes it is easier to understand a thing by comparing it to another. So, in this section, we will try to understand how a Node.js server works by comparing it to an Apache server.

In case you don’t know, Apache is a popular HTTP server that you might have used in one of your LAMP stack applications. Apache can run in various modes: process-based, thread-based, or event-based. In this article, we will consider Apache’s thread-based mode for the comparison.

Node.js, despite being single-threaded, can outperform multi-threaded Apache by roughly 5-10x on I/O-heavy workloads. For example, if Apache can serve 1,000 users per second, Node.js might serve around 9,000. Wonder why? Read on to find out.

Event-loop vs Thread

The main job of any server is to respond to client requests, so let’s see how Node.js and Apache manage requests. When a request comes to an Apache server, Apache creates a thread for it; every subsequent request creates a new thread. In Node.js, when a request comes in, the server puts the request handler on the event loop; every subsequent request puts a new handler on the same event loop. Why doesn’t Node.js create a new thread per request? Because Node.js is single-threaded.
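
For instance, a minimal Node.js HTTP server looks like the sketch below (the port and message are arbitrary). Every request handler runs on the same single thread, scheduled by the event loop, rather than on a thread of its own:

const http = require("http")

// One process, one thread: every incoming request is handled by this
// callback, scheduled on the same event loop, not on a new thread.
const server = http.createServer((request, response) => {
  response.writeHead(200, { "Content-Type": "text/plain" })
  response.end("hello from a single-threaded server\n")
})

server.listen(3000)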

The event loop is the heart of Node.js. It is how a Node.js application maintains concurrency despite being single-threaded. Every blocking (input/output) operation in Node.js is run asynchronously using a callback, and each callback is added to and removed from the event-loop queue depending on the call stack sequence. Examples of blocking input/output (I/O): reading from or writing to a file, a database, the network, etc.
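
As a small illustration (the file name is made up), compare the blocking and non-blocking variants of a file read in Node.js; only the second one lets the event loop keep running other callbacks while the disk does its work:

const fs = require("fs")

// Blocking: execution stops here until the whole file is in memory.
const data = fs.readFileSync("movie-list.json")
console.log("blocking read finished,", data.length, "bytes")

// Non-blocking: the read is handed to the OS, and this callback is
// queued on the event loop only when the data is ready.
fs.readFile("movie-list.json", (err, contents) => {
  if (err) throw err
  console.log("non-blocking read finished,", contents.length, "bytes")
})
console.log("this line runs before the non-blocking read finishes")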

Let’s take the pseudo-code example below. Suppose a user requests the Titanic movie, the file is 2 GB, and reading a 2 GB file takes 2 minutes. You might think that the Node.js server will not be able to handle any new request during these 2 minutes. If you think so, you are wrong. Notice line 2 of the snippet: it is a blocking operation (I/O), but Node.js executes it in a non-blocking way. How? We will see, but first, what do we mean by ‘non-blocking way’? Let me explain.

function requestHandler(request, response) {
  read2GBFile("titanic.mp4", (file) => callback(file, response)) // line 2: blocking I/O, run asynchronously
  console.log("1 request came from user")                        // line 3
}

function callback(file, response) {
  response.send(file)
}

‘Blocking way’ means blocking other code from running even while the CPU is sitting idle. For example, if we run the above code on an Apache server, it will execute line 2 in a blocking way: line 3 will not be executed (printed) until line 2 has finished. Moreover, if you configure your Apache server with a maximum of 1 thread and run the above code, Apache will not be able to handle any new request during those 2 minutes, so its maximum capacity will be 1 request per 2 minutes (0.5 requests per minute), which is really bad. The same code on a Node.js server can serve 1000+ requests per second. How?

When Node.js starts executing line 2, it hands the file read off to the operating system, registers the function callback, and immediately moves on to line 3 without waiting for line 2 to finish, because the input (the file read here) is handled by the hard disk, not the CPU. The OS lets Node.js know when the read from disk is done. Once Node.js knows that read2GBFile has finished, the callback is placed on the event-loop queue and executed.
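
To make the pseudo-code a bit more concrete, a plain Node.js handler might look like the sketch below. In practice a 2 GB file would be streamed to the client chunk by chunk instead of being read into memory in one go (the file name and port are just examples):

const http = require("http")
const fs = require("fs")

http.createServer((request, response) => {
  // The read is delegated to the OS; chunks are sent to the client as
  // they arrive, so the event loop stays free for other requests.
  const stream = fs.createReadStream("titanic.mp4")
  stream.pipe(response)
  stream.on("error", () => {
    response.statusCode = 500
    response.end("could not read file")
  })
  console.log("1 request came from user")
}).listen(3000)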

Node.js responds faster

For a single request, the response time will be almost the same on both servers, but as the number of concurrent requests grows, the average response time on the Node.js server stays lower (i.e. responses are faster) than on the Apache server. The reason is simple: an event loop is lighter and faster than threads.

Thread management (creating a new thread, destroying an existing thread, and switching between threads) takes more time and memory than event-loop management (putting a new request handler on the event loop, removing an existing handler from it, and switching between handlers).

Node.js doesn't block on I/O

We have already discussed how Node.js runs I/O operations in a non-blocking way and how non-blocking I/O code makes better use of CPU time.

Node.js is poor at heavy computation

Node.js runs I/O code in a non-blocking way, but it does not run CPU-bound code in a non-blocking way. Node.js is single-threaded, so any CPU-intensive work blocks the subsequent code from running; it can even block the event loop itself, making the whole application unresponsive.

Examples of CPU-intensive work: running a loop for a long time, or a slow sort/search over a large data set. In fact, any code that is not I/O code is CPU code. So Node.js is non-blocking for I/O but blocking for CPU. Use Node.js for light computation.
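
A contrived example: while the loop below is running, the one and only thread is busy, so no other request handler or callback can run and every other client has to wait (in real applications such work is usually moved to a worker thread or a separate service):

const http = require("http")

http.createServer((request, response) => {
  // CPU-bound work: this loop occupies the single thread, blocking the
  // event loop until it finishes.
  let sum = 0
  for (let i = 0; i < 5e9; i++) {
    sum += i
  }
  response.end("sum = " + sum)
}).listen(3000)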

So a Node.js server is event-based, single-threaded, asynchronous, and non-blocking on I/O, while an Apache server (in thread mode) is multi-threaded, synchronous, and blocking on I/O.

Use case

From the discussion above, we should be able to identify the use cases of a Node.js application. Use Node.js if you are building an I/O-intensive application; avoid it if you are building a CPU-intensive application.

Best use case examples

The best use case examples I can think of are: a chat application server, a back-end API server, a streaming server, etc. These applications are I/O intensive. In fact, most web apps are I/O intensive, so Node.js is suitable for most general web applications.

If you loved this post, please share it on social media.