Full MERN Stack App: 0 to deployment on Kubernetes — part 2

In the second part, I will talk about setting up a NodeJS backend with ExpressJS and SocketIO that is ready for containerization.

Kavindu Chamiran
6 min read · Sep 12, 2019
Things we are going to talk about!

Welcome back to the second part of the series. Today we will talk in detail about setting up the back-end with NodeJS, ExpressJS, and SocketIO, and then connect it to our MongoDB Docker container. Without further ado, let’s dive in.

If you haven’t read my part 1 yet, please follow the link below.

Initializing the back-end

In the project root directory (where we have the client folder), create another folder “server” and open a terminal inside it. Then initialize a new NPM project and install the dependencies.

npm init
npm install --save express socket.io body-parser cors mongoose
Initialize NodeJS project and install dependencies

Open the folder with your favorite editor.

server.js

Create a new “server.js” file at the root. This is the entry point of our NodeJS server. Let’s start coding by importing the required dependencies.

Importing the dependencies

body-parser extracts the body of incoming REST requests and exposes it on req.body, so we can easily access whatever data was passed with a request inside the server. cors is required to accept requests from different origins: our client will be served from one pod and the server from another, because we need the ability to scale them independently. The server will therefore see requests from the client as coming from a different origin, and to allow those requests in, we need cors. Finally, mongoose is used to connect to MongoDB.
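The original imports are embedded as a gist above; as a minimal sketch, assuming the standard package entry points, they look something like this:

// server.js — import the dependencies (sketch)
const express = require('express');
const http = require('http');
const socketIO = require('socket.io');
const bodyParser = require('body-parser');
const cors = require('cors');
const mongoose = require('mongoose');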

ExpressJS

Let’s create the ExpressJS server now.

Creating the ExpressJS server

First, we create an Express instance. Then we tell it to use the dependencies we imported earlier. Lines 11 and 12 tell the server how to handle incoming requests for user registration and user login. I mentioned in the first part that I am using REST only for user registration and login because it is easier to implement authentication and protected routes this way. We are telling the server that REST requests for user authentication will arrive on a separate route, /api/users, and forwarding them to /routes/api/users.js where the authentication logic resides. This improves the cohesion and readability of the code and reduces the clutter in “server.js”.

Line 14 is not really needed, but it is added for debugging purposes, just to check whether the server is up and running. It will come in handy once the app is deployed, because we can easily make a GET request to the server and check if it is running. If we send a GET request to ‘/’, the root route of the server, it will reply with the message “ClouDL server up and running”. Later we will add /health and /ready routes, which are how Kubernetes will determine if the server is ready to serve requests.
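The gist itself is not reproduced here, but based on the description above, the Express setup looks roughly like this (a sketch; the users router path follows the article):

// Create the Express instance and wire up the middleware
const app = express();
app.use(cors());
app.use(bodyParser.json());

// Forward authentication requests to the users router in /routes/api/users.js
app.use('/api/users', require('./routes/api/users'));

// Simple debugging route to check that the server is up
app.get('/', (req, res) => res.send('ClouDL server up and running'));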

SocketIO

Let’s create the SocketIO server now.

Creating the SocketIO server

There are two ways to create a SocketIO server while using ExpressJS. Here we are following the practice advised in the official SocketIO documentation. If you do not want to use the http server, here is the other way. Usually, I like to end my server.js files with server.listen(). Let’s talk about what’s happening here.

To create a SocketIO instance, we need to pass in a server object, which we get from http.createServer(). We also need to keep a map of the sockets we create (one per connected client), keyed by userId. Every user that logs into our app has their own “front-end”, so in order to send results back to the right front-end, we need to know which socket connects that particular front-end to our back-end; hence we keep track of each client’s socket.
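In code, that part might look roughly like the following (a sketch; the sockets map follows the description above):

// Wrap the Express app in an HTTP server and attach SocketIO to it
const server = http.createServer(app);
const io = socketIO(server);

// Map of connected client sockets, keyed by userId
const sockets = {};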

I mentioned earlier that I use workers written in Python, and they also connect to the back-end using SocketIO. For now, we are talking about having only one worker. To communicate with the worker, we need to know its socket. We identify it by including a header {name: ‘python’} from the python SocketIO client and checking whether a newly connected socket contains that header. If the python socket is found, we save it in the python_socket variable. The other connections we may get are from users, but there is a bug in SocketIO where some browsers do not make the connection when extra headers are included, so we cannot identify users’ sockets the way we did for python clients. A workaround is to emit a “userId” event from the client as soon as the connection is made (without any headers) and include the user’s id in the event’s payload. See here. We can then listen for this event on our server (see server.js line #16).
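A hedged sketch of that connection logic follows; the {name: 'python'} header and the userId event come from the description above, while reading the header off socket.handshake.headers is an assumption about how the gist does it:

// Socket of the python worker, once it connects
let python_socket = null;

io.on('connection', (socket) => {
  // The python worker identifies itself with an extra header {name: 'python'}
  if (socket.handshake.headers.name === 'python') {
    python_socket = socket;
  }

  // Browsers cannot reliably send extra headers, so regular clients emit
  // a userId event right after connecting instead
  socket.on('userId', (userId) => {
    sockets[userId] = socket;
  });
});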

We can define different events through which we might receive data over the socket, which SocketIO clients can emit. At line 21, we are telling the socket what to do when a client fires the newJob event. The second argument is the callback function that describes what to do with the data that comes with the event. Here we forward the information we received from the client to the python worker so it can start working on the new job. The python client is expecting a new_job event on its end, with id and link as data.

If the worker successfully starts the new job, it tells the server so, and the server conveys the happy news to the client. As explained earlier, the correct socket is looked up in the sockets object, and that socket emits the job_add_success event, which the client will be listening for. I need to wrap any code that uses python_socket inside an “if” block to prevent the server from crashing in case the python worker is down or has not connected yet. You can define as many events as you want and decide how the server should behave upon each.
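Putting the last two paragraphs together, a sketch of these listeners (registered inside the same connection handler) might look like this. The newJob, new_job, and job_add_success names come from the article; the event the worker uses to report back is not named, so job_started here is a hypothetical placeholder:

// A client asks the server to start a new download job
socket.on('newJob', (data) => {
  // Guard against the python worker being down or not yet connected
  if (python_socket) {
    python_socket.emit('new_job', { id: data.id, link: data.link });
  }
});

// Hypothetical event name: the worker reports that the job was started
socket.on('job_started', (data) => {
  // Relay the happy news to the client that owns this job
  if (sockets[data.id]) {
    sockets[data.id].emit('job_add_success', data);
  }
});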

MongoDB

Now it is time to connect to MongoDB.

MongoDB using Mongoose

This is nothing difficult. The only thing I want to point out is line #3, where it says mongodb-service instead of the IP address of the pod MongoDB is running in. Once deployed, our MongoDB container will be running in a separate pod. We do not know the IP address of this pod until it is deployed, and the pod may get a different IP address each time it is redeployed, so we can’t hardcode it. The only thing that remains consistent throughout the pod’s life cycle is the name of the service that exposes the pod to the cluster. Kubernetes has built-in DNS resolution that resolves the service name to an IP address that routes to the pod.
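A sketch of the connection under that assumption (the database name ‘cloudl’ and the connection options are placeholders; the important part is the mongodb-service hostname):

// 'mongodb-service' is the Kubernetes Service name, resolved by cluster DNS
const mongoURI = 'mongodb://mongodb-service:27017/cloudl';

mongoose
  .connect(mongoURI, { useNewUrlParser: true })
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.log(err));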

The big launch

Now everything is set up and we can launch our server. Add the following two lines to the very bottom of server.js.

const port = process.env.PORT || 5000;
server.listen(port, () => console.log(`Server up and running on port ${port}!`));

This starts a new server with ExpressJS listening for REST requests and SocketIO listening for socket requests. Our server will be running on localhost at port 5000 unless otherwise specified in the PORT environment variable.

Your final server.js will look like this.

server.js

This is the minimal ExpressJS and SocketIO setup.
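Since the final gist is not reproduced here, the pieces above add up to roughly the following. This is a sketch, not the author’s exact file; as noted earlier, the job_started event name, the ‘cloudl’ database name, and the connection options are assumptions.

// server.js — consolidated sketch
const express = require('express');
const http = require('http');
const socketIO = require('socket.io');
const bodyParser = require('body-parser');
const cors = require('cors');
const mongoose = require('mongoose');

const app = express();
app.use(cors());
app.use(bodyParser.json());
app.use('/api/users', require('./routes/api/users'));
app.get('/', (req, res) => res.send('ClouDL server up and running'));

const server = http.createServer(app);
const io = socketIO(server);

const sockets = {};        // connected client sockets, keyed by userId
let python_socket = null;  // socket of the python worker

io.on('connection', (socket) => {
  // Identify the python worker by its extra header
  if (socket.handshake.headers.name === 'python') {
    python_socket = socket;
  }

  // Regular clients identify themselves with a userId event
  socket.on('userId', (userId) => {
    sockets[userId] = socket;
  });

  // Forward new jobs to the python worker, if it is connected
  socket.on('newJob', (data) => {
    if (python_socket) {
      python_socket.emit('new_job', { id: data.id, link: data.link });
    }
  });

  // 'job_started' is a placeholder name for the worker's acknowledgement
  socket.on('job_started', (data) => {
    if (sockets[data.id]) {
      sockets[data.id].emit('job_add_success', data);
    }
  });
});

// 'cloudl' is a placeholder database name
mongoose
  .connect('mongodb://mongodb-service:27017/cloudl', { useNewUrlParser: true })
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.log(err));

const port = process.env.PORT || 5000;
server.listen(port, () => console.log(`Server up and running on port ${port}!`));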

Conclusion

In the next article, the third part of the series, I am going to talk about containerizing both the client and the server and pushing the images to Docker Hub. I hope this article was interesting and that you will also read the third part. Just click the link below. See you then!

PS: Claps are appreciated!
