Full MERN Stack App: 0 to deployment on Kubernetes — part 3

In this third part, I will talk about containerizing our app with Docker and pushing the images to Docker Hub.

Kavindu Chamiran
11 min read · Sep 12, 2019
Things we are going to talk about!

Welcome back to the third part of the series. Today we will talk in detail about containerizing our app. We are going to build Docker images from Dockerfiles, then push them to Docker Hub. Without further ado, let’s dive in.

If you haven’t read my second part yet, please follow the link below.

Let’s get ready

To follow along with this article, you need a few things first.

You can easily set these up by following the instructions on their websites. If you already have them at hand, jump straight to the next section.

Docker and Containerization

Containerization is a major step in DevOps practice, and Docker is the #1 containerization software, so much so that it has coined its own term: Dockerization. To create a Docker container for an app, we tell Docker the steps needed to deploy the app in an Ubuntu environment. Docker then follows these steps and creates an image of the app which can be deployed on any platform (Windows, Mac or Linux), and once run, the app behaves the same way on all three! No more compiling source code and fixing dependencies (which can become a pain real quickly).

Above I told you that Docker needs instructions on how to build the image. We create a Dockerfile for every app we want to containerize, and in that file we include these instructions, line by line. I am going to containerize my client and server separately, so I am going to create two Dockerfiles. Let’s get started!

Dockerizing our server

First, open the server folder in your favorite editor and create a new file named Dockerfile (no extension) at the root. Before we start filling in this file, let’s think for a minute about what we would need to do if we were to manually deploy our NodeJS server on an Ubuntu host. Suppose we have Ubuntu installed but nothing else. No NodeJS. No NPM. Just plain old Ubuntu. If I were you, I would

  1. Install NodeJS and NPM.
  2. Clone the project to a directory on my host from my Git repo.
  3. Install the dependencies from package.json
  4. Start the server.
  5. Maybe expose the server ports so it can be accessed from outside?

That sums it up. Depending on the OS, the actual way we execute these steps might differ, but the steps themselves stay the same. Now suppose that every time we deploy our app, it is going to be on an Ubuntu host. Then neither the steps nor the way we execute them will change. This is roughly what happens inside a Docker container: when we fire up a new container, Docker starts from a base image (Ubuntu, for example) and then executes the steps in our Dockerfile. Once it’s done, we have deployed our app, and it will work the same on any platform, whether Windows, Mac or Linux. Please note that I am being seriously simplistic here. Let’s get started with the Dockerfile now.

Dockerfile for back-end
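The embedded gist is not reproduced here, so below is a minimal sketch of what the back-end Dockerfile looks like, based on the steps described next. The working directory name /cloudl-server matches the article; the exact base image tag and the start command are assumptions.

FROM node:10

# Work inside /cloudl-server (same as cd)
WORKDIR /cloudl-server

# Copy the package manifests first so "npm install" stays cached
# unless package.json changes
COPY package*.json ./
RUN npm install

# Copy the rest of the source code
COPY . .

# The server listens on port 5000 inside the container
EXPOSE 5000

# Fire up the server
CMD ["npm", "start"]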

We are creating a Docker image here, which is then used to fire up Docker containers. We tell Docker to install NodeJS version 10 in the image, then change our working directory to /cloudl-server (same as the cd terminal command). Instead of cloning from the Git repo, Docker’s way is to copy files from our local directory into the image. We first copy only package.json, and once the packages are installed, we copy the rest of our source files. We do this for a reason: suppose we make changes to our source code and need to rebuild our image. Unless package.json has changed (you installed new packages or removed some), there is no need to re-run the “npm install” step. See this StackOverflow answer from David Maze below.

Say this happens while you’re working on the package, though. You’ve changed some src/*.js file, but haven't changed the package.json. You run npm test and it looks good. Now you re-run docker build. Docker notices that the package*.json files haven't changed, so it uses the same image layer it built the first time without re-running anything, and it also skips the npm install step (because it assumes running the same command on the same input filesystem produces the same output filesystem). So this makes the second build run faster. — David Maze on stackoverflow.

Continuing with building our image: once the dependencies are installed and all the source code is copied into the /cloudl-server directory, all that is left is to run the deployed app. In our server.js file, we made the server run on port 5000. When the server runs, it listens on port 5000 inside the Docker container, not on our local OS. So we need to bind our local port 5000 to the container’s port 5000, and to do that, the container needs to expose its port 5000 first, which is what the EXPOSE instruction in the Dockerfile does. And finally, we fire up the server by running the command “npm start”.

docker build

docker build in action

Now that our Dockerfile is written, we need to execute the following command to build the image. Before that, we should prevent unnecessary files from being copied into the Docker image so that it stays small. This is done using a .dockerignore file; you can think of it as the equivalent of a .gitignore file for Git repositories. Create this file at the root and add the following line. It will prevent the “node_modules” folder from being copied when the COPY . . instruction runs.

/node_modules

Now we can build the image. Execute this command at the folder root.

docker build -t cloudl-server .

The -t option names the image. Each Docker image has a repository and a tag. When we rebuild a Docker image from changed source files, we are essentially building a new image in the same repository, but with a new tag. Unless a tag is specified, “latest” is used as the default. We can use tags to name different versions of the image.
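For example, to give the image an explicit version tag instead of the default “latest” (the v1 here is purely an illustration), the build command would look like this:

docker build -t cloudl-server:v1 .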

docker network

I was going to demonstrate the docker run command by firing up a container with our freshly built image, but then I remembered that we are running our MongoDB in a separate container, so trying to connect through localhost won’t work. I am going to take a little detour and quickly talk about Docker networks.

If both our back-end and MongoDB were running on our host, they could talk to each other through localhost without a problem. The same would be true if the two were running in the same container. But when two containers need to communicate, i.e. our server in one container and MongoDB in another, we need to define a virtual Docker network and then connect our containers to that network. This also helps isolate our containers from intruders! Let’s create a network now.

docker network create cloudl-network

I created a network named “cloudl-network”. Now I am going to connect my MongoDB container to the network.

docker network connect cloudl-network mongodb-service

“mongodb-service” is the name I gave the MongoDB container when I created it earlier. Now we can start our back-end in a new container.

docker run

I hope you remember the DNS resolving I talked about when setting up MongoDB in the second part. Here we follow the same approach: instead of the service name, we use the name of the container where MongoDB is running, which the back-end container will resolve to an IP address on the “cloudl-network” we just created.
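In other words, the back-end’s Mongo connection string should point at the container name rather than localhost. A rough sketch of what that looks like in the server code, assuming mongoose is used and with a made-up database name (cloudl):

// server.js (excerpt) - connect to MongoDB running in the "mongodb-service" container
const mongoose = require('mongoose');

// "mongodb-service" resolves to the MongoDB container's IP on cloudl-network
mongoose.connect('mongodb://mongodb-service:27017/cloudl', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});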

docker run -p 5000:5000 --name server --network cloudl-network cloudl-server
docker run in action

Dockerizing our client

I feel like this article is getting too long, so I am not going to explain dockerizing the front-end in detail, only the important parts. First, we need a .dockerignore file to tell Docker which files are to be excluded from the image. When create-react-app initialized our project, it automatically created a .gitignore file. For now, it is good enough for our task, so let’s use a copy of it.

cp .gitignore .dockerignore

Now the Dockerfile. When we deploy a React app to production, we don’t serve the app using the development server. We need to build a production-ready version of our app; the final output will be plain HTML, CSS and JS files. To serve our production build, we need a web server. There are many choices, but I am using the popular one, Nginx. Now let’s think about the steps needed to deploy the app if I were to do it manually, as we did for the back-end.

  1. Install the dependencies.
  2. Copy the source files and create the production build.
  3. Install and configure Nginx.
  4. Copy the production build files to a folder for Nginx to read.
  5. Serve the files on port 80 (the default HTTP port).

Now let’s look at the Dockerfile.

Dockerfile for front-end
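Again, the embedded gist is not shown here, so below is a minimal sketch of the multi-stage Dockerfile based on the description that follows. The working directory name /cloudl-client, the nginx/nginx.conf source path, and the default.conf destination are assumptions about how the pieces are wired together.

FROM node:10 as build

# Build stage: install dependencies and create the production build
WORKDIR /cloudl-client
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:1.16.0

# Serve stage: copy the built files from the "build" stage into
# Nginx's default web root
COPY --from=build /cloudl-client/build /usr/share/nginx/html

# Replace the default Nginx site config with our own
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf

# Nginx serves on the default HTTP port; the base image's default
# command starts the server
EXPOSE 80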

Do you see two FROM statements? This is called a multi-stage Docker build. The “as build” on the first FROM line names the stage and makes it addressable from a later stage.

FROM node:10 as build

We need NodeJS to build our app. The NPM module react-scripts contains many scripts that come in handy for a React developer, including one that builds our app; it gets executed when we run “npm run build”. The rest is self-explanatory.
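For reference, the scripts section that create-react-app generates in package.json looks roughly like this (these are the standard CRA defaults, not copied from the article’s repo):

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
}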

FROM nginx:1.16.0

This is new, so let’s talk about it a bit. We need to tell Nginx where our built files are so it can serve them; the default location is “/usr/share/nginx/html”. We copy the built files into this folder, and we need to tell Docker which stage to take them from using the “--from=build” argument. Otherwise, it will look for these files in the local build context. Then we need to configure the Nginx server. Nginx reads its default site configuration from “/etc/nginx/conf.d/”, so I am copying a config file from our local source files into the image. Please create “nginx.conf” in a folder named “nginx” in your source folder and paste the code below.

nginx.conf
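The gist is again missing from this copy, so here is a minimal sketch of the config that the next paragraphs describe. The WebSocket upgrade headers and the try_files fallback are typical additions I have assumed, not something spelled out in the article.

server {
    listen 80;

    # Reverse proxy for SocketIO traffic; "cloudl-server-service" is the
    # back-end's Kubernetes service name, resolved via DNS
    location /socket/ {
        proxy_pass http://cloudl-server-service:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Reverse proxy for REST requests coming from axios
    location /api/users/ {
        proxy_pass http://cloudl-server-service:5000;
    }

    # Serve the production build of the React app
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri /index.html;
    }
}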

Nginx can be many things. In the first block, it acts as a reverse proxy. When we deploy our app, the back-end and front-end will run in different pods, and thus have different IP addresses. Since we don’t know the back-end’s IP address at build time, we need the help of Kubernetes’ DNS resolving to resolve the back-end pod’s service name into an IP address. This is similar to the MongoDB setup I explained for the back-end above. Our front-end does not need to worry about any of this; it just sends its SocketIO requests to the path “/socket/” and Nginx takes care of the rest. It forwards these requests to our back-end at port 5000: the proxy_pass directive resolves the URL “http://cloudl-server-service:5000” into an IP address with the help of DNS and relays requests and responses to and from the server.

The second block also acts as a reverse proxy, not for socket requests but for REST requests originating from axios. We configured our ExpressJS server to listen specifically for requests on this path. Nginx is responsible for relaying these requests and their responses between the front-end and the back-end.

The third block is what actually serves our front-end. Here we are telling Nginx that if a web browser requests the path “/” of our server (which is what you usually get when you type in a domain name, e.g. www.cloudl.com), it should serve the index.html we built earlier. There is a reason the location blocks are in this particular order: Nginx picks the most specific matching block for a given path, so the “/socket/” and “/api/users/” blocks need to sit above the “/” block, as the “/” path matches everything. In other words, more specific location blocks should come before more general ones.

At the very top, we tell Nginx to listen for HTTP requests (port 80) from external origins (such as a computer connected to the internet). When we get HTTPS for our server, it will also listen for HTTPS requests on port 443. We could additionally define our own error pages for different HTTP status codes, but I am not going to do that now. Now we can build the Docker image.

docker build -t cloudl-client .
docker build in action

Pushing images to Docker Hub

Docker Hub is a repository where we can keep our own Docker images, just like GitHub for source code. I hope you have a Docker Hub account by now. Before pushing an image to Docker Hub, we need to name it; in other words, we add another “tag” to the image that identifies the repository it will live in and its version. This matters whenever we deploy the image: Docker pulls the image from a repository, and we tell Docker which image to pull by a tag of the format username/repository_name:tag_name. So before anything else, we need to annotate our built image with such a tag.

docker tag cloudl-server kavinduchamiran/cloudl-server:v1

This assigns the tag kavinduchamiran/cloudl-server:v1 to the local Docker image cloudl-server. Then the same for the front-end,

docker tag cloudl-client kavinduchamiran/cloudl-client:v1

When we rebuild an image from changed source, we change the tag name at the end, i.e. v1 to v2, but nothing else.

You can also specify multiple tags while building the image using another -t tag instead of tagging them later.
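For example, a single build command like the following (purely an illustration) produces both the local name and the Docker Hub name in one step:

docker build -t cloudl-client -t kavinduchamiran/cloudl-client:v1 .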

Finally, we can push these images to Docker Hub.

docker push kavinduchamiran/cloudl-server:v1
docker push kavinduchamiran/cloudl-client:v1
docker push in action

Conclusion

In the next article, the fourth part of the series, I am going to talk about how to set up a CI/CD pipeline and how it makes our lives easier. I hope this article was interesting and that you will read the fourth part too. Just click the link below. See you then!

PS: Claps are appreciated!
