Building And Dockerizing A Node.js App With Stateless Architecture (With Help From Kinsta)

This article is sponsored by Kinsta

In this article, we’ll take a swing at creating a stateless Node.js app and dockerizing it, making our development environment clean and efficient. Along the way, we’ll explore the benefits of hosting containers on platforms like Kinsta, which offers a managed hosting environment that supports Docker containers alongside application and database hosting, letting users deploy and scale their applications with more flexibility and ease.

Creating A Node.js App
In case you’re newer to code, Node.js is a runtime built on Chrome’s V8 JavaScript engine that allows developers to create server-side applications using JavaScript. It is popular for its lightweight nature, efficient performance, and asynchronous capabilities.

Stateless apps do not store any information about the user’s session, providing a clean and efficient way to manage your applications. Let’s explore how to create a Node.js app in this manner.

Step 1: Initialize The Node.js Project

First, create a new directory and navigate to it:

mkdir smashing-app && cd smashing-app

Next, initialize a new Node.js project:

npm init -y

Step 2: Install Express

Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. Install Express with the following command:

npm install express

Step 3: Create Your Stateless App

Create a new file named “app.js” and add the following code:

const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.get("/", (req, res) => {
  res.send("Welcome to our smashing stateless Node.js app!");
});
app.listen(port, () => {
  console.log(`Smashing app listening at http://localhost:${port}`);
});

Let’s explore this a bit. Here’s what each line does:

  • const express = require("express");
    This line imports the Express.js framework into the code, making it available to use.
  • const app = express();
    This line creates an instance of the Express.js framework called app. This app instance is where we define our server routes and configurations.
  • const port = process.env.PORT || 3000;
    This line sets the port number for the server. It looks for a port number set in an environment variable called PORT. If that variable is not set, it defaults to port 3000.
  • app.get("/", (req, res) => { ... })
    This line defines a route for the server when a GET request is made to the root URL (“/”).
  • res.send("Welcome to our smashing stateless Node.js app!");
    This line sends the string “Welcome to our smashing stateless Node.js app!” as a response to the GET request made to the root URL.
  • app.listen(port, () => { ... })
    This line starts the server and listens on the port number specified earlier.

Now, run the app with:

node app.js

Your Node.js app is now running at http://localhost:3000.

Stateless Architecture

Stateless architecture means that the server doesn’t store any information about the user’s session, resulting in several benefits:

  • Scalability
    Stateless applications can easily scale horizontally by adding more instances without worrying about session data.
  • Simplicity
    Without session data to manage, the application logic becomes simpler and easier to maintain.
  • Fault tolerance
    Stateless applications can recover quickly from failures because there’s no session state to be lost or recovered.
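
To make the difference concrete, here is a hedged sketch (the route names and the in-memory store are our own illustration, not part of the tutorial app). The stateful handler keeps a per-user counter in server memory, which breaks as soon as a second instance handles the next request; the stateless handler derives everything from the request itself:

const express = require("express");
const app = express();

// Stateful (avoid): this object lives in one server process's memory,
// so a second instance would not see these counts.
const visits = {};
app.get("/stateful", (req, res) => {
  const user = req.query.user || "anonymous";
  visits[user] = (visits[user] || 0) + 1;
  res.send(`Visit #${visits[user]} for ${user}`);
});

// Stateless: everything needed arrives with the request,
// so any instance can answer it interchangeably.
app.get("/stateless", (req, res) => {
  const user = req.query.user || "anonymous";
  res.send(`Hello, ${user}!`);
});

app.listen(3000);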

Okay, we’ve got our Node.js server running locally, but how can we package it up so that anyone can run it, even people without Node.js installed, and have it run on any platform? That’s where Docker comes in.

Dockerizing The App

Docker is a tool that helps developers build, ship, and run applications in a containerized environment. It simplifies the process of deploying applications across different platforms and environments.

Step 1: Install Docker

First, make sure you have Docker installed on your machine. You can download it from the official Docker website.

Step 2: Create A Dockerfile

Create a new file named Dockerfile in your project directory and add the following code:

FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT=3000
CMD [ "node", "app.js" ]

Once again, let’s take a look at what this is doing in a little more detail:

  • FROM node:18-alpine
    This line specifies the base image for this Docker image. In this case, it is the official Node.js Docker image based on the Alpine Linux distribution. This gives the container Node.js out of the box; a container is like a “virtual machine,” but lighter and more efficient.
  • WORKDIR /usr/src/app
    This line sets the working directory inside the Docker container to /usr/src/app.
  • COPY package*.json ./
    This line copies package.json (and package-lock.json, if present) into the working directory so the next step can install from them.
  • RUN npm install
    This line installs the dependencies specified in the package.json file.
  • COPY . .
    This line copies the remaining files from the local directory to the working directory in the Docker container.
  • ENV PORT=3000
    Using this directive, we make the app more configurable by using the PORT environment variable. This approach provides flexibility and allows hosting providers like Kinsta to connect the application to their infrastructure seamlessly.
  • CMD [ "node", "app.js" ]
    This line specifies the command to run when the Docker container starts. In this case, it runs the node command with app.js as the argument, which will start the Node.js application.

So, this Dockerfile builds a Docker image that sets up a working directory, installs dependencies, copies all the files into the container, sets the PORT environment variable to 3000, and runs the Node.js application with the node command.
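
One practical companion to that COPY . . instruction: if you have run npm install locally, the host’s node_modules folder gets copied into the image as well, overriding the one installed inside the container. A minimal .dockerignore file (our own addition, not part of the tutorial’s setup) prevents that:

node_modules
npm-debug.log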

Step 3: Build And Run The Docker Container

Let’s now build this and run it locally to make sure everything works fine.

docker build -t smashing-app .

When this succeeds (the trailing dot tells Docker to use the current directory as the build context), we will run the container:

docker run -p 3000:3000 smashing-app

Let’s break this down because that -p 3000:3000 thing might look confusing. Here’s what’s happening:

  1. docker run is a command used to run a Docker container.
  2. -p 3000:3000 is an option that maps port 3000 in the Docker container to port 3000 on the host machine. This means that the container’s port 3000 will be accessible from the host machine at port 3000. The first port number is the host machine’s port number (ours), and the second port number is the container’s port number.
  3. We can have port 1234 on our machine mapped to port 3000 on the container, and then localhost:1234 will point to container:3000 and we'll still have access to the app; see the example after this list.
  4. smashing-app is the name of the Docker image that the container is based on, the one we just built.
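
Here is that alternate mapping as a concrete, hedged example (any free host port would do):

docker run -p 1234:3000 smashing-app

The app then answers at http://localhost:1234 while still listening on port 3000 inside the container.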

Your Dockerized Node.js app should now be running at http://localhost:3000.

When running the Docker container, we can additionally pass a custom PORT value as an environment variable:

docker run -p 8080:5713 -d -e PORT=5713 smashing-app

This command maps the container's port 5713 to the host's port 8080 and sets the PORT environment variable to 5713 inside the container.
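
A quick way to verify that mapping (assuming curl is installed) is to request the host port and look for the welcome message:

curl http://localhost:8080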

Using the PORT environment variable in the Dockerfile allows for greater flexibility and adaptability when deploying the Node.js app to various hosting providers, including Kinsta.

More Smashing Advantages Of Dockerizing A Node.js App

Dockerizing a Node.js app brings several advantages to developers and the overall application lifecycle. Here are some additional key benefits with code examples:

Simplified Dependency Management

Docker allows you to encapsulate all the dependencies within the container itself, making it easier to manage and share among team members. For example, let's say you have a package.json file with a specific version of a package:

{
  "dependencies": {
    "lodash": "4.17.21"
  }
}

By including this in your Dockerfile, the specific version of lodash is automatically installed and bundled within your container, ensuring consistent behavior across environments.
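
If a package-lock.json is committed alongside it, installs can be made fully reproducible by swapping npm install for npm ci, which installs exactly what the lockfile pins. A hedged variant of the earlier Dockerfile:

FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
ENV PORT=3000
CMD [ "node", "app.js" ]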

Easy App Versioning

Docker allows you to tag and version your application images, making it simple to roll back to previous versions or deploy different versions in parallel. For example, if you want to build a new version of your app, you can tag it using the following command:

docker build -t smashing-app:v2 .

Assuming an earlier build was tagged v1, you can then run multiple versions of your app simultaneously:

docker run -p 3000:3000 -d smashing-app:v1

docker run -p 3001:3000 -d smashing-app:v2
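
With both containers running, a quick check (again assuming curl) shows the two versions serving side by side:

curl http://localhost:3000
curl http://localhost:3001
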
Environment Variables

Docker makes it easy to manage environment variables, which can be passed to your Node.js app to modify its behavior based on the environment (development, staging, production). For example, in your app.js file:

const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
const env = process.env.NODE_ENV || 'development';
app.get('/', (req, res) => {
  res.send(`Welcome to our smashing stateless Node.js app running in ${env} mode!`);
});
app.listen(port, () => {
  console.log(`Smashing app listening at http://localhost:${port}`);
});

In your Dockerfile, you can set the NODE_ENV variable:

FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV NODE_ENV=production
CMD [ "node", "app.js" ]

Or you can pass it when running the container:

docker run -p 3000:3000 -d -e NODE_ENV=production smashing-app
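
A value passed with -e at run time takes precedence over an ENV set in the Dockerfile, so the same image can serve several environments. For example (staging is just an illustrative value):

docker run -p 3000:3000 -d -e NODE_ENV=staging smashing-app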

The TL;DR: by Dockerizing Node.js apps, we can eliminate a whole class of “works on my machine” problems while also boosting the reusability, testability, and portability of our applications. 🎉

Hosting Containers With Kinsta

Now that we have our stateless Node.js app Dockerized, you might be wondering where to host it. Kinsta is widely known for its application and database hosting. Let’s explore how we’d do this with Kinsta in a step-by-step manner.

  1. Log in or sign up for your Kinsta account.
  2. From there, you should be in your dashboard.
  3. Using the sidebar, navigate to Applications.
  4. From here, you should be able to Add a Service of type application.
  5. Once you add an application, you’ll be invited to connect your GitHub account to Kinsta so that Kinsta can automatically deploy your application when updates are pushed to it.
  6. You can now choose the repo containing the code you’d like to deploy, along with setting some basic details like the application’s name and environment variables.
  7. Next, we specify the build environment of our application. It is here we specify the location of the Dockerfile in our repo that we just created.
  8. Finally, we allocate compute resources for our container, enter our payment information, and we’re ready to go!

Kinsta will now build and deploy our application and give us a secure, public link where it can be accessed. Our application is now published to the web!

Conclusion

In this tutorial, we’ve built a Node.js app and Dockerized it, making it easy to deploy across various environments. We’ve also explored the benefits of stateless architecture and touched upon some great choices for hosting containers, like Kinsta.

A Gentle Introduction to Using a Docker Container as a Dev Environment

Sarcasm disclaimer: This article is mostly sarcasm. I do not think that I actually speak for Dylan Thomas and I would never encourage you to foist a light theme on people who don’t want it. No matter how wrong they may be.

When Dylan Thomas penned the words, “Do not go gentle into that good night,” he was talking about death. But if he were alive today, he might be talking about Linux containers. There is no way to know for sure because he passed away in 1953, but this is the internet, so I feel extremely confident speaking authoritatively on his behalf.

My confidence comes from a complete overestimation of my skills and intelligence coupled with the fact that I recently tried to configure a Docker container as my development environment. And I found myself raging against the dying of the light as Docker rejected every single attempt I made like I was me and it was King James screaming, “NOT IN MY HOUSE!”

Pain is an excellent teacher. And because I care about you and have no other ulterior motives, I want to use that experience to give you a “gentle” introduction to using a Docker container as a development environment. But first, let’s talk about whyyyyyyyyyyy you would ever want to do that.

kbutwhytho?

Close your eyes and picture this: a grown man dressed up like a fox.

Wait. No. Wrong scenario.

Instead, picture a project that contains not just your source code, but your entire development environment and all the dependencies and runtimes your app needs. You could then give that project to anyone anywhere (like the fox guy) and they could run your project without having to make a lick of configuration changes to their own environment.

This is exactly what Docker containers do. A Dockerfile defines an entire runtime environment with a single file. All you would need is a way to develop inside of that container.

Wait for it…

VS Code and Remote – Containers

VS Code has an extension called Remote – Containers that lets you load a project inside a Docker container and connect to it with VS Code. That’s some Inception-level stuff right there. (Did he make it out?! THE TALISMAN NEVER ACTUALLY STOPS SPINNING.) It’s easier to understand if we (and by “we” I mean you) look at it in action.

Adding a container to a project

Let’s say for a moment that you are on a high-end gaming PC that you built for your kids and then decided to keep it for yourself. I mean, why exactly do they deserve a new computer again? Oh, that’s right. They don’t. They can’t even take out the trash on Sundays even though you TELL THEM EVERY WEEK.

This is a fresh Windows machine with WSL2 and Docker installed, but that’s all. Were you to try and run a Node.js project on this machine, PowerShell would tell you that it has absolutely no idea what you are referring to and maybe you misspelled something. Which, in all fairness, you do suck at spelling. Remember that time in 4ᵗʰ grade when you got knocked out of the first round of the spelling bee because you couldn’t spell “fried.” FRYED? There’s no “Y” in there!

Now this is not a huge problem — you could always skip off and install Node.js. But let’s say for a second that you can’t be bothered to do that and you’re pretty sure that skipping is not something adults do.

Instead, we can configure this project to run in a container that already has Node.js installed. Now, as I’ve already discussed, I have no idea how to use Docker. I can barely use the microwave. Fortunately, VS Code will configure your project for you — to an extent.

From the Command Palette, there is an “Add Development Container Configuration Files…” command. This command looks at your project and tries to add the proper container definition.

In this case, VS Code knows I’ve got a Node project here, so I’ll just pick Node.js 14. Yes, I am aware that 12 is LTS right now, but it’s gonna be 14 in [checks watch] one month and I’m an early adopter, as is evidenced by my interest in container technology just now in 2020.

This will add a .devcontainer folder with some assets inside. One is a Dockerfile that contains the Node.js image that we’re going to use, and the other is a devcontainer.json that has some project level configuration going on.
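
For reference, here is a trimmed sketch of what that generated devcontainer.json might contain (exact properties vary by template version, so treat this as an approximation):

{
  "name": "Node.js 14",
  "build": {
    "dockerfile": "Dockerfile"
  },
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash"
  },
  "extensions": []
}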

Now, before we touch anything and break it all (we’ll get to that, trust me), we can select “Rebuild and Reopen in Container” from the Command Palette. This will restart VS Code and set about building the container. Once it completes (which can take a while the first time if you’re not on a high-end gaming PC that your kids will never know the joys of), the project will open inside of the container. VS Code is connected to the container, and you know that because it says so in the lower left-hand corner.

Now if we open the terminal in VS Code, PowerShell is conspicuously absent because we are not on Windows anymore, Dorothy. We are now in a Linux container. And we can both npm install and npm start in this magical land.

This is an Express App, so it should be running on port 3000. But if you try and visit that port, it won’t load. This is because we need to map a port in the container to 3000 on our localhost. As one does.

Fortunately, there is a UI for this.

The Remote – Containers extension puts a “Remote Explorer” icon in the Activity Bar. Which is on the left-hand side for you, but the right-hand side for me. Because I moved it and you should too.

There are three sections here, but look at the bottom one, which says “Port Forwarding.” I’m not the sandwich with the most lettuce, but I’m pretty sure that’s what we want here. You can click on “Forward a Port” and type “3000.” Now if we try and hit the app from the browser…

Mostly, things “just worked.” But the configuration is also quite simple. Let’s look at how we can start to customize this setup by automating some aspects of the project itself. Project-specific configuration is done in the devcontainer.json file.

Automating project configuration

First off, we can automate the port forwarding by adding a forwardPorts variable and specifying 3000 as the value. We can also automate the npm install command by specifying the postCreateCommand property. And let’s face it, we could all stand to run AT LEAST one less npm install.

{
  // ...
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [3000],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "npm install",
  // ...
}

Additionally, we can include VS Code extensions. The VS Code that runs in the Docker container does not automatically get every extension you have installed. You have to install them in the container, or just include them like we’re doing here.

Extensions like Prettier and ESLint are perfect for this kind of scenario. We can also take this opportunity to foist a light theme on everyone because it turns out that dark themes are worse for reading and comprehension. I feel like a prophet.

// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.128.0/containers/javascript-node-14
{
  // ...
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "GitHub.github-vscode-theme"
  ]
  // ...
}

If you’re wondering where to find those extension IDs, they come up in IntelliSense (Ctrl/Cmd + Space) if you have them installed. If not, search the extension marketplace, right-click the extension and say “Copy extension ID.” Or even better, just select “Add to devcontainer.json.”

By default, the Node.js container that VS Code gives you has things like git and cURL already installed. What it doesn’t have is cowsay. And we can’t have a Linux environment without cowsay. That’s in the Linux bylaws (it’s not). I don’t make the rules. We need to customize this container to add that.

Automating environment configuration

This is where things went off the rails for me. In order to add software to a development container, you have to edit the Dockerfile. And Linux has no tolerance for your shenanigans or mistakes.

The base Docker container that you get with the container configurations in VS Code is Debian Linux. Debian Linux uses apt-get to manage packages.

apt-get install cowsay

We can add this to the end of the Dockerfile. Whenever you install something from apt-get, run an apt-get update first. This command updates the list of packages and package repos so that you have the most current list cached. If you don’t do this, the container build will fail and tell you that it can’t find “cowsay.”

# To fully customize the contents of this image, use the following Dockerfile instead:
# https://github.com/microsoft/vscode-dev-containers/tree/v0.128.0/containers/javascript-node-14/.devcontainer/Dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-14
# ** Install additional packages **
RUN apt-get update \
  && apt-get -y install cowsay

A few things to note here…

  1. That RUN command is a Docker thing and it creates a new “layer.” Layers are how the container knows what has changed and what in the container needs to be updated when you rebuild it. They’re kind of like cake layers, except that you don’t want a lot of them, because enormous cakes are awesome and enormous containers are not. You should try and keep related logic together in the same RUN command so that you don’t create unnecessary layers. (There’s a sketch of this right after the list.)
  2. That \ denotes a line break at the end of a line. You need it for multi-line commands. Leave it off and you will know the pain of many failed Docker builds.
  3. The && is how you add an additional command to the RUN line. For the love of god, don’t forget that \ on the previous line.
  4. The -y flag is important because by default, apt-get is going to prompt you to ensure you really want to install what you just tried to install. This will cause the container build to fail because there is nobody there to say Y or N. The -y flag is shorthand for “don’t bother me with your silly confirmation prompts”. Apparently everyone is supposed to know this already. I didn’t know it until about four hours ago. 
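
To make that first point concrete, here is a hedged before-and-after (cowsay stands in for any package you might install):

# Two RUN instructions create two layers:
RUN apt-get update
RUN apt-get -y install cowsay

# One RUN instruction creates a single layer, and the package list
# is guaranteed to be fresh when the install actually runs:
RUN apt-get update \
  && apt-get -y install cowsay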

Use the Command Palette to select “Rebuild Container”…

And, just like that…

It doesn’t work.

This is the first lesson in what I like to call “Linux Vertigo.” There are so many distributions of Linux, and they don’t all handle things the same way. It can be difficult to figure out why things work in one place (Mac, WSL2) and don’t work in others. The reason “cowsay” isn’t available is that Debian puts “cowsay” in /usr/games, which is not included in the PATH environment variable.

One solution would be to add it to the PATH in the Dockerfile. Like this…

FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-14
RUN apt-get update \
  && apt-get -y install cowsay
ENV PATH="/usr/games:${PATH}"

EXCELLENT. We’re solving real problems here, folks. People like cow one-liners. I bullieve I herd that somewhere.

To summarize, project configuration (forwarding ports, installing project dependencies, etc.) is done in the “devcontainer.json” and environment configuration (installing software) is done in the “Dockerfile.” Now let’s get brave and try something a little more edgy.

Advanced configuration

Let’s say for a moment that you have a gorgeous, glammed out terminal setup that you really want to put in the container as well. I mean, just because you are developing in a container doesn’t mean that your terminal has to be boring. But you also wouldn’t want to reconfigure your pretentious zsh setup for every project that you open. Can we automate that too? Let’s find out.

Fortunately, zsh is already installed in the image that you get. The only trouble is that it’s not the default shell when the container opens. There are a lot of ways that you can make zsh the default shell in a normal Docker scenario, but none of them will work here. This is because you have no control over the way the container is built.

Instead, look again to the trusty devcontainer.json file. In it, there is a "settings" block. In fact, there is a line already there showing you that the default terminal is set to "/bin/bash". Change that to "/bin/zsh".

// Set *default* container specific settings.json values on container create.
"settings": {
  "terminal.integrated.shell.linux": "/bin/zsh"
}

By the way, you can set ANY VS Code setting there. Like, you know, moving the sidebar to the right-hand side. There – I fixed it for you.

// Set default container specific settings.json values on container create.
"settings": {
  "terminal.integrated.shell.linux": "/bin/zsh",
  "workbench.sideBar.location": "right"
},

And how about those pretentious plugins that make you better than everyone else? For those you are going to need your .zshrc file. The container already has oh-my-zsh in it, and it’s in the “root” folder. You just need to make sure you set the path to ZSH at the top of the .zshrc so that it points to root. Like this…

# Path to your oh-my-zsh installation.
export ZSH="/root/.oh-my-zsh"


# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="cloud"


# Which plugins would you like to load?
plugins=(zsh-autosuggestions nvm git)


source $ZSH/oh-my-zsh.sh

Then you can copy in that sexy .zshrc file to the root folder in the Dockerfile. I put that .zshrc file in the .devcontainer folder in my project.

COPY .zshrc /root/.zshrc

And if you need to download a plugin before you install it, do that in the Dockerfile with a RUN command. Just remember to group all of these into one command since each RUN is a new layer. You are nearly a container expert now. Next step is to write a blog post about it and instruct people on the ways of Docker like you invented the thing.

RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions

Look at the beautiful terminal! Behold the colors! The git plugin which tells you the branch and adds a lightning emoji! Nothing says, “I know what I’m doing” like a customized terminal. I like to take mine to Starbucks and just let people see it in action and wonder if I’m a celebrity.

Go gently

Hopefully you made it to this point and thought, “Geez, this guy is seriously overreacting. This is not that hard.” If so, I have successfully saved you. You are welcome. No need to thank me. Yes, I do have an Amazon wish list.

For more information on Remote Containers, including how to do things like add a database or use Docker Compose, check out the official Remote Container docs, which provide much more clarity with 100% less neurotic commentary.



Docker Layers Explained

When you pull a Docker image, you will notice that it is pulled as different layers. Also, when you create your own Docker image, several layers are created. In this post we will try to get a better understanding of Docker layers.

1. What Is a Docker Layer?

A Docker image consists of several layers. Each layer corresponds to certain instructions in your Dockerfile. The following instructions create a layer: RUN, COPY, and ADD. The other instructions create intermediate layers and do not influence the size of your image. Let’s take a look at an example. We will use a Spring Boot MVC application that we created beforehand, whose Maven build creates our Docker image. The sources are available on GitHub.
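
As a quick illustration before we dig in (a generic Node.js sketch of ours, not the Spring Boot example from that repository), here is a Dockerfile annotated with which instructions add layers that contribute to image size:

# FROM pulls in the base image together with its layers
FROM node:18-alpine
# WORKDIR is metadata only and adds no filesystem content
WORKDIR /usr/src/app
# COPY creates a layer containing the dependency manifests
COPY package*.json ./
# RUN creates a layer with the installed dependencies
RUN npm install
# COPY creates a layer with the application source
COPY . .
# CMD is metadata only and adds no filesystem content
CMD [ "node", "app.js" ]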