Scaling a Node.js Application

Whenever we build an awesome product, we first build it standalone, but sooner or later it attracts more users, and we start thinking about how to accommodate them. That is where scaling comes in. Generally, scaling means giving the application enough elasticity to sustain a high influx of users and keep running smoothly, without glitches.

Software scalability is the ability of a tool or system to increase its capacity and functionality in response to its users’ demands. Scalable software can remain stable while adapting to changes, upgrades, overhauls, and resource reduction.
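One common first step is Node's built-in cluster module, which forks one worker per CPU core so a single machine can serve more concurrent users. Here is a minimal sketch (the HTTP handler and port are illustrative, not from the article):

    // cluster.js: fork one worker per CPU core using only Node.js built-ins.
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) { // cluster.isMaster on Node versions before 16
      // The primary process only forks workers; it serves no traffic itself.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
    } else {
      // Each worker runs its own server; the workers share port 3000.
      http.createServer((req, res) => res.end('ok')).listen(3000);
    }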

Docker: How to Stop and Remove All Containers at Once

It’s an understatement to say that Docker is a game-changer for systems engineers and developers. You can run almost any application with a single command and customize it for your environment via a consistent container-based interface. But as containers proliferate, controlling them gets more complicated, too. Managing containers from the command line can be painful, but setting up an orchestration tool like Kubernetes or Docker Swarm is overkill for smaller systems.

Stopping and removing a container from the command line takes two steps. Stopping and removing two containers is four. And stopping and removing 10 containers is — well, you get the idea. Let’s look at how to make the Docker command line easier to use. We’ll focus on stopping and removing containers. Then, we’ll look at Docker Compose, another tool that makes managing smaller collections of containers easier.
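For example, the two steps collapse into a pair of one-liners by feeding the output of docker ps back into the CLI:

    # Stop every running container ($(docker ps -q) expands to the IDs of
    # running containers), then remove all containers, stopped ones included
    # ($(docker ps -aq) lists every container ID).
    docker stop $(docker ps -q)
    docker rm $(docker ps -aq)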

How to Connect Two Containers From Different docker-compose Files

Working with Docker containers allows developers to create encapsulated applications that are independent of the host machine and contain all the necessary libraries and dependencies. In practice, docker-compose is often used to configure and manage containers. When several containers are defined in a docker-compose file, they are automatically connected to a common network (created by default) and can communicate with each other.

In most cases, however, each project will have its own docker-compose file. In such configurations, the containers from one docker-compose file will not be able to connect to those from another unless we have previously created and configured a shared network, which requires using Docker networking in Compose. In this article, we’ll walk through an example of how to set up networks across different docker-compose files so that separate projects, and the containers/services in them, can communicate over a single network when running on a local machine.
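As a sketch of the approach (the service and network names here are hypothetical), you can create the network once on the host and then declare it as external in each project's docker-compose file, so Compose joins the existing network instead of creating its own:

    docker network create shared-net

Then, in each project's docker-compose.yml:

    services:
      api:
        image: nginx:alpine
        networks:
          - shared-net

    networks:
      shared-net:
        external: true   # join the pre-created network; do not create a new one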

Three easy ways to run Kafka without Zookeeper

It has been a couple of years since the announcement that Apache Zookeeper would be removed as the dependency for managing Apache Kafka metadata. Since version 2.8, we can run a Kafka cluster without Zookeeper. This article will go over three easy ways to get started with a single-node cluster using containers.
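As a taste of how simple this has become, here is a hedged one-liner (the image choice is an assumption, not necessarily one of the three ways covered in the article): the apache/kafka image ships with a default single-node KRaft configuration, so a single docker run gives you a Zookeeper-free broker:

    # Start a single-node Kafka broker in KRaft mode; no Zookeeper container needed.
    docker run -d --name kafka -p 9092:9092 apache/kafka:latest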

Control and data planes

Apache Kafka implements independent control and data planes for its clusters. The control plane manages the cluster, keeps track of which brokers are alive, and takes action when the set changes. Meanwhile, the data plane consists of the features required to handle producers and consumers and their records. In previous iterations, Zookeeper was the cluster component that held most of the control plane's implementation.

Containers are Here to Stay

Containers are not a fad. They’re never overkill for any project and they simplify many aspects of development, even when running locally. With a wide selection of relevant tools and resources, there has never been a better time than now to implement containers in your organization and get ahead of the curve.

What Are Containers?

Containers are portable, self-sufficient, standardized units that store all code, assets, and dependencies of a program. They can be shared and run on virtually any hardware. Containerization has been around for years, although it has recently been gaining more traction with the advent of Docker. Many organizations are recognizing the value of managing their code in a simplified, shareable, and maintainable way. What does this mean for you and your organization?

Get Started With Kafka Connector for Azure Cosmos DB Using Docker

Having a local development environment is quite handy when trying out a new service or technology. Docker has emerged as the de facto choice in such cases. It is especially useful in scenarios where you’re trying to integrate multiple services, and it gives you the ability to start fresh before each run.

This blog post is a getting-started guide for the Kafka Connector for Azure Cosmos DB. All the components (including Azure Cosmos DB) will run on your local machine, thanks to Docker.

NGINX and HTTPS With Let’s Encrypt, Certbot, and Cron Dockerization in Production

Docker is a popular open-source containerization platform that frees your hands to build your applications in development and production. In this post, I'm going to walk you through how to build a production-grade, HTTPS-secured Nginx server with Docker, Docker Compose, and Let’s Encrypt (via its client, Certbot). Let’s Encrypt certificates are valid for 90 days and need to be renewed before they expire, so I will also show how to script the renewal with crontab inside a Docker container.
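To preview the renewal idea (the schedule is a sketch, and in the dockerized setup the commands would run inside the containers, e.g. via docker exec), a crontab entry can periodically ask Certbot to renew and reload Nginx afterwards:

    # Run certbot's renewal check twice a day; the deploy hook reloads
    # Nginx only when a certificate was actually renewed.
    0 0,12 * * * certbot renew --quiet --deploy-hook "nginx -s reload"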

1. Basic Example

In development, we need a basic Nginx container without HTTPS to quickly set up our local test environment. I use the official Nginx Docker image and wrap everything up with docker-compose.
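A minimal docker-compose.yml along these lines (the port mapping and config mount are illustrative) could look like:

    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"                                 # plain HTTP for local testing
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro   # mount a local config read-only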

How to Use Docker Volumes to Code Faster

If you are a developer who uses Docker, odds are you’ve heard that you can use volumes to maintain persistent state for your containers in production. But what many developers don’t realize is that volumes can also be an excellent tool for speeding up your development workflow.

In this post, I’ll give you a brief overview of what a Docker volume is and how Docker host volumes work, then walk through a tutorial and example of how you can use volumes and nodemon to make coding with Docker easier and faster.
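To sketch the core idea up front (the image tag, port, and entry point are assumptions): bind-mount the source directory into the container, and let nodemon restart the app whenever a file changes on the host:

    # Mount the current directory over /app so host edits appear inside the
    # container instantly; nodemon watches the files and restarts Node on change.
    docker run --rm -it -p 3000:3000 -v "$(pwd)":/app -w /app node:18 \
      npx nodemon server.js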

VS Code Remote Development With Docker Compose: Developing Services in Standalone and Integrated Modes

VS Code remote development is a brilliant feature from the VS Code team. Using the extensions available in the VS Code remote extension pack, you can develop your applications in an external development environment: a remote server (through SSH), containers, or WSL. The premise of the three modes of development is the same. The application code is stored either on your local system (on containers and WSL, through a volume mount) or on a remote server (through SSH), and the local instance of VS Code attaches itself to the external system through an exposed port (containers and WSL) or an SSH tunnel (remote server). For a developer, this experience is seamless and requires only a one-off setup. VS Code does the heavy lifting of the entire remote development experience.

Let’s discuss some everyday use cases of remote development. The primary one is developing and testing Linux-compatible apps with WSL on Windows. Remote development also lets you use a remote machine with better specs for development (e.g., code and debug on your desktop from your tablet). However, the most beneficial use case for most developers working in a team is that they can now specify the development environment (including VS Code extensions) in the form of Dockerfiles and container specifications and add them to source control. With the configurations in place, anyone can recreate the development environment and be productive immediately.
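As a hedged illustration of such a checked-in specification (the file layout, service name, and extension ID are assumptions, not from the article), a .devcontainer/devcontainer.json can point VS Code at an existing docker-compose service:

    {
      // Reuse the project's compose file and attach to one of its services.
      "name": "api-dev",
      "dockerComposeFile": "../docker-compose.yml",
      "service": "api",
      "workspaceFolder": "/workspace",
      // Extensions installed inside the container rather than on the host.
      "extensions": ["dbaeumer.vscode-eslint"]
    }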

Docker Commands to Containerize an Application

Docker is an exciting technology for developers taking a cloud-native approach to designing their applications. One of the key characteristics of a cloud-native application is that it is containerized. Designing applications this way will save you from hearing complaints like these from other developers during development:

  • "The application is not working in my local machine!"
  • "I'm facing version conflicts."
  • "Libraries are missing."

Every time we onboard new developers to our team, we have to fix several issues before the applications build and run successfully on a new machine. That makes the onboarding period longer and forces another expert to step away from their own deliverables to help.
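As a minimal sketch of what containerizing such an application involves (the base image, file names, and port are assumptions), a short Dockerfile plus two commands is often enough:

    # Dockerfile
    FROM node:18-alpine
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]

Build the image once, and every machine runs the same environment:

    docker build -t my-app .
    docker run -p 3000:3000 my-app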

MiBand 3 and React-Native (Part 3): Docker, Spring Cloud, and Spring Boot

After significant work on my mistakes described in the second chapter of this series, I decided to move on to the final chapter. In this article, we will focus on my latest findings in the development of the server-side. I am going to show how a React-native application can collect MiBand data and transfer it to a real server.

My server will be based on a microservice solution that can be deployed easily thanks to docker-compose. Over the last five to eight years, microservices have become a trending solution to many issues in server-side development. Their significant capabilities for scaling infrastructure and for efficient, fast request processing motivated me to implement a small server-side API based on Spring Cloud.

Docker With Spring Boot and MySQL: Docker Compose (Part 2)

docker-compose helps manage multiple containers

In my previous article, I wrote about Docker and the CLI commands needed to run a database and Spring Boot applications. We used a Dockerfile to set up the environment, ran the containers separately, and then built a link between them. But for multi-container applications, we can use the docker-compose tool. The Docker CLI manages a single container at a time, while docker-compose can manage multiple containers and define dependent services.
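As a hedged sketch of the shape such a file takes (the image tag, credentials, and service names are placeholders), a Spring Boot app with a MySQL dependency might be declared like this:

    services:
      db:
        image: mysql:8
        environment:
          MYSQL_ROOT_PASSWORD: example   # placeholder credential, not for production
          MYSQL_DATABASE: appdb
      app:
        build: .          # builds the Spring Boot image from the local Dockerfile
        ports:
          - "8080:8080"
        depends_on:
          - db            # start the database container before the app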

Subtle Art of Leveraging Docker Compose for Test Automation

Powerful, yet subtle.

Many of us have, over the years, executed test automation against production-like test environments. By the term "production-like," I mean test environments that have the same setup as a production environment but may or may not have its exact configuration.

However, when it comes to executing test automation against these test environments, an engineer always faces a certain degree of challenge (although solvable). Classic examples include: