CI/CD for Cloud-Native Applications

This is an article from DZone's 2022 DevOps Trend Report.

Continuous integration (CI) and continuous delivery (CD) are crucial parts of developing and maintaining any cloud-native application. From my experience, proper adoption of tools and processes makes a CI/CD pipeline simple, secure, and extendable. Cloud native (or cloud based) simply means that an application utilizes cloud services. For example, a cloud-native app can be a web application packaged in Docker containers, stored in Azure Container Registry, and deployed to Azure Kubernetes Service, or one that uses Amazon EC2, AWS Lambda, or Amazon S3.

Lessons Learned Moving From On-Prem to Cloud Native

Recently, I came across a sample e-commerce application that demonstrates how to use Next.js, GraphQL engine, Postgres, and a few other frameworks to build a modern web application. The application supports basic e-commerce capabilities such as product inventory and order management, a recommendation system, and checkout. This made me curious as to how much effort it would take to turn this application from an on-prem solution into a cloud-native one.

The original architecture for this sample app is shown in the diagram below. You can stand up the whole setup in a few minutes by following this guide.

How To Set Up a Scalable and Highly-Available GraphQL API in Minutes

A modern GraphQL API layer for cloud-native applications needs to possess two characteristics: horizontal scalability and high availability. 

Horizontal scalability means adding more machines to your API infrastructure, whereas vertical scalability means adding more CPU, RAM, and other resources to an existing machine that runs the API layer. While vertical scaling works to a certain extent, a horizontally scalable API layer can grow beyond the capacity of any single machine.

Systematic and Chaotic Testing: A Way to Achieve Cloud Resilience

In today's digital era, where downtime translates to lost business, it is imperative to build resilient cloud systems. During the pandemic, for example, IT maintenance teams could no longer be on-premises to reboot a server in the data center. If on-premises hardware goes down, access to data and software is blocked, productivity halts, and the business suffers an overall loss. The solution is to move IT operations to cloud infrastructure backed by 24/7 remote support. The cloud essentially acts as a safety net here.

As companies increasingly rely on the cloud's full potential, observability and resilience of cloud operations become imperative, since downtime now equates to disconnection and business loss.

Designing High-Volume Systems Using Event-Driven Architectures

Prelude

Microservices-style application architecture is taking root, and microservices are rapidly growing in number, often scattered across different parts of the enterprise ecosystem. Organizing and efficiently operating them in a multi-cloud environment, organizing data around microservices, and making that data as close to real time as possible are emerging as some of the key challenges.

Thanks to the latest developments in Event-Driven Architecture (EDA) platforms such as Kafka, and data management techniques such as data meshes and data fabrics, designing microservices-based applications is now much easier.
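To make the event-driven style concrete, here is a minimal sketch of publishing a domain event with the plain Kafka Java producer client; the broker address, topic name, and payload are illustrative assumptions rather than part of any specific platform discussed above.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {
    public static void main(String[] args) {
        // Basic producer configuration; the broker address is a local-development assumption
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish a hypothetical "order placed" event to the "orders" topic
            ProducerRecord<String, String> event =
                new ProducerRecord<>("orders", "order-1001", "{\"status\":\"PLACED\"}");
            producer.send(event);
        }
    }
}
```

In an event-driven architecture, the consuming microservices subscribe to the topic independently, which keeps producers and consumers decoupled.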

Cloud-Native and MongoDB

NoSQL is commonly expanded as "not only SQL." The term was coined many years ago, and as a result, a number of products have come to market, spanning key-value, document, columnar, and graph databases. MongoDB started its journey as the leading document database, with Couchbase as a close competitor. Several languages and frameworks ship out-of-the-box connectors for MongoDB, much like JDBC connectors help Java programs connect to an RDBMS. One example is Mongoose, a Node.js module available through npm and one of the key components of the MEAN (MongoDB, Express, AngularJS, Node.js) and MERN (MongoDB, Express, ReactJS, Node.js) stacks.
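As an illustration of the driver-style connection the paragraph compares to JDBC, here is a minimal sketch using the official MongoDB Java driver; the connection string, database, and collection names are hypothetical.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoConnectExample {
    public static void main(String[] args) {
        // Connection string is a local-development assumption
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("shop");               // hypothetical database
            MongoCollection<Document> products = db.getCollection("products");

            // Insert a simple document and read it back
            products.insertOne(new Document("name", "widget").append("price", 9.99));
            System.out.println(products.find().first().toJson());
        }
    }
}
```

A DBaaS offering such as MongoDB Atlas exposes the same driver API; typically only the connection string changes.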

Over the years, many such platforms have realized that simply offering a database is not enough; the future lies in building a cloud-native solution delivered as Database-as-a-Service (DBaaS). This means the platform needs to support APIs as the key enabler for connecting various consumers. The classical approach of an on-premises installation with an SDK-based connector will not stand long in this "journey to cloud" era.

What Is Serverless With Java?

For decades, enterprises have developed business-critical applications on various platforms, including physical servers, virtual machines, and cloud environments. The one thing these applications have in common across industries is that they need to be continuously available (24x7x365) to guarantee stability, reliability, and performance, regardless of demand. As a result, every enterprise must bear the high cost of maintaining infrastructure (CPU, memory, disk, networking, and so on) even if actual resource utilization is less than 50%.

The serverless architecture was developed to help solve these problems. Serverless allows developers to build and run applications on demand, guaranteeing high availability without having to manage servers in multi- and hybrid-cloud environments. Behind the scenes, there are still many servers in the serverless topology, but they are abstracted away from application development. Instead, cloud providers take care of resource management, such as provisioning, maintaining, networking, and scaling server instances.

Getting Started With Java Serverless Functions Using Quarkus and AWS Lambda

The serverless journey started with functions - small snippets of code running on demand for a short period, as shown in Figure 1. AWS Lambda made this paradigm very popular in the "1.0" phase, but it had limitations around execution time, supported protocols, and a poor local development experience.

Since then, developers have realized that the same serverless traits and benefits can be applied to microservices and Linux containers. This leads us into what we're calling the "1.5" phase in Figure 1. Here, some serverless container platforms completely abstract away Kubernetes, delivering the serverless experience through an abstraction layer, such as Knative, that sits on top of it.
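To make the function model concrete, here is a minimal sketch of a plain Java AWS Lambda handler (not the Quarkus-specific wiring described in the article); the class name and greeting payload are hypothetical.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A hypothetical Lambda function: invoked on demand, runs briefly, then the
// underlying compute is reclaimed by the provider.
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("Handling request for: " + name);
        return "Hello, " + name + "!";
    }
}
```

Quarkus builds on this same handler model through its AWS Lambda extension, which also improves the local development experience.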

Getting Started With Edge Development on Linux Using Open Source

There are many reasons why Linux is such a popular platform for processing Internet of Things (IoT) edge applications. A major one is transparency: Linux security capabilities are built on open source projects, giving users a transparent view of security risks and threats and enabling them to apply fixes quickly with security module patches or kernel-level updates. Another Linux advantage is that developers can choose from various programming languages to develop, test, and run device communications over a range of networking protocols beyond HTTP(S) when building IoT edge applications. Linux also enables developers to address server-side programming for controlling data flow from IoT devices to front-end graphical user interface (GUI) applications.

This article explains how to get started with IoT edge development using Quarkus, a cloud-native Java framework that enables you to integrate a lightweight message broker for processing data streams from IoT devices in a reactive way.
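As a rough sketch of what that reactive processing can look like in Quarkus, the following bean uses MicroProfile Reactive Messaging annotations; the channel names and the trivial transformation are assumptions, and the actual broker (e.g., MQTT or Kafka) would be bound to the channels in application configuration.

```java
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import jakarta.enterprise.context.ApplicationScoped; // javax.enterprise.context in older Quarkus versions

@ApplicationScoped
public class SensorStreamProcessor {

    // Consume raw readings from a hypothetical "sensor-readings" channel,
    // transform them, and publish the results to a "processed-readings" channel.
    @Incoming("sensor-readings")
    @Outgoing("processed-readings")
    public String process(String rawReading) {
        // Trivial transformation for illustration; real logic would parse and enrich the payload
        return rawReading.trim().toUpperCase();
    }
}
```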

Build Even Faster Quarkus Applications With fast-jar

Quarkus is already fast, but what if you could make inner loop development with the supersonic, subatomic Java framework even faster? Quarkus 1.5 introduced fast-jar, a new packaging format that supports faster startup times. Starting in Quarkus 1.12, this great feature became the default packaging format for Quarkus applications. This article introduces you to the fast-jar format and how it works.

Note: The ninth annual global Java developer productivity report found that more developers are implementing business applications with Quarkus. Quarkus's support for live coding with fast startup and response times lets developers focus on business logic implementation rather than wasting time on tasks such as recompiling and redeploying code and continuously restarting the runtime environment.

Why You Should Care About Service Mesh

Many developers wonder why they should care about service mesh. It's a question I'm asked often in my presentations at developer meetups, conferences, and hands-on workshops about microservices development with cloud-native architecture. My answer is always the same: "As long as you want to simplify your microservices architecture, it should be running on Kubernetes."

Concerning simplification, you probably also wonder why distributed microservices must be designed with such complexity to run on Kubernetes clusters. As this article explains, many developers solve the complexity of microservices architecture with a service mesh and gain additional benefits by adopting it in production.

3 New Java Tools to Try in 2021

Despite the popularity of Python, Go, and Node.js for implementing artificial intelligence and machine learning applications and serverless functions on Kubernetes, Java technologies still play a key role in developing enterprise applications. According to Developer Economics, in Q3 2020, there were 8 million enterprise Java developers worldwide.

Although the programming language has been around for more than 25 years, there are always new trends, tools, and frameworks in the Java world that can empower your applications and your career.

Log Monitoring and Alerting With Grafana Loki

In a production environment, even a few microseconds of downtime is intolerable, and debugging such issues is time-critical. Proper logging and monitoring of infrastructure helps in debugging these scenarios; it also helps to optimize cost and other resources proactively and to detect impending issues before they escalate. There are various logging and monitoring solutions available on the market. In this post, we will walk through the steps to deploy Grafana Loki in a Kubernetes environment, chosen for its seamless compatibility with Prometheus, a widely used tool for collecting metrics. Grafana Loki consists of three components: Promtail, Loki, and Grafana (PLG), which we will cover briefly before proceeding to the deployment. This article also provides insight into the architectural differences between PLG and other popular logging and monitoring stacks such as Elasticsearch-Fluentd-Kibana (EFK).

Logging, Monitoring, and Alerting With Grafana Loki

Before proceeding with the steps for deploying Grafana Loki, let's look briefly at each tool.

Azure Spring Cloud: A Comprehensive Overview

In this article, you will learn about Azure Spring Cloud and its main features quickly and with ease, through a very down-to-earth approach.

What Made It Possible

Azure Spring Cloud is the natural consequence of Microsoft getting closer to the Java community over the last few years, on top of the fact that the Java ecosystem has been completely dominated by Spring for a long time now.

GraalVM — Byte Code to Bit Code

Early adopters of cloud native (microservices, serverless) are now moving to its next wave, often called v2.x, leveraging the maturity, lessons learned, and identified shortfalls to design the next generation of solutions.

Let's recap a few purposes of going cloud-native that we will relate to here: