How To Connect Stateful Workloads Across Kubernetes Clusters

One of the biggest selling points of Apache Cassandra™ is its shared-nothing architecture, making it an ideal choice for deployments that span multiple physical data centers. So when our single-region Cassandra-as-a-Service offering reached maturity, we naturally started looking into offering it cross-region and cross-cloud. One of the biggest challenges in providing a solution that spans multiple regions and clouds is correctly configuring the network so that Cassandra nodes in different data centers can communicate with each other successfully, even as individual nodes are added, replaced, or removed.

From the start of the cloud journey at DataStax, we selected Kubernetes as our orchestration platform, so our search for a networking solution started there. While we’ve benefited immensely from the ecosystem and have our share of war stories, this time we chose to forge our own path, landing on ad-hoc overlay virtual application networks (how’s that for a buzzword soup?). In this post, we’ll go over how we arrived at our solution, give a technical overview, and walk through a hands-on example with the Cassandra operator.
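
To give a taste of what that hands-on example looks like, here is a minimal sketch that creates a CassandraDatacenter resource with the official Kubernetes Python client, assuming cass-operator is already installed in the target cluster. The cluster name, datacenter name, version, and storage values are illustrative, not our production configuration.

```python
# Hypothetical sketch: create a CassandraDatacenter custom resource via the
# Kubernetes Python client. Assumes cass-operator is installed and a matching
# datacenter ("dc1") already exists in another cluster of the same Cassandra
# cluster; all field values here are illustrative.
from kubernetes import client, config

def create_cassandra_datacenter(namespace: str = "cass-operator") -> None:
    config.load_kube_config()  # use the current kubectl context
    api = client.CustomObjectsApi()

    datacenter = {
        "apiVersion": "cassandra.datastax.com/v1beta1",
        "kind": "CassandraDatacenter",
        "metadata": {"name": "dc2"},
        "spec": {
            "clusterName": "multi-region-cluster",  # must match the name used for dc1
            "serverType": "cassandra",
            "serverVersion": "4.0.1",
            "size": 3,  # number of Cassandra nodes in this datacenter
            "storageConfig": {
                "cassandraDataVolumeClaimSpec": {
                    "storageClassName": "standard",
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": "10Gi"}},
                }
            },
        },
    }

    api.create_namespaced_custom_object(
        group="cassandra.datastax.com",
        version="v1beta1",
        namespace=namespace,
        plural="cassandradatacenters",
        body=datacenter,
    )

if __name__ == "__main__":
    create_cassandra_datacenter()
```

The interesting part, of course, is not the manifest itself but the networking that lets the new datacenter's nodes reach their peers in other clusters, which is what the rest of the post is about.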

The Service Mesh in the Microservices World

The software industry has come a long way, and software architecture has evolved along with it. One-tier (single-node), two-tier (client/server), three-tier, and distributed architectures are some of the patterns we have seen along this journey.

The Problem

The majority of software companies are moving from monolithic architectures to microservices, and microservices adoption is growing day by day. While a monolithic architecture has many benefits, it also has shortcomings that make it very difficult to meet the demands of modern software development. Microservices architecture, by contrast, enables us to deploy our applications more frequently, independently, and reliably, meeting modern-day software development requirements.

Service Mesh Comparison: Istio vs Linkerd

From the latest CNCF annual survey, it is pretty clear that many people are highly interested in using a service mesh in their projects, and many are already using one in production. Nearly 69% are evaluating Istio and 64% are looking at Linkerd. Linkerd was the first service mesh on the market, but Istio made service meshes more popular. Both projects are cutting-edge and very competitive, making it a tough choice to select one. In this blog post, we will learn more about the Istio and Linkerd architectures and their moving parts, and compare their offerings to help you make an informed decision.

Introduction to Service Mesh

Over the past few years, microservices architecture has become a popular style of designing software applications. In this architecture, we break down the application into independently deployable services. The services are usually lightweight, polyglot in nature, and often managed by various functional teams. This architectural style works well until the number of services becomes large and difficult to manage. Suddenly, they are not simple anymore, which leads to challenges in managing aspects like security, network traffic control, and observability. A service mesh helps address these challenges.
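
To make "network traffic control" concrete, here is a hedged sketch that applies an Istio VirtualService to split traffic between two versions of a hypothetical reviews service, using the Kubernetes Python client. It assumes Istio is installed in the cluster and that a DestinationRule defining the v1 and v2 subsets already exists; the service name and weights are illustrative.

```python
# Illustrative sketch of the kind of traffic control a service mesh adds without
# touching application code: an Istio VirtualService that sends 90% of traffic
# to v1 of a "reviews" service and 10% to v2. Names and weights are hypothetical.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```

The point is that the routing rule lives in the mesh, not in the services themselves, which is exactly the kind of cross-cutting concern a service mesh is meant to absorb.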

Service Mesh and Cloud-Native Microservices

Microservices need to be decoupled, flexible, operationally transparent, data-aware, and elastic. Most material from last year only discusses point-to-point architectures built on tightly coupled and non-scalable technologies like REST/HTTP. This blog post takes a look at cutting-edge technologies like Apache Kafka, Kubernetes, Envoy, Linkerd, and Istio to implement a cloud-native service mesh that solves these challenges and brings microservices to the next level of scale, speed, and efficiency.
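
As a small illustration of that decoupling, the sketch below uses the kafka-python client to publish and consume an event without the two services knowing anything about each other. The broker address, topic, and service names are assumptions made for the example.

```python
# A minimal sketch of event-driven decoupling with Apache Kafka (kafka-python),
# in contrast to point-to-point REST calls: the producer does not know who
# consumes the events, and consumers can be added without changing it.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "kafka:9092"   # assumed in-cluster bootstrap address
TOPIC = "order-events"  # hypothetical topic

# Producing side: a microservice publishes an event and moves on.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "created"})
producer.flush()

# Consuming side: any number of services can subscribe independently.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="billing-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(f"billing-service received: {message.value}")
```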

Chaos Engineering in Organic Microservice Architectures

The resilience of a distributed microservice application depends fundamentally on how gracefully it can adapt to those all-too-certain environmental degradations and service failures. Testing how such applications behave under various failure scenarios is therefore not just a good practice but an essential one.
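
As a bare-bones illustration of such a failure test, here is a hedged sketch that periodically deletes a random pod behind a label selector and lets Kubernetes reschedule it. The namespace, selector, and interval are assumptions, and dedicated tooling such as Chaos Mesh or Litmus offers far richer failure modes than simply killing pods.

```python
# A very simple chaos experiment: periodically delete one random pod matching a
# label selector and rely on Kubernetes to reschedule it, while you observe how
# the rest of the system behaves. Namespace and selector are assumptions.
import random
import time
from kubernetes import client, config

def kill_random_pod(namespace: str = "default", selector: str = "app=demo") -> None:
    config.load_kube_config()
    core = client.CoreV1Api()
    pods = core.list_namespaced_pod(namespace, label_selector=selector).items
    if not pods:
        return
    victim = random.choice(pods)
    print(f"chaos: deleting pod {victim.metadata.name}")
    core.delete_namespaced_pod(victim.metadata.name, namespace)

if __name__ == "__main__":
    while True:
        kill_random_pod()
        time.sleep(300)  # inject one failure every five minutes
```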

Part 1: How Canary Deployments Work in Kubernetes, Istio, and Linkerd

This is the first of a two-part series on canary deployments. In this post, we cover the developer pattern and how it is supported in Kubernetes, Linkerd, and Istio. In part two, we’ll explore the operational pattern, how it is supported in Glasnostic, a comparison of the various implementations, and finally the pros and cons of canary deployments.

A canary deployment (or canary release) is a microservices pattern that should be part of every continuous delivery strategy. This pattern helps organizations deploy new releases to production gradually, to a subset of users at first, before making the changes available to all users. In the unfortunate event that things go sideways in the push to prod, canary deployments help minimize the resulting downtime, contain the negative effects to a small number of users, and make it easier to initiate a rollback if necessary. In a nutshell, think of a canary deployment as a phased or incremental rollout.
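
Stripped of any particular mesh, the pattern boils down to a loop like the sketch below: ramp traffic to the canary in steps, watch an error budget during each step, and roll back if the budget is exceeded. The set_traffic_split and canary_error_rate helpers are hypothetical stand-ins for calls to your mesh's API and metrics backend; the step sizes and thresholds are assumptions.

```python
# A sketch of the canary pattern itself, independent of Kubernetes, Istio, or
# Linkerd: shift traffic to the new version in steps and roll back if its error
# rate crosses a threshold during any step.
import time

STEPS = [5, 10, 25, 50, 100]   # percentage of traffic sent to the canary
ERROR_BUDGET = 0.01            # abort if more than 1% of canary requests fail
SOAK_SECONDS = 600             # how long each step is observed

def set_traffic_split(canary_percent: int) -> None:
    # In a real rollout this would patch an Istio VirtualService, a Linkerd
    # TrafficSplit, or a load-balancer weight (hypothetical helper).
    print(f"routing {canary_percent}% of traffic to the canary")

def canary_error_rate() -> float:
    # In a real rollout this would query Prometheus or another metrics store
    # (hypothetical helper; returns a healthy value here).
    return 0.0

def run_canary() -> bool:
    for percent in STEPS:
        set_traffic_split(percent)
        time.sleep(SOAK_SECONDS)
        if canary_error_rate() > ERROR_BUDGET:
            set_traffic_split(0)   # roll back: all traffic to the stable version
            return False
    return True  # canary promoted to 100% of traffic

if __name__ == "__main__":
    promoted = run_canary()
    print("promoted" if promoted else "rolled back")
```

How the traffic split is actually expressed in Kubernetes, Istio, and Linkerd is what the rest of this series walks through.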