Canary Deployment, Constraints, and Benefits

Understanding the Canary

Canaries were an essential part of British mining history: these humble birds were used to detect carbon monoxide and other toxic gases before the gases could harm miners, because canaries are more sensitive to airborne toxins than humans. The term “canary analysis” in software deployment serves a similar purpose. Just as the canary warned miners about problems in the air they breathed, DevOps engineers use canary deployment analysis to gauge whether a new release in the CI/CD process will cause any trouble for the business.

A canary deployment can be defined, in general terms, as follows: a technique to reduce the risk of introducing a software update into production by slowly rolling out the change to a small subset of users before making it available to everybody.

Part 1: How Canary Deployments Work in Kubernetes, Istio, and Linkerd

This is the first of a two-part series on canary deployments. In this post, we cover the developer pattern and how it is supported in Kubernetes, Linkerd, and Istio. In part two, we’ll explore the operational pattern, how it is supported in Glasnostic, a comparison of the various implementations, and finally the pros and cons of canary deployments.

A canary deployment (or canary release) is a microservices pattern that should be part of every continuous delivery strategy. This pattern helps organizations deploy new releases to production gradually, to a subset of users at first, before making the changes available to all users. In the unfortunate event that things go sideways in the push to prod, canary deployments help minimize the resulting downtime, contain the negative effects to a small number of users, and make it easier to initiate a rollback if necessary. In a nutshell, think of a canary deployment as a phased or incremental rollout.
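In plain Kubernetes, the simplest way to approximate this phased rollout is to run the stable and canary versions as two Deployments behind one Service and control the traffic split through replica counts. The following is a minimal sketch, not a production configuration; the names, labels, and image tags are illustrative:

```yaml
# Stable version: 9 replicas receive roughly 90% of traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable            # hypothetical name
spec:
  replicas: 9
  selector:
    matchLabels: {app: myapp, track: stable}
  template:
    metadata:
      labels: {app: myapp, track: stable}
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # assumed image
---
# Canary version: 1 replica receives roughly 10% of traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: myapp, track: canary}
  template:
    metadata:
      labels: {app: myapp, track: canary}
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.1   # the new release
---
# The Service selects only the shared "app" label, so it load-balances
# across both Deployments in proportion to their replica counts.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: {app: myapp}
  ports:
  - port: 80
    targetPort: 8080
```

The drawback of this approach is its coarse granularity: the traffic split can only be tuned by changing replica counts, which is one reason service meshes such as Istio and Linkerd are used for finer-grained control.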

Hands-on With Istio Service Mesh: Implementing Canary Deployment

In this hands-on exercise, we will build a Kubernetes cluster, install Istio on it, build two simple dockerized microservices using Spring Boot, deploy them to the cluster, and configure a canary deployment using Istio VirtualService and DestinationRule resources.
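As a preview of what the exercise configures: in Istio, a canary split is expressed with a DestinationRule that defines version subsets and a VirtualService that assigns traffic weights to them. This is a hedged sketch, assuming the two services are pods labeled `version: v1` and `version: v2` behind a Kubernetes Service named `myapp`; the names and weights are illustrative:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp                 # Kubernetes Service name (assumed)
  subsets:
  - name: v1
    labels:
      version: v1             # pods of the stable release
  - name: v2
    labels:
      version: v2             # pods of the canary release
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90              # 90% of requests go to stable
    - destination:
        host: myapp
        subset: v2
      weight: 10              # 10% of requests go to the canary
```

Unlike replica-based splitting, these weights are independent of pod counts and can be shifted incrementally (for example 90/10, then 50/50, then 0/100) as confidence in the canary grows.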

The exercise assumes a basic knowledge of Kubernetes, Istio, Spring Boot, and Docker. We will use Google Kubernetes Engine (GKE) to build the Kubernetes cluster.