Run Containers and VMs Together With KubeVirt

Although many enterprises have deployed Kubernetes and containers, most also operate virtual machines. As a result, the two environments will likely co-exist for years, creating operational complexity and adding cost in time and infrastructure.

Without going into the pros and cons of one versus the other, it’s helpful to remember that each virtual machine (VM) contains its own instance of a full operating system and is intended to operate as if it were a standalone server, hence the name. By contrast, in a containerized environment, multiple containers share one instance of an operating system, almost always some flavor of Linux.

Running KVM and VMware VMs in Container Engine for Kubernetes

With the advent of microservices, people commonly ask, "Is it possible to run my legacy virtual machines in Kernel-based Virtual Machine (KVM) or VMware with my microservices on Kubernetes, or do I need to migrate them to containers?" One possible answer to that question is KubeVirt.

The KubeVirt project turns Kubernetes into an orchestration engine for application containers and VM workloads. It addresses the needs of development teams that have adopted or want to adopt Kubernetes but have existing VM-based workloads that they can’t easily put in containers. The technology provides a unified development platform in which developers can build, modify, and deploy applications that reside in both application containers and VMs in a common, shared environment.
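The article stays at the conceptual level, but a small sketch may make the idea concrete. Assuming KubeVirt is already installed in the cluster and a local kubeconfig is available, a VM can be declared as a Kubernetes custom resource and created with the official Python client; the VM name, namespace, and container-disk image below are illustrative placeholders, not values from the article.

```python
# Minimal sketch: declare a KubeVirt VirtualMachine and create it through the
# Kubernetes API. Assumes KubeVirt is installed and kubeconfig credentials exist.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,  # start the VM as soon as it is created
        "template": {
            "metadata": {"labels": {"kubevirt.io/domain": "demo-vm"}},
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "128Mi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # Demo image published by the KubeVirt project; a real
                        # workload would reference its own disk image.
                        "containerDisk": {
                            "image": "quay.io/kubevirt/cirros-container-disk-demo"
                        },
                    }
                ],
            },
        },
    },
}

# VirtualMachine is a custom resource, so it is created via CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm_manifest,
)
```

Once created, the VM is scheduled and managed by Kubernetes alongside ordinary pods, which is the shared environment described above.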

Traffic Management With Istio (1): Unified Management of TCP Ingress Traffic Routing Based on Istio Rules

The Istio traffic management model decouples traffic flow from infrastructure scaling: operations personnel use Pilot to specify the rules they want applied to traffic rather than specifying which pods or VMs should receive it. This decoupling lets Istio provide a variety of traffic management functions independently of the application code, with the Envoy sidecar proxy implementing those functions.
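As a rough sketch of what such a rule looks like in practice (not taken from the article), the following defines an Istio VirtualService that routes TCP traffic arriving on port 31400 of an ingress gateway to a backend service; the gateway name, service host, and port numbers are illustrative.

```python
# Sketch: a TCP routing rule expressed as an Istio VirtualService and created
# through the Kubernetes custom-resource API. Names and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()

tcp_virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "tcp-echo-routing"},
    "spec": {
        "hosts": ["*"],
        "gateways": ["tcp-echo-gateway"],   # the Gateway exposing the TCP port
        "tcp": [
            {
                "match": [{"port": 31400}],  # TCP traffic entering on this port...
                "route": [
                    {
                        "destination": {     # ...is forwarded to this service
                            "host": "tcp-echo.default.svc.cluster.local",
                            "port": {"number": 9000},
                        }
                    }
                ],
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="virtualservices",
    body=tcp_virtual_service,
)
```

Pilot distributes a rule like this to the Envoy proxies, which then carry out the actual routing.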

In a typical mesh, you often have one or more load balancers (known as gateways) that terminate external TLS connections at the edge and guide traffic into the mesh; the traffic then flows to the internal services behind the gateway through their sidecar proxies. The following figure illustrates the use of gateways in a mesh:
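For concreteness, here is a minimal sketch (again, not from the article) of such a gateway: an Istio Gateway bound to the default istio-ingressgateway deployment that terminates HTTPS at the edge before traffic enters the mesh. The hostname and TLS secret name are placeholders, and it is applied the same way as the VirtualService above.

```python
# Sketch: an edge Gateway that terminates external TLS for the mesh.
from kubernetes import client, config

config.load_kube_config()

edge_gateway = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "Gateway",
    "metadata": {"name": "edge-gateway"},
    "spec": {
        # Select the Envoy pods that act as the mesh's load balancer.
        "selector": {"istio": "ingressgateway"},
        "servers": [
            {
                "port": {"number": 443, "name": "https", "protocol": "HTTPS"},
                # Terminate external TLS here; credentialName refers to a
                # Kubernetes secret holding the certificate and private key.
                "tls": {"mode": "SIMPLE", "credentialName": "edge-cert"},
                "hosts": ["app.example.com"],
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="istio-system",
    plural="gateways",
    body=edge_gateway,
)
```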

Azure Kubernetes Service (AKS) Security Features

Today, we are deploying a Kubernetes cluster for our application. One advantage Azure Kubernetes Service (AKS) has over similar Kubernetes platforms is that the user does not pay for the master VMs or their maintenance; an Azure subscriber pays only for the worker VMs. However, out of the box, AKS is not a production-ready product. The following are the steps we need to take to get it close to production-ready.
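Purely as an illustrative sketch (the subscription ID, resource group, region, node size, and names are placeholders, and the SDK surface may differ between versions), creating a basic AKS cluster with the Azure SDK for Python might look like the following. Note that only the worker node pool is declared; the managed control plane is provisioned by Azure and is not something the subscriber pays for directly.

```python
# Illustrative sketch only: create a basic AKS cluster with the Azure SDK for
# Python (azure-identity + azure-mgmt-containerservice). All names, the region,
# and the VM size are placeholders; the resource group must already exist.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

credential = DefaultAzureCredential()
aks_client = ContainerServiceClient(credential, subscription_id="<subscription-id>")

cluster = ManagedCluster(
    location="westeurope",
    dns_prefix="demo-aks",
    # A system-assigned managed identity avoids handling service principal secrets.
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    # Only the worker nodes are declared; the control plane is managed by Azure.
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="nodepool1",
            count=3,
            vm_size="Standard_DS2_v2",
            mode="System",
            os_type="Linux",
        )
    ],
)

# Long-running operation: .result() blocks until the cluster is provisioned.
result = aks_client.managed_clusters.begin_create_or_update(
    resource_group_name="demo-rg",
    resource_name="demo-aks",
    parameters=cluster,
).result()
print(result.provisioning_state)
```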

In this article, we are going to discuss the following topics: