TiDB Operator Source Code Reading (Part 2): Operator Pattern

In my last article, I introduced TiDB Operator's architecture and what it is capable of. But how does the TiDB Operator code actually run? How does TiDB Operator manage the lifecycle of each component in a TiDB cluster?

In this post, I'll present the Kubernetes Operator pattern and how TiDB Operator implements it. More specifically, we'll walk through TiDB Operator's major control loop, from its entry point to the point where lifecycle management is triggered.
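As a quick refresher before diving into the source code, here is a minimal sketch, in Go with purely illustrative types and names (not TiDB Operator's actual code), of the control-loop idea behind the Operator pattern: observe the desired state declared by the user, compare it with the actual state in the cluster, and act to converge the two.

```go
package main

import "time"

// Hypothetical types: a desired specification and the state observed
// from the cluster. These are illustrative, not TiDB Operator's API.
type Spec struct{ Replicas int }
type Status struct{ ReadyReplicas int }

// observe would query the Kubernetes API for the current state.
func observe() (Spec, Status) { return Spec{Replicas: 3}, Status{ReadyReplicas: 2} }

// reconcile drives the actual state toward the desired state.
func reconcile(desired Spec, actual Status) {
	switch {
	case actual.ReadyReplicas < desired.Replicas:
		// scale out: create the missing replicas
	case actual.ReadyReplicas > desired.Replicas:
		// scale in: remove the surplus replicas
	default:
		// desired and actual already match; nothing to do
	}
}

func main() {
	// The control loop. A production operator is event-driven
	// (informers and work queues) rather than a fixed-interval poll;
	// the timer here only keeps the sketch self-contained.
	for {
		desired, actual := observe()
		reconcile(desired, actual)
		time.Sleep(30 * time.Second)
	}
}
```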

Creating a Kubernetes Operator in Java by Rudy De Busscher

Kubernetes is much more than a runtime platform for Docker containers. Through its API, you can not only create custom clients but also extend Kubernetes itself with custom controllers. These custom controllers, called Operators, work with application-specific custom resource definitions. You can write Kubernetes Operators not only in Go but also in Java. This talk guides you through setting up and making your first explorations of the Kubernetes API from a plain Java program, covering resource listeners, the programmatic creation of Deployments and Services, and how these can be applied to your own requirements.
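The talk itself is in Java, but the underlying API concepts are language-neutral. As a rough illustration in Go using client-go (assuming the program runs in-cluster with a suitable service account, and with placeholder names and images), programmatic creation of a Deployment can look like this:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Assumes the program runs inside a Pod with a service account
	// that is allowed to create Deployments.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A minimal Deployment object; name, labels, and image are placeholders.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "demo"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "demo", Image: "nginx:1.25"}},
				},
			},
		},
	}

	// Create the Deployment in the "default" namespace.
	_, err = clientset.AppsV1().Deployments("default").Create(
		context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```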

Oxidizing the Kubernetes Operator

Some applications are hard to manage in Kubernetes, requiring a lot of manual labor and time to operate. Some are hard to set up, some need special care for restarts, and some are both; it differs from application to application. What if the knowledge of how to operate an application could be extracted into dedicated software, relieving human operators? That is exactly what Kubernetes operators are about. An operator, a.k.a. custom controller, automates what a human operator would do with the application to make it run successfully. First, Kubernetes is taught about the application the custom controller will operate; this is done simply by creating a custom resource definition, which extends the Kubernetes API so that Kubernetes recognizes the new resource type. The operator then watches events on such custom resources and acts upon them, hence the name custom controller: a controller for custom resources.

Rust is an extraordinary language to implement operators in. A typical Kubernetes operator ends up making lots of calls to the Kubernetes API: watching resource states, creating new resources, and updating or deleting old ones. An operator should also be able to manage multiple resources at a time, ideally in parallel. Rust's asynchronous programming model is a perfect match for building high-performance, high-throughput operators. With a threaded runtime such as Tokio, an operator can manage a vast number of custom resources in parallel. Rust is an ahead-of-time compiled language with no runtime, no garbage-collection pauses, and C-level performance. An operator typically resides inside a Pod in Kubernetes, and such a Pod can be restarted at any time; near-instant startup minimizes the delay before the operator resumes handling the state of its managed resources. Rust also guarantees memory safety and eliminates data races, which is especially helpful for heavily concurrent and parallel applications like Kubernetes operators.
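Language choice aside, the watch-and-react mechanic described above is the heart of any custom controller. As a hedged illustration in Go, using client-go's dynamic informers and a purely hypothetical `example.com/v1` resource named `databases`, the skeleton looks roughly like this:

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A hypothetical custom resource: group "example.com", version "v1",
	// plural resource name "databases".
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "databases"}

	factory := dynamicinformer.NewDynamicSharedInformerFactory(dyn, 30*time.Second)
	informer := factory.ForResource(gvr).Informer()

	// React to events on the custom resource: the core of a custom controller.
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* reconcile a newly created resource */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* reconcile a changed resource */ },
		DeleteFunc: func(obj interface{}) { /* clean up after a deleted resource */ },
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; a real operator would run reconcile workers here
}
```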

Manage Azure Event Hubs With Azure Service Operator on Kubernetes

Azure Service Operator is an open source project to help you provision and manage Azure services using Kubernetes. Developers can use it to provision Azure services from any environment, be it Azure, any other cloud provider, or on-premises — Kubernetes is the only common denominator!

It can also be included as part of CI/CD pipelines to create, use, and tear down Azure resources on demand. Behind the scenes, all the heavy lifting is handled by a combination of Custom Resource Definitions, which define the Azure resources, and the corresponding Kubernetes Operator(s), which ensure that the state declared in those custom resources is reflected in Azure as well.
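To make that flow concrete: a custom resource declares the Azure resource you want, and the operator reconciles Azure to match. Below is a purely hypothetical Go sketch of what such a custom resource type could look like; the field names are illustrative only and are not Azure Service Operator's actual API.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EventHubSpec is a hypothetical desired-state description of an
// Azure Event Hub. Field names are illustrative, not the real API.
type EventHubSpec struct {
	Namespace        string `json:"namespace"`        // Event Hubs namespace in Azure
	ResourceGroup    string `json:"resourceGroup"`    // Azure resource group to create it in
	PartitionCount   int32  `json:"partitionCount"`   // number of partitions
	MessageRetention int32  `json:"messageRetention"` // retention, in days
}

// EventHubStatus reports what the operator has observed in Azure.
type EventHubStatus struct {
	Provisioned bool   `json:"provisioned"`
	Message     string `json:"message,omitempty"`
}

// EventHub is the custom resource the operator watches; its spec is the
// desired state, and the operator keeps Azure in sync with it.
type EventHub struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   EventHubSpec   `json:"spec,omitempty"`
	Status EventHubStatus `json:"status,omitempty"`
}
```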

Cloud-Native Benchmarking With Kubestone

Intro

Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. At the same time, modern enterprises need the ability to benchmark their applications and stay aware of key performance metrics for their infrastructure.

In this post, I am introducing you to a cloud-native benchmarking tool known as Kubestone. It is meant to help your development teams gather performance metrics from your Kubernetes clusters.

How Does Kubestone Work?

At its core, Kubestone is implemented as a Kubernetes Operator in Go with the help of Kubebuilder. You can find more information on the Operator Framework in this blog post.
Kubestone leverages open-source benchmarks to measure core Kubernetes and application performance. Because benchmarks are executed in Kubernetes, they must be containerized to run on the cluster. A certified set of benchmark containers, covering the currently supported benchmarks, is provided via xridge's Docker Hub space.
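Since Kubestone is built with Kubebuilder, its controllers follow the controller-runtime reconcile pattern that Kubebuilder scaffolds. The sketch below is a generic, simplified reconciler of that shape, using a recent controller-runtime signature and a hypothetical Benchmark type; it is not Kubestone's actual code.

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// BenchmarkReconciler is a hypothetical, Kubebuilder-style controller.
// Kubebuilder scaffolds a struct like this and wires it to the manager.
type BenchmarkReconciler struct {
	client.Client
}

// Reconcile is called whenever a watched Benchmark resource changes.
// A real implementation would fetch the resource, create the benchmark
// workload it describes, and report progress in the resource's status.
func (r *BenchmarkReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Get the Benchmark custom resource named in req.NamespacedName.
	// 2. If its workload does not exist yet, create a Job running the
	//    containerized benchmark.
	// 3. Mirror the Job's progress into the Benchmark's status subresource.
	return ctrl.Result{}, nil
}
```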

First Steps with the Kubernetes Operator

This blog post demonstrates how you can use the Operator Lifecycle Manager to deploy a Kubernetes Operator to your cluster. Then, you will use the Operator to spin up an Elasticsearch cluster with Elastic Cloud on Kubernetes (ECK).

WSO2 Announces API Manager 3.0

WSO2 has announced the release of API Manager 3.0, the latest iteration of the company’s open-source API management solution. With this update, WSO2 has added a native Kubernetes Operator, which it hopes will simplify application configuration in cloud-native environments. Additionally, V3.0 includes updates to various user interfaces, a new monetization method, and a pre-defined CI/CD pipeline.