Deploying Kafka on OpenShift

This article describes an easy way for developers to deploy Kafka on Red Hat OpenShift.

Managed Services

There are multiple ways to use Kafka in the cloud. One way is to use IBM’s managed Event Streams service or Red Hat’s managed service, OpenShift Streams for Apache Kafka. The big advantage of managed services is that you don’t have to worry about managing, operating, and maintaining the messaging systems. As soon as you deploy services in your own clusters, you are usually responsible for managing them. Even if you use operators that help with day-2 tasks, you will have to perform some extra work compared to managed services.

Centralized Logging for Kafka on Kubernetes With Grafana, Loki, and Promtail

Introduction

In one of my other articles, I discussed how to set up Strimzi (Kafka on Kubernetes) on Minikube, and how to set up Grafana and Prometheus to fetch metrics from the Kafka and ZooKeeper instances. But wouldn't it be even more helpful and administrator-friendly if Grafana could also be used to monitor the logs of all the pods? When there are multiple ZooKeeper and Kafka pods, a single window is certainly a boon for administrators and management.

Grafana Labs provides Loki, a log aggregation system, and Promtail, an agent that ships pod logs to Loki, so that logs can be viewed from the same Grafana UI.
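Besides browsing logs in the Grafana UI, Loki also exposes an HTTP API that can be queried directly with LogQL. The following is a minimal sketch, assuming Loki is reachable at loki:3100 inside the cluster and that Promtail attaches a namespace label to the Kafka pods (the address, port, and label name are assumptions, not values from the article):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class LokiLogQuery {
        public static void main(String[] args) throws Exception {
            // LogQL selector for all pods in the "kafka" namespace
            // (assumes Promtail's Kubernetes service discovery adds a "namespace" label).
            String logql = URLEncoder.encode("{namespace=\"kafka\"}", StandardCharsets.UTF_8);

            // Loki's query_range endpoint; adjust host/port to wherever Loki runs in your cluster.
            String url = "http://loki:3100/loki/api/v1/query_range?query=" + logql + "&limit=100";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // The response is JSON; data.result contains the matching log streams and lines.
            System.out.println(response.body());
        }
    }

This is the same API Grafana uses under the hood when you add Loki as a data source, so anything you can see in the Explore view can also be pulled programmatically.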

Connecting Apicurio Registry With Secured Strimzi Clusters

Apicurio Registry is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. Apicurio Registry decouples the structure of your data from your client applications, and enables you to share and manage your data types and API descriptions at runtime. Decoupling your data structure from your client applications reduces costs by decreasing overall message size (only a reference to the registered schema travels with each message), and it creates efficiencies by increasing consistent reuse of schemas and API designs across your organization.
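To make that decoupling concrete, here is a rough sketch of a Java producer that serializes Avro messages through an Apicurio Registry instance. The bootstrap address, registry URL, topic name, and configuration property names are assumptions based on the Apicurio Registry 2.x Kafka serdes and a typical Strimzi bootstrap service, not values taken from the article:

    import java.util.Properties;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class GreetingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Bootstrap address of the (Strimzi-managed) Kafka cluster -- adjust for your environment.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Apicurio's Avro serializer looks up (or registers) the schema in the registry at runtime.
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "io.apicurio.registry.serde.avro.AvroKafkaSerializer");
            // Registry endpoint and auto-registration; property names follow the Apicurio 2.x serde config.
            props.put("apicurio.registry.url", "http://apicurio-registry:8080/apis/registry/v2");
            props.put("apicurio.registry.auto-register", "true");

            Schema schema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Greeting\",\"fields\":"
                  + "[{\"name\":\"message\",\"type\":\"string\"}]}");
            GenericRecord record = new GenericData.Record(schema);
            record.put("message", "hello");

            try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
                // Only a schema reference travels with the payload; consumers fetch the schema from the registry.
                producer.send(new ProducerRecord<>("greetings", record));
            }
        }
    }

Connecting the serializer to a secured Strimzi cluster additionally requires the usual Kafka security properties (security.protocol, SASL/TLS settings), which is what the article goes on to cover.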

Some of the most common use cases where Apicurio Registry helps us are:

Kafka on Kubernetes, the Strimzi Way! (Part 1)

Some of my previous blog posts (such as Kafka Connect on Kubernetes, the easy way!) demonstrate how to use Kafka Connect in a Kubernetes-native way. This is the first in a series of blog posts that will cover Apache Kafka on Kubernetes using the Strimzi Operator. In this post, we will start off with the simplest possible setup, i.e., a single-node Kafka (and ZooKeeper) cluster, and learn:

  • Strimzi overview and setup
  • Kafka cluster installation
  • Kubernetes resources used/created behind the scenes
  • Testing the Kafka setup using clients within the Kubernetes cluster (see the client sketch at the end of this section)

The code is available on GitHub - https://github.com/abhirockzz/kafka-kubernetes-strimzi
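As a quick illustration of the last point, a plain Kafka client run from a pod inside the cluster is enough to smoke-test the setup. The sketch below assumes a Strimzi cluster named my-cluster (Strimzi exposes it through a <cluster-name>-kafka-bootstrap service) and a topic called test-topic; both names are assumptions for illustration:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SmokeTest {
        public static void main(String[] args) {
            // Strimzi bootstrap service for an assumed cluster named "my-cluster".
            String bootstrap = "my-cluster-kafka-bootstrap:9092";
            String topic = "test-topic";

            // Produce a handful of test messages.
            Properties producerProps = new Properties();
            producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
                for (int i = 0; i < 5; i++) {
                    producer.send(new ProducerRecord<>(topic, "key-" + i, "hello-" + i));
                }
            }

            // Read them back to confirm the cluster is working end to end.
            Properties consumerProps = new Properties();
            consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "smoke-test");
            consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
                consumer.subscribe(List.of(topic));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
                }
            }
        }
    }

Running something like this (or the equivalent console producer/consumer scripts shipped with Kafka) from a pod in the same namespace confirms that the Strimzi-managed cluster accepts and serves traffic before you wire up real applications.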