Integrate Oracle Database With Apache Kafka Using Debezium

Oracle Databases are used for traditional enterprise applications, cloud-native use cases, and departmental systems in large enterprises. The Debezium connector for Oracle is a great way to capture data changes from the transactional system of record and make them available for use cases such as replication, data warehousing, and real-time analytics.

What Is Debezium?

Debezium is an open source distributed platform for change data capture (CDC) that provides Apache Kafka Connect connectors for several databases, including Oracle.
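
To give a concrete feel for the deployment, here is a minimal sketch that registers an Oracle connector through the Kafka Connect REST API. The endpoint, credentials, and database names are placeholder assumptions, and the property names follow Debezium 2.x (1.x releases used database.server.name and database.history.* keys instead):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterOracleConnector {
    public static void main(String[] args) throws Exception {
        // Connector configuration; every connection value below is a
        // placeholder for illustration, not a reference deployment.
        String config = """
            {
              "name": "oracle-inventory-connector",
              "config": {
                "connector.class": "io.debezium.connector.oracle.OracleConnector",
                "database.hostname": "oracle-host",
                "database.port": "1521",
                "database.user": "c##dbzuser",
                "database.password": "dbz",
                "database.dbname": "ORCLCDB",
                "topic.prefix": "server1",
                "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
                "schema.history.internal.kafka.topic": "schema-changes.inventory"
              }
            }
            """;

        // POST the configuration to the Kafka Connect REST API, assumed
        // here to be listening on localhost:8083.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(config))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Once the connector is running, every committed change to the monitored tables lands on a Kafka topic prefixed with the configured topic.prefix, ready for downstream consumers.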

Alternatives to Docker Desktop

Last year, Docker announced a change to its subscription service agreement that limited free usage of Docker Desktop effective August 31, 2021, with a grace period until January 31, 2022. The grace period is over, so what are your options if you don't fall into any of the categories allowed to keep running it for free, or if you just want to explore alternatives? In this post, we will go over Podman and Rancher Desktop to check whether they can replace Docker Desktop.

A lot of ink has been spilled about the goals behind the change in the product subscription, and I won't be able to cover them in this article, so I will just summarize the actual changes. First, Docker Desktop is still free for personal use, open source projects, and small businesses. The real difference comes for subscribers who use it for professional work. Docker sets the bar at 250 employees and $10 million in annual revenue. If your employer exceeds either of those limits, you will need a professional plan, starting at $5 per user per month, to comply.

Three easy ways to run Kafka without ZooKeeper

It has been a couple of years since the announcement that Apache ZooKeeper would be removed as the dependency for managing Apache Kafka metadata. Since version 2.8, we can now run a Kafka cluster without ZooKeeper. This article will go over three easy ways to get started with a single-node cluster using containers.
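
Whichever container option you pick, a quick AdminClient check confirms the ZooKeeper-less cluster is alive. This is a minimal sketch, assuming the broker advertises a listener on localhost:9092:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class KRaftSmokeTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // localhost:9092 is an assumption; point this at your container's
        // advertised listener.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // With KRaft there is no ZooKeeper to query; the broker itself
            // answers cluster metadata requests.
            String clusterId = admin.describeCluster().clusterId().get();
            int brokers = admin.describeCluster().nodes().get().size();
            System.out.printf("cluster id: %s, brokers: %d%n", clusterId, brokers);
            System.out.println("topics: " + admin.listTopics().names().get());
        }
    }
}
```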

Control and data planes

Apache Kafka implements independent control and data planes for its clusters. The control plane manages the cluster, keeps track of which brokers are alive, and takes action when that set changes. Meanwhile, the data plane consists of the features required to handle producers and consumers and their records. In previous iterations, ZooKeeper was the cluster component that held most of the implementation of the control plane.
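
With Kafka 3.3+ clients, you can observe the KRaft control plane directly by asking for the metadata quorum. A hedged sketch, again assuming a local single-node cluster on localhost:9092:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.QuorumInfo;

public class QuorumCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Ask the control plane which node currently leads the metadata
            // quorum; in a single-node cluster it is the only voter.
            QuorumInfo quorum = admin.describeMetadataQuorum().quorumInfo().get();
            System.out.println("quorum leader id: " + quorum.leaderId());
            System.out.println("voters: " + quorum.voters());
        }
    }
}
```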

Event-Driven APIs and Schema Governance for Apache Kafka

As a developer, I'm always excited to attend so many great sessions addressing critical challenges in the Apache Kafka ecosystem. One example is how changes to event-driven APIs are leading developers to focus on contract-first development for Kafka.

This article discusses the journey Kafka users have taken to get on the API bandwagon and how developers are using contracts to describe brokers without losing control of the data in their clusters. A critical component for effective schema governance is having a schema registry such as Apicurio Registry.
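
As a small taste of contract-first development in practice, the sketch below publishes an Avro schema to Apicurio Registry over its v2 REST API. The registry URL, group, and artifact id are assumptions for a local setup:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PublishSchema {
    public static void main(String[] args) throws Exception {
        // A tiny Avro schema describing the event payload: the "contract".
        String avroSchema = """
            {
              "type": "record",
              "name": "Greeting",
              "fields": [
                { "name": "message", "type": "string" },
                { "name": "time", "type": "long" }
              ]
            }
            """;

        // POST it to Apicurio Registry's v2 API under the default group.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(
                "http://localhost:8080/apis/registry/v2/groups/default/artifacts"))
            .header("Content-Type", "application/json")
            .header("X-Registry-ArtifactId", "greeting-value")
            .header("X-Registry-ArtifactType", "AVRO")
            .POST(HttpRequest.BodyPublishers.ofString(avroSchema))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```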

Deploying an Apache Kafka mock service with Microcks

Developers are building new applications every day using Apache Kafka as the backbone of an event-driven architecture (EDA) to support distributed systems. However, this adds new challenges when sharing those services across teams, even within the same organization. Which endpoints are available? What is the structure of the message? That's why payload examples have become critical to speeding up development. For this reason, having a reliable, enterprise-grade service to mock Apache Kafka should be an item on your EDA checklist. This post will do a quick review of the Microcks General Availability (GA) version and its support for Kafka.

What is Microcks?

Microcks is an open source, Kubernetes-native tool for mocking and testing APIs. It turns API contracts, such as AsyncAPI and OpenAPI definitions, and the examples they embed into live mocks, which is what makes it useful for simulating Kafka endpoints.

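To give a sense of the developer experience, here is a minimal consumer sketch that reads the mock events Microcks publishes to Kafka. The broker address and topic name are hypothetical, since Microcks derives topic names from the service name, version, and operation in the definition:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MockEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mock-checker");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Hypothetical topic name following Microcks' naming scheme.
            consumer.subscribe(List.of("UserSignedUpAPI-0.1.0-user-signedup"));
            ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("mock payload: %s%n", record.value());
            }
        }
    }
}
```
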
Three ways to run Kubernetes on your laptop with Docker

In the past, running a Kubernetes cluster on your laptop required several servers to get started, or running heavyweight virtual machines in your local environment. In recent years, different projects have emerged to make it easier to get started with Kubernetes on your computer and remove the dependency on VMs. This post will cover three of them to help you get started with Kubernetes on your laptop using just Docker.

First of all, Kubernetes is an open source platform for managing workloads by abstracting the underlying infrastructure. It helps you manage microservice containers and makes them portable across clouds, including your laptop. As a developer, you may care about Kubernetes because it is the de facto standard for deploying and managing containerized applications.

Replacing Confluent Schema Registry with Red Hat Integration

Along with the enhancements for Apache Kafka-based environments, Red Hat announced the Red Hat Integration service registry to help teams govern their service schemas. Developers can now use the registry to query for the schemas and artifacts required by each service endpoint, or to register and store new structures for future use.

Registry for event-driven architecture

Red Hat Integration's service registry, based on the Apicurio Registry project, provides a way to decouple the schema used to serialize and deserialize Kafka messages from the applications that send and receive them. The service registry is a store for schema (and API design) artifacts that provides a REST API and a set of optional rules for enforcing content validity and evolution. The registry handles data formats such as Apache Avro, JSON Schema, and Google Protocol Buffers (Protobuf), as well as OpenAPI and AsyncAPI definitions.
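
The decoupling shows up in the client configuration: the producer serializes with a registry-aware serializer instead of embedding the schema in each record. Here is a minimal sketch using Apicurio's Avro serdes; the registry URL and property names are assumed from the 2.x serdes module, so check them against your version:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RegistryAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            StringSerializer.class.getName());
        // Registry-aware serializer from the Apicurio Avro serdes module; it
        // stores a schema reference in the registry instead of shipping the
        // full schema inside every record.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "io.apicurio.registry.serde.avro.AvroKafkaSerializer");
        // Assumed local registry endpoint; auto-register uploads the schema
        // on first use so producers and consumers share one source of truth.
        props.put("apicurio.registry.url", "http://localhost:8080/apis/registry/v2");
        props.put("apicurio.registry.auto-register", "true");

        // The value would normally be a generated Avro object; a null
        // placeholder keeps this sketch focused on the wiring.
        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("greetings", "key-1", null));
        }
    }
}
```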

Getting started with Apache Kafka and Red Hat service registry

New projects require some help. Imagine you are getting ready to start that new feature your business has been asking about for the last couple of months. Your team is ready to start coding the awesome new thing that will change your business.

To achieve it, the team will need to interact with the existing software components of your organization. Your developers will need to interact with API services and event endpoints already available in your architecture. Before being able to send and process information, developers need to be aware of the structure, or schema, expected by those services.

Debezium Serialization With Apache Avro and Apicurio Service Registry

This article demonstrates how to use Debezium to monitor a MySQL database and then use Apache Avro with the Apicurio service registry to externalize the data schema and reduce the payload size of each captured event.

What Is Debezium?

Debezium is a set of distributed services that captures row-level database changes so that applications can see and respond to them. Debezium connectors record all events to a Red Hat AMQ Streams Kafka cluster, and applications use AMQ Streams to consume those change events. Debezium is built on the Apache Kafka Connect framework, which makes all of Debezium's connectors Kafka Connect source connectors. As such, they can be deployed and managed using AMQ Streams' Kafka Connect custom Kubernetes resources.
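
Putting the pieces together, switching a connector from the default JSON converter (which embeds the full schema in every event) to Avro backed by Apicurio comes down to a handful of converter properties. The property names below follow the Apicurio converter module that Debezium documents, and the registry URL is an assumption for a local setup:

```java
import java.util.Properties;

public class AvroConverterConfig {
    // Connector configuration fragment that externalizes schemas to the
    // registry so each change event carries only a small schema reference.
    public static Properties converterProperties() {
        Properties props = new Properties();
        props.put("key.converter",
            "io.apicurio.registry.utils.converter.AvroConverter");
        props.put("value.converter",
            "io.apicurio.registry.utils.converter.AvroConverter");
        // Assumed local registry endpoint; auto-register stores each captured
        // table's schema in the registry the first time it is seen.
        props.put("key.converter.apicurio.registry.url",
            "http://localhost:8080/apis/registry/v2");
        props.put("key.converter.apicurio.registry.auto-register", "true");
        props.put("value.converter.apicurio.registry.url",
            "http://localhost:8080/apis/registry/v2");
        props.put("value.converter.apicurio.registry.auto-register", "true");
        return props;
    }
}
```

With these converters in place, consumers fetch each schema from the registry by ID, and the change events themselves shrink to little more than the row data.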