Streaming Data to RDBMS via Kafka JDBC Sink Connector Without Leveraging Schema Registry

In today’s machine-to-machine (M2M) communications landscape, there is a growing need to stream digital data from heterogeneous IoT devices into various RDBMS for further analysis via dashboards and for triggering events that perform numerous actions. To support such scenarios, Apache Kafka acts like a central nervous system where data can be ingested from various IoT devices and persisted into various types of repositories: RDBMS, cloud storage, and so on. Besides, various types of data pipelines can be executed before or after the data arrives at a Kafka topic. By using the Kafka JDBC sink connector, we can stream data continuously from a Kafka topic into the respective RDBMS.
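As a rough illustration, a JDBC sink connector instance is typically created by posting a JSON configuration to the Kafka Connect REST API. The sketch below assumes a topic named transactions and a local MySQL database; the connector name, connection URL, and credentials are placeholders, not values from this article:

{
  "name": "jdbc-sink-transactions",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "transactions",
    "connection.url": "jdbc:mysql://localhost:3306/iot_db",
    "connection.user": "kafka_user",
    "connection.password": "********",
    "auto.create": "true",
    "insert.mode": "insert",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }
}

Setting value.converter to the JSON converter that ships with Kafka Connect, with schemas.enable turned on, is what allows the connector to work without a Schema Registry, as discussed below.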

The Biggest Difficulty With the JDBC Sink Connector

The biggest difficulty with the JDBC sink connector is that it requires knowledge of the schema of the data that has already landed on the Kafka topic. Schema Registry must, therefore, be integrated as a separate component with the existing Kafka cluster in order to transfer the data into the RDBMS. Consequently, to sink data from a Kafka topic to the RDBMS, producers must publish messages/data that contain the schema. The schema defines the structure of the data format. If the schema is not provided, the JDBC sink connector cannot map the consumed messages to the database table’s columns.
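For example, when no Schema Registry is in place and the JSON converter is used with value.converter.schemas.enable=true, each message published by the producer must carry its own schema alongside the payload, roughly in the following shape. The field names, types, and values here are illustrative assumptions, not taken from the article:

{
  "schema": {
    "type": "struct",
    "name": "transactions",
    "fields": [
      { "field": "device_id", "type": "string", "optional": false },
      { "field": "temperature", "type": "float64", "optional": true },
      { "field": "recorded_at", "type": "int64", "optional": false }
    ]
  },
  "payload": {
    "device_id": "sensor-042",
    "temperature": 23.7,
    "recorded_at": 1688972400000
  }
}

The connector reads the schema block to map device_id, temperature, and recorded_at to columns of the target table; without it, the connector has no way to determine the column names or types.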
