How to Create — and Configure — Apache Kafka Consumers

Apache Kafka’s real-time data processing relies on Kafka consumers that read messages within its infrastructure. Producers publish messages to Kafka topics, and consumers, often organized into consumer groups, subscribe to those topics to receive messages as they arrive. A consumer tracks its position in each partition using an offset. To configure a consumer, developers create one with the appropriate group ID, starting offset, and connection details, then implement a poll loop so the consumer processes arriving messages efficiently.
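
The sketch below shows what that setup and poll loop might look like using the Java Kafka client. The broker address (localhost:9092), group ID (example-group), and topic name (orders) are placeholder assumptions for illustration, not values from the article.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and group ID are placeholder assumptions for this sketch.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        // Start from the earliest offset when no committed offset exists for the group.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribe to a topic; "orders" is a hypothetical topic name.
            consumer.subscribe(Collections.singletonList("orders"));

            // Poll loop: fetch batches of records and process them as they arrive.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

With default settings the consumer commits offsets automatically, so on restart it resumes from its last committed position in each partition.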

It’s an important concept for any organization running Kafka in its 100% open-source, enterprise-ready version, and here’s what to know.

The Effect of Data Storage Strategy on PostgreSQL Performance

PostgreSQL continues to solidify its effectiveness as an enterprise-ready database in its 100% free and open-source version. Data teams should feel confident running open-source PostgreSQL rather than being taken in by less versatile and more costly open-core Postgres repackaging.

That said, backing open-source PostgreSQL with the right supplemental technology strategy can have a profound impact on the value the venerable relational database delivers. For example, enterprises that support their PostgreSQL deployments with fast storage strategies can realize significant performance gains, including substantial increases in the transactions per second (TPS) their servers can handle.