Effortlessly Streamlining Test-Driven Development and CI Testing for Kafka Developers

Test-driven development has gained popularity among developers because it provides instant feedback and surfaces defects and problems early. Once the application is developed, it is equally important to run automated tests during continuous integration (CI), covering all possible scenarios before the application is built and deployed, so that defects and issues are caught early.

Apache Kafka® provides a distributed, fault-tolerant streaming system that allows applications to communicate with each other asynchronously. Whether you are building microservices or data pipelines, it allows applications to be more loosely coupled for better scalability and flexibility. At the same time, though, it also introduces a lot more complexity to the environment.

Getting Started With Redpanda in Kubernetes

Redpanda is a free and open source event streaming platform, similar in spirit to MariaDB and CockroachDB. It is compatible with the Kafka API and is used by many as an alternative to Apache Kafka due to its performance and lightweight design.

Kubernetes (K8s) is the de facto platform for cloud-native environments, so it’s not surprising that many developers choose it to manage their Redpanda clusters. But when things go wrong, it’s not as simple as “kill it, dump it, and rebuild” — much like with other data-intensive software such as databases, messaging systems, and Apache Kafka® itself. This is especially true when you’re streaming vast amounts of data at high throughput.

What Is Advertised Kafka Address?

Let’s start with the basics. After successfully starting a Redpanda or Apache Kafka® cluster, you want to stream data into it right away. No matter which tool and language you choose, you will immediately be asked for a list of bootstrap servers for your client to connect to.

The bootstrap server is just the entry point: your client initiates a connection to one of the brokers in the cluster, and that broker provides the client with an initial set of metadata. The metadata tells the client which brokers are currently available and which broker hosts the leader of each partition, so that the client can then open direct connections to all the brokers individually. The diagram below will give you a better idea.
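The bootstrap-then-connect-to-leaders flow can be sketched in plain Python. This is only an illustration of the protocol logic described above, not a real client; the broker addresses, broker IDs, and partition layout are all made up, and in practice a library such as kafka-python or a Redpanda-compatible client performs these steps for you over the network:

```python
# Sketch of the bootstrap/metadata flow (hypothetical brokers and partitions).
# A real client does a network round trip; here the metadata is canned.

CLUSTER_METADATA = {
    # broker id -> address the broker *advertises* to clients
    "brokers": {
        0: "broker-0.internal:9092",
        1: "broker-1.internal:9092",
        2: "broker-2.internal:9092",
    },
    # partition -> broker id of that partition's leader
    "partition_leaders": {0: 1, 1: 2, 2: 0},
}

def bootstrap(bootstrap_servers):
    """Contact any one of the bootstrap servers and fetch cluster metadata."""
    entry_point = bootstrap_servers[0]  # any reachable broker will do
    # (In a real client this is a metadata request sent to `entry_point`.)
    return CLUSTER_METADATA

def leader_address(metadata, partition):
    """Resolve the advertised address of the broker leading `partition`."""
    leader_id = metadata["partition_leaders"][partition]
    return metadata["brokers"][leader_id]

metadata = bootstrap(["broker-0.internal:9092"])
print(leader_address(metadata, 0))  # broker-1.internal:9092
```

Note that the addresses the client receives in the metadata are the brokers’ *advertised* addresses, not necessarily the address it used to bootstrap — which is exactly why the advertised Kafka address matters.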

Tooling Guide for Getting Started With Apache Camel in 2021

Getting started with Apache Camel? This post is for you. I am not going to dive into how to write Camel routes; there are plenty of materials out there that do a better job than I can. A good place to get started is the one and only Camel Blog, which gives you all the latest and greatest news from the community. If you want to start from the basics, I highly recommend the Camel in Action II book; it has everything you need to know about writing Camel. You can also join the Camel community mailing list or Zulip to ask questions; it's a very friendly and welcoming community.

If you are a returning Camel rider, I would go through this post, which will get you up to speed on what to expect. Another good resource is the FAQ page; I found that the majority of getting-started enquiries are answered there.

Integrating SAP With Serverless Camel

SAP is the world's leading enterprise business processing solution, and there will always be a need to connect an organization's core to other SaaS offerings, partners, or even another SAP solution. Red Hat Integration (RHI) offers the flexibility, adaptability, and speed, along with the framework and software, to build an event-driven integration architecture that not only connects platforms but also maintains data consistency across them.

On the HTTP side, SAP offers interfaces such as OData v4, OData v2, RESTful APIs, and SOAP; alternatively, you can use the classic RFC (Remote Function Call) and IDoc messages. A recent event enablement add-on also offers the AMQP and MQTT protocols. Camel in RHI allows you to seamlessly connect over any of your preferred protocols: developers simply configure the connection to the endpoints with their address, credentials, and/or SSL settings.

Serverless in Financial Services

Introduction

The financial industry is going through radical change: the new generation of customers expects more precise, immediate, and comprehensive services, and emerging fintech offerings are completely reshaping the industry. New regulations constantly emerge and need to be applied in a timely manner. In 2020, the unexpected global pandemic became another accelerator for this revolution: around the world, branches and offices closed, and more services and operations moved online, to the cloud, and onto devices. At the same time, the technology world is moving toward becoming cloud native — more precisely, Kubernetes native — which completely changes the mindset around deployment, packaging, and software development, and even how IT teams are structured.

Financial institutions have shifted their focus and are investing more heavily in IT infrastructure in order to:

Serverless Eventing Architecture

Event-driven architecture (EDA) is a way of designing applications and services to respond to real-time information based on the sending and receiving of information about individual events. Serverless is all about providing services on a provisioned-as-used basis. Combining the two, you get the best of both worlds.

  • Loose coupling of services: better fault tolerance, plus the ability to add or remove functionality on the fly without affecting other services.

Contract-First Development — the Event-Driven Way!

Introduction

Contract-first application development is not limited to synchronous RESTful API calls. With the adoption of event-driven architecture, more developers are demanding a way to set up contracts between asynchronous event publishers and consumers. Sharing what data format each subscriber consumes and what data format the publisher emits, in the way the OpenAPI standard does for REST, is going to be very helpful.

But the asynchronous world is even more complex: not only do you need to care about the data schema, there are also different protocols, serialization and deserialization mechanisms, and various supporting libraries. In fact, there is ongoing work on AsyncAPI. What I am going to show you, though, is what can be done today, using ONE of the most commonly used solutions for building EDA — Kafka — and how to do contract-first application development with Camel + Apicurio Registry.
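To make the idea of a contract concrete, here is a minimal Avro schema of the kind you could register in Apicurio Registry so that publisher and consumer agree on the event payload. The record name, namespace, and fields are purely illustrative, not taken from any real project:

```json
{
  "type": "record",
  "name": "OrderEvent",
  "namespace": "com.example.events",
  "fields": [
    {"name": "orderId", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "createdAt",
     "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}
```

With the schema stored in the registry, both sides can validate and (de)serialize events against the same contract instead of relying on informal agreements.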

Camel K in a Nutshell

Camel K, a subproject of the famous Apache Camel project, totally changes the way developers work with Kubernetes/OpenShift cloud platforms by automating the nasty configuration and the loads of prep work otherwise demanded of developers. If you are an old-time developer like me, you did your best to slowly adapt to the latest and greatest cloud-native “ecology.” It’s not difficult, but with small things and traps here and there, I’ll tell you it’s not a smooth ride. That’s understandable for an emerging technology; but with the broad adoption of cloud, I see it reaching a level of maturity where we are now thinking about how to make things go faster and how to make them accessible to a larger audience.

Check out some reasons why you might love Camel K.