Kafka Message Filtering: An Analysis

Many companies nowadays use event-driven architectures in their day-to-day business activities, especially when they want their applications to react in real time or near real time.

In such a scenario, a great number of messages are exchanged during the interactions among the three main types of actors: producers, message brokers, and consumers. Nevertheless, under certain circumstances, some of these messages are of no interest and can thus be discarded.
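
As a quick illustration of how such filtering can look in code, Spring for Apache Kafka allows uninteresting records to be dropped before they ever reach the listener via a RecordFilterStrategy. Below is a minimal sketch; the marker string and the factory wiring are assumptions, not the article’s actual setup.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;

    @Configuration
    public class FilteringKafkaConfig {

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> filteringFactory(
                ConsumerFactory<String, String> consumerFactory) {
            var factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
            factory.setConsumerFactory(consumerFactory);
            // Returning true means the record is filtered out (discarded)
            // before the @KafkaListener method is ever invoked.
            factory.setRecordFilterStrategy(record -> !record.value().contains("IMPORTANT"));
            return factory;
        }
    }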

Generative AI With Spring Boot and Spring AI

It’s been more than 20 years since the Spring Framework appeared in the software development landscape and 10 since Spring Boot version 1.0 was released. By now, nobody should have any doubt that Spring has created a unique style through which developers are freed from repetitive tasks and left to focus on delivering business value. As the years passed, Spring’s technical depth has continually increased, covering a wide variety of development areas and technologies. Its technical breadth has expanded as well, as more focused solutions have been experimented with, proofs of concept created, and ultimately promoted under the project’s umbrella, thus feeding back into its technical depth.

One such example is the new Spring AI project which, according to its reference documentation, aims to ease development when incorporating a generative artificial intelligence layer into applications. Once again, developers are freed from repetitive tasks and offered simple interfaces for direct interaction with the pre-trained models that incorporate the actual processing algorithms.
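
To give a feel for how simple such an interaction can be, here is a minimal sketch using the fluent ChatClient; the injected builder and the prompt are assumptions, and the exact API may vary between Spring AI versions.

    import org.springframework.ai.chat.client.ChatClient;
    import org.springframework.stereotype.Service;

    @Service
    public class JokeService {

        private final ChatClient chatClient;

        // Spring AI auto-configures a ChatClient.Builder for the model
        // declared in the application properties.
        public JokeService(ChatClient.Builder builder) {
            this.chatClient = builder.build();
        }

        public String tellJoke(String topic) {
            // One fluent call: build the prompt, invoke the model, read the reply.
            return chatClient.prompt()
                    .user("Tell me a short joke about " + topic)
                    .call()
                    .content();
        }
    }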

PostgreSQL Views With Runtime Parameters

There are many situations in which applications are required to be agile and versatile enough to run dynamic reports whose input comes at runtime.

This article aims to present a way of achieving this by leveraging the temporary configuration parameters supported by PostgreSQL databases.
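
As a taste of the technique, the sketch below defines a view that reads a custom session parameter via current_setting(), then sets that parameter with set_config() right before querying; the employees table and the myapp.min_salary parameter name are purely illustrative.

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class DynamicReportService {

        private final JdbcTemplate jdbc;

        public DynamicReportService(JdbcTemplate jdbc) {
            this.jdbc = jdbc;
        }

        public void createView() {
            // The view references a custom configuration parameter,
            // resolved at query time rather than at view creation time.
            jdbc.execute("""
                    CREATE OR REPLACE VIEW well_paid_employees AS
                    SELECT * FROM employees
                    WHERE salary >= current_setting('myapp.min_salary')::numeric
                    """);
        }

        @Transactional // both statements must share the same transaction
        public int countWellPaid(long minSalary) {
            // set_config(..., true) scopes the value to the current transaction.
            jdbc.queryForObject("SELECT set_config('myapp.min_salary', ?, true)",
                    String.class, String.valueOf(minSalary));
            return jdbc.queryForObject("SELECT count(*) FROM well_paid_employees", Integer.class);
        }
    }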

Acting Soon on Kafka Deserialization Errors

Event-driven architectures have been successfully used for quite some time by many organizations in various business cases. They excel at performance, scalability, evolvability, and fault tolerance, providing a good level of abstraction and elasticity. These strengths make them a good choice when applications need real-time or near real-time reactiveness.

In terms of implementations, for standard messaging, ActiveMQ and RabbitMQ are good candidates, while for data streaming, platforms such as Apache Kafka and Redpanda are more suitable. Usually, when developers and architects need to opt for one of these two directions, they analyze and weigh the options from several angles: message payload, data flow and usage, throughput, and solution topology. As the discussion around these aspects can become quite broad and complex, it is not refined further as part of this article.
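
To hint at what acting on deserialization errors involves in Spring for Apache Kafka, one common approach, sketched below with assumed types and addresses, is to wrap the real deserializer in an ErrorHandlingDeserializer so that a corrupt record surfaces as a handleable error instead of being re-polled forever.

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
    import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer;
    import org.springframework.kafka.support.serializer.JsonDeserializer;

    @Configuration
    public class KafkaConsumerConfig {

        // Hypothetical payload type, used only for this sketch.
        public record MyEvent(String id) {}

        @Bean
        public ConsumerFactory<String, MyEvent> consumerFactory() {
            return new DefaultKafkaConsumerFactory<>(Map.of(
                    ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                    ConsumerConfig.GROUP_ID_CONFIG, "demo-group",
                    ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
                    // The wrapper catches deserialization failures and hands
                    // them to the container's error handler.
                    ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class,
                    ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class,
                    JsonDeserializer.VALUE_DEFAULT_TYPE, MyEvent.class,
                    JsonDeserializer.TRUSTED_PACKAGES, "*"));
        }
    }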

Stream Summary Statistics

In order to leverage the various capabilities of Java Streams, one should first understand two general concepts: the stream and the stream pipeline. A stream in Java is a sequential flow of data. A stream pipeline, on the other hand, represents a series of steps applied to that data, a series that ultimately produces a result.
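
As a concrete illustration of both concepts, the snippet below (with made-up numbers) assembles a short pipeline whose terminal operation gathers count, sum, minimum, average, and maximum in a single pass.

    import java.util.IntSummaryStatistics;
    import java.util.stream.IntStream;

    public class StatsDemo {

        public static void main(String[] args) {
            // The stream is the data; the pipeline is the sequence of steps.
            IntSummaryStatistics stats = IntStream.of(3, 5, 8, 13, 21)
                    .filter(n -> n > 4)       // intermediate operation
                    .summaryStatistics();     // terminal operation

            System.out.println(stats.getCount());   // 4
            System.out.println(stats.getSum());     // 47
            System.out.println(stats.getAverage()); // 11.75
        }
    }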

My family and I recently visited the Legoland Resort in Germany, a great place, by the way, and there, among other attractions, we had the chance to observe in detail a sample of the brick-building process. Briefly, everything starts from granular plastic that is melted, molded accordingly, assembled, painted, stenciled if needed, and packed up in bags and boxes. All these steps are part of a factory assembly pipeline.

Idempotent Liquibase Changesets

Abstract

“Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application” [Resource 3].

The purpose of this article is to outline a few ways of creating idempotent changes when database modifications are managed with Liquibase. Throughout the lifetime of a software product that has such a tier, various database modifications are applied as it evolves. The more robust these modifications are, the more maintainable the solution is. In order to accomplish such a way of working, it is usually good practice to design the executed changesets to have zero side effects, that is, to be able to run them as many times as needed with the same end result.
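
As a first taste of what such a changeset might look like, the sketch below (table and author names are illustrative) guards a table creation with a precondition, so running it against a database where the table already exists marks it as ran instead of failing.

    <changeSet id="001-create-account" author="dev">
        <!-- If the table is already there, mark the changeset as ran
             instead of halting the whole update. -->
        <preConditions onFail="MARK_RAN">
            <not>
                <tableExists tableName="account"/>
            </not>
        </preConditions>
        <createTable tableName="account">
            <column name="id" type="bigint">
                <constraints primaryKey="true"/>
            </column>
            <column name="name" type="varchar(255)"/>
        </createTable>
    </changeSet>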

Repeatable Database Updates via Liquibase

The main purpose of this tutorial is to present a way of detecting modifications to a previously applied Liquibase changeset and executing it again automatically. In order to illustrate this, a small proof of concept is constructed gradually. In the first step, the application configures Liquibase as its migration manager and creates the initial database schema. Then, modifications are applied to the running version, and lastly, the repeatable script is introduced and enhanced.
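
At the heart of the approach sits Liquibase’s runOnChange attribute, previewed in the hypothetical sketch below (the view is illustrative): when the stored checksum of the changeset differs from the current one, Liquibase re-executes it instead of reporting a validation error.

    <!-- Re-executed automatically whenever its content, and therefore
         its checksum, changes. -->
    <changeSet id="repeatable-report-view" author="dev" runOnChange="true">
        <createView viewName="order_report" replaceIfExists="true">
            SELECT id, status FROM orders
        </createView>
    </changeSet>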

Set-Up

  • Java 17
  • Spring Boot 3.0.2
  • Liquibase 4.17.2
  • PostgreSQL 12.11
  • Maven

Proof of Concept

As PostgreSQL was chosen for the database layer of this service, a new schema (liquirepeat) is first created. This can be easily accomplished by issuing the following SQL command after connecting to the database.
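
    CREATE SCHEMA liquirepeat;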

IMAP OAuth 2.0 Authorization in Exchange Online

Microsoft announced that starting October 2022, Basic authentication for specific protocols in Exchange Online would be considered deprecated and turned off gradually and randomly for certain tenants. Insightful details concerning this topic may be found in Resources items one and two. Among the affected protocols are Exchange ActiveSync (EAS), POP, IMAP, Remote PowerShell, Exchange Web Services (EWS), and the Offline Address Book (OAB).

Consequently, customer applications leveraging Basic authentication towards Exchange Online as part of their business use cases need to replace it with Modern authentication, that is, OAuth 2.0 token-based authorization, which no doubt brings many benefits and improvements that help mitigate the risks of the former.
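
To sketch what the switch can mean at the code level, the snippet below connects over IMAP with Jakarta Mail’s XOAUTH2 mechanism, passing a previously acquired access token where the password used to go; the mailbox and the token acquisition are assumptions.

    import java.util.Properties;
    import jakarta.mail.Session;
    import jakarta.mail.Store;

    public class ImapOAuthDemo {

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.store.protocol", "imaps");
            // Authenticate via the XOAUTH2 SASL mechanism instead of Basic.
            props.put("mail.imaps.auth.mechanisms", "XOAUTH2");

            // Obtained beforehand, e.g., from the Microsoft identity platform.
            String accessToken = "...";

            Session session = Session.getInstance(props);
            Store store = session.getStore("imaps");
            // The OAuth 2.0 access token takes the place of the password.
            store.connect("outlook.office365.com", "mailbox@example.com", accessToken);
            System.out.println("Connected: " + store.isConnected());
            store.close();
        }
    }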

Bypassing Spring Interceptors via Decoration

Whether built using the plain Spring Framework or Spring Boot, web applications are widely developed and deployed these days. In trying to address simple or complex business challenges, products rely strongly on the features of the underlying framework in their attempt to offer elegant solutions. Elegant here means correct, clean, and easy to understand and maintain.

In the case of a web application, some requests are handled in a standard way, while others may need extra pre- or post-processing, or even a change to the initial request. Generally, Servlet Filters are configured and put in place in order to accommodate such scenarios.
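
As a primer for the decoration idea explored later, the hypothetical filter below wraps the incoming request in an HttpServletRequestWrapper, changing what downstream components observe; the header name and value are made up.

    import java.io.IOException;
    import jakarta.servlet.FilterChain;
    import jakarta.servlet.ServletException;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletRequestWrapper;
    import jakarta.servlet.http.HttpServletResponse;
    import org.springframework.stereotype.Component;
    import org.springframework.web.filter.OncePerRequestFilter;

    @Component
    public class HeaderRewritingFilter extends OncePerRequestFilter {

        @Override
        protected void doFilterInternal(HttpServletRequest request,
                                        HttpServletResponse response,
                                        FilterChain chain) throws ServletException, IOException {
            // Decorate the original request: whoever asks for X-Client-Id
            // down the chain receives the overridden value.
            HttpServletRequest decorated = new HttpServletRequestWrapper(request) {
                @Override
                public String getHeader(String name) {
                    return "X-Client-Id".equalsIgnoreCase(name)
                            ? "internal-gateway"
                            : super.getHeader(name);
                }
            };
            chain.doFilter(decorated, response);
        }
    }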

Delegating JWT Validation for Greater Flexibility

In my opinion, the primary purpose of all software applications, those created so far, those being developed, and those yet to come, should be to make humans’ day-to-day activities easier to fulfill. Humans are the most valuable creations, and software applications are great tools that can serve them.

Nowadays, almost every software product exchanges data with at least one other peer software product, which results in huge amounts of data flowing among them. Usually, a request from one product to another needs to pass a set of preconditions before it is considered acceptable and trustworthy.
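
A typical such precondition is presenting a valid JWT. As a minimal sketch of delegating its validation, the configuration below relies on Spring Security’s NimbusJwtDecoder to fetch the signing keys from an authorization server’s JWK Set endpoint; the URI is an assumption.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.jwt.JwtDecoder;
    import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;

    @Configuration
    public class JwtValidationConfig {

        // Signature verification is delegated to the keys published by the
        // authorization server, so key rotation needs no application change.
        @Bean
        public JwtDecoder jwtDecoder() {
            return NimbusJwtDecoder
                    .withJwkSetUri("https://auth.example.com/oauth2/jwks")
                    .build();
        }
    }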

Respectful REST APIs: ‘Sunset’ and ‘Deprecation’ HTTP Headers

1. Introduction

According to the Richardson Maturity Model [Reference 1], a Level 3 REST architecture introduces discoverability through hypermedia controls in addition to resources and HTTP verbs, thus making communication between the involved actors more self-documenting.

Hypermedia enriches the interaction from various perspectives, decreasing the coupling between parties and allowing them to evolve independently. Moreover, the data enclosed in the exchanged messages is enhanced with links, which makes the overall exchanged information more accurate. On the other hand, developers now need to pay more attention when thinking about the design, as the representations have a greater impact.
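
As an appetizer for what follows, a handler can advertise an endpoint’s retirement plan through the two headers; in the sketch below the dates and paths are illustrative, Sunset follows RFC 8594, and the Deprecation value format has varied across the specification’s draft revisions.

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class LegacyOrderController {

        @GetMapping("/api/v1/orders")
        public ResponseEntity<String> ordersV1() {
            return ResponseEntity.ok()
                    // Already deprecated since this date.
                    .header("Deprecation", "Sat, 01 Jun 2024 00:00:00 GMT")
                    // Per RFC 8594: expected to stop responding after this date.
                    .header("Sunset", "Sun, 01 Jun 2025 00:00:00 GMT")
                    // Hypermedia control pointing clients at the replacement.
                    .header("Link", "</api/v2/orders>; rel=\"successor-version\"")
                    .body("[]");
        }
    }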

JUnit Test Groups for More Reliable Development

Introduction

As a product is developed and maintained, its test suite is enriched as features and functionalities are added and enhanced. Ideally, development teams should aim to have a lot of quick unit tests that are run whenever the code is modified. It is great if these are written before, or at least together with, the tested code, cover as many use cases as possible, and finish running within a reasonable amount of time. By all means, a reasonable amount of time is difficult to quantify.

On the other hand, there are a lot of live products solving quite complex business problems whose thorough verification requires cutting across multiple application layers at once, and thus the number of integration tests is significant. Running all these unit and integration tests whenever the code is touched is not always feasible, as productivity and development speed decrease considerably. Again, productivity and development speed are hard to quantify, but they should always be traded for correctness.
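
One possible setup, sketched below with assumed tag names, is to label tests with JUnit 5’s @Tag and let the build select a group per phase, so the quick suite runs on every change while the heavier one runs less often.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        @Test
        @Tag("unit") // fast and isolated, runs on every modification
        void addsVat() {
            assertEquals(119.0, 100 * 1.19, 0.01);
        }

        @Test
        @Tag("integration") // slower, cuts across several layers
        void persistsInvoiceEndToEnd() {
            // ...exercise service, repository, and database together
        }
    }

Maven Surefire can then run only the fast group with, for instance, mvn test -Dgroups=unit.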