What Happened to HornetQ, the JMS That Shattered Records?

HornetQ 2.0 broke records and defeated top-ranked messaging services in benchmark tests. Why wasn't it widely adopted?


Software vendors make all kinds of claims about their products, but what developers care about is proof. When testing a new product, it's important to see how it stacks up against its competition.

For years, researchers at TU Darmstadt have compared the performance of message-oriented middleware servers based on the Java Message Service (JMS). In 2010, the SPECjms2007 benchmark record was smashed by HornetQ, an open-source enterprise messaging system from JBoss.

Debugging the Java Message Service (JMS) API Using Lightrun

The Java Message Service (JMS) API was developed by Sun Microsystems in the days of Java EE. The JMS API provides simple messaging abstractions, including message producers, message consumers, and so on. Messaging APIs let us place a message on a “queue” and consume messages placed into that queue. This is immensely useful for high-throughput systems — instead of wasting the user's time by performing a slow operation in real time, an enterprise application can send a message. This non-blocking approach enables extremely high throughput while maintaining reliability at scale.

The message carries a transactional context that provides some guarantees around delivery and reliability. As a result, we can post a message in a method and simply return, with guarantees similar to those we get when writing to an ACID database.
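For instance, here is a minimal sketch of that idea using the standard javax.jms API, assuming a ConnectionFactory has already been configured (for example via JNDI or a broker client library); the queue name is made up for illustration:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class OrderNotifier {

    private final ConnectionFactory connectionFactory; // assumed to be configured elsewhere

    public OrderNotifier(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public void submitOrder(String orderJson) throws Exception {
        try (Connection connection = connectionFactory.createConnection()) {
            // A transacted session: the message is only delivered once commit() succeeds,
            // giving ACID-like guarantees around the send.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("orders.incoming"); // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage(orderJson);
            producer.send(message);

            session.commit(); // the slow processing happens later, in a consumer
        }
    }
}
```

The caller returns as soon as the commit succeeds; the actual work is done whenever a consumer picks the message up.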

Developing Event-Driven Microservices

This is the second in a series of blogs on data-driven microservices design mechanisms and transaction patterns with the Oracle converged database. The first blog illustrated how to connect to an Oracle database in Java, JavaScript, Python, .NET, and Go as succinctly as possible. The goal of this second blog is to use that connection to receive and send messages with Oracle AQ (Advanced Queueing) queues and topics and conduct an update and read from the database using all of these same languages.

Advanced Queuing (AQ) is a messaging system that is part of every Oracle database edition and was first released in 2002. AQ sharded queues, which introduced partitioning in release 12c, are now called Transactional Event Queues (TEQ).

Real-Time Stock Data Updates with WebSockets using Ballerina

The internet is built on the HTTP standard for communicating data between clients and servers. Although HTTP is widely used, when there is a requirement for continuous data streams or real-time updates between client and server, repeatedly making regular HTTP requests slows things down.
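WebSockets keep a single connection open so the server can push updates as they happen. The article implements this with Ballerina; as a rough sketch of the client side, here is what a subscriber looks like with the JDK's built-in WebSocket support (Java 11+), where the stock-feed URL is a made-up placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CountDownLatch;

public class StockTickerClient {

    public static void main(String[] args) throws InterruptedException {
        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                // Each frame pushed by the server arrives here without a new HTTP request.
                System.out.println("Price update: " + data);
                webSocket.request(1); // ask for the next message
                return null;
            }
        };

        HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("wss://example.com/stocks"), listener) // placeholder endpoint
                .join();

        new CountDownLatch(1).await(); // keep the demo alive while updates stream in
    }
}
```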

Messaging on the Internet

Before discussing the real power and uses of WebSockets, let's look at the different ways of messaging on the internet.

Apache Kafka vs. Oracle Transactional Event Queues as Microservices Event Mesh

This blog focuses on transactional and message delivery behavior, particularly as it relates to microservice architectures. There are, of course, numerous areas in which to compare MongoDB, PostgreSQL, and Kafka with the converged Oracle database and Oracle Transactional Event Queues/AQ that are beyond the scope of this blog.

The Oracle database itself was first released in 1979 (PostgreSQL was released in 1997 and MongoDB in 2009). Oracle Advanced Queuing (AQ) is a messaging system that is part of every Oracle database edition and was first released in 2002 (Kafka was open-sourced by LinkedIn in 2011 and Confluent was founded in 2014).

Understanding Solace Endpoints: Queues vs Topic Endpoints

One of the most frequent questions I get asked in the community is “What is the difference between a queue and a topic endpoint?” While both queues and topic endpoints persist messages, it’s important to understand what they are, how they’re different, and when each one should be used.

What Are Solace Endpoints?

Solace endpoints are objects created on the event broker to persist messages. There are two types of endpoints: queue endpoints (usually just called queues) and topic endpoints.
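Solace also provides a JMS API, so the difference can be sketched with standard javax.jms calls; the connection setup, destination names, and subscription name below are assumptions for illustration. A consumer bound to a queue gets point-to-point delivery, while a durable topic subscription is what a topic endpoint typically backs on the broker:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

public class EndpointExamples {

    // 'connection' is assumed to come from the broker's JMS ConnectionFactory,
    // with a client ID configured (required for durable subscriptions).
    public void consume(Connection connection) throws Exception {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Queue endpoint: point-to-point; each message goes to one of the bound consumers.
        Queue ordersQueue = session.createQueue("orders"); // hypothetical name
        MessageConsumer queueConsumer = session.createConsumer(ordersQueue);

        // Topic endpoint: persists messages published to the topic for one named subscriber.
        Topic pricesTopic = session.createTopic("prices/updates"); // hypothetical name
        TopicSubscriber durableSubscriber =
                session.createDurableSubscriber(pricesTopic, "price-audit"); // subscription name

        connection.start();
        // queueConsumer.receive() / durableSubscriber.receive() would block for messages here.
    }
}
```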

Auto Filter Messages Into Subscriptions in Azure Service Bus Topic

Azure Service Bus is the cloud messaging service offered by Microsoft Azure. It is commonly used in multi-tenant architectures that need messages to be transferred securely between different services.

Azure Service Bus offers both a FIFO message model and a publish/subscribe messaging model: queues for one-to-one asynchronous message processing, and topics with subscriptions for multiple subscribers.
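The auto-filtering works by attaching a filter (for example, a SQL filter) to each subscription and stamping messages with application properties. As a hedged sketch with the Java SDK (azure-messaging-servicebus), assuming a subscription filter such as tenantId = 'contoso', with topic name and connection string as placeholders:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class TenantPublisher {

    public static void main(String[] args) {
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // placeholder
                .sender()
                .topicName("tenant-events") // placeholder topic
                .buildClient();

        ServiceBusMessage message = new ServiceBusMessage("order created");
        // Subscriptions whose filter matches these application properties
        // (e.g., "tenantId = 'contoso'") automatically receive the message.
        message.getApplicationProperties().put("tenantId", "contoso");

        sender.sendMessage(message);
        sender.close();
    }
}
```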

ELI5: What Is the Publish-Subscribe Messaging Pattern?

Introduction

Used in microservices architecture (a method of designing software applications that is rapidly growing in popularity), the publish-subscribe messaging pattern is a form of asynchronous communication in which messages are published to a topic and received, in real time, by the consumers who subscribe to that topic.

Now, what does that really mean? What are the advantages of publish-subscribe? How can you explain it to someone non-technical?
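One way to picture it is with a toy, in-memory sketch of the pattern (no broker, no network, purely illustrative): publishers hand a message to a topic, and every subscriber registered on that topic gets its own copy.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A toy illustration of publish-subscribe; real systems put a broker between the two sides.
public class TinyPubSub {

    private final Map<String, List<Consumer<String>>> subscribersByTopic = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> subscriber) {
        subscribersByTopic
                .computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>())
                .add(subscriber);
    }

    public void publish(String topic, String message) {
        // The publisher does not know who (if anyone) is listening.
        subscribersByTopic.getOrDefault(topic, List.of())
                .forEach(subscriber -> subscriber.accept(message));
    }

    public static void main(String[] args) {
        TinyPubSub bus = new TinyPubSub();
        bus.subscribe("weather", msg -> System.out.println("Phone app got: " + msg));
        bus.subscribe("weather", msg -> System.out.println("Dashboard got: " + msg));
        bus.publish("weather", "Rain expected at 5pm");
    }
}
```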

Enterprise Messaging With Autonomous DB and Micronaut

I’ve written about messaging many times on my blog, and for good reason, too. It’s a popular subject that developers can’t seem to get enough of. In this world of distributed architectures, it’s critical that services communicate with each other to ensure the application's business logic is implemented properly. It’s well established that messaging is crucial for modern applications, so let’s look at a messaging solution that exists in the Oracle Cloud that you may not be aware of. In fact, if you’re already using Autonomous DB, then this solution is available to you at no additional charge! Allow me to introduce you to Oracle Advanced Queuing (AQ). 

What’s AQ? It’s exactly what it sounds like: a full-featured messaging solution right inside the database. Point-to-point, pub/sub, persistent, and non-persistent messaging are all supported. There are tons of ways to interact — including via PL/SQL, JMS, JDBC, .NET, Python, Node.JS — and pretty much any popular language can interface with AQ. Demos tend to be the best way to understand concepts like this, so in this post, we’re going to look at how to enable AQ in your Autonomous DB instance, create a queue, and enqueue and dequeue messages with PL/SQL. To complete the demo, we’ll look at publishing and consuming messages from AQ from a very simple Java application written with Micronaut.
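To give a feel for calling into the PL/SQL side from Java, here is a hedged sketch over plain JDBC. It invokes a hypothetical wrapper procedure (enqueue_order_message, not part of AQ itself) that you would create around DBMS_AQ.ENQUEUE; the connection details are placeholders for an Autonomous DB wallet setup:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class AqEnqueueDemo {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details for an Autonomous DB instance.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:oracle:thin:@demo_tp?TNS_ADMIN=/path/to/wallet", "demo_user", "demo_password")) {

            // enqueue_order_message is a hypothetical PL/SQL procedure that wraps
            // DBMS_AQ.ENQUEUE for a queue created in the Autonomous DB instance.
            try (CallableStatement call =
                         connection.prepareCall("{ call enqueue_order_message(?) }")) {
                call.setString(1, "{\"orderId\": 42, \"status\": \"NEW\"}");
                call.execute();
            }
        }
    }
}
```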

Java JMS Oversimplified

Java Message Service

JMS, or the Java Message Service, is a solution for asynchronous communication using messages (objects). It was initially defined as a JSR (a specification that is part of Java EE).

A Simple Example Problem: User Creation Service

Let's assume we have a service that can process only 100 requests per second. Any higher load will freeze the service or bring it down.
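With JMS, the usual fix is to put a queue in front of the service: the API accepts a request instantly and enqueues it, while a consumer drains the queue at the pace the service can sustain. A minimal sketch with javax.jms, assuming a configured ConnectionFactory and a made-up queue name:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class UserCreationQueue {

    // Producer side: called by the web layer; returns immediately.
    public static void requestUserCreation(ConnectionFactory factory, String userJson) throws Exception {
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("user.creation"); // hypothetical queue
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(userJson));
        }
    }

    // Consumer side: processes messages at the rate the service can handle.
    public static void startWorker(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("user.creation");
        session.createConsumer(queue).setMessageListener((Message message) -> {
            try {
                String userJson = ((TextMessage) message).getText();
                // create the user here, limited only by how fast this worker runs
                System.out.println("Creating user: " + userJson);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        connection.start();
    }
}
```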

Spring Cloud Stream Channel Interceptor

Introduction

A channel interceptor is a means to capture a message before it is sent or received, in order to view or modify it. Channel interceptors keep the code structured when we want to add extra message processing or embed additional data related to a technical concern, without affecting the business code.

Channel interceptors are used by frameworks like Spring Cloud Sleuth and Spring Security to propagate tracing and security context through a message queue: headers are added to the message on the producer side, then read on the consumer side to restore the context.
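As a minimal sketch, assuming a Spring Cloud Stream application with Spring Integration on the classpath (the header name here is made up), an interceptor that stamps a header on every outgoing message could look like this:

```java
import org.springframework.integration.config.GlobalChannelInterceptor;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Component
@GlobalChannelInterceptor // apply to all channels; a pattern can restrict it to specific ones
public class CorrelationHeaderInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // Embed a technical header on the producer side without touching business code.
        return MessageBuilder.fromMessage(message)
                .setHeader("x-correlation-id", java.util.UUID.randomUUID().toString())
                .build();
    }
}
```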

RabbitMQ RPC With FastAPI [Video]

Below, I explain a sample app with a FastAPI endpoint. RabbitMQ is used to deliver and return messages between the API endpoint and the backend. The backend code could run on a different microservice, and multiple backends can be started for scalability.
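The video uses FastAPI and Python, but the underlying RPC-over-RabbitMQ pattern (a reply queue plus a correlation ID) can be sketched with the RabbitMQ Java client as well; broker address and queue names are placeholders:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RpcClientSketch {

    public static String call(String request) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            String replyQueue = channel.queueDeclare().getQueue(); // exclusive auto-delete queue
            String correlationId = UUID.randomUUID().toString();

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo(replyQueue)
                    .build();

            // Send the request to the backend's queue ("rpc_queue" is a placeholder).
            channel.basicPublish("", "rpc_queue", props, request.getBytes(StandardCharsets.UTF_8));

            BlockingQueue<String> response = new ArrayBlockingQueue<>(1);
            channel.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
                // Only accept the reply that matches our correlation ID.
                if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                    response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
                }
            }, consumerTag -> { });

            return response.take();
        }
    }
}
```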

Transforming TCP Sockets to HTTP With Go

Sometimes we need to work with legacy applications, and legacy applications can be hard to rewrite and change. Imagine, for example, an application that uses raw TCP sockets to communicate with another process. Raw TCP sockets are fast, but they have several problems: all data is sent in plain text over the network and without authentication (unless we implement a protocol ourselves).

One solution is to use HTTPS connections instead. We can also authenticate those requests with a bearer token in the Authorization header. For example, I’ve created a simple HTTP server with Python and Flask.
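The article builds the bridge itself in Go; as a rough illustration of the same idea in a single Java sketch, the JDK's built-in com.sun.net.httpserver can accept token-authenticated HTTP requests and relay them to the legacy TCP service (host, port, and token are placeholders, and TLS termination is assumed to sit in front):

```java
import com.sun.net.httpserver.HttpServer;

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpToHttpBridge {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/legacy", exchange -> {
            String auth = exchange.getRequestHeaders().getFirst("Authorization");
            if (auth == null || !auth.equals("Bearer " + System.getenv("BRIDGE_TOKEN"))) {
                exchange.sendResponseHeaders(401, -1); // reject unauthenticated callers
                exchange.close();
                return;
            }

            // Forward the HTTP body to the legacy raw-TCP service and relay its reply.
            try (Socket legacy = new Socket("legacy-host", 9000); // placeholder address
                 InputStream httpBody = exchange.getRequestBody();
                 OutputStream toLegacy = legacy.getOutputStream()) {

                httpBody.transferTo(toLegacy);
                legacy.shutdownOutput();

                byte[] reply = legacy.getInputStream().readAllBytes();
                exchange.sendResponseHeaders(200, reply.length);
                exchange.getResponseBody().write(reply);
            } finally {
                exchange.close();
            }
        });

        server.start(); // put HTTPS (TLS termination) in front of this in production
    }
}
```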

How KubeMQ Customers Build Scalable Messaging Platforms With Kubernetes Operators

Over the last several years, the adoption of Kubernetes has increased tremendously. In fact, according to a Cloud Native Computing Foundation (CNCF) survey, 78% of respondents in late 2019 were using Kubernetes in production. Leveraging Kubernetes allows organizations to create a management layer that commoditizes the clouds themselves and to build cross- or hybrid-cloud deployments that hide the provider-specific implementation details from the rest of the team.

One crucial part of the Kubernetes ecosystem is Operators, a pattern initially introduced by CoreOS in 2016 that uses the Kubernetes APIs themselves to deploy and manage the state of applications. Operators are a critical part of deploying and operating applications in a cross- or hybrid-cloud environment. They can help manage and maintain state across a federated Kubernetes deployment (multiple Kubernetes clusters running together) or even across clusters.

Intro To Apache Kafka: How Kafka Works

Introduction

We recently published a series of tutorial videos and tweets on the Apache Kafka® platform. So now you know there’s a thing called Kafka, but before you put your hands to the keyboard and start writing code, you need to form a mental model of what the thing is. These videos give you the basics you need to know to have the broad grasp on Kafka necessary to continue learning and eventually start coding. This article summarizes those videos.

Events

Pretty much all of the programs you’ve ever written respond to events of some kind: the mouse moving, input becoming available, web forms being submitted, bits of JSON being posted to your endpoint, the sensor on the pear tree detecting that a partridge has landed on it, etc. Kafka encourages you to see the world as sequences of events, which it models as key-value pairs. The key and the value have some kind of structure, usually represented in your language’s type system, but fundamentally they can be anything. Events are immutable, as it is (sometimes tragically) impossible to change the past.
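As a small, hedged example of what producing such an event looks like with Kafka's Java client (the topic name and broker address below are placeholders):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class PartridgeEvents {

    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        config.put("key.serializer", StringSerializer.class.getName());
        config.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
            // An immutable event, modeled as a key-value pair.
            ProducerRecord<String, String> event =
                    new ProducerRecord<>("sensor-readings", "pear-tree-1", "{\"bird\":\"partridge\"}");
            producer.send(event);
        }
    }
}
```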

Azure Event Hubs: Role-Based Access Control (RBAC) in Action

Azure Event Hubs is a streaming platform and event ingestion service that can receive and process millions of events per second. In this blog, we are going to cover one of the security aspects related to Azure Event Hubs.

Shared Access Signature (SAS) is a commonly used authentication mechanism for Azure Event Hubs. It can enforce granular control over the type of access you want to grant, and it works by configuring rules on Event Hubs resources (a namespace or an event hub). However, it is recommended that you use Azure AD credentials (rather than SAS) whenever possible, since they provide similar capabilities without the need to manage SAS tokens or worry about revoking a compromised SAS.
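Here is a hedged sketch of what the Azure AD approach looks like with the Java SDKs (azure-messaging-eventhubs plus azure-identity); the namespace and event hub names are placeholders, and the identity running the code is assumed to hold an Event Hubs data role:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

import java.nio.charset.StandardCharsets;

public class RbacProducer {

    public static void main(String[] args) {
        // No SAS token anywhere: DefaultAzureCredential resolves an Azure AD identity
        // (managed identity, environment variables, az login, and so on).
        EventHubProducerClient producer = new EventHubClientBuilder()
                .credential("demo-namespace.servicebus.windows.net", // placeholder namespace
                        "demo-hub",                                  // placeholder event hub
                        new DefaultAzureCredentialBuilder().build())
                .buildProducerClient();

        EventDataBatch batch = producer.createBatch();
        batch.tryAdd(new EventData("telemetry".getBytes(StandardCharsets.UTF_8)));
        producer.send(batch);
        producer.close();
    }
}
```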