

Chapter 1

Introduction

As Muleys, most of us have at some point needed an asynchronous process that also offers long-term, reliable persistence, high performance under tight resource constraints, and the ability to be queried or rerun after errors, corruption, or outages. You might already have solutions in mind for these requirements: Mule's native VM queues, Anypoint MQ, SQL/NoSQL databases, and so on. However, these solutions may not be entirely suitable for every scenario; while that point is open to debate, it's a topic for another discussion. Since you're here, I'll presume you've already decided to incorporate an event-driven approach into your API-led architecture.

During one of my consulting assignments, a company was grappling with bottlenecks, low performance in some API streams, and reliability issues in its asynchronous processes. Together with the architects and team leads, we considered several options, including KubeMQ, ActiveMQ, Azure Event Hubs, Solace, and Kafka/Confluent. Ultimately, we opted to explore Kafka, in both its Confluent and Community editions.

In this article, you'll find an example of how to load data onto a Kafka topic using a streaming API and how to read it back. The Mule side of the tests ran on my local computer, while the Kafka side used Confluent Kafka with a basic cluster (EU Central 1) and a topic of six partitions. Keep in mind that write performance to a Kafka topic can vary depending on where the topic is hosted. However, since these tests compare the performance of a Mule application using different approaches within Mule itself, the specifics of the Kafka setup should not be a significant factor.
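To make the setup concrete before diving into the Mule flows, here is a minimal sketch of the same write-then-read round trip using the plain Apache Kafka Java client rather than the Mule Kafka connector used in the tests. The bootstrap server, API key placeholders, the topic name demo-topic, and the group ID demo-group are all assumptions for illustration, not values from the actual test environment.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaRoundTripSketch {

    // Placeholder connection settings for a Confluent Cloud basic cluster.
    private static Properties baseProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<cluster-id>.eu-central-1.aws.confluent.cloud:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        return props;
    }

    public static void main(String[] args) {
        String topic = "demo-topic"; // hypothetical six-partition topic

        // Produce a handful of keyed records; the partitioner hashes each key
        // to pick one of the six partitions.
        Properties producerProps = baseProps();
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>(topic, "key-" + i, "payload-" + i));
            }
            producer.flush();
        }

        // Read the records back from the beginning of the topic.
        Properties consumerProps = baseProps();
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(topic));
            // The first poll may return nothing while the group rebalances,
            // so poll a few times before giving up.
            for (int attempt = 0; attempt < 3; attempt++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.printf(
                        "partition=%d offset=%d value=%s%n", r.partition(), r.offset(), r.value()));
            }
        }
    }
}
```

The six partitions matter because keyed records are spread across them by key hash, which is what allows multiple consumers in the same group to read in parallel; the Mule comparisons that follow sit on top of this same client behavior.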