December 10, 2020

Guide to ActiveMQ Performance Optimization

Web Infrastructure
Middleware

ActiveMQ is a hugely popular open source messaging server, commonly used to solve complex messaging needs in enterprise applications. But while ActiveMQ can solve many types of messaging problems, it can also cause a few performance issues of its own.

In this blog, we look at some of the common performance problems teams experience when using ActiveMQ, and ActiveMQ performance tuning tips that can help to improve messaging performance.


Balancing ActiveMQ Messaging Performance and Reliability

Like most messaging systems, ActiveMQ was designed with an understanding of the tradeoffs between performance and reliability when it comes to message delivery. Different endpoints and message types may have different needs for reliable message delivery or throughput, even when associated with the same broker instance. This flexibility is possible in ActiveMQ by ensuring your messaging architecture matches the intended usage. Beyond the basics, configuration options allow further optimization of how the ActiveMQ message broker and clients handle messages efficiently.

See Real-World Use Cases for ActiveMQ

This webinar shows some fantastic, real-world examples of ActiveMQ and the problems it can solve.


3 Common ActiveMQ Performance Problems

Though ActiveMQ’s flexibility and configurability lend themselves to a wide variety of architectures and use cases, we often see customers first experiencing performance problems from a few common causes. Most of these are related to increasing message volume: systems that worked great when initially deployed hit growing pains as volumes increase and usage expands to additional applications.

1. Slow Consumers/Disabling Producer Flow Control

It may seem odd to group these, but one common issue customers identify is that their producers are being throttled by the broker. They see delays in sending messages, and the broker indicates producer flow control is being triggered, forcing delays on message producers by delaying message acknowledgement.

A common reaction to this behavior is to disable producer flow control – after all, it’s an option that can be disabled as a destination policy, and at first glance looks like it will get your producers up and running again at full speed, sending messages at the rate they can produce them to the broker. It might even get you a temporary burst of performance that helps with your message throughput initially – before you encounter bigger problems, like effects on other queues and topics, more significant pauses in message delivery due to resource contention, or even brokers crashing.

Producer flow control exists to slow down producers rather than suspending the entire connection when the broker hits its “high water mark” memory limits. The most common cause of producer flow control engaging is too few or too slow consumers (not processing messages as fast as they are sent to the broker); less often, the broker itself needs more resources (memory, faster disk for the persistence store, etc.).

In most cases, disabling producer flow control is at best a band-aid, and acts to cover up symptoms without fixing the root cause.
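Rather than disabling flow control, a better first step is usually to fix the slow consumers and, if needed, raise the per-destination memory limit so flow control engages later. Here is a minimal sketch using an embedded BrokerService configured programmatically (production brokers typically set the same destination policy in activemq.xml); the 64 MB limit is illustrative, not a recommendation:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class FlowControlPolicy {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Keep producer flow control enabled (the safe default), but give
        // each destination more headroom before it engages.
        PolicyEntry policy = new PolicyEntry();
        policy.setProducerFlowControl(true);
        policy.setMemoryLimit(64 * 1024 * 1024); // 64 MB per destination (illustrative)

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(policy);
        broker.setDestinationPolicy(policyMap);

        broker.start();
    }
}
```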

2. Not Using Dedicated Storage for Persistence Stores

Storage contention is another common problem as brokers scale. Once you grow past a single broker (possibly running a direct copy of the configuration you tested in development), you may start relying on the persistence store for higher message volumes, and you will see performance issues if the backing storage experiences contention with other I/O loads. That contention can come from other workloads on the same system, shared storage in a VM environment, or anything else affecting storage performance and availability.
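The usual fix is to put the persistence store (KahaDB by default) on its own volume. A minimal sketch using an embedded BrokerService; the mount point is hypothetical:

```java
import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class DedicatedStoreConfig {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Point KahaDB at a dedicated volume so journal writes do not
        // compete with other I/O on the system.
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("/var/activemq/kahadb")); // hypothetical dedicated mount
        broker.setPersistenceAdapter(kahaDB);

        broker.start();
    }
}
```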

3. Trying to Do too Much in the Broker, or Using an Embedded Broker

The primary purpose of a message broker like ActiveMQ is to efficiently receive messages from a source and deliver them to their destination. Though it is possible to embed an almost unlimited amount of message-processing logic in the broker itself (such as by using an embedded Camel instance for more than very simple routing tasks), this can easily affect broker performance.

Similarly, outside of a broker used solely for internal communication within an application, it is generally best to run ActiveMQ standalone rather than embedded in an application server or in the same JVM as other applications. This makes memory management more predictable: ActiveMQ by default derives many thresholds (and high-water marks) from the JVM’s maximum available memory allocation. Additionally, the object creation/expiration patterns of a message broker like ActiveMQ are likely to differ significantly from those of other applications, and keeping them in separate JVMs can make garbage collection more effective.
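If the broker must share a JVM with other code, or you simply want its behavior decoupled from heap sizing, you can set explicit usage limits instead of relying on the JVM-derived defaults. A sketch with illustrative values:

```java
import org.apache.activemq.broker.BrokerService;

public class ExplicitUsageLimits {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Set explicit ceilings rather than letting the broker derive them
        // from the JVM's maximum heap. All values here are illustrative.
        broker.getSystemUsage().getMemoryUsage().setLimit(256L * 1024 * 1024);      // message memory
        broker.getSystemUsage().getStoreUsage().setLimit(2L * 1024 * 1024 * 1024);  // persistent store
        broker.getSystemUsage().getTempUsage().setLimit(1L * 1024 * 1024 * 1024);   // temp/non-persistent spool

        broker.start();
    }
}
```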


3 ActiveMQ Performance Tuning Tips

In the previous section, we covered some common problems encountered with ActiveMQ performance and how to fix them. Beyond these types of scaling problems, it’s possible to do some more subtle tuning as well. The ActiveMQ site has some good pointers on basic tuning and how to test, but I’ll aim to provide some additional useful context here.

1. Configuring Pre-fetch Sizes for Consumers

Consumers in ActiveMQ have a prefetch value that determines how many messages the broker will dispatch to a consumer ahead of processing and acknowledgement. The default queue prefetch in ActiveMQ is 1000, and it can be increased to buffer more messages at the consumer, reducing the impact of latency and the number of fetches needed to process a large volume of messages.

Though it is possible to set a high prefetch, it is generally not advisable. High prefetches can increase throughput, but for most applications in relatively low-latency environments, a prefetch beyond 1 will not provide a large performance benefit: if the consumer processes a message in the same or greater time than an individual fetch takes, a prefetch of 1 will not slow message processing. At the same time, lower prefetches allow more consistent load balancing across multiple consumers, since you avoid the situation where the first consumer to connect has prefetched/claimed the first 1000 (or other prefetch value) messages, starving the next consumer of the opportunity to share in processing that batch.
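Prefetch can be set per connection factory or per destination. A minimal sketch, assuming a broker at tcp://localhost:61616 and a queue named ORDERS (both hypothetical):

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchExample {
    public static void main(String[] args) throws Exception {
        // Option 1: set the queue prefetch for every consumer created
        // from this factory.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.getPrefetchPolicy().setQueuePrefetch(1);

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Option 2: override prefetch for a single destination via a
        // destination option on the queue name.
        Queue queue = session.createQueue("ORDERS?consumer.prefetchSize=1");
        MessageConsumer consumer = session.createConsumer(queue);
        // ... receive and process messages ...
    }
}
```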

2. Batching Acknowledgement Receipt

If you have auto-acknowledge consumers, setting optimized acknowledgements allows the ActiveMQ client to acknowledge a batch of messages at once, typically 65% of the prefetch value. If you have a high-volume queue with consumers using a large prefetch and auto-acknowledgement (i.e., you’re configuring for message throughput over guaranteed delivery/resilience from consumer failures), this is one more option you can set to further optimize throughput by not requiring individual message acknowledgements. It is enabled by appending jms.optimizeAcknowledge=true to the connection URL, or by calling setOptimizeAcknowledge(true) on the connection factory.
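Both forms are shown in this short sketch (broker URL hypothetical):

```java
import org.apache.activemq.ActiveMQConnectionFactory;

public class OptimizedAckExample {
    public static void main(String[] args) throws Exception {
        // Either enable optimized acknowledgements on the connection URL...
        ActiveMQConnectionFactory viaUrl = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.optimizeAcknowledge=true");

        // ...or set the same option on the factory directly.
        ActiveMQConnectionFactory viaSetter =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        viaSetter.setOptimizeAcknowledge(true);
    }
}
```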

Please note that though auto-acknowledgement is the default and can increase throughput for fast consumers, it does not guard against consumer failures: once a message is dispatched to a consumer, it is considered dequeued, and the broker will no longer retain a copy.

In situations where you want to ensure a message is processed even if an individual consumer fails during processing, you should use manual acknowledgement to allow the consumer to acknowledge the message has been processed to the broker (and the broker can dequeue the message) only after its processing has been fully completed. With manual acknowledgement, if a consumer fails during processing it is possible for the message to be redelivered to another consumer based on your redelivery policy and not be lost.
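In JMS terms, this means creating the session with CLIENT_ACKNOWLEDGE and calling acknowledge() only after processing succeeds. A minimal sketch (broker URL and queue name hypothetical):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(
                "tcp://localhost:61616").createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE: the broker keeps the message until we ack it.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("ORDERS"));

        Message message = consumer.receive();
        processMessage(message);   // do the real work first...
        message.acknowledge();     // ...then ack, so a crash triggers redelivery
    }

    private static void processMessage(Message message) {
        // application-specific processing (placeholder)
    }
}
```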

3. Increase Throughput With Straight Through Session Consumption

One more option for circumstances where you’re serving auto-acknowledge consumers and prioritizing throughput over failure corner cases is straight-through session consumption. In most cases this is unnecessary, but you can have the consumer session dispatch messages to a consumer directly rather than first starting a separate thread for dispatch. This is accomplished by setting alwaysSessionAsync=false on the connection factory for auto-acknowledge consumers.
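A short sketch of that setting (broker URL hypothetical):

```java
import org.apache.activemq.ActiveMQConnectionFactory;

public class StraightThroughExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Dispatch messages to the consumer on the connection's thread
        // instead of a separate session dispatch thread. Only appropriate
        // for fast, auto-acknowledge consumers that never block.
        factory.setAlwaysSessionAsync(false);
    }
}
```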


Final Thoughts

ActiveMQ is a fast and highly configurable message broker, which works well out of the box for a variety of messaging patterns and clients. Even though it is simple to set up and run, because of the number of different types of messaging loads and architectures it supports, there are numerous ways to configure and customize its behavior to tune it for your specific usage.

Get ActiveMQ Training and Support

If your team is considering or already working with ActiveMQ, OpenLogic can help. Click the links below to learn more about our on-demand ActiveMQ training course, and our support offerings for ActiveMQ, Camel, and Kafka.

Get ActiveMQ Training | Get ActiveMQ Support
