SwiftMQ Tuning Guide

Introduction

SwiftMQ's default configuration is optimized for common use cases, so you don't need to change it until you hit its limits. This guide walks you through SwiftMQ's various tuning options.

Persistence

Persistence means a message is written to disk before the send method returns. The JMS default delivery mode is PERSISTENT. Therefore, if you start SwiftMQ out of the box, all messages are persisted to disk, which has a big impact on your throughput. Sending persistent messages also requires synchronous sends at the message producer, so the cost is not only the time to write to disk but also the time to wait for a reply before the next message can be sent.

Persistence is required if messages should survive a restart of the SwiftMQ Universal Router or a failover of a SwiftMQ HA Router. If you don't need that, you should change the delivery mode to NON_PERSISTENT. This can be done at the sending JMS client with

            sender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

This changes the default delivery mode of this particular MessageProducer object.

Or it can be done by putting the delivery mode as additional parameter to the send method:

            sender.send(msg, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);

If you can't change your code, you can change the default delivery mode in the JMS connection factory:

            <connection-factory name="ConnectionFactory" jms-default-delivery-mode="non_persistent"/>

The next time your JMS client looks up the connection factory, it will send messages as non-persistent by default.

In case you don't do JNDI lookups but use our proprietary connection factory, just change it there:

            props.put(SwiftMQConnectionFactory.SOCKETFACTORY, "com.swiftmq.net.JSSESocketFactory");
            props.put(SwiftMQConnectionFactory.HOSTNAME, "localhost");
            props.put(SwiftMQConnectionFactory.PORT, "4001");
            props.put(SwiftMQConnectionFactory.KEEPALIVEINTERVAL, "60000");
            props.put(SwiftMQConnectionFactory.JMS_DELIVERY_MODE, String.valueOf(DeliveryMode.NON_PERSISTENT));
            QueueConnectionFactory qcf = (QueueConnectionFactory) SwiftMQConnectionFactory.create(props);

There is also a way to override the persistence setting contained in the message itself, either at a regular queue or at a queue controller. This is NOT recommended because it only changes the persistence mode, not the send behavior. That is, your message is still sent as a persistent message (thus synchronously), but due to the override at the queue/queue controller, the message isn't persisted. This doesn't make much sense, so please use one of the methods above.

Related links: Connection Factories , Queue Manager configuration

Async Send (since 8.0.0)

Persistent messages are sent synchronously by default (required by the JMS spec). The send method only returns after the message has been persisted AND a reply has been sent back to the client. This can be relaxed to asynchronous sending by setting this system property at the JMS client:

            -Dswiftmq.jms.persistent.asyncsend=true

In that case the send request is stored in an internal outbound queue at the client and the send method returns immediately. The send requests are transferred in the background as batches. This may double the throughput, but messages still in the client's outbound queue when the client terminates may be lost.

Related links: System Properties

JMS Session Type and Acknowledge Mode

A JMS session can be created as transacted or non-transacted session.

Messages sent in a transacted session are buffered at the client until commit is called on the session, which then transfers all messages of this transaction to the router. The behavior of the send method on non-transacted sessions depends on the persistence setting of the message: persistent messages are sent synchronously while non-persistent messages are sent asynchronously. See the sections above.

Therefore, when sending persistent messages, a transacted session can be used to implement your own batching and thus increase throughput.
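Such producer-side batching might look like the following sketch. The queue name, batch size, and the pre-existing connection are assumptions, not SwiftMQ specifics:

```java
import javax.jms.*;

public class BatchSender {
    // Hypothetical batch size; tune it to your message size and throughput needs.
    static final int BATCH_SIZE = 100;

    public static void sendBatched(Connection connection, String queueName,
                                   String[] payloads) throws JMSException {
        // Transacted session: sends are buffered client-side until commit().
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer = session.createProducer(session.createQueue(queueName));
            for (int i = 0; i < payloads.length; i++) {
                producer.send(session.createTextMessage(payloads[i]));
                // One synchronous round trip per batch instead of per message.
                if ((i + 1) % BATCH_SIZE == 0)
                    session.commit();
            }
            session.commit(); // flush the remainder
        } finally {
            session.close();
        }
    }
}
```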

On the consumer side, a commit on a transacted session acknowledges delivery of all messages received within that transaction. Similarly, for non-transacted sessions in client-acknowledge mode, an acknowledge call acknowledges delivery of all messages since the last acknowledge call. Both commit and acknowledge are synchronous calls. The modes auto-acknowledge and dups-ok-acknowledge are synonyms in SwiftMQ (there is no further optimization); in both modes, messages are automatically and asynchronously acknowledged after a message has been returned by a receive call or the onMessage method has returned. For non-durable subscribers, messages are auto-committed before they are delivered to the client's cache.

Therefore, the fastest mode on consumer sessions is non-transacted, auto-acknowledge.
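Creating such a consumer session is a one-liner (a sketch; the existing connection and the queue name are assumptions):

```java
// Non-transacted, auto-acknowledge: the fastest consumer mode in SwiftMQ.
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("testqueue"));
```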

Related links: Performance Profile

Flow Control

Flow control is used to establish a maximum throughput rate between producers and consumers and is enabled by default on all destinations. Each queue (including subscriber queues) measures the producing and consuming rates and, if certain conditions are met, returns a flow control delay to the producer. The delay is in milliseconds, and the producer waits this amount of time before returning from a send or commit call.

By default, flow control strives to keep all messages of a queue in the queue's cache. It is only activated once a threshold defined in attribute "flowcontrol-start-queuesize" is reached. The default value is 400 messages. Since the default cache-size of a queue is 500 messages, all messages are stored in the queue's cache and served from there.

"flowcontrol-start-queuesize" can be increased, but the "cache-size" attribute should be increased as well; otherwise the queue swaps non-persistent messages out to disk, which decreases throughput. The same applies if "flowcontrol-start-queuesize" is set to -1, which switches flow control off.
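A queue entry with raised thresholds might look like this (a configuration sketch; the queue name and the values are assumptions, with "cache-size" deliberately kept above "flowcontrol-start-queuesize" so messages stay in the cache):

```xml
<queue name="testqueue"
       flowcontrol-start-queuesize="1600"
       cache-size="2000"/>
```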

Flow control should always be used if a maximum throughput rate between producers and consumers is required.

Related links: Queue Manager configuration , Topic Manager configuration

SMQP and Prefetch Settings

A JMS producer acts on flow control within the send/commit method. For asynchronous sends there is an attribute in the connection factory called "smqp-producer-reply-interval" which specifies at which intervals a send method has to wait for a reply in order to act on flow control delays. The default value is 20, which means the send method waits on a reply every 20th call and may then wait on a flow control delay contained in that reply. We found the default of 20 to be optimal; it is related to the default consumer cache size. It is not recommended to increase it without also changing the default consumer cache size, as increasing "smqp-producer-reply-interval" alone would just increase the flow control delays.

Each MessageConsumer object has its own client-side cache called the consumer cache. This cache is filled asynchronously in the background; calls to receive, or deliveries to onMessage, are served out of the cache. The cache is dimensioned by attribute "smqp-consumer-cache-size", which defines the size in number of messages (default 500), and can be further limited by attribute "smqp-consumer-cache-size-kb" (default 2048), which limits it in kilobytes.

The JMS Swiftlet contains an attribute "consumer-cache-low-water-mark" which defines the number of messages at which a fill-cache request is sent from the client to the router. If this attribute were set to 0, the cache would be emptied before a refill was initiated, leaving a gap in message consumption. In our tests, the default of 100 messages roughly matched the number of messages consumed between the refill request and the arrival of new messages at the cache, so there is no gap at all.
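As a configuration sketch, the attributes above (shown here with their default values) live in the connection factory and the JMS Swiftlet; the factory name and the exact element layout are assumptions:

```xml
<connection-factory name="ConnectionFactory"
                    smqp-producer-reply-interval="20"
                    smqp-consumer-cache-size="500"
                    smqp-consumer-cache-size-kb="2048"/>

<!-- JMS Swiftlet attribute (element layout is a sketch) -->
<swiftlet name="sys$jms" consumer-cache-low-water-mark="100"/>
```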

Related links: Connection Factories , JMS Swiftlet configuration

Store Swiftlet

The most effective change you can make in the standard file-based Store Swiftlet is to enable or disable disk sync of the transaction log. When enabled, every write to the transaction log is synced to disk. This is the most reliable option, but throughput is then bound to the speed of the disk. SwiftMQ 8.1.0 has huge improvements here; see Performance Profile.

If disk sync is enabled, the disk write cache must be disabled. On Linux systems this can be done with:

            hdparm -W 0 <device>

The transaction log disk sync is enabled by setting the transaction-log's attribute "force-sync" to "true". Default is "false".
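In the Store Swiftlet configuration this looks like the following (a sketch; only the attribute named above is shown):

```xml
<transaction-log force-sync="true"/>
```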

The Store Swiftlet has a few more tuning options. See the link below.

Related links: Store Swiftlet Tuning Section , Performance Profile

Network NIO Swiftlet

The Network NIO Swiftlet uses Java non-blocking I/O which can serve many connections with just a few threads. You can define a number of "select tasks", each serving a java.nio selector. Connections are evenly distributed over them. To avoid thread context switching, the number of select tasks may be reduced to 1 by setting attribute "number-selector-tasks" to "1".

If you have long-running connections (you don't connect/disconnect often), you may further use direct buffers by setting attribute "use-direct-buffers" to "true".
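As a configuration sketch combining both attributes named above (the swiftlet element layout is an assumption):

```xml
<swiftlet name="sys$net"
          number-selector-tasks="1"
          use-direct-buffers="true"/>
```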

Related links: Network NIO Swiftlet

Threadpool Settings

Throughput-related thread pools are "jms.connection" and "jms.session".

Pool "jms.connection" is used for batching and outbound writes from router to client. Attribute "max-threads" may be reduced to "1" to avoid thread context switching and to increase the chance of getting more content into the batches. However, with a single thread, a write on a slow network connection would slow down the whole router. This is why the default is "5".

Pool "jms.session" does the whole JMS work. If you use persistent messages, it is important to get as much work as possible done in parallel so that the Store Swiftlet's log manager can write as many log records as possible in one iteration. Therefore, attribute "max-threads" should be set to "100" (the default since 8.1.0).

If you only use non-persistent messages, the log manager is not involved, and you should instead strive to reduce thread context switching. In that case, set "max-threads" to "1" and you will get the highest throughput.
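The pool settings above as a configuration sketch (element layout is an assumption; values shown are for the persistent-message case):

```xml
<threadpool name="jms.session" max-threads="100"/>
<threadpool name="jms.connection" max-threads="5"/>
```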

Related links: JMS Swiftlet Threadpool Section , Threadpool Swiftlet , Performance Profile

Duplicate Message Detection

SwiftMQ HA Router and SwiftMQ Universal Router have inbound duplicate message detection enabled by default on all destinations. For the SwiftMQ Universal Router in particular, this only covers the case where a router is shut down and restarted (e.g. for a version upgrade) and reconnecting JMS clients may send a message twice. If you don't use this functionality or can afford to get duplicates, you may switch duplicate message detection off for these or all destinations. This can be done by setting attribute "duplicate-detection-enabled" of the queue/queue controller to "false" (default is "true").

Outbound duplicate message detection takes place on the client side and is enabled by default. To disable it, set attribute "duplicate-message-detection" of the connection factory to "false" (default is "true").
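Both switches as a configuration sketch (queue and factory names are assumptions):

```xml
<!-- Router side: per queue or queue controller -->
<queue name="testqueue" duplicate-detection-enabled="false"/>

<!-- Client side: per connection factory -->
<connection-factory name="ConnectionFactory" duplicate-message-detection="false"/>
```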

Related links: Connection Factories , JMS Swiftlet configuration , Queue Manager configuration

Message Expiration

If you don't send your messages with a time-to-live, you don't use message expiration and should switch it off. If you do use message expiration, you may consider converting the standard cleanup into a job-based cleanup. See the links below.

Related links: Message Expiration , Queue Manager configuration

Multiple Queue Consumers

If you use multiple concurrent consumers without selectors per queue, you may consider using a Clustered Queue instead.

Related links: JMS Usage / Multiple Queue Consumer vs Local Clustering

Selectors

You should always strive to avoid selectors. If you have a queue and many consumers on it, each with a different selector, each message has to be evaluated against each selector, which can significantly reduce performance. Multiple queue consumers with selectors can be converted into single consumers, each with its own distinct queue. Instead of defining selectors, you then decide at the producer side which queue is used for sending.
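Producer-side routing can be as simple as mapping a message attribute to a queue name. The following is a hypothetical sketch; the property values and queue names are assumptions and replace a consumer-side selector such as "type = 'express'":

```java
public class QueueRouter {
    // Map an order type (formerly a selector criterion) to a dedicated queue,
    // so each consumer reads its own queue without any selector evaluation.
    public static String queueForType(String type) {
        switch (type) {
            case "express": return "orders-express";
            case "bulk":    return "orders-bulk";
            default:        return "orders-standard";
        }
    }
}
```

The producer then calls `queueForType(...)` once per message and sends to the returned queue, moving the selection cost from every consumer's selector evaluation to a single lookup at send time.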