Apache ActiveMQ 5.15.12 performance improvement on JDBC persistence adapter

Some weeks ago, I identified a performance issue in ActiveMQ (affecting releases up to 5.15.11) when using PostgreSQL as the JDBC persistence adapter.

The JDBC persistence adapter is an alternative to KahaDB and is configured in activemq.xml as follows:

 <broker ...>
   ...
   <persistenceAdapter>
      <jdbcPersistenceAdapter dataSource="#postgres-ds"/>
   </persistenceAdapter>
   ...
 </broker>
 ...
 <bean id="postgres-ds" class="org.postgresql.ds.PGPoolingDataSource">
  <property name="url" value="jdbc:postgresql://192.168.99.100:5432/activemq"/>
  <property name="user" value="activemq"/>
  <property name="password" value="activemq"/>
  <property name="initialConnections" value="1"/>
  <property name="maxConnections" value="10"/>
 </bean>
 ...

The problem occurs when there are a lot of pending messages in a destination (queue or topic). When using the JDBC persistence adapter, ActiveMQ executes a query to prefetch messages for consumers. The query is the following:

SELECT ID, MSG FROM activemq_msgs WHERE CONTAINER=? AND ID < ? AND ID > ? AND XID IS NULL ORDER BY ID;

Unfortunately, this query has no limit, meaning it is executed over all messages in the destination. When the pending message count is huge, the query becomes extremely slow, hurting performance. In the worst case, no messages are consumed at all, completely blocking the broker.
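To check whether a destination carries a large backlog in the message store, you can inspect the message table directly (a diagnostic sketch; the table name assumes the default activemq_msgs):

 SELECT CONTAINER, COUNT(*) AS pending
 FROM activemq_msgs
 GROUP BY CONTAINER
 ORDER BY pending DESC;

Destinations with a very high pending count are the ones exposed to this slow prefetch query.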

Just to be clear, this problem is NOT present with KahaDB; it only affects some JDBC persistence adapters: PostgreSQL, MySQL, and HSQL are affected (Oracle is not affected, as a fix was already made there in the past).

The fix is pretty simple: we just add a query limit in the persistence adapter.

I did the fix here: https://github.com/apache/activemq/pull/480

By default, the query limit is set to maxPrefetchSize, which is Short.MAX_VALUE (32767). While that can still be considered large, it is always better than taking all pending messages into account.

However, it’s possible to define the query limit on the JDBC persistence adapter using the maxRows property. You also have control over the ActiveMQ table names, etc.:

 <broker ...>
   ...
   <persistenceAdapter>
     <jdbcPersistenceAdapter dataSource="#postgres-ds" lockKeepAlivePeriod="5000" maxRows="500">
       <statements>
         <statements messageTableName="activemq_msgs" durableSubAcksTableName="activemq_acks" lockTableName="activemq_lock"/>
       </statements>
       <locker>
         <lease-database-locker lockAcquireSleepInterval="10000"/>
       </locker>
     </jdbcPersistenceAdapter>
   </persistenceAdapter>
   ...
 </broker>

Here, maxRows="500" means that the query now becomes (in the case of PostgreSQL):

SELECT ID, MSG FROM activemq_msgs WHERE CONTAINER=? AND ID < ? AND ID > ? AND XID IS NULL ORDER BY ID LIMIT 500;

A good practice is to set maxRows to the same size as the consumer prefetch.
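For instance, a consumer prefetch of 500 (matching maxRows="500") can be set client-side on the connection factory URI using ActiveMQ's prefetch policy option (the broker address is illustrative):

 tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=500

With the two values aligned, each prefetch query fetches roughly one consumer batch rather than an arbitrary slice of the backlog.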

This fix is very important if you use PostgreSQL, MySQL, or HSQL as the JDBC persistence adapter: it will dramatically improve consumer performance.

I’m working on the ActiveMQ 5.15.12 release (it should be available very soon); I strongly advise updating to 5.15.12 to get this fix.


About the Author: jbonofre

ASF Member, PMC for Apache Karaf, PMC for Apache ServiceMix, PMC for Apache Archiva, PMC for Apache Felix, PMC for Apache Camel, PMC for Apache Syncope, PMC for Apache Beam, PMC for Apache CarbonData, PMC for Apache Bahir, PMC for Apache Brooklyn, PMC for Apache Falcon, PMC for Apache Guacamole, PMC for Apache Lens, Committer for Apache ActiveMQ, and much more! Twitter: jbonofre IRC: jbonofre on #servicemix, #karaf, #camel, #cxf on Freenode