Monitoring and alerting with Apache Karaf Decanter

July 28, 2015 Posted by jbonofre

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list.

Today, the first Apache Karaf Decanter release, 1.0.0, is now under vote.

It’s a good time to do a presentation 😉

Overview

Apache Karaf Decanter is a complete monitoring and alerting solution for Karaf and the applications running on it.

It’s very flexible, providing ready to use features, and also very easy to extend.

The Decanter 1.0.0 release works with any Karaf version, and can also be used to monitor applications outside of Karaf.

Decanter provides collectors, appenders, and SLA alerters.

Collectors

Decanter Collectors are responsible for harvesting the monitoring data.

Basically, a collector harvests the data, creates an OSGi EventAdmin Event, and sends it to a decanter/collect/* topic.
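
To give an idea of the contract, here is a minimal sketch of a custom collector (the class, metric, and topic suffix names are hypothetical; only the decanter/collect/ prefix comes from Decanter):

import java.util.HashMap;
import java.util.Map;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

public class MyCustomCollector {

    // injected, for instance via Blueprint or Declarative Services
    private EventAdmin eventAdmin;

    public void collect() {
        Map<String, Object> data = new HashMap<String, Object>();
        data.put("type", "mystuff");  // hypothetical source identifier
        data.put("myMetric", 42);     // hypothetical harvested value
        // post the harvested data on a decanter/collect/* topic
        eventAdmin.postEvent(new Event("decanter/collect/mystuff", data));
    }

    public void setEventAdmin(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }
}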

A Collector can be:

  • Event Driven, meaning that it will automatically react to an internal event
  • Polled, meaning that it’s periodically executed by the Decanter Scheduler

You can install multiple Decanter Collectors at the same time. In the 1.0.0 release, Decanter provides the following collectors:

  • log is an event-driven collector. It’s actually a Pax Logging PaxAppender that listens for any log message and sends the log details to the EventAdmin topic.
  • jmx is a polled collector. Periodically, the Decanter Scheduler executes this collector. It retrieves all attributes of all MBeans in the MBeanServer, and sends the JMX metrics to the EventAdmin topic.
  • camel (jmx) is a specific JMX collector configuration that retrieves the metrics only for the Camel route MBeans.
  • activemq (jmx) is a specific JMX collector configuration that retrieves the metrics only for the ActiveMQ MBeans.
  • camel-tracer is a Camel Tracer TraceEventHandler. In your Camel route definition, you can set this trace event handler on the default Camel tracer. Thanks to that, all tracing details (from URI, to URI, exchange with headers, body, etc.) will be sent to the EventAdmin topic.

Appenders

The Decanter Appenders receive the data harvested by the collectors. They consume the OSGi EventAdmin Events from the decanter/collect/* topics.

They are responsible for storing the monitoring data into a backend.
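
Symmetrically, a minimal sketch of a custom appender: it’s basically an OSGi EventHandler service registered with the event.topics=decanter/collect/* property (the store() call is hypothetical):

import org.osgi.service.event.Event;
import org.osgi.service.event.EventHandler;

public class MyCustomAppender implements EventHandler {

    // registered as an OSGi service with the property event.topics=decanter/collect/*
    public void handleEvent(Event event) {
        for (String name : event.getPropertyNames()) {
            // store each harvested property in your backend (hypothetical call)
            store(name, event.getProperty(name));
        }
    }

    private void store(String name, Object value) {
        // e.g. write to a database, a file, a remote service, ...
    }
}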

You can install multiple Decanter Appenders at the same time. In the 1.0.0 release, Decanter provides the following appenders:

  • log creates a log message with the monitoring data
  • elasticsearch stores the monitoring data into an Elasticsearch instance
  • jdbc stores the monitoring data into a database
  • jms sends the monitoring data to a JMS broker
  • camel sends the monitoring data to a Camel route

SLA and alerters

Decanter also provides an alerting system for when some data doesn’t meet an SLA.

For instance, you can define the maximum acceptable number of threads running in Karaf. If the current number of threads is over the limit, Decanter calls alerters.

Decanter Alerters are a special kind of appenders, consuming events from the OSGi EventAdmin decanter/alert/* topics.

As for the appenders, you can have multiple alerters active at the same time. The Decanter 1.0.0 release provides the following alerters:

  • log to create a log message for each alert
  • e-mail to send an e-mail for each alert
  • camel to execute a Camel route for each alert

Let’s see Decanter in action to get details on how to install and use it!

Quick start

Decanter is pretty easy to install and provides turnkey functionality.

The first thing to do is to register the Decanter features repository in the Karaf instance:

karaf@root()> feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features

NB: for the next Karaf releases, I will add the Decanter features repository in etc/org.apache.karaf.features.repos.cfg, making it possible to register the Decanter features simply using feature:repo-add decanter 1.0.0.

We now have the Decanter features available:

karaf@root()> feature:list |grep -i decanter
decanter-common                 | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter API                                
decanter-simple-scheduler       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Simple Scheduler                   
decanter-collector-log          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Messages Collector             
decanter-collector-jmx          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMX Collector                      
decanter-collector-camel        | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Collector                    
decanter-collector-activemq     | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter ActiveMQ Collector                 
decanter-collector-camel-tracer | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Tracer Collector             
decanter-collector-system       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter OS Collector                       
decanter-appender-log           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Appender                       
decanter-appender-elasticsearch | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Elasticsearch Appender             
decanter-appender-jdbc          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JDBC Appender                      
decanter-appender-jms           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMS Appender                       
decanter-appender-camel         | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Appender                     
decanter-sla                    | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA support                        
decanter-sla-log                | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA log alerter                    
decanter-sla-email              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA email alerter                  
decanter-sla-camel              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA Camel alerter                  
elasticsearch                   | 1.6.0            |           | karaf-decanter-1.0.0     | Embedded Elasticsearch node                       
kibana                          | 3.1.1            |           | karaf-decanter-1.0.0     | Embedded Kibana dashboard

For a quick start, we will use the embedded Elasticsearch to store the monitoring data. Decanter provides a ready-to-use elasticsearch feature, starting an embedded elasticsearch node:

karaf@root()> feature:install elasticsearch

The elasticsearch feature installs the elasticsearch configuration: etc/elasticsearch.yml.

We now have a ready to use elasticsearch node, where we will store the monitoring data.

Decanter also provides a kibana feature, providing a ready to use set of kibana dashboards:

karaf@root()> feature:install kibana 


We can now install the Decanter Elasticsearch appender: this appender will get the data harvested by the collectors, and store it in elasticsearch:


karaf@root()> feature:install decanter-appender-elasticsearch

The decanter-appender-elasticsearch feature also installs the etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file. You can configure the location of the Elasticsearch node there. By default, it uses a local elasticsearch node, in particular the embedded one that we installed with the elasticsearch feature.

The etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file contains hostname, port and clusterName of the elasticsearch instance to use:

################################################
# Decanter Elasticsearch Appender Configuration
################################################

# Hostname of the elasticsearch instance
host=localhost
# Port number of the elasticsearch instance
port=9300
# Name of the elasticsearch cluster
clusterName=elasticsearch

Now, our Decanter appender and elasticsearch node are ready.

It's now time to install some collectors to harvest the data.

Karaf monitoring

First, we install the log collector:

karaf@root()> feature:install decanter-collector-log 

This collector is event-driven: it automatically listens for log events and sends them to the EventAdmin collect topic.

We install a second collector: the JMX collector.

karaf@root()> feature:install decanter-collector-jmx

The JMX collector is a polled collector. So, it also installs and starts the Decanter Scheduler.

You can define the execution period of the scheduler in the etc/org.apache.karaf.decanter.scheduler.simple.cfg configuration file. By default, the Decanter Scheduler calls the polled collectors every 5 seconds.
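
For instance, to poll every 30 seconds instead, the configuration could look like this (a sketch; I assume a period property expressed in milliseconds here, check the default file for the exact property name):

# etc/org.apache.karaf.decanter.scheduler.simple.cfg
# polling period of the Decanter Scheduler (assumed to be in milliseconds)
period=30000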

The JMX collector is able to retrieve all metrics (attributes) from multiple MBeanServers.

By default, it uses the etc/org.apache.karaf.decanter.collector.jmx-local.cfg configuration file, which polls the local MBeanServer.

You can create new configuration files (for instance etc/org.apache.karaf.decanter.collector.jmx-mystuff.cfg configuration file), to poll other remote or local MBeanServers.

The etc/org.apache.karaf.decanter.collector.jmx-*.cfg configuration file contains:

type=jmx-mystuff
url=service:jmx:rmi:///jndi/rmi://hostname:1099/karaf-root
username=karaf
password=karaf
object.name=*.*:*

The type property is a free field allowing you to identify the source of the metrics.

The url property allows you to define the JMX URL. You can also use the local keyword to poll the local MBeanServer.
The username and password properties allow you to define the username and password used to connect to the MBeanServer.

The object.name property is optional. By default, the collector harvests all the MBeans in the server. But you can filter to harvest only some MBeans (for instance org.apache.camel:context=*,type=routes,name=* to harvest only the Camel routes metrics).

Now, we can go to Decanter Kibana to see the dashboards using the harvested data.

You can access Decanter Kibana at http://localhost:8181/kibana.

You have the Decanter Kibana welcome page:

Decanter Kibana

Decanter provides ready-to-use dashboards. Let’s see the Karaf dashboard.

Decanter Kibana Karaf 1

These histograms use the metrics harvested by the JMX collector.

You can also see the log details harvested by the log collector:

Decanter Karaf 2

As Kibana uses Lucene, you can extract exactly the data that you need using filtering or queries.

You can also define the time range to get the metrics and logs.

For instance, you can create the following query to filter only the messages coming from Elasticsearch:

loggerName:org.elasticsearch*

Camel monitoring and tracing

We can also use Decanter to monitor the Camel routes deployed in Karaf.

For instance, we add Camel in our Karaf instance:

karaf@root()> feature:repo-add camel 2.13.2
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.13.2/xml/features
karaf@root()> feature:install camel-blueprint

In the deploy folder, we create the following very simple route (using a route.xml file):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana, we can go to the Camel dashboard:

Decanter Kibana Camel 1

We can see the histograms here, using the JMX metrics retrieved from the Camel MBeans (in particular, for our route, we can see the completed and failed exchanges, the last processing time, etc.).

You can also see the log messages related to Camel.

Another feature provided by Decanter is a Camel Tracer collector: you can enable the Decanter Camel Tracer to store all the exchange states in the backend.

For that, we install the Decanter Camel Tracer feature:

karaf@root()> feature:install decanter-collector-camel-tracer

We update our route.xml in the deploy folder like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="eventAdmin" interface="org.osgi.service.event.EventAdmin"/>

    <bean id="traceHandler" class="org.apache.karaf.decanter.collector.camel.DecanterTraceEventHandler">
        <property name="eventAdmin" ref="eventAdmin"/>
    </bean>

    <bean id="tracer" class="org.apache.camel.processor.interceptor.Tracer">
        <property name="traceHandler" ref="traceHandler"/>
        <property name="enabled" value="true"/>
        <property name="traceOutExchanges" value="true"/>
        <property name="logLevel" value="OFF"/>
    </bean>

    <camelContext trace="true" xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana Camel dashboard, you can see the details in the tracer panel:

Decanter Kibana Camel 2

Decanter Kibana also provides a ready to use ActiveMQ dashboard, using the JMX metrics retrieved from an ActiveMQ broker.

SLA and alerting

Another Decanter feature is the SLA (Service Level Agreement) checking.

The purpose is to check whether harvested data validates a check condition. If not, an alert is created and sent to the SLA alerters.

We want to send the alerts to two alerters:

  • log to create a log message for each alert (warn log level for serious alerts, error log level for critical alerts)
  • camel to call a Camel route for each alert.

First, we install the decanter-sla-log feature:

karaf@root()> feature:install decanter-sla-log

The SLA checker uses the etc/org.apache.karaf.decanter.sla.checker.cfg configuration file.

Here, we want to throw an alert when the number of threads in Karaf is greater than 60. So, in the checker configuration file, we set:

ThreadCount.error=range:[0,60]

The syntax in this file is:

attribute.level=check

where:

  • attribute is the name of the attribute in the harvested data (coming from the collectors).
  • level is the alert level. The two possible values are: warn or error.
  • check is the check expression.

The check expression can be:

  • range for numeric attributes, like range:[x,y]. The alert is thrown if the attribute is out of the range.
  • equal for numeric attributes, like equal:x. The alert is thrown if the attribute is not equal to the value.
  • notequal for numeric attributes, like notequal:x. The alert is thrown if the attribute is equal to the value.
  • match for String attributes, like match:regex. The alert is thrown if the attribute doesn't match the regex.
  • notmatch for String attributes, like notmatch:regex. The alert is thrown if the attribute matches the regex.
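
The equal, notequal, and notmatch checks are not used later in this post, so here are a few hypothetical examples (the attribute names come from the harvested data shown in this post):

# alert if DaemonThreadCount is not equal to 198
DaemonThreadCount.warn=equal:198
# alert if PeakThreadCount is equal to 0
PeakThreadCount.error=notequal:0
# alert if loggerName matches the org.elasticsearch.* regex
loggerName.warn=notmatch:org.elasticsearch.*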

So, in our case, if the number of threads is greater than 60 (which is probably the case ;)), we can see the following messages in the log:

2015-07-28 22:17:11,950 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:17:11,951 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:5639 | alertLevel:error | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:22000000000 | PeakThreadCount:225 | AllThreadIds:[J@6d9ad2c5 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:22911917003 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:221 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/error | 

Let's now extend the range, add a new check on the thread, and add a new check to throw alerts when we have errors in the log:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,300]
loggerLevel.error=match:ERROR

Now, we want to call a Camel route to deal with the alerts.

We create the following Camel route, using the deploy/alert.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
                <route id="alerter">
                        <from uri="direct-vm:decanter-alert"/>
                        <to uri="log:alert"/>
                </route>
        </camelContext>

</blueprint>

Now, we can install the decanter-sla-camel feature:

karaf@root()> feature:install decanter-sla-camel

This feature also installs an etc/org.apache.karaf.decanter.sla.camel.cfg configuration file. In this file, you can define the Camel endpoint URI where you want to send the alerts:

alert.destination.uri=direct-vm:decanter-alert

Now, let's decrease the thread range in etc/org.apache.karaf.decanter.sla.checker.cfg configuration file to throw some alerts:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,60]
loggerLevel.error=match:ERROR

Now, in the log, we can see the alerts.

From the SLA log alerter:

2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:6234 | alertLevel:warn | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:193150000000 | PeakThreadCount:225 | AllThreadIds:[J@28f0ef87 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:201484424892 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:222 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/warn | 

but also from the SLA Camel alerter:

2015-07-28 22:39:15,293 | INFO  | Thread-41        | alert                            | 114 - org.apache.camel.camel-core - 2.13.2 | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {hostName=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root, alertPattern=range:[0,60], ThreadAllocatedMemorySupported=true, ThreadContentionMonitoringEnabled=false, TotalStartedThreadCount=6236, alertLevel=warn, CurrentThreadCpuTimeSupported=true, CurrentThreadUserTime=193940000000, PeakThreadCount=225, AllThreadIds=[J@408db39f, type=jmx-local, ThreadAllocatedMemoryEnabled=true, CurrentThreadCpuTime=202296849879, ObjectName=java.lang:type=Threading, ThreadContentionMonitoringSupported=true, ThreadCpuTimeSupported=true, ThreadCount=222, event.topics=decanter/alert/warn, ThreadCpuTimeEnabled=true, ObjectMonitorUsageSupported=true, SynchronizerUsageSupported=true, alertAttribute=ThreadCount, DaemonThreadCount=198}]

Decanter also provides the SLA e-mail alerter to send the alerts by e-mail.

Now, you can play with the SLA checker, and add the checks on the attributes that you need. The Decanter Kibana dashboards help a lot there: in the "Event Monitoring" table, you can see all raw harvested data, allowing you to find the attributes.

What's next

It's just the first Decanter release, but I think it's an interesting one.

Now, we are in the process of adding:

  • a new Decanter CXF interceptor collector; thanks to this collector, you will be able to send details about the requests/responses on CXF endpoints (SOAP request, SOAP response, REST message, etc.)
  • a new Decanter Redis appender, to send the harvested data to Redis
  • a new Decanter Cassandra appender, to send the harvested data to Cassandra
  • a Decanter WebConsole, allowing you to easily manage the SLA checks
  • improvement of the SLA support with "recovery" support: send only one alert when the check fails, and another alert when the value "recovers"

Anyway, if you have ideas and want to see new features in Decanter, please let us know.

I hope you like Decanter and find this new Karaf project interesting!

Apache Karaf Christmas gifts: docker.io, profiles, and decanter

December 15, 2014 Posted by jbonofre

We are heading to Christmas time, and the Karaf team wanted to prepare some gifts for you 😉

Of course, we are working hard in the preparation of the new Karaf releases. A bunch of bug fixes and improvements will be available in the coming releases: Karaf 2.4.1, Karaf 3.0.3, and Karaf 4.0.0.M2.

Some sub-project releases are also in preparation, especially Cellar. We completely refactored Cellar internals, to provide a more reliable, predictable, and stable behavior. New sync policies are available, new properties, new commands, and also interesting new features like HTTP session replication, or HTTP load balancing. I will prepare a blog about this very soon.

But, we’re also preparing brand-new features.

Docker.io

I heard some people saying: “why do I need Karaf when I have docker.io ?”.

Honestly, I don’t understand this, as the purpose is not the same: actually, Karaf on docker.io provides great value.

First, docker.io concepts are not new. They are more or less new on Linux, but the same kind of features has existed for a long time on other systems:

  • zones on Solaris
  • jail on FreeBSD
  • xen on Linux, in the past

So, there is nothing revolutionary in docker.io; however, it’s a very convenient way to host multiple images/pseudo-systems on the same machine.

However, docker.io (like the other systems) is focused on the OS: it doesn’t cover the application container by itself. For that, you have to prepare an image with the OS plus the application container. For instance, if you want to deploy your war file, you have to bootstrap a docker.io image with the OS and Tomcat (or Karaf ;)).

Moreover, remember the cool features provided by Karaf: ConfigAdmin and dynamic configuration, hotdeployment, features, etc.

You want to deploy your Camel routes, your ActiveMQ broker, your CXF webservices, your application: just use the docker.io image providing a Karaf instance!

And it’s what the Karaf docker.io feature provides. Actually, it provides two things:

  • a set of Karaf docker.io images ready to use: ubuntu/centos images with ready-to-use Karaf instances (using different combinations)
  • a set of shell commands and Karaf commands to easily bootstrap the images from a Karaf instance. It’s actually a good alternative to the Karaf child instances (which are only local to the machine).

Basically, docker.io doesn’t replace Karaf. However, Karaf on docker.io provides a very flexible infrastructure, allowing you to easily bootstrap Karaf instances. Associated with Cellar, you can bootstrap a Karaf cluster very easily as well.

I will prepare the donation and I will blog about the docker.io feature very soon. Stay tuned !!!

Karaf Profiles

A new feature comes in Karaf 4: the Karaf profiles. The purpose is to apply a ready to use set of configurations and provisioning to a Karaf instance.

Thanks to that you can prepare a complete profile containing your configuration and your application (features) and apply multiple profiles to easily create a ready-to-go Karaf instance.

It’s a great complement to the Karaf docker.io feature: the docker.io feature bootstraps the Karaf image, on which you can apply your profiles, all in a row.

Some profiles description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201412.mbox/%3CCAA66TpodJWHVpOqDz2j1QfkPchhBepK_Mwdx0orz7dEVaw8tPQ%40mail.gmail.com%3E.

I’m working on the storage of profiles on Karaf Cave, the application of profiles on running/existing Karaf instances, support of cluster profiles in Cellar, etc.

Again, I will create a specific blog post about profiles soon. Stay tuned again !! 🙂

Karaf Decanter

As a fully enterprise-ready container, Karaf has to provide monitoring and management features. We already provide a bunch of metrics via JMX (we have multiple MBeans for Karaf, Camel, ActiveMQ, CXF, etc).

However, we should provide:

  • storage of metrics and messages to be able to have an activity timeline
  • SLA definition of the metrics and messages, raising alerts when some metrics are not in the expected value range or when the messages contain a pattern
  • dashboard to configure the SLA, display messages, and graph the metrics

As always in Karaf, it should be very simple to install this kind of feature, with an integration of the supported third parties.

That’s why we started to work on Karaf Decanter, a complete and flexible monitoring solution for Karaf and the applications hosted by Karaf (Camel, ActiveMQ, CXF, etc).

The Decanter proposal and description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201410.mbox/%3C543D3D62.6050608%40nanthrax.net%3E.

The current codebase is also available: https://github.com/jbonofre/karaf-decanter.

I’m preparing the donation (some cleansing/polishing in progress).

Again, I will blog about Karaf Decanter asap. Stay tuned again again !! 🙂

Conclusion

You can see that, as always, the Karaf team is committed and dedicated to providing very convenient and flexible features. Lots of those features come from your ideas, discussions, and proposals. So, keep on discussing with us, we love our users 😉

We hope you will enjoy those new features. We will document and blog about these Christmas gifts soon.

Enjoy Karaf, and Happy Christmas !

Apache JMeter to test Apache ActiveMQ on CI with Maven/Jenkins

August 27, 2014 Posted by jbonofre

Apache JMeter is a great tool for testing, especially performance testing.
It provides a lot of samplers that you can use to test your web services, web applications, etc.

It also includes a couple of samplers for JMS that we can use with ActiveMQ.

The source code of this blog post is https://github.com/jbonofre/blog-jmeter.

Preparing JMeter for ActiveMQ

For this article, I downloaded JMeter 2.10 from http://jmeter.apache.org.

We uncompress jmeter in a folder:

$ tar zxvf apache-jmeter-2.10.tgz

We are going to create a test plan for ActiveMQ. After downloading ActiveMQ 5.9.0 from http://activemq.apache.org, we install and start an ActiveMQ broker on the machine.

$ tar zxvf apache-activemq-5.9.0-bin.tar.gz
$ cd apache-activemq-5.9.0/bin
$ ./activemq console
...
 INFO | Apache ActiveMQ 5.9.0 (localhost, ID:latitude-45782-1409139630277-0:1) started

In order to use ActiveMQ with JMeter, we have to copy the activemq-all-5.9.0.jar file provided in the ActiveMQ distribution into the JMeter lib folder:

$ cp apache-activemq-5.9.0/activemq-all-5.9.0.jar apache-jmeter-2.10/lib/

We can now start JMeter and create our ActiveMQ test plan:

$ cd apache-jmeter-2.10/bin
$ ./jmeter.sh

In the default test plan, we add a thread group to simulate 5 JMS clients that will perform the samplers 10 times:

jmeter1

In this thread group, we add a JMS Publisher sampler that will produce a message in ActiveMQ:

jmeter2

We can note the ActiveMQ configuration:

  • the sampler uses the ActiveMQ JNDI initial context factory (org.apache.activemq.jndi.ActiveMQInitialContextFactory)
  • the Provider URL is the ActiveMQ connection URL (tcp://localhost:61616 in my case). You can use here any kind of ActiveMQ URL, for instance failover:(tcp://host1:61616,tcp://host2:61616).
  • the connection factory is simply the default one provided by ActiveMQ: ConnectionFactory.
  • the destination is the name of the JMS queue where we want to produce the message, prefixed with dynamicQueues: dynamicQueues/MyQueue.
  • by default, ActiveMQ 5.9.0 uses the authorization plugin. So, the client has to use authentication to be able to produce a message. The default ActiveMQ username is admin, and admin is the default password.
  • finally, we set the body of the message as static using the textarea: JMeter message ...
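
Under the hood, the sampler does roughly the equivalent of the following JMS code (a sketch using the settings above):

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JMeterLikeProducer {

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        properties.put(Context.PROVIDER_URL, "tcp://localhost:61616");
        Context context = new InitialContext(properties);
        // lookup of the default ActiveMQ connection factory and of the dynamic queue
        ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("ConnectionFactory");
        Destination destination = (Destination) context.lookup("dynamicQueues/MyQueue");
        // authentication with the default admin/admin credentials
        Connection connection = connectionFactory.createConnection("admin", "admin");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(destination);
        producer.send(session.createTextMessage("JMeter message ..."));
        connection.close();
    }
}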

Now, we save the plan in a file named activemq.jmx.

For a quick test, we can add a Graph Results listener to the thread group and run the plan:

jmeter3

We can check in the ActiveMQ console (pointing a browser on http://localhost:8161/admin) that we can see the queue MyQueue containing the messages sent by JMeter:

activemq1

activemq2

Our test plan is working: we have some metrics about the execution in the graph (it’s really fast on my laptop ;)).

This approach is great for easily implementing performance benchmarks and creating some load on ActiveMQ (to test some tuning and configuration, for instance).

It can make sense to do it in a continuous integration process. So, let’s see how we can run JMeter with Maven and integrate it in Jenkins.

Using the JMeter Maven plugin

We have two ways to call JMeter with Maven:

  • we can call a local JMeter instance using the exec-maven-plugin (see the sketch after this list). JMeter can be called in “batch mode” (without the GUI) using the following command:
    $ apache-jmeter-2.10/bin/jmeter.sh -n -t activemq.jmx -l activemq.jtl -j activemq.jmx.log
    

    We use the options:

    • -n to disable the GUI
    • -t to specify the location of the test plan file (.jmx)
    • -l to specify the location of the test plan execution results
    • -j to specify the location of the test plan execution log
  • we have a JMeter Maven plugin. It’s the one that I will use for this blog.
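
For the first option, a minimal exec-maven-plugin configuration could look like this (a sketch, assuming a local JMeter installation referenced by a jmeter.home property):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.3.2</version>
    <executions>
        <execution>
            <id>jmeter-test</id>
            <phase>verify</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>${jmeter.home}/bin/jmeter.sh</executable>
                <arguments>
                    <argument>-n</argument>
                    <argument>-t</argument>
                    <argument>${project.basedir}/src/test/jmeter/activemq.jmx</argument>
                    <argument>-l</argument>
                    <argument>${project.build.directory}/activemq.jtl</argument>
                    <argument>-j</argument>
                    <argument>${project.build.directory}/activemq.jmx.log</argument>
                </arguments>
            </configuration>
        </execution>
    </executions>
</plugin>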

The JMeter Maven plugin allows you to run a JMeter test plan directly from Maven. It doesn’t require a local JMeter instance: the plugin will download and bootstrap a JMeter instance.

The plugin will look for JMeter JMX files in the src/test/jmeter folder by default.
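
So we just put our saved test plan there:

$ mkdir -p src/test/jmeter
$ cp activemq.jmx src/test/jmeter/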

We create a POM to run JMeter:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>net.nanthrax.blog</groupId>
    <artifactId>jmeter</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>com.lazerycode.jmeter</groupId>
                <artifactId>jmeter-maven-plugin</artifactId>
                <version>1.9.1</version>
                <executions>
                    <execution>
                        <id>jmeter-test</id>
                        <phase>verify</phase>
                        <goals>
                            <goal>jmeter</goal>
                        </goals>
                    </execution>
                </executions>
                <dependencies>
                    <dependency>
                        <groupId>org.apache.activemq</groupId>
                        <artifactId>activemq-all</artifactId>
                        <version>5.9.0</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>

</project>

We can now run the JMeter test plan:

$ mvn clean verify
...
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building jmeter 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- jmeter-maven-plugin:1.9.1:jmeter (jmeter-test) @ jmeter ---
[INFO]
[INFO] -------------------------------------------------------
[INFO]  P E R F O R M A N C E    T E S T S
[INFO] -------------------------------------------------------
[INFO]
[INFO]
[info]
[debug] JMeter is called with the following command line arguments: -n -t /home/jbonofre/Workspace/jmeter/src/test/jmeter/activemq.jmx -l /home/jbonofre/Workspace/jmeter/target/jmeter/results/20140827-activemq.jtl -d /home/jbonofre/Workspace/jmeter/target/jmeter -j /home/jbonofre/Workspace/jmeter/target/jmeter/logs/activemq.jmx.log
[info] Executing test: activemq.jmx
[info] Completed Test: activemq.jmx
[INFO]
[INFO] Test Results:
[INFO]
[INFO] Tests Run: 1, Failures: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.077s
[INFO] Finished at: Wed Aug 27 14:58:09 CEST 2014
[INFO] Final Memory: 14M/303M
[INFO] ------------------------------------------------------------------------

We can see in the ActiveMQ console that the JMeter messages have been sent.

We are now ready to integrate this build in Jenkins:

jenkins1

jenkins2

We have now included the performance tests in our Jenkins CI.

I would advise executing the performance tests in a dedicated module or profile, and configuring the Jenkins job to execute once per week for instance, or linking it to a release.

So, we still have our development-oriented nightly builds, and we can periodically execute the performance tests, as well as for each release.

Hadoop CDC and processes notification with Apache Falcon, Apache ActiveMQ, and Apache Camel

March 19, 2014 Posted by jbonofre

Some weeks (months? ;)) ago, I started to work on Apache Falcon. First of all, I would like to thank all the Falcon guys: they are really awesome and do a great job (special thanks to Srikanth, Venkatesh, Swetha).

This blog post is a preparation for a set of “recipes documentation” that I will propose in Apache Falcon.

Falcon is in incubation at Apache. The purpose is to provide a data processing and management solution for Hadoop designed for data motion, coordination of data pipelines, lifecycle management, and data discovery. Falcon enables end consumers to quickly onboard their data and its associated processing and management tasks on Hadoop clusters.

An interesting feature provided by Falcon is notification of the activities in the Hadoop cluster “outside” of the cluster 😉
In this article, we will see how to get two kinds of notification in Camel routes “outside” of the Hadoop cluster:

  • a Camel route will be notified and triggered when a process is executed in the Hadoop cluster
  • a Camel route will be notified and triggered when an HDFS location changes (a first CDC feature)
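
To give an idea of where we are heading, such a route is little more than a JMS consumer on the broker used by Falcon. A minimal Camel blueprint sketch (the FALCON.ENTITY.TOPIC topic name and the use of the activemq-camel component are assumptions for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <!-- JMS component pointing to the broker used by Falcon -->
    <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
        <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="falcon-notifier">
            <!-- the topic name is an assumption for illustration -->
            <from uri="activemq:topic:FALCON.ENTITY.TOPIC"/>
            <to uri="log:falcon-notification"/>
        </route>
    </camelContext>

</blueprint>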

Requirements

If you already have your Hadoop cluster, or you know how to install/prepare it, you can skip this step.

In this section, I will create a “real fake” Hadoop cluster on one machine. It’s not really pseudo-distributed, as I will use multiple datanodes and tasktrackers, but all on one machine (of course, it doesn’t make sense, but it’s just for demo purposes ;)).

In addition to the common Hadoop components (HDFS namenode/datanodes and M/R jobtracker/tasktracker), Falcon requires Oozie (for scheduling) and ActiveMQ.

By default, Falcon embeds ActiveMQ, but for the demo (and to provide screenshots of the ActiveMQ WebConsole), I will use a standalone ActiveMQ instance.

Hadoop “fake” cluster preparation

For the demo, I will “fake” three machines.

I create a demo folder on my machine, and I uncompress hadoop-1.1.2-bin.tar.gz tarball in node1, node2, node3 folders:

$ mkdir demo
$ cd demo
$ tar zxvf ~/hadoop-1.1.2-bin.tar.gz
$ cp -r hadoop-1.1.2 node1
$ cp -r hadoop-1.1.2 node2
$ cp -r hadoop-1.1.2 node3
$ mkdir storage

I also create a storage folder where I will put the nodes’ files. This folder is just for convenience, as it’s easier to restart from scratch, just by deleting the storage folder content.

Node1 will host:

  • the HDFS namenode
  • a HDFS datanode
  • the M/R jobtracker
  • a M/R tasktracker

So, the node1/conf/core-site.xml file contains the location of the namenode:

<?xml version="1.0">
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/core-site.xml -->
<configuration>
  <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost</value>
  </property>
</configuration>

In the node1/conf/hdfs-site.xml file, we define the storage location for the namenode and the datanode (in the storage folder), and the default replication:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/jbonofre/demo/storage/node1/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/jbonofre/demo/storage/node1/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

Finally, in node1/conf/mapred-site.xml file, we define the location of the job tracker:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>

Node1 is now ready.

Node2 hosts a datanode and a tasktracker. As for node1, the node2/conf/core-site.xml file contains the location of the namenode:

<?xml version="1.0">
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node2 conf/core-site.xml -->
<configuration>
  <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost</value>
  </property>
</configuration>

The node2/conf/hdfs-site.xml file contains:

  • the storage location of the datanode
  • the network location of the namenode (from node1)
  • the port numbers used by the datanode (core, IPC, and HTTP in order to be able to start multiple datanodes on the same machine)
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsd"?>
<!-- node2 conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/jbonofre/demo/storage/node2/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:50110</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>localhost:50120</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50175</value>
  </property>
</configuration>

The node2/conf/mapred-site.xml file contains the network location of the jobtracker, and the HTTP port number used by the tasktracker (in order to be able to run multiple tasktracker on the same machine):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node2 conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>localhost:50160</value>
  </property>
</configuration>

Node3 is very similar to node2: it hosts a datanode and a tasktracker. So the configuration is very similar to node2 (just the storage location, and the datanode and tasktracker port numbers are different).

Here’s the node3/conf/core-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node3 conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
</configuration>

Here’s the node3/conf/hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node3 conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/jbonofre/demo/storage/node3/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:50210</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>localhost:50220</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50275</value>
  </property>
</configuration>

Here’s the node3/conf/mapred-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node3 conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>localhost:50260</value>
  </property>
</configuration>

Our “fake” cluster configuration is now ready.

We can format the namenode on node1:

$ cd node1/bin
$ ./hadoop namenode -format
14/03/06 17:26:38 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = vostro/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
14/03/06 17:26:39 INFO util.GSet: VM type       = 64-bit
14/03/06 17:26:39 INFO util.GSet: 2% max memory = 17.78 MB
14/03/06 17:26:39 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/03/06 17:26:39 INFO util.GSet: recommended=2097152, actual=2097152
14/03/06 17:26:39 INFO namenode.FSNamesystem: fsOwner=jbonofre
14/03/06 17:26:39 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/06 17:26:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/06 17:26:39 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/06 17:26:39 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/06 17:26:39 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/03/06 17:26:40 INFO common.Storage: Image file of size 114 saved in 0 seconds.
14/03/06 17:26:40 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/jbonofre/demo/storage/node1/namenode/current/edits
14/03/06 17:26:40 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/jbonofre/demo/storage/node1/namenode/current/edits
14/03/06 17:26:40 INFO common.Storage: Storage directory /home/jbonofre/demo/storage/node1/namenode has been successfully formatted.
14/03/06 17:26:40 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vostro/127.0.0.1
************************************************************/

We are now ready to start the namenode on node1:

$ cd node1/bin
$ ./hadoop namenode &

We start the datanode on node1:

$ cd node1/bin
$ ./hadoop datanode &

We start the jobtracker on node1:

$ cd node1/bin
$ ./hadoop jobtracker &

We start the tasktracker on node1:

$ cd node1/bin
$ ./hadoop tasktracker &

Node1 is fully started with the namenode, a datanode, the jobtracker, and a tasktracker.

We start a datanode and a tasktracker on node2:

$ cd node2/bin
$ ./hadoop datanode &
$ ./hadoop tasktracker &

And finally, we start a datanode and a tasktracker on node3:

$ cd node3/bin
$ ./hadoop datanode &
$ ./hadoop tasktracker &

We access the HDFS web console (http://localhost:50070) to verify that the namenode is able to see the 3 live datanodes:
hdfs1
We also access the MapReduce web console (http://localhost:50030) to verify that the jobtracker is able to see the 3 live tasktrackers:
mapred1

Oozie

Falcon delegates the scheduling of jobs (planning, re-execution, etc.) to Oozie.

Oozie is a workflow scheduler system to manage hadoop jobs, using Quartz internally.

Falcon uses a “custom” Oozie distribution: it adds some additional EL extensions on top of a “regular” Oozie.

Falcon provides a script to create the Falcon custom Oozie distribution: we provide the Hadoop and Oozie versions that we need.

We can clone Falcon sources from git and call the src/bin/package.sh with the Hadoop and Oozie target versions that we want:

$ git clone https://git-wip-us.apache.org/repos/asf/incubator-falcon falcon
$ cd falcon
$ src/bin/package.sh 1.1.2 4.0.0

The package.sh script creates target/oozie-4.0.0-distro.tar.gz in the Falcon sources folder.

In the demo folder, I uncompress oozie-4.0.0-distro.tar.gz tarball:

$ cp ~/oozie-4.0.0-distro.tar.gz .
$ tar zxvf oozie-4.0.0-distro.tar.gz

We now have an oozie-4.0.0-falcon folder.

Oozie requires a special configuration on the namenode (so on node1). We have to update the node1/conf/core-site.xml file to define the system user “proxied” by Oozie:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
  <property>
    <name>hadoop.proxyuser.jbonofre.hosts</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hadoop.proxyuser.jbonofre.groups</name>
    <value>localhost</value>
  </property>
</configuration>

NB: don’t forget to restart the namenode to include these changes.

Now, we can prepare the Oozie web application. Due to license restrictions, it’s up to you to add the ExtJS library for the Oozie web console. To enable it, we first create an oozie-4.0.0-falcon/libext folder and put the ext-2.2.zip archive in it:

$ cd oozie-4.0.0-falcon
$ mkdir libext
$ cd libext
$ wget "http://extjs.com/deploy/ext-2.2.zip"

We have to populate the libext folder with different additional jar files:

  • the Hadoop jar files:

    $ cp node1/hadoop-core-1.1.2.jar oozie-4.0.0-falcon/libext
    $ cp node1/hadoop-client-1.1.2.jar oozie-4.0.0-falcon/libext
    $ cp node1/hadoop-tools-1.1.2.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-beanutils-1.7.0.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-beanutils-core-1.8.0.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-codec-1.4.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-collections-3.2.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-configuration-1.6.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-digester-1.8.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-el-1.0.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-io-2.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-lang-2.4.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-logging-api-1.0.4.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-logging-1.1.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-math-2.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-net-3.1.jar oozie-4.0.0-falcon/libext
    
  • the Falcon Oozie extender:

    $ cp falcon-0.5-incubating-SNAPSHOT/oozie/libext/falcon-oozie-el-extension-0.5-incubating-SNAPSHOT.jar oozie-4.0.0-falcon/libext
    
  • the jar files required for hcatalog and pig, from oozie-sharelib-4.0.0-falcon.tar.gz:

    $ cd oozie-4.0.0-falcon
    $ tar zxvf oozie-sharelib-4.0.0-falcon.tar.gz
    $ cp share/lib/hcatalog/hcatalog-core-0.5.0-incubating.jar libext
    $ cp share/lib/hcatalog/hive-* libext
    $ cp share/lib/pig/hsqldb-1.8.0.7.jar libext
    $ cp share/lib/pig/jackson-* libext
    $ cp share/lib/hcatalog/libfb303-0.7.0.jar libext
    $ cp share/lib/hive/log4j-1.2.16.jar libext
    $ cp libtools/oro-2.0.8.jar libext
    $ cp share/lib/hcatalog/webhcat-java-client-0.5.0-incubating.jar libext
    $ cp share/lib/pig/xmlenc-0.52.jar libext
    $ cp share/lib/pig/guava-11.0.2.jar libext
    $ cp share/lib/hcatalog/oozie-* libext
    

We are now ready to setup Oozie.

First, we “assemble” the Oozie web application (war):

$ cd oozie-4.0.0-falcon/bin
$ ./oozie-setup.sh prepare-war
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

INFO: Adding extension: /home/jbonofre/demo/oozie-4.0.0-falcon/libext/ant-1.6.5.jar
...

New Oozie WAR file with added 'ExtJS library, JARs' at /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/webapps/oozie.war


INFO: Oozie is ready to be started

Now, we “upload” the Oozie shared libraries to our HDFS, including the falcon shared lib:

$ cd oozie-4.0.0-falcon/bin
$ ./oozie-setup.sh sharelib create -fs hdfs://localhost -locallib ../oozie-sharelib-4.0.0-falcon.tar.gz
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
the destination path for sharelib is: /user/jbonofre/share/lib

If we browse the HDFS, we can see the folders created by Oozie.
hdfs2
Finally, we create the Oozie database (where it stores the job definitions, etc.).

$ cd oozie-4.0.0-falcon/bin
$ ./oozie-setup.sh db create -run
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

Validate DB Connection
DONE
Check DB schema does not exist
DONE
Check OOZIE_SYS table does not exist
DONE
Create SQL schema
DONE
Create OOZIE_SYS table
DONE

Oozie DB has been created for Oozie version '4.0.0'


The SQL commands have been written to: /tmp/ooziedb-4527318150729236810.sql

The Oozie configuration is done; we can now start it:

$ cd oozie-4.0.0-falcon/bin
$ ./oozied.sh start

Setting OOZIE_HOME:          /home/jbonofre/demo/oozie-4.0.0-falcon
Setting OOZIE_CONFIG:        /home/jbonofre/demo/oozie-4.0.0-falcon/conf
Sourcing:                    /home/jbonofre/demo/oozie-4.0.0-falcon/conf/oozie-env.sh
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
Setting OOZIE_CONFIG_FILE:   oozie-site.xml
Setting OOZIE_DATA:          /home/jbonofre/demo/oozie-4.0.0-falcon/data
Setting OOZIE_LOG:           /home/jbonofre/demo/oozie-4.0.0-falcon/logs
Setting OOZIE_LOG4J_FILE:    oozie-log4j.properties
Setting OOZIE_LOG4J_RELOAD:  10
Setting OOZIE_HTTP_HOSTNAME: vostro.nanthrax.net
Setting OOZIE_HTTP_PORT:     11000
Setting OOZIE_ADMIN_PORT:     11001
Setting OOZIE_HTTPS_PORT:     11443
Setting OOZIE_BASE_URL:      http://vostro.nanthrax.net:11000/oozie
Setting CATALINA_BASE:       /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server
Setting OOZIE_HTTPS_KEYSTORE_FILE:     /home/jbonofre/.keystore
Setting OOZIE_HTTPS_KEYSTORE_PASS:     password
Setting CATALINA_OUT:        /home/jbonofre/demo/oozie-4.0.0-falcon/logs/catalina.out
Setting CATALINA_PID:        /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/temp/oozie.pid

Using   CATALINA_OPTS:        -Xmx1024m -Dderby.stream.error.file=/home/jbonofre/demo/oozie-4.0.0-falcon/logs/derby.log
Adding to CATALINA_OPTS:     -Doozie.home.dir=/home/jbonofre/demo/oozie-4.0.0-falcon -Doozie.config.dir=/home/jbonofre/demo/oozie-4.0.0-falcon/conf -Doozie.log.dir=/home/jbonofre/demo/oozie-4.0.0-falcon/logs -Doozie.data.dir=/home/jbonofre/demo/oozie-4.0.0-falcon/data -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=vostro.nanthrax.net -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://vostro.nanthrax.net:11000/oozie -Doozie.https.keystore.file=/home/jbonofre/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=

Using CATALINA_BASE:   /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server
Using CATALINA_HOME:   /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server
Using CATALINA_TMPDIR: /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/temp
Using JRE_HOME:        /opt/jdk/1.7.0_51
Using CLASSPATH:       /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/bin/bootstrap.jar
Using CATALINA_PID:    /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/temp/oozie.pid

We access the Oozie web console at http://localhost:11000/oozie/:
oozie1

ActiveMQ

By default, Falcon embeds ActiveMQ, so, generally speaking, you don’t have to install ActiveMQ. However, for the demo, I would like to show how to use an external and standalone ActiveMQ.

I uncompress the apache-activemq-5.7.0-bin.tar.gz tarball in the demo folder:

$ cd demo
$ tar zxvf ~/apache-activemq-5.7.0-bin.tar.gz

The default ActiveMQ configuration is fine; we can just start the broker on the default port (61616):

$ cd demo/apache-activemq-5.7.0/bin
$ ./activemq console

All the Falcon prerequisites are now in place.

Falcon installation

Falcon can be deployed:

  • standalone: it’s the “regular” deployment mode when you have only one hadoop cluster. It’s the deployment mode that I will use for this CDC demo.
  • distributed: it’s the deployment to use when you have multiple hadoop clusters, especially if you want to use the Falcon replication feature.

For the installation, we uncompress the falcon-0.5-incubating-SNAPSHOT-bin.tar.gz tarball in the demo folder:

$ cd demo
$ tar zxvf ~/falcon-0.5-incubating-SNAPSHOT-bin.tar.gz

Before starting Falcon, we disable the default embedded ActiveMQ broker in the conf/falcon-env.sh file:

# conf/falcon-env.sh
...
export FALCON_OPTS="-Dfalcon.embeddedmq=false"
...

We start the falcon server:

$ cd falcon-0.5-incubating-SNAPSHOT/bin
$ ./falcon-start 
Could not find installed hadoop and HADOOP_HOME is not set.
Using the default jars bundled in /home/jbonofre/demo/falcon-0.5-incubating-SNAPSHOT/hadooplibs/
/home/jbonofre/demo/falcon-0.5-incubating-SNAPSHOT/bin
falcon started using hadoop version:  Hadoop 1.1.2

The falcon server actually starts a Jetty container with Jersey to expose the Falcon REST API.

You can check if the falcon server started correctly using bin/falcon-status or bin/falcon:

$ bin/falcon-status
Falcon server is running (on http://localhost:15000/)
$ bin/falcon admin -status
Falcon server is running (on http://localhost:15000/)
$ bin/falcon admin -version
Falcon server build version: {"properties":[{"key":"Version","value":"0.5-incubating-SNAPSHOT-r5445e109bc7fbfea9295f3411a994485b65d1477"},{"key":"Mode","value":"embedded"}]}

Falcon usage: the entities

In Falcon, the configuration is defined by “entities”. Falcon supports three types of entities:

  • cluster entity defines the hadoop cluster (location of the namenode, location of the jobtracker), related falcon module (Oozie, ActiveMQ), and the location of the Falcon working directories (on HDFS)
  • feed entity defines a location on HDFS
  • process entity defines a hadoop job scheduled by Oozie

An entity is described using XML. You can do different actions on an entity:

  • Submit: register an entity in Falcon. A submitted entity is not scheduled; it is simply stored in the configuration store of Falcon.
  • List: provide the list of all entities registered in the configuration store of Falcon.
  • Dependency: provide the dependencies of an entity. For example, a feed would show the processes that depend on the feed and the clusters that it depends on.
  • Schedule: feeds or processes that are already submitted and present in the configuration store can be scheduled. Upon scheduling, the Falcon system wraps the required repeatable actions as a bundle of Oozie coordinators and executes them on the Oozie scheduler.
  • Suspend: this action is applicable only to a scheduled entity. It triggers a suspend on the Oozie bundle that was scheduled earlier through the schedule action. No further instances are executed on a suspended process/feed.
  • Resume: put a suspended process/feed back to active, which in turn resumes the applicable Oozie bundle.
  • Status: display the current status of an entity.
  • Definition: dump the entity definition from the configuration store.
  • Delete: remove an entity from the Falcon configuration store.
  • Update: the update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed. A feed update can cause a cascading update of all the processes already scheduled. The following set of actions is performed in Oozie to realize an update:
    • Suspend the previously scheduled Oozie coordinator. This prevents any new actions from being triggered.
    • Update the coordinator to set the end time to “now”
    • Resume the suspended coordinators
    • Schedule as per the new process/feed definition with the start time as “now”
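
These actions map directly to the Falcon client command line. For instance (a sketch, assuming a process named my-process has already been submitted):

$ bin/falcon entity -type process -name my-process -schedule
$ bin/falcon entity -type process -name my-process -status
$ bin/falcon entity -type process -name my-process -suspend
$ bin/falcon entity -type process -name my-process -resume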

Cluster

The cluster entity defines the configuration of the hadoop cluster and components used by Falcon.

We will store the entity descriptors in the entity folder:

$ mkdir entity

For the cluster, we create entity/local.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<cluster colo="local" description="Local cluster" name="local" xmlns="uri:falcon:cluster:0.1">
  <interfaces>
    <interface type="readonly" endpoint="hftp://localhost:50010" version="1.1.2"/>
    <interface type="write" endpoint="hdfs://localhost:8020" version="1.1.2"/>
    <interface type="execute" endpoint="localhost:8021" version="1.1.2"/>
    <interface type="workflow" endpoint="http://localhost:11000/oozie/" version="4.0.0"/>
    <interface type="messaging" endpoint="tcp://localhost:61616" version="5.7.0"/>
  </interfaces>
  <locations>
    <location name="staging" path="/falcon/staging"/>
    <location name="temp" path="/falcon/temp"/>
    <location name="working" path="/falcon/working"/>
  </locations>
  <properties></properties>
</cluster>

A cluster contains different interfaces and locations used by Falcon. A cluster is referenced by feed and process entities (using the cluster name). A cluster can’t be scheduled (it doesn’t make sense).

The colo specifies a kind of cluster grouping. It’s used in distributed deployment mode, so it’s not useful in our demo (as we have only one cluster).
The readonly interface specifies Hadoop’s HFTP protocol, only used in the case of feed replication between clusters (again, not used in our demo).
The write interface specifies the write access to HDFS, containing the fs.default.name value. Falcon uses this interface to write system data to HDFS, and feeds referencing this cluster are written to HDFS using this interface.
The execute interface specifies the location of the jobtracker, containing the mapred.job.tracker value. Falcon uses this interface to submit the processes as jobs to the jobtracker defined here.
The workflow interface specifies the interface for the workflow engine (the Oozie URL). Falcon uses this interface to schedule the processes referencing this cluster on the workflow engine defined here.
Optionally, you can have a registry interface (defining the Thrift URL) to specify the metadata catalog, such as Hive Metastore (or HCatalog). We don’t use it in our demo.
The messaging interface specifies the interface for sending feed availability messages. It’s the URL of the ActiveMQ broker.

A cluster has a list of locations with a name (working, temp, staging) and a path on HDFS. Falcon uses these locations for intermediate processing of entities on HDFS, and hence Falcon should have read/write/execute permission on these locations.

Optionally, a cluster may have a list of properties. It’s a list of key-value pairs used in Falcon and propagated to the workflow engine. For instance, you can specify the JMS broker connection factory:

<property name="brokerImplClass" value="org.apache.activemq.ActiveMQConnectionFactory" />

Now that we have the XML description, we can register our cluster in Falcon. We use the Falcon command-line client to submit our cluster definition:

$ cd falcon-0.5-incubating-SNAPSHOT
$ bin/falcon entity -submit -type cluster -file ~/demo/entity/local.xml
default/Submit successful (cluster) local

We can check that our local cluster is actually present in the Falcon configuration store:

$ bin/falcon entity -list -type cluster
(cluster) local(null)

We can see our cluster “local”, for now without any dependency (null).

If we take a look at HDFS, we can see that the falcon directory has been created:

$ cd node1
$ bin/hadoop fs -ls /
Found 3 items
drwxr-xr-x   - jbonofre supergroup          0 2014-03-08 07:48 /falcon
drwxr-xr-x   - jbonofre supergroup          0 2014-03-06 17:32 /tmp
drwxr-xr-x   - jbonofre supergroup          0 2014-03-06 18:05 /user

Feed

A feed entity is a location on the cluster. It also defines additional attributes like frequency, late-arrival handling, and retention policies. A feed can be scheduled, meaning that Falcon will create processes to deal with retention and replication on the cluster.

Like the other entities, a feed is described using XML. We create the entity/output.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<feed description="RandomProcess output feed" name="output" xmlns="uri:falcon:feed:0.1">
  <group>output</group>
 
  <frequency>minutes(1)</frequency>
  <timezone>UTC</timezone>
  <late-arrival cut-off="minutes(5)"/>

  <clusters>
    <cluster name="local">
       <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
       <retention limit="hours(10)" action="delete"/>
    </cluster>
  </clusters>

  <locations>
    <location type="data" path="/data/output"/>
  </locations>

  <ACL owner="jbonofre" group="supergroup" permission="0x644"/>

  <schema location="none" provider="none"/>
</feed>

The locations element defines the feed storage: paths on HDFS or table names for Hive. A location is defined on a cluster, identified by name. In our example, we use the “local” cluster that we submitted before.

The group element defines a list of comma-separated groups. A group is a logical grouping of feeds: a group is said to be available if all the feeds belonging to it are available. The frequency of all the feeds which belong to the same group must be the same.

The frequency element specifies the frequency at which this feed is generated (for instance, it can be generated every hour, every 5 minutes, daily, weekly, etc). Falcon uses this frequency to check whether the feed has changed or not (whether the size has changed). In our example, we define a frequency of every minute. Falcon creates a job in Oozie to monitor the feed.
Falcon can handle late arrival of input data and appropriately re-trigger processing for the affected instance. From the late-handling perspective, two configuration parameters are central: the late-arrival cut-off and the late-inputs section in the feed and process entity definitions. These configurations govern how and when the late processing happens. In the current (Oozie-based) implementation, late handling is simple and basic: Falcon looks at all dependent input feeds for a process and computes the maximum late cut-off period. It then uses a scheduled messaging framework, like the one available in Apache ActiveMQ, to schedule a message with that cut-off period. After the cut-off period, the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a late data file by Falcon’s “record-size” action. If any changes are detected, the workflow is rerun with the new set of feed data.

The retention element specifies how long the feed is retained on the cluster and the action to be taken on the feed after the expiration of the retention period. In our example, we delete the feed after a retention of 10 hours.

The validity of a feed on a cluster specifies the duration for which this feed is valid on that cluster (i.e., considered for scheduling by Falcon).

The ACL defines the permission on the feed (owner/group/permission).

The schema allows you to specify the “format” of the feed (for instance CSV). In our case, we don’t define any schema.

We can now submit the feed (register the feed) into Falcon:

$ cd falcon-0.5-incubating-SNAPSHOT
$ bin/falcon entity -submit -type feed -file ~/demo/entity/output.xml
default/Submit successful (feed) output
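
As we did for the cluster, we can check that the feed is registered, and ask Falcon for its dependencies (which should now show the link to the “local” cluster). A quick sketch:

$ bin/falcon entity -list -type feed
$ bin/falcon entity -dependency -type feed -name output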

Process

A process entity defines a job in the cluster.

Like the other entities, a process is described with XML (entity/process.xml):

<?xml version="1.0" encoding="UTF-8"?>
<process name="my-process" xmlns="uri:falcon:process:0.1">
    <clusters>
        <cluster name="local">
            <validity start="2013-11-15T00:05Z" end="2030-11-15T01:05Z"/>
        </cluster>
    </clusters>

    <parallel>1</parallel>
    <order>FIFO</order>
    <frequency>minutes(5)</frequency>
    <timezone>UTC</timezone>

    <inputs>
        <!-- In the workflow, the input paths will be available in a variable 'inpaths' -->
        <input name="inpaths" feed="input" start="now(0,-5)" end="now(0,-1)"/>
    </inputs>

    <outputs>
        <!-- In the workflow, the output path will be available in a variable 'outpath' -->
        <output name="outpath" feed="output" instance="now(0,0)"/>
    </outputs>

    <properties>
        <!-- In the workflow, these properties will be available with variable - key -->
        <property name="queueName" value="default"/>
        <!-- The schedule time available as a property in workflow -->
        <property name="time" value="${instanceTime()}"/>
    </properties>

    <workflow engine="oozie" path="/app/mr"/>

    <late-process policy="periodic" delay="minutes(1)">
       <late-input input="inpaths" workflow-path="/app/mr"/>
    </late-process>

</process>

The cluster element defines where the process will be executed. Each cluster has a validity period, defining the time window during which the job should run on that cluster. For the demo, we set a large validity period.

The parallel element defines how many instances of the process can run concurrently. We set a value of 1 here to ensure that only one instance of the process can run at a time.

The order element defines the order in which the ready instances are picked up. The possible values are FIFO (First In First Out), LIFO (Last In First Out), and ONLYLAST (Last Only). It’s not really used in our case.

The frequency element defines how frequently the process should run. In our case, minutes(5) means that the job will run every 5 minutes.

The inputs element defines the input data for the process. The process job will start executing only after the schedule time and when all the inputs are available. There can be 0 or more inputs, and each input maps to a feed. The path and frequency of input data are picked up from the feed definition. Each input should also define start and end instances in terms of EL expressions, and can optionally specify the specific partition of the input that the process requires. The components in the partition should be a subset of the partitions defined in the feed.
For each input, Falcon creates a property with the input name that contains the comma-separated list of input paths. This property can be used in process actions, like pig scripts and so on.

The outputs element defines the output data that is generated by the process. A process can define 0 or more outputs. Each output is mapped to a feed, and the output path is picked up from the feed definition. The output instance that should be generated is specified in terms of an EL expression.
For each output, Falcon creates a property with the output name that contains the path of the output data. This can be used in workflows to store data in that path.

The properties element contains key value pairs that are passed to the process. These properties are optional and can be used to parameterize the process.

The workflow element defines the workflow engine that should be used and the path to the workflow on HDFS. The workflow definition on HDFS contains the actual job that should run, and it should conform to the workflow specification of the engine specified. The libraries required by the workflow should be in a lib folder inside the workflow path.
The properties defined in the cluster entity, as well as the cluster properties (nameNode and jobTracker), are also available to the workflow.
Currently, Falcon supports three workflow engines:

  • oozie enables users to provide an Oozie workflow definition (in XML).
  • pig enables users to embed a Pig script as a process
  • hive enables users to embed a Hive script as a process. This would enable users to create materialized queries in a declarative way.

NB: I proposed to support a new type of workflow, MapReduce, to be able to directly execute a MapReduce job.

In this demo, we use the oozie workflow engine.

We create an Oozie workflow.xml:

<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${outpath}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapred.mapper.class</name>
                    <value>org.apache.hadoop.mapred.lib.IdentityMapper</value>
                </property>
                <property>
                    <name>mapred.reducer.class</name>
                    <value>org.apache.hadoop.mapred.lib.IdentityReducer</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>1</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>${inpaths}</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>${outpath}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

This workflow is very simple: it uses IdentityMapper and IdentityReducer (provided in Hadoop core) to copy input data as output data.

We upload this workflow.xml on HDFS (in the location specified in the Falcon process workflow element):

$ cd node1
$ bin/hadoop fs -mkdir /app/mr
$ bin/hadoop fs -put ~/demo/workflow.xml /app/mr
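
We can quickly check that the workflow is in place (the listing should show workflow.xml):

$ bin/hadoop fs -ls /app/mr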

The late-process element allows the process to react to input feed changes and trigger an action (here, we re-execute the Oozie workflow).

We are now ready to submit the process in Falcon:

$ cd falcon-*
$ bin/falcon entity -submit -type process -file ~/demo/entity/process.xml

The process is ready to be scheduled.

Before scheduling the process, we create the input data. The input data is a simple file (containing a string) that we upload to HDFS:

$ cat > file1
This is a test file
$ node1/bin/hadoop fs -mkdir /data/input
$ node1/bin/hadoop fs -put file1 /data/input
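
As a quick check, we can read the file back from HDFS:

$ node1/bin/hadoop fs -cat /data/input/file1
This is a test file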

We can now trigger the process:

$ cd falcon*
$ bin/falcon entity -schedule -type process -name my-process
default/my-process(process) scheduled successfully
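
Once instances start running, we can also query their status using the Falcon instance API (a sketch; the start/end window here simply reuses the validity times from our process definition):

$ bin/falcon instance -status -type process -name my-process -start 2013-11-15T00:05Z -end 2013-11-15T01:05Z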

We can see the different jobs in Oozie (accessing http://localhost:11000/oozie):
[screenshots: the Oozie console showing the bundle, coordinator, and workflow jobs]

On the other hand, we see new topics and queues created in ActiveMQ:
[screenshots: the ActiveMQ queues and topics]

In particular, in ActiveMQ, we have two topics:

  • Falcon publishes messages in the FALCON.my-process topic for each execution of the process
  • Falcon publishes messages in the FALCON.ENTITY.TOPIC topic for each change on the feeds

This is where our Camel routes subscribe.

Camel routes in Karaf

Now that we have our Falcon platform ready, we just have to create Camel routes (hosted in a Karaf container), subscribing to the Falcon topics in ActiveMQ.

We uncompress a Karaf container, and install the Camel features (camel-spring, activemq-camel):

$ tar zxvf apache-karaf-2.3.1.tar.gz
$ cd apache-karaf-2.3.1
$ bin/karaf
karaf@root> features:chooseurl camel
adding feature url mvn:org.apache.camel.karaf/apache-camel/LATEST/xml/features
karaf@root> features:install camel-spring
karaf@root> features:chooseurl activemq
karaf@root> features:install activemq-camel

We create a falcon-routes.xml route file containing the Camel routes (using the Spring DSL):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <route id="process-listener">
       <from uri="jms:topic:FALCON.my-process"/>
       <to uri="log:process-listener"/>
    </route>
    <route id="feed-listener">
       <from uri="jms:topic:FALCON.ENTITY.TOPIC"/>
       <to uri="log:feed-listener"/>
    </route>
  </camelContext>

  <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory">
      <bean class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
      </bean>
    </property>
  </bean>

</beans>

In the Camel context, we create two routes, both connecting to the ActiveMQ broker and listening on the two topics.

We drop the falcon-routes.xml in the deploy folder, and we can see it active:

karaf@root> la|grep -i falcon
[ 114] [Active     ] [            ] [Started] [   80] falcon-routes.xml (0.0.0)
karaf@root> camel:route-list 
 Context        Route              Status   
 -------        -----              ------   
 camel          feed-listener      Started  
 camel          process-listener   Started 

The routes subscribe to the topics and just send the messages to the log (it’s very, very simple).

So, we just have to take a look at the log (log:tail):

2014-03-19 11:25:43,273 | INFO  | LCON.my-process] | process-listener                 | rg.apache.camel.util.CamelLogger  176 | 74 - org.apache.camel.camel-core - 2.13.0.SNAPSHOT | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {brokerUrl=tcp://localhost:61616, timeStamp=2014-03-19T10:24Z, status=SUCCEEDED, logFile=hdfs://localhost:8020/falcon/staging/falcon/workflows/process/my-process/logs/instancePaths-2013-11-15-06-05.csv, feedNames=output, runId=0, entityType=process, nominalTime=2013-11-15T06:05Z, brokerTTL=4320, workflowUser=null, entityName=my-process, feedInstancePaths=hdfs://localhost:8020/data/output, operation=GENERATE, logDir=null, workflowId=0000026-140319105443372-oozie-jbon-W, cluster=local, brokerImplClass=org.apache.activemq.ActiveMQConnectionFactory, topicName=FALCON.my-process}]
2014-03-19 11:25:43,693 | INFO  | ON.ENTITY.TOPIC] | feed-listener                    | rg.apache.camel.util.CamelLogger  176 | 74 - org.apache.camel.camel-core - 2.13.0.SNAPSHOT | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {brokerUrl=tcp://localhost:61616, timeStamp=2014-03-19T10:24Z, status=SUCCEEDED, logFile=hdfs://localhost:8020/falcon/staging/falcon/workflows/process/my-process/logs/instancePaths-2013-11-15-06-05.csv, feedNames=output, runId=0, entityType=process, nominalTime=2013-11-15T06:05Z, brokerTTL=4320, workflowUser=jbonofre, entityName=my-process, feedInstancePaths=hdfs://localhost:8020/data/output, operation=GENERATE, logDir=hdfs://localhost:8020/falcon/staging/falcon/workflows/process/my-process/logs/job-2013-11-15-06-05/, workflowId=0000026-140319105443372-oozie-jbon-W, cluster=local, brokerImplClass=org.apache.activemq.ActiveMQConnectionFactory, topicName=FALCON.ENTITY.TOPIC}]

And we can see our notifications:

  • on the process-listener logger, we can see that my-process (entityName) has been executed with SUCCEEDED (status) at 2014-03-19T10:24Z (timeStamp). We also have the location of the job execution log on HDFS.
  • on the feed-listener logger, we see much the same message. This message comes from the late-arrival handling, so it means that the input feed changed.

Of course, the Camel routes are very simple for now (just a log), but there is no limit: you can bring all the power of the ESB and BigData worlds together.
Once the Camel routes get the messages coming from Falcon on ActiveMQ, you can implement the integration process of your choice (sending e-mails, using Camel EIPs, calling beans, etc).

What’s next ?

I’m working on different enhancements on the late-arrival/CDC feature:

  1. The late-arrival messages in the FALCON.ENTITY.TOPIC should be improved: the message should identify the feed that changed, the location of the feed, and possibly the size gap.
  2. We should provide a more straightforward CDC feature which doesn’t require a process to monitor a feed. Just scheduling a feed, with the late cut-off, should be enough.
  3. In addition to the oozie, pig, and hive workflow engines, we should provide a “pure” MapReduce jar workflow engine.
  4. The package.sh should be improved to provide a more ready-to-use custom Falcon Oozie distribution.

I’m working on these different enhancements and improvements.

On the other hand, I will propose a set of documentation improvements, especially some kind of “recipe documentation” like this one.

Stay tuned, I’m preparing a new blog about Falcon, this time about the replication between two Hadoop clusters.

Apache Karaf, Cellar, Camel, ActiveMQ monitoring with ELK (ElasticSearch, Logstash, and Kibana)

March 17, 2014 Posted by jbonofre

Apache Karaf, Cellar, Camel, and ActiveMQ provide a lot of information via JMX.

Moreover, another very useful source of information is the log files.

While these two sources are very interesting, for “real life” monitoring, we need some additional features:

  • The JMX information and log messages should be stored in order to be queried later and to keep a history. For instance, using jconsole, you can read all the JMX attributes to get the numbers, but these numbers have to be stored somewhere. It’s much the same for the logs: most of the time, you define a log file rotation, or you periodically clean up the logs, so the log messages should also be stored to be queried later.
  • Numbers are good, graphics are even better. Once the JMX “numbers” are stored somewhere, a good feature is to use these numbers to create some charts. We can also define some kind of SLA: at some point, if a number is not “acceptable” (for instance, greater than a “watermark” value), we should raise an alert.
  • For high availability and scalability, most production systems use multiple Karaf instances (synchronized with Cellar, for instance). It means that the log files are spread over different machines. In that case, it’s really helpful to “centralize” the log messages.

Of course, there are already open source solutions (zabbix, nagios, etc) or commercial solutions (dynatrace, etc) to cover these needs.

In this blog, I just introduce a possible solution leveraging “big data” tools: we will see how to use the ELK (Elasticsearch, Logstash, and Kibana) solution.

Topology

For this example, let’s say we have the following architecture:

  • node1 is a machine hosting a Karaf container with a set of Camel routes.
  • node2 is a machine hosting a Karaf container with another set of Camel routes.
  • node3 is a machine hosting an ActiveMQ broker (used by the Camel routes from node1 and node2).
  • monitor is a machine hosting the monitoring platform.

Local to node1, node2, and node3, we install and configure logstash with both the file and JMX input plugins. This logstash will get the log messages and poll JMX MBean attributes, and send them to a “central” Redis server (using the redis output plugin).

On monitor, we install:

  • redis server to receive the messages and events coming from logstash installed on node1, node2, and node3
  • elasticsearch to store the messages and events
  • a first logstash acting as an indexer, taking the messages/events from redis and storing them into elasticsearch (including the update of indexes, etc)
  • a second logstash providing the kibana web console

Redis and Elasticsearch

Redis

Redis is a key-value store. But it can also act as a broker to receive the messages/events from the different logstash instances (node1, node2, and node3).

For the demo, I use Redis 2.8.7 (which you can download from http://download.redis.io/releases/redis-2.8.7.tar.gz).

We uncompress the redis tarball in the /opt/monitor folder:

cp redis-2.8.7.tar.gz /opt/monitor
cd /opt/monitor
tar zxvf redis-2.8.7.tar.gz

Now, we have to compile Redis server on the machine. To do so, we have to execute make in the Redis src folder:

cd redis-2.8.7/src
make

NB: this step requires make and gcc installed on the machine.

make creates a redis-server binary in the src folder. This is the binary that we use to start Redis:

./redis-server --loglevel verbose
[12130] 16 Mar 21:04:28.387 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 3984.
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 2.8.7 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 12130
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

[12130] 16 Mar 21:04:28.388 # Server started, Redis version 2.8.7
[12130] 16 Mar 21:04:28.388 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[12130] 16 Mar 21:04:28.388 * The server is now ready to accept connections on port 6379
[12130] 16 Mar 21:04:28.389 - 0 clients connected (0 slaves), 443376 bytes in use

The redis server is now ready to accept connections coming from the “remote” logstash instances.
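
To quickly verify that the server responds, you can use the redis-cli binary (built by make alongside redis-server in the src folder):

./redis-cli ping
PONG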

Elasticsearch

We use elasticsearch as storage backend for all messages and events. For this demo, I use elasticsearch 1.0.1, that you can download from https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.0.1.tar.gz.

We uncompress the elasticsearch tarball in the /opt/monitor folder:

cp elasticsearch-1.0.1.tar.gz /opt/monitor
cd /opt/monitor
tar zxvf elasticsearch-1.0.1.tar.gz

We start elasticsearch with the bin/elasticsearch binary (the default configuration is OK):

cd elasticsearch-1.0.1
bin/elasticsearch
[2014-03-16 21:16:13,783][INFO ][node                     ] [Solarr] version[1.0.1], pid[12466], build[5c03844/2014-02-25T15:52:53Z]
[2014-03-16 21:16:13,783][INFO ][node                     ] [Solarr] initializing ...
[2014-03-16 21:16:13,786][INFO ][plugins                  ] [Solarr] loaded [], sites []
[2014-03-16 21:16:15,763][INFO ][node                     ] [Solarr] initialized
[2014-03-16 21:16:15,764][INFO ][node                     ] [Solarr] starting ...
[2014-03-16 21:16:15,902][INFO ][transport                ] [Solarr] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.134.11:9300]}
[2014-03-16 21:16:18,990][INFO ][cluster.service          ] [Solarr] new_master [Solarr][V9GO0DiaT4SFmRmxgwYv0A][vostro][inet[/192.168.134.11:9300]], reason: zen-disco-join (elected_as_master)
[2014-03-16 21:16:19,010][INFO ][discovery                ] [Solarr] elasticsearch/V9GO0DiaT4SFmRmxgwYv0A
[2014-03-16 21:16:19,021][INFO ][http                     ] [Solarr] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.134.11:9200]}
[2014-03-16 21:16:19,072][INFO ][gateway                  ] [Solarr] recovered [0] indices into cluster_state
[2014-03-16 21:16:19,072][INFO ][node                     ] [Solarr] started
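
To verify that elasticsearch answers, a simple HTTP request on the REST port is enough (it returns a small JSON document with the node name and version):

curl http://localhost:9200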

Logstash

Logstash is a tool for managing events and logs.

It works with a chain of inputs, filters, and outputs.

On node1, node2, and node3, we will setup logstash with:

  • a file input plugin to read the log files
  • a jmx input plugin to read the different MBeans attributes
  • a redis output to send the messages and events to the monitor machine.

For this blog, I use logstash 1.4 SNAPSHOT with a contribution that I made. You can find my modified plugin on my github: https://github.com/jbonofre/logstash-contrib.

The first thing to do is to check out the latest logstash codebase and build it:

git clone https://github.com/elasticsearch/logstash/
cd logstash
make tarball

It will create the logstash distribution tarball in the build folder.

We can install logstash in a folder (for instance /opt/monitor/logstash):

mkdir /opt/monitor
cp build/logstash-1.4.0.rc1.tar.gz /opt/monitor
cd /opt/monitor
tar zxvf logstash-1.4.0.rc1.tar.gz
rm logstash-1.4.0.rc1.tar.gz

JMX is not a “standard” logstash plugin; it’s a plugin from the logstash-contrib project. As I modified the logstash JMX plugin (to work “smoothly” with the Karaf MBeanServer), and while waiting for my pull request to be integrated in logstash-contrib (I hope ;)), you have to clone my github fork:

git clone https://github.com/jbonofre/logstash-contrib/
cd logstash-contrib
make tarball

We can add the contrib plugins into our logstash installation (in /opt/monitor/logstash-1.4.0.rc1 folder):

cd build
tar zxvf logstash-contrib-1.4.0.beta2.tar.gz
cd logstash-contrib-1.4.0.beta2
cp -r * /opt/monitor/logstash-1.4.0.rc1

Our logstash installation is now ready, including the logstash-contrib plugins.

It means that on node1, node2, node3 and monitor, you should have the /opt/monitor/logstash-1.4.0.rc1 folder with the installation (you can use scp or rsync to install logstash on the machines).

Indexer

On the monitor machine, we have a logstash instance acting as an indexer: it gets the messages from redis and stores them in elasticsearch.

We create the /opt/monitor/logstash-1.4.0.rc1/conf/indexer.conf file containing:

input {
  redis {
    host => "localhost"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}

We can start logstash using this configuration file:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash -f conf/indexer.conf
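
To check the chain before wiring the collectors, you can push a test event directly into the “logstash” redis list that the indexer consumes (a sketch using redis-cli; the event should then show up in elasticsearch):

/opt/monitor/redis-2.8.7/src/redis-cli RPUSH logstash '{"message":"test event","type":"test"}'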

Collector

On node1, node2, and node3, logstash will act as a collector:

  • the file input plugin will read the messages from the log files (you can configure multiple log files)
  • the jmx input plugin will periodically poll MBean attributes

Both will send messages to the redis server using the redis output plugin.

We create a folder /opt/monitor/logstash-1.4.0.rc1/conf. It’s where we store the logstash configuration. In this folder, we create a collector.conf file.

For node1 and node2 (both hosting a karaf container with camel routes), the collector.conf file contains:

input {
  file {
    type => "log"
    path => ["/opt/karaf/data/log/*.log"]
  }
  jmx {
    path => "/opt/monitor/logstash-1.4.0.rc1/conf/jmx"
    polling_frequency => 10
    type => "jmx"
    nb_thread => 4
  }
}
output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}

On node3 (hosting an ActiveMQ broker), the collector.conf is the same, just the location of the log file is different:

input {
  file {
    type => "log"
    path => ["/opt/activemq/data/*.log"]
  }
  jmx {
    path => "/opt/monitor/logstash-1.4.0.rc1/conf/jmx"
    polling_frequency => 10
    type => "jmx"
    nb_thread => 4
  }
}
output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}

The redis output plugin sends the messages/events to the redis server located on the “monitor” machine.

These messages and events come from two input plugins:

  • the file input plugin takes the path of the log file (using glob)
  • the jmx input plugin takes a folder. This folder contains JSON files (see below) with the MBean queries. The plugin executes the queries every 10 seconds (polling_frequency).

So, the jmx input plugin reads all files located in the /opt/monitor/logstash-1.4.0.rc1/conf/jmx folder.

On node1 and node2 (again hosting a karaf container with camel routes), for instance, we want to monitor the number of threads on the Karaf instance (using the Threading MBean), and a route named “route1” (using the Camel route MBean).
We specify this in the /opt/monitor/logstash-1.4.0.rc1/conf/jmx/karaf file:

{
  "host" : "localhost",
  "port" : 1099,
  "url" : "service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root",
  "username" : "karaf",
  "password" : "karaf",
  "alias" : "node1",
  "queries" : [
    {
      "object_name" : "java.lang:type=Threading",
      "object_alias" : "Threading"
    }, {
      "object_name" : "org.apache.camel:context=*,type=routes,name=\"route1\"",
      "object_alias" : "Route1"
    }
   ]
}

On node3, we will have a different JMX configuration file (for instance /opt/monitor/logstash-1.4.0.rc1/conf/jmx/activemq) containing the ActiveMQ MBeans that we want to query.

Now, we can start the logstash “collector”:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash -f conf/collector.conf

We can see the clients connected in the redis log:

[12130] 17 Mar 14:33:27.041 - Accepted 127.0.0.1:46598
[12130] 17 Mar 14:33:31.267 - 2 clients connected (0 slaves), 484992 bytes in use

and the data populated in the elasticsearch log:

[2014-03-17 14:21:59,539][INFO ][cluster.service          ] [Solarr] added {[logstash-vostro-32001-2010][dhXgnFLwTHmbdsawAEJbyg][vostro][inet[/192.168.134.11:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-vostro-32001-2010][dhXgnFLwTHmbdsawAEJbyg][vostro][inet[/192.168.134.11:9301]]{client=true, data=false}])
[2014-03-17 14:30:59,584][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] creating index, cause [auto(bulk api)], shards [5]/[1], mappings [_default_]
[2014-03-17 14:31:00,665][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] update_mapping [log] (dynamic)
[2014-03-17 14:33:28,247][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] update_mapping [jmx] (dynamic)

Now, we have JMX data and log messages from the different containers, brokers, etc., stored in one centralized place (the monitor machine).
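
Before moving to Kibana, you can already query elasticsearch directly to check that the JMX events are indexed (a quick sketch using the URI search API):

curl 'http://monitor:9200/_search?q=type:jmx&size=1&pretty'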

We can now add a web application to read the data and create charts using the data.

Kibana

Kibana is the web application provided with logstash. The default configuration uses the elasticsearch default port. So, we just have to start Kibana on the monitor machine:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash-web

We access Kibana at http://monitor:9292.

On the welcome page, we click on the “Logstash dashboard” link, and we arrive on a console looking like:
[screenshot: the default Logstash dashboard]

It’s time to configure Kibana.

We remove the default histogram and add a custom one to chart the thread count.

First, we create a query to isolate the thread count for node1. Kibana uses the Apache Lucene query syntax.
Our query here is very simple: metric_path:"node1.Threading.ThreadCount".

Now, we can create a histogram using this query, getting the metric_value_number:
[screenshot: histogram of the node1 thread count]

Now, we want to chart the lastProcessingTime on the Camel route (to see for instance if the route takes more time at some point).
We create a new query to isolate the route1 lastProcessingTime on node1: metric_path:"node1.Route1.LastProcessingTime".

We can now create a histogram using this query, getting the metric_value_number:
[screenshot: histogram of the route1 lastProcessingTime]

For the demo, we can create a histogram chart to display the exchanges completed and failed for route1 on node1. We create two queries:

  • metric_path:"node1.Route1.ExchangesFailed"
  • metric_path:"node1.Route1.ExchangesCompleted"

We create a new chart in the same row:
[screenshot: chart of the route1 exchanges failed/completed]

We clean up the events panel a bit. We create a query to display only the log messages (not the JMX metrics): type:"log".
We configure the log event panel to change the name and use the log query:
[screenshot: the log events panel configuration]

We now have a kibana console looking like:

[screenshot: the complete Kibana console]

With this very simple kibana configuration, we have:
– a chart of the thread count on node1
– a chart of the last processing time for route1 (on node1)
– a chart of the exchanges (failed/completed) for route1 (on node1)
– a view of all log messages

You can now play with Kibana and add a lot of new charts, leveraging all the information that you have in elasticsearch (both log messages and JMX data).

Next

I’m working on some new Karaf, Cellar, ActiveMQ, Camel features providing “native” and “enhanced” support for logstash. The purpose is to just type feature:install monitoring to get:

  • jmx:* commands in Karaf
  • broadcast of events to elasticsearch
  • integration of redis, elasticsearch, and logstash in Karaf (to avoid installing them “externally” from Karaf), providing ready-to-use configuration (pre-configured logstash jmx input plugin, pre-configured kibana console/charts, …).

If you have other ideas to enhance and improve monitoring in Karaf, Cellar, Camel, ActiveMQ, don’t hesitate to propose on the mailing lists ! Any idea is welcome.

Apache ActiveMQ 5.7, 5.9 and Master-Slave

October 3, 2013 Posted by jbonofre

With my ActiveMQ friends (especially Dejan and Claus), I’m working on the next ActiveMQ 5.9 release.

Today, I focus on HA with ActiveMQ, and especially the Master-Slave configuration.

Update of the documentation

The first thing that I noticed is that the documentation is not really up to date.

If you do a search on the ActiveMQ website about Master-Slave, you will probably find these two links:

On the first link (about KahaDB), we can see a note “This is under review – and not currently supported”. It’s confusing for users, as this mechanism is the preferred one!
On the other hand, the second link should be flagged as deprecated, as this mechanism is no longer maintained.

I sent a message on the dev mailing list to update these pages.

Lease Database Locker to avoid “dual masters”

In my test cases, I used a JDBC database backend (MySQL) for HA (instead of using KahaDB).

I have two brokers that use the following configuration:


  <persistenceAdapter>
    <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds" />
  </persistenceAdapter>

Broker1 starts, connects to MySQL, and acquires the lock. Broker1 is the master.

Broker2 starts, connects to MySQL, and waits for the lock (as the lock is held by Broker1). Broker2 is a slave.

Now, I stop MySQL, for instance to do a cold backup. My backup is very fast, and I start MySQL server again, very quickly.

The lock is available in the database, so Broker2 gets the lock, whereas Broker1 hasn’t yet released it. So I’m in a bad situation where I have two “masters”.

ActiveMQ 5.7.0 introduced a change in the locking strategies for shared storage master/slave topologies. Previously, storage locking (and thus master election) was hard-coded directly in the particular store. So KahaDB only had the option to use a shared file lock, while JDBC used a database lock.

Now, the storage locking is separated from the store, so you can implement your own locking strategies if necessary (or tune existing ones). Of course, every store has its own default locker.

In our previous use case, to solve the “dual master” issue, we can use a new locker: the lease database locker.

To use it, we update the configuration of each broker like this:


  <persistenceAdapter>
    <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds" lockKeepAlivePeriod="5000">
      <locker>
        <lease-database-locker lockAcquireSleepInterval="10000"/>
      </locker>
    </jdbcPersistenceAdapter>
  </persistenceAdapter>

The lease database locker solves the master/slave problem of the default database locker. The master acquires a lock only for a certain period and must extend its lease from time to time. The slave also checks periodically to see if the lease has expired. The lease can survive a db replica failover.

The lease based lock is acquired by blocking at start and retained by the keepAlivePeriod. To retain it, the lease is extended by the lockAcquireSleepInterval, so in theory the master is always (lockAcquireSleepInterval - lockKeepAlivePeriod) ahead of the slave w.r.t. the lease. It is imperative that lockAcquireSleepInterval > lockKeepAlivePeriod, to ensure the lease is always current. With the configuration above, for instance, the master renews a 10-second lease (lockAcquireSleepInterval=10000) every 5 seconds (lockKeepAlivePeriod=5000), so a healthy master always stays at least 5 seconds ahead of the slave.

In the simplest case, the clocks between master and slave must be in sync for this solution to work properly. If the clocks cannot be in sync, the locker can use the system time from the database CURRENT TIME and adjust the timeouts in accordance with their local variance from the db system time. If maxAllowableDiffFromDBTime is > 0, the local periods will be adjusted by any delta that exceeds maxAllowableDiffFromDBTime.

How to know who is the master ?

The “new” mechanism for Master/Slave is great and very easy to set up. You don’t really define who is the master and who are the slaves: the first broker which gets the lock will be the master.

So, a fair question is: how can I know which broker is the master?

Actually, you already have the answer in the JMX layer.

If you connect a JMX client (for instance jconsole) to the broker, and you take a look at the org.apache.activemq:BrokerName=Broker2,Type=Broker MBean, you can see the Slave attribute.

If Slave is true, it means that this broker is a slave. If Slave is false, it’s the master.

Another way to get this information is to use the activemq command directly with the bstat argument (instead of JMX):


bin/activemq bstat
...
Connecting to pid: 563
BrokerVersion = 5.9-SNAPSHOT
TempLimit = 53687091200
Persistent = true
MemoryLimit = 67108864
TempPercentUsage = 0
SslURL =
StorePercentUsage = 0
TransportConnectors = {openwire=tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600}
Type = Broker
StompSslURL =
OpenWireURL = tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600
Uptime = 3 minutes
DataDirectory = /home/jbonofre/broker2/data
StoreLimit = 107374182400
BrokerName = broker2
VMURL = vm://broker2
StompURL =
MemoryPercentUsage = 0
Slave = true

You can see the Slave attribute there.

If you want to “script” this and get only the Slave attribute, you can use the query argument:


bin/activemq query --objname Type=Broker --view Slave
...
Slave = true
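
If you want to watch a master/slave failover from a script, you can simply poll this attribute in a loop (a minimal sketch based on the query command above):

while true; do
  echo "$(date) $(bin/activemq query --objname Type=Broker --view Slave)"
  sleep 10
done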