What’s new in Apache Karaf Cellar 4.0.0?

September 22, 2015 Posted by jbonofre

The Apache Karaf Cellar 4.0.0 release is now up for vote and should hopefully be available very soon.

This is a major new release. More than just bug fixes, it brings several refactorings and new features.

It’s time to take a tour of the new Cellar 4.0.0.

HTTP features

Cellar 4.0.0 brings new HTTP features.

HTTP load balancer

Cellar 4.0.0 provides a new feature: cellar-http-balancer.

The purpose is to be able to use any node in a cluster group to access a web application, even if the web application is not actually deployed on the local node.

For instance, you have a cluster group containing four nodes, and you deploy a web application on only two of them. To access your web application, you then have to either:

  • specify in your browser the address of one of the two nodes where the web application is deployed
  • use a load balancer (mod_proxy_balancer, Cisco, Juniper, F5, whatever) to balance across the two nodes. The drawback is that the load balancer is a single point of failure, and adding a new node providing the web application requires updating the load balancer configuration.

The cellar-http-balancer feature installs a proxy on the nodes where the web application is not deployed. It means that you can use any node in the cluster group to access your web application, even if the application is not deployed there.

To illustrate this, let’s take a cluster with two nodes: node1 and node2.

On node1, we install the http, http-whiteboard, and cellar features:

karaf@node1()> feature:install http
karaf@node1()> feature:install http-whiteboard
karaf@node1()> feature:repo-add cellar 4.0.0
karaf@node1()> feature:install cellar

We now install the cellar-http-balancer feature on the cluster:

karaf@node1()> cluster:feature-install default cellar-http-balancer

Now, we install the webconsole only on node1:

karaf@node1()> feature:install webconsole

We can see the webconsole locally deployed using the http:list command:

karaf@node1()> http:list 
ID  | Servlet          | Servlet-Name    | State       | Alias               | Url
101 | KarafOsgiManager | ServletModel-2  | Undeployed  | /system/console     | [/system/console/*]
105 | InstancePlugin   | ServletModel-7  | Deployed    | /instance           | [/instance/*]
101 | ResourceServlet  | /res            | Deployed    | /system/console/res | [/system/console/res/*]
103 | GogoPlugin       | ServletModel-5  | Deployed    | /gogo               | [/gogo/*]
101 | KarafOsgiManager | ServletModel-11 | Deployed    | /system/console     | [/system/console/*]
102 | FeaturesPlugin   | ServletModel-9  | Deployed    | /features           | [/features/*]

Using a browser, we can access the webconsole using the http://localhost:8181/system/console URL.

We can also see that the webconsole is available on the cluster group:

karaf@node1()> cluster:http-list default
Alias               | Locations
/system/console/res |
/gogo               |
/instance           |
/system/console     |
/features           |

It means that I can use any node member of this cluster group to access the webconsole deployed on node1 (I agree it’s not really interesting here, but it’s just an example: imagine that the webconsole is your own web application).

On node2, as I’m using the same machine, I have to use a port other than 8181 for the HTTP service, so I add an etc/org.ops4j.pax.web.cfg file containing:

org.osgi.service.http.port=8041

It means that the HTTP service on node2 will listen on port 8041.

Now, on node2, I install the http, http-whiteboard, and cellar features:

karaf@node2()> feature:install http
karaf@node2()> feature:install http-whiteboard
karaf@node2()> feature:repo-add cellar 4.0.0
karaf@node2()> feature:install cellar

As we installed the cellar-http-balancer feature on the default cluster group, it’s automatically installed on node2 when we enable Cellar.

Of course, on node2, we can see the HTTP applications available on the cluster, with node1 as the location:

karaf@node2()> cluster:http-list default 
Alias               | Locations
/system/console/res |
/gogo               |
/instance           |
/system/console     |
/features           |

Now, if we take a look at the “local” HTTP applications on node2 (using http:list), we can see:

karaf@node2()> http:list 
ID  | Servlet                    | Servlet-Name   | State       | Alias               | Url
100 | CellarBalancerProxyServlet | ServletModel-3 | Deployed    | /gogo               | [/gogo/*]
100 | CellarBalancerProxyServlet | ServletModel-2 | Deployed    | /system/console/res | [/system/console/res/*]
100 | CellarBalancerProxyServlet | ServletModel-6 | Deployed    | /features           | [/features/*]
100 | CellarBalancerProxyServlet | ServletModel-5 | Deployed    | /system/console     | [/system/console/*]
100 | CellarBalancerProxyServlet | ServletModel-4 | Deployed    | /instance           | [/instance/*]

We can see the same URLs available on node2, provided by the CellarBalancerProxyServlet. In your browser, if you access http://localhost:8041/system/console, you will access the webconsole deployed on node1, even though you are using node2.

It means that the CellarBalancerProxyServlet acts as a proxy. Here’s how it works:

  1. The Cellar HTTP Balancer listens for HTTP servlets on the local node. When a servlet is deployed locally on the node, it updates the servlet set on the cluster, and sends a cluster event to the other nodes in the same cluster group.
  2. When a node receives a cluster event from the HTTP balancer, if a servlet with the same alias is not already deployed locally, the Cellar HTTP Balancer creates a CellarBalancerProxyServlet with the same alias.
  3. When the CellarBalancerProxyServlet receives an HTTP request, it retrieves from the cluster set the locations where the servlet is actually deployed, randomly chooses one, and proxies the request to it.
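To make step 3 concrete, here is a rough sketch in plain Java of the random choice of a target location. The BalancerProxy class and its method are illustrative only, not Cellar’s actual API:

```java
import java.util.List;
import java.util.Random;

public class BalancerProxy {

    private final Random random;

    public BalancerProxy(Random random) {
        this.random = random;
    }

    // Pick one of the locations where the servlet is actually deployed,
    // and build the URL the request will be proxied to.
    public String proxyTarget(List<String> locations, String requestPath) {
        if (locations.isEmpty()) {
            throw new IllegalStateException("No location available for " + requestPath);
        }
        String node = locations.get(random.nextInt(locations.size()));
        return node + requestPath;
    }

    public static void main(String[] args) {
        BalancerProxy proxy = new BalancerProxy(new Random());
        List<String> locations = List.of("http://localhost:8181", "http://localhost:8041");
        System.out.println(proxy.proxyTarget(locations, "/system/console"));
    }
}
```

In the real servlet, the `locations` list comes from the distributed cluster set, and the chosen URL is where the incoming request body and headers are forwarded.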

HTTP session replication

Cellar 4.0.0 also brings support for HTTP session replication.

You don’t need any specific Cellar feature: just install the http, http-whiteboard, and cellar features (in that order):

karaf@node1()> feature:install http
karaf@node1()> feature:install http-whiteboard
karaf@node1()> feature:repo-add cellar 4.0.0
karaf@node1()> feature:install cellar

To be able to use HTTP sessions replication, you have to implement serializable HTTP sessions in your web application.
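“Serializable HTTP sessions” means that every object you store in the HttpSession must implement java.io.Serializable, so it can be shipped to the other nodes. As a quick sanity check, you can round-trip your session objects through Java serialization yourself; the Cart class below is purely illustrative:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Any object stored in the HttpSession must be serializable,
// otherwise it cannot be replicated to the other nodes.
public class Cart implements Serializable {

    private static final long serialVersionUID = 1L;

    private final List<String> items = new ArrayList<>();

    public void add(String item) {
        items.add(item);
    }

    public List<String> getItems() {
        return items;
    }

    // Round-trip through serialization, as the replication mechanism would do.
    public static Cart copy(Cart cart) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(cart);
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Cart) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Cart cart = new Cart();
        cart.add("book");
        System.out.println(copy(cart).getItems());
    }
}
```

If such a round-trip fails for one of your session attributes, that attribute will not replicate across the cluster.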

Now, the only change in your application is to add a specific filter. For that, you have to update WEB-INF/web.xml like this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
        version="3.0">

    <filter>
        <filter-name>hazelcast-filter</filter-name>
        <filter-class>com.hazelcast.web.WebFilter</filter-class>
        <init-param>
            <!-- Name of the distributed map storing
                 your web session objects -->
            <param-name>map-name</param-name>
            <param-value>my-sessions</param-value>
        </init-param>
        <init-param>
            <!-- How is your load-balancer configured? sticky-session means all requests of
                 a session are routed to the node where the session was first created. This is
                 excellent for performance. If sticky-session is set to false, when a session
                 is updated on a node, the entry for this session on all other nodes is invalidated.
                 You have to know how your load-balancer is configured before setting this
                 parameter. Default is true. -->
            <param-name>sticky-session</param-name>
            <param-value>true</param-value>
        </init-param>
        <init-param>
            <!-- Are you debugging? Default is false. -->
            <param-name>debug</param-name>
            <param-value>false</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>hazelcast-filter</filter-name>
        <url-pattern>/*</url-pattern>
        <dispatcher>FORWARD</dispatcher>
        <dispatcher>INCLUDE</dispatcher>
        <dispatcher>REQUEST</dispatcher>
    </filter-mapping>
    <listener>
        <listener-class>com.hazelcast.web.SessionListener</listener-class>
    </listener>

</web-app>


That’s all: if you deploy your web application on several nodes, the sessions will be replicated and available on all nodes. It means that your clients will be able to transparently switch from one node to another.

Refactoring of the synchronizers

Cellar 4.0.0 also brings a refactoring of the different synchronizers.

Now the synchronizers:

  • support new sync policies
  • send cluster events to the other nodes allowing a complete sync when a node joins a cluster group

If you take a look at the etc/org.apache.karaf.cellar.groups.cfg file, you will see:

default.bundle.sync = cluster
default.config.sync = cluster
default.feature.sync = cluster
default.obr.urls.sync = cluster
default.balanced.servlet.sync = cluster

Now, the synchronizers support the following policies:

  • disabled means that the synchronizer doesn’t do anything
  • cluster means that the synchronizer first retrieves the state from the cluster and updates the node state if needed (pull first), and then pushes the node state to the cluster and sends cluster events if needed (push after)
  • node means that the synchronizer first pushes the node state to the cluster and sends cluster events if needed (push first), and then retrieves the state from the cluster and updates the local node if needed (pull after)
  • clusterOnly means that the synchronizer only retrieves the state from the cluster and updates the local node if needed; nothing is pushed to the cluster. With this policy, the cluster acts as the “master”.
  • nodeOnly means that the synchronizer only pushes the local node state to the cluster and sends cluster events if required. With this policy, the node acts as the “master”.
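The differences between these policies boil down to which sync directions run, and in which order. A rough model in plain Java (illustrative only, not Cellar’s actual code):

```java
import java.util.List;

public class SyncPolicy {

    // The two elementary operations a synchronizer can perform.
    static final String PULL = "pull"; // cluster state -> local node
    static final String PUSH = "push"; // local node state -> cluster (+ cluster events)

    // Return the operations executed for a given policy, in order.
    public static List<String> operations(String policy) {
        switch (policy) {
            case "disabled":    return List.of();
            case "cluster":     return List.of(PULL, PUSH); // pull first, push after
            case "node":        return List.of(PUSH, PULL); // push first, pull after
            case "clusterOnly": return List.of(PULL);       // the cluster is the master
            case "nodeOnly":    return List.of(PUSH);       // the node is the master
            default: throw new IllegalArgumentException("Unknown policy: " + policy);
        }
    }

    public static void main(String[] args) {
        System.out.println(operations("cluster")); // [pull, push]
    }
}
```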

Karaf 4 powered and complete refactoring

Cellar 4.0.0 is a complete refactoring compared to previous versions, as it’s designed for Karaf 4.0.0:

  • blueprint is not used anymore: the Cellar modules use their own activators extending the Karaf BaseActivator, leveraging the Karaf annotations (@Services, @RequireService, @ProvideService, etc.) and the karaf-services-maven-plugin
  • the Cellar commands use the new Karaf 4 API and annotations

This allows Cellar to install faster than before, and makes it ready to support the new Karaf Features Resolver, including requirements/capabilities definitions.

What’s next

Cellar 4.0.0 is only the first release in the new 4.x series. I’m already planning a 4.1.0 bringing new features and enhancements (and, of course, bug fixes).

DOSGi refactoring and load balancing policies

I would like to refactor the DOSGi layer:

  • right now, Cellar DOSGi uses two ServiceListeners. I would like to replace the ServiceListeners with pure ServiceTrackers, and use the same design as the Cellar HTTP Balancer (tracking services, sending cluster events to the other nodes, where the handler creates proxies). It will give more flexibility and an easier lifecycle/tracking of DOSGi.
  • Cellar DOSGi doesn’t support cluster groups: a remote OSGi service is available on all cluster nodes, whatever cluster group the node is in. The refactoring will leverage cluster groups, making OSGi services available per cluster group, and so the proxies only on cluster group members.
  • Cellar DOSGi will also support balancing policies. Assuming that several nodes provide the same service, the client nodes will be able to use random, round-robin, or weight-based selection of the remote node. After this refactoring, it could make sense to include the local service as part of the balancing selection (I have to think about that ;)).

New HTTP balancer policies

Right now, the Cellar HTTP Balancer (in the CellarBalancerProxyServlet) only supports random balancing. For instance, if two nodes provide the same service, the balancer randomly chooses one of the two.

I will introduce new balancing policies, configurable using the etc/org.apache.karaf.cellar.groups.cfg file:

  • random: as we have right now, it will still be there
  • round-robin: the proxy will keep, on the cluster, the index of the last node used; the next call will use the next node in the list
  • weight-based: the user will be able to give a weight to each node (based on the node ID). It’s a ratio of the number of requests each node should deal with; the proxies will dispatch the requests according to these ratios.
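The three policies could be sketched as follows. This is hypothetical code to illustrate the idea, not the planned implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

public class BalancingPolicies {

    private final Random random = new Random();
    private final AtomicInteger lastIndex = new AtomicInteger(-1);

    // random: pick any node
    public String randomChoice(List<String> nodes) {
        return nodes.get(random.nextInt(nodes.size()));
    }

    // round-robin: remember the index of the last node used, take the next one
    public String roundRobin(List<String> nodes) {
        int next = Math.floorMod(lastIndex.incrementAndGet(), nodes.size());
        return nodes.get(next);
    }

    // weight-based: each node gets a share of requests proportional to its weight
    public String weighted(Map<String, Integer> weights) {
        int total = weights.values().stream().mapToInt(Integer::intValue).sum();
        int pick = random.nextInt(total);
        for (Map.Entry<String, Integer> entry : weights.entrySet()) {
            pick -= entry.getValue();
            if (pick < 0) {
                return entry.getKey();
            }
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        BalancingPolicies policies = new BalancingPolicies();
        List<String> nodes = List.of("node1", "node2");
        System.out.println(policies.roundRobin(nodes)); // node1
        System.out.println(policies.roundRobin(nodes)); // node2
        System.out.println(policies.roundRobin(nodes)); // node1
    }
}
```

In Cellar, the round-robin index would live on the cluster rather than in a local AtomicInteger, so all proxies share it.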

New Cellar HTTP sessions replication

Right now, Cellar HTTP session replication directly leverages the Hazelcast WebFilter and session replication.

The only drawback is that we don’t leverage the Cellar cluster groups.

In 4.1.0, Cellar will provide its own WebFilter (extending the Hazelcast one) in order to support cluster groups: it means that session replication can be narrowed to only the nodes that are members of the same cluster group.

It will give users more flexibility and more advanced session replication.


Of course, Cellar 4.0.0 also brings a lot of bug fixes. I think it’s a good start to the new Cellar 4 series, leveraging Karaf 4.

I hope you will enjoy it, and let me know if you have any new ideas!

Talend ESB: query a database directly in the mediation perspective

August 3, 2015 Posted by jbonofre

When exposing a database as a REST or SOAP service, a lot of users use:

  • the integration perspective to create a service, but they don’t leverage Camel
  • the mediation perspective to create a route containing a cTalendJob

However, there’s an easy alternative.

Using the mediation perspective, we can directly use a datasource exposed in the runtime (Karaf) as an OSGi service, together with native Camel components.

The advantages of this approach are:

  • The same DataSource is shared by different routes and services. It means that we can use a PooledDataSource and optimize the connections to the database.
  • We don’t use any Talend job, and directly leverage Camel native components.

Route design in the studio

We create a route in the mediation perspective of the studio.


First, in the Spring tab, we add the DataSource OSGi service lookup. To do so, we add the spring-osgi namespace and use the osgi:reference element:

<beans ....
      http://www.springframework.org/schema/osgi http://www.springframework.org/schema/osgi/spring-osgi.xsd

  <osgi:reference id="demoDS" interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=jdbc/demo)"/>



You can use the same mechanism to load a JMS connection factory: you can reference a JMS ConnectionFactory OSGi service, and use the camel-jms component in a cMessagingEndpoint, or use cJMS component with an empty custom cJMSConnectionFactory.

Now, we can start the actual design of our route.

As we want to expose a REST service, the route starts with a CXFRS endpoint.

The CXFRS endpoint property is set to "/demo". It’s not an absolute URL: I recommend using a relative URL, as it will bind relative to the CXF servlet in the runtime, and so leverage the CXF and Jetty configuration of the runtime.

We also create an API mapping getMembers producing a JSON output.


NB: the methodName (here getMembers) is available as a header. You can use a Content Based Router just after the CXFRS endpoint to route the different methods/REST actions to different sub-routes/endpoints. Here, as we have only one method/action (getMembers), I route directly to a unique endpoint.

We now add a cMessagingEndpoint to use the camel-sql component. In the URI, we set the SQL query and the datasource reference:

"sql:select * from members?dataSource=demoDS"


The dataSource property corresponds to the reference id as defined in the Spring tab.

And in the advanced settings, we define the sql component:


The SQL endpoint populates the body of the in message with the query result, as a List<Map<String, Object>>. I transform this into JSON using a marshaler (which uses camel-xstream by default).

For that, I add a cJavaDSLProcessor which does:

.marshal().json()

For the demo, I directly marshal the list of maps as JSON. If you want more control over the generated JSON, you can use a cProcessor before the cJavaDSLProcessor (marshaler). In this cProcessor, you create an instance of a simple POJO, which will be marshaled as JSON, generating exactly the JSON you want.

Our route is now ready, let’s create a kar file.


We are now ready to deploy in the runtime (Karaf).

Deployment in the runtime

Let’s start a runtime:

$ bin/trun

First, we install the jdbc feature to easily create a datasource:

karaf@trun> features:install jdbc

Now, we can create the demo datasource (as expected in our route):

karaf@trun> jdbc:create -i -t derby demo

We now have the demo datasource ready:

karaf@trun> jdbc:datasources
                Name         Product    Version                                           URL Status
    jdbc/demoxa, 418    Apache Derby - (1181258)                               jdbc:derby:demo    OK
      jdbc/demo, 419    Apache Derby - (1181258)                               jdbc:derby:demo    OK

We create the members table using the jdbc:execute command:

karaf@trun> jdbc:execute jdbc/demo "create table members(id int, name varchar(256))"

Now, let’s insert a record in the members table:

karaf@trun> jdbc:execute jdbc/demo "insert into members values(1, 'jbonofre')"

As our route uses JSON marshaler, we need the camel-xstream component:

karaf@trun> features:install camel-xstream

We are now ready to deploy our route. We drop the kar file in the deploy folder. We can see in the log:

2015-08-03 10:02:09,895 | INFO  | pool-10-thread-1 | OsgiSpringCamelContext           | e.camel.impl.DefaultCamelContext 1673 | 170 - org.apache.camel.camel-core - 2.13.2 | Apache Camel 2.13.2 (CamelContext: BlogDataSourceRoute-ctx) started in 0.302 seconds

If we access http://localhost:8040/services, we can see our REST service:


And the generated WADL:


Using a browser, you can use the REST service by accessing http://localhost:8040/services/demo/. We get the generated JSON containing the database data:



When you want to manipulate a database, you don’t have to use a Talend Service or a cTalendJob. Camel provides components:

  • camel-sql
  • camel-jdbc
  • camel-jpa

Using the DataSource as an OSGi service, you can share a unique datasource across different routes, services, applications, and bundles, leveraging pooling.

It’s an efficient way to leverage runtime features, while keeping the design in the studio.

Monitoring and alerting with Apache Karaf Decanter

July 28, 2015 Posted by jbonofre

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list.

Today, the first Apache Karaf Decanter release, 1.0.0, is up for vote.

It’s a good time for a presentation 😉


Apache Karaf Decanter is a complete monitoring and alerting solution for Karaf and the applications running on it.

It’s very flexible, providing ready to use features, and also very easy to extend.

Decanter 1.0.0 release works with any Karaf version, and can also be used to monitor applications outside of Karaf.

Decanter provides collectors, appenders, and SLA alerting.


The Decanter Collectors are responsible for harvesting the monitoring data.

Basically, a collector harvests the data, creates an OSGi EventAdmin event, and sends it to a decanter/collect/* topic.

A Collector can be:

  • Event Driven, meaning that it will automatically react to an internal event
  • Polled, meaning that it’s periodically executed by the Decanter Scheduler

You can install multiple Decanter Collectors at the same time. In the 1.0.0 release, Decanter provides the following collectors:

  • log is an event-driven collector. It’s actually a Pax Logging PaxAppender that listens for any log messages and sends the log details to the EventAdmin topic.
  • jmx is a polled collector. Periodically, the Decanter Scheduler executes this collector. It retrieves all attributes of all MBeans in the MBeanServer, and sends the JMX metrics to the EventAdmin topic.
  • camel (jmx) is a specific JMX collector configuration that retrieves the metrics only for the Camel route MBeans.
  • activemq (jmx) is a specific JMX collector configuration that retrieves the metrics only for the ActiveMQ MBeans.
  • camel-tracer is a Camel Tracer TraceEventHandler. In your Camel route definition, you can set this trace event handler on the default Camel tracer. Thanks to that, all tracing details (from URI, to URI, exchange with headers, body, etc.) will be sent to the EventAdmin topic.


The Decanter Appenders receive the data harvested by the collectors. They consume OSGi EventAdmin events from the decanter/collect/* topics.

They are responsible for storing the monitoring data in a backend.

You can install multiple Decanter Appenders at the same time. In the 1.0.0 release, Decanter provides the following appenders:

  • log creates a log message with the monitoring data
  • elasticsearch stores the monitoring data into an Elasticsearch instance
  • jdbc stores the monitoring data into a database
  • jms sends the monitoring data to a JMS broker
  • camel sends the monitoring data to a Camel route

SLA and alerters

Decanter also provides an alerting system, triggered when some data doesn’t meet an SLA.

For instance, you can define the maximum acceptable number of threads running in Karaf. If the current number of threads is over the limit, Decanter calls alerters.

Decanter Alerters are a special kind of appender, consuming events from the OSGi EventAdmin decanter/alert/* topics.

As with the appenders, you can have multiple alerters active at the same time. The Decanter 1.0.0 release provides the following alerters:

  • log to create a log message for each alert
  • e-mail to send an e-mail for each alert
  • camel to execute a Camel route for each alert

Let’s see Decanter in action, with details on how to install and use it!

Quick start

Decanter is pretty easy to install and provides turnkey functionality.

The first thing to do is to register the Decanter features repository in the Karaf instance:

karaf@root()> feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features

NB: for the next Karaf releases, I will add the Decanter features repository in etc/org.apache.karaf.features.repos.cfg, allowing you to register the Decanter features simply using feature:repo-add decanter 1.0.0.

We now have the Decanter features available:

karaf@root()> feature:list |grep -i decanter
decanter-common                 | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter API                                
decanter-simple-scheduler       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Simple Scheduler                   
decanter-collector-log          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Messages Collector             
decanter-collector-jmx          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMX Collector                      
decanter-collector-camel        | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Collector                    
decanter-collector-activemq     | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter ActiveMQ Collector                 
decanter-collector-camel-tracer | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Tracer Collector             
decanter-collector-system       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter OS Collector                       
decanter-appender-log           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Appender                       
decanter-appender-elasticsearch | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Elasticsearch Appender             
decanter-appender-jdbc          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JDBC Appender                      
decanter-appender-jms           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMS Appender                       
decanter-appender-camel         | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Appender                     
decanter-sla                    | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA support                        
decanter-sla-log                | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA log alerter                    
decanter-sla-email              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA email alerter                  
decanter-sla-camel              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA Camel alerter                  
elasticsearch                   | 1.6.0            |           | karaf-decanter-1.0.0     | Embedded Elasticsearch node                       
kibana                          | 3.1.1            |           | karaf-decanter-1.0.0     | Embedded Kibana dashboard

For a quick start, we will use the embedded elasticsearch to store the monitoring data. Decanter provides a ready to use elasticsearch feature, starting an embedded elasticsearch node:

karaf@root()> feature:install elasticsearch

The elasticsearch feature installs the elasticsearch configuration: etc/elasticsearch.yml.

We now have a ready to use elasticsearch node, where we will store the monitoring data.

Decanter also provides a kibana feature, providing a ready to use set of kibana dashboards:

karaf@root()> feature:install kibana 

We can now install the Decanter Elasticsearch appender: this appender will get the data harvested by the collectors, and store it in elasticsearch:

karaf@root()> feature:install decanter-appender-elasticsearch

The decanter-appender-elasticsearch feature also installs etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file. You can configure the location of the Elasticsearch node there. By default, it uses a local elasticsearch node, especially the one embedded that we installed with the elasticsearch feature.

The etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file contains the hostname, port, and clusterName of the elasticsearch instance to use:

# Decanter Elasticsearch Appender Configuration

# Hostname of the elasticsearch instance
host=localhost
# Port number of the elasticsearch instance
port=9300
# Name of the elasticsearch cluster
clusterName=elasticsearch

Now, our Decanter appender and elasticsearch node are ready.

It's now time to install some collectors to harvest the data.

Karaf monitoring

First, we install the log collector:

karaf@root()> feature:install decanter-collector-log 

This collector is event-driven: it automatically listens for log events and sends them to the EventAdmin collect topic.

We install a second collector: the JMX collector.

karaf@root()> feature:install decanter-collector-jmx

The JMX collector is a polled collector. So, it also installs and starts the Decanter Scheduler.

You can define the call execution period of the scheduler in etc/org.apache.karaf.decanter.scheduler.simple.cfg configuration file. By default, the Decanter Scheduler calls the polled collectors every 5 seconds.

The JMX collector is able to retrieve all metrics (attributes) from multiple MBeanServers.

By default, it uses the etc/org.apache.karaf.decanter.collector.jmx-local.cfg configuration file. This file polls the local MBeanServer.

You can create new configuration files (for instance etc/org.apache.karaf.decanter.collector.jmx-mystuff.cfg configuration file), to poll other remote or local MBeanServers.

The etc/org.apache.karaf.decanter.collector.jmx-*.cfg configuration file contains:

type=jmx-local
url=local
The type property is a free field allowing you to identify the source of the metrics.

The url property allows you to define the JMX URL. You can also use the local keyword to poll the local MBeanServer.
The username and password properties allow you to define the credentials used to connect to the MBeanServer.

The object.name property is optional. By default, the collector harvests all the MBeans in the server. But you can filter to harvest only some MBeans (for instance org.apache.camel:context=*,type=routes,name=* to harvest only the Camel routes metrics).
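The object.name filtering relies on standard JMX ObjectName patterns. The following standalone sketch shows what the collector does conceptually: query the MBeanServer with a pattern and walk the attributes of each matching MBean (simplified, not Decanter’s actual code):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxHarvester {

    // Query the MBeanServer with an ObjectName pattern (null = all MBeans),
    // and print each matching MBean with its attribute names.
    public static Set<ObjectName> harvest(MBeanServer server, String pattern) throws Exception {
        ObjectName filter = (pattern == null) ? null : new ObjectName(pattern);
        Set<ObjectName> names = server.queryNames(filter, null);
        for (ObjectName name : names) {
            MBeanInfo info = server.getMBeanInfo(name);
            for (MBeanAttributeInfo attribute : info.getAttributes()) {
                System.out.println(name + " -> " + attribute.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Restrict the harvest to the threading MBean only
        harvest(server, "java.lang:type=Threading");
    }
}
```

With a pattern like org.apache.camel:context=*,type=routes,name=*, the same query returns only the Camel route MBeans.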

Now, we can go to the Decanter Kibana to see the dashboards using the harvested data.

You can access the Decanter Kibana at http://localhost:8181/kibana.

You have the Decanter Kibana welcome page:

Decanter Kibana

Decanter provides ready to use dashboards. Let’s see the Karaf dashboard.

Decanter Kibana Karaf 1

These histograms use the metrics harvested by the JMX collector.

You can also see the log details harvested by the log collector:

Decanter Karaf 2

As Kibana uses Lucene, you can extract exactly the data that you need using filtering or queries.

You can also define the time range to get the metrics and logs.

For instance, you can create the following query to filter only the messages coming from Elasticsearch:


Camel monitoring and tracing

We can also use Decanter to monitor the Camel routes that you deploy in Karaf.

For instance, we add Camel in our Karaf instance:

karaf@root()> feature:repo-add camel 2.13.2
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.13.2/xml/features
karaf@root()> feature:install camel-blueprint

In the deploy folder, we create the following very simple route (using a route.xml file):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>


Now, in Decanter Kibana, we can go in the Camel dashboard:

Decanter Kibana Camel 1

We can see the histograms here, using the JMX metrics retrieved on the Camel MBeans (especially, we can see for our route the exchanges completed, failed, the last processing time, etc).

You can also see the log messages related to Camel.

Another feature provided by Decanter is the Camel Tracer collector: you can enable the Decanter Camel Tracer to log all exchange states in the backend.

For that, we install the Decanter Camel Tracer feature:

karaf@root()> feature:install decanter-collector-camel-tracer

We update our route.xml in the deploy folder like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="eventAdmin" interface="org.osgi.service.event.EventAdmin"/>

    <bean id="traceHandler" class="org.apache.karaf.decanter.collector.camel.DecanterTraceEventHandler">
        <property name="eventAdmin" ref="eventAdmin"/>
    </bean>

    <bean id="tracer" class="org.apache.camel.processor.interceptor.Tracer">
        <property name="traceHandler" ref="traceHandler"/>
        <property name="enabled" value="true"/>
        <property name="traceOutExchanges" value="true"/>
        <property name="logLevel" value="OFF"/>
    </bean>

    <camelContext trace="true" xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>


Now, in Decanter Kibana Camel dashboard, you can see the details in the tracer panel:

Decanter Kibana Camel 2

Decanter Kibana also provides a ready to use ActiveMQ dashboard, using the JMX metrics retrieved from an ActiveMQ broker.

SLA and alerting

Another Decanter feature is the SLA (Service Level Agreement) checking.

The purpose is to check whether harvested data meets a check condition. If not, an alert is created and sent to the SLA alerters.

We want to send the alerts to two alerters:

  • log to create a log message for each alert (warn log level for serious alerts, error log level for critical alerts)
  • camel to call a Camel route for each alert.

First, we install the decanter-sla-log feature:

karaf@root()> feature:install decanter-sla-log

The SLA checker uses the etc/org.apache.karaf.decanter.sla.checker.cfg configuration file.

Here, we want to throw an alert when the number of threads in Karaf is greater than 60. So, in the checker configuration file, we set:

ThreadCount.error=range:[0,60]
The syntax in this file is:

attribute.level=check

where:
  • attribute is the name of the attribute in the harvested data (coming from the collectors).
  • level is the alert level. The two possible values are: warn or error.
  • check is the check expression.

The check expression can be:

  • range for a numeric attribute, like range:[x,y]. The alert is thrown if the attribute is out of the range.
  • equal for a numeric attribute, like equal:x. The alert is thrown if the attribute is not equal to the value.
  • notequal for a numeric attribute, like notequal:x. The alert is thrown if the attribute is equal to the value.
  • match for a String attribute, like match:regex. The alert is thrown if the attribute doesn't match the regex.
  • notmatch for a String attribute, like notmatch:regex. The alert is thrown if the attribute matches the regex.
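
To make these semantics concrete, here is a small stand-alone sketch (a hypothetical re-implementation for illustration only, not Decanter's actual checker code) showing when the range and match expressions raise an alert:

```java
// Hypothetical re-implementation of two SLA check expressions,
// for illustration only (the real checker is in the Decanter SLA modules).
public class CheckDemo {

    // range:[min,max] -> an alert is raised when the value is OUT of the range
    public static boolean rangeAlert(long value, long min, long max) {
        return value < min || value > max;
    }

    // match:regex -> an alert is raised when the value does NOT match the regex
    public static boolean matchAlert(String value, String regex) {
        return !value.matches(regex);
    }

    public static void main(String[] args) {
        // ThreadCount.error=range:[0,60] with ThreadCount=221 -> alert
        System.out.println(rangeAlert(221, 0, 60));   // true
        // match:E.* with value "ERROR" -> no alert
        System.out.println(matchAlert("ERROR", "E.*")); // false
    }
}
```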

So, in our case, if the number of threads is greater than 60 (which is probably the case ;)), we can see the following messages in the log:

2015-07-28 22:17:11,950 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:17:11,951 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:5639 | alertLevel:error | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:22000000000 | PeakThreadCount:225 | AllThreadIds:[J@6d9ad2c5 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:22911917003 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:221 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/error | 

Let's now extend the range, add a new check on the thread, and add a new check to throw alerts when we have errors in the log:
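
As an illustration, the updated checker configuration could look like the following sketch (the attribute names and thresholds here are assumptions, not the original values):

```properties
# widen the error range on the thread count (values are illustrative)
ThreadCount.error=range:[0,600]
# add a warning check on the thread count
ThreadCount.warn=range:[0,300]
# hypothetical check on harvested log events: alert when the level is ERROR
# (notmatch raises an alert when the attribute matches the regex)
loggerLevel.error=notmatch:ERROR
```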


Now, we want to call a Camel route to deal with the alerts.

We create the following Camel route, using the deploy/alert.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
                <route id="alerter">
                        <from uri="direct-vm:decanter-alert"/>
                        <to uri="log:alert"/>
                </route>
        </camelContext>

</blueprint>

Now, we can install the decanter-sla-camel feature:

karaf@root()> feature:install decanter-sla-camel

This feature also installs an etc/org.apache.karaf.decanter.sla.camel.cfg configuration file. In this file, you can define the Camel endpoint URI where you want to send the alert:
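
A minimal sketch of that file, pointing to the Camel route created earlier; note that the exact property name below is an assumption, so check the file actually installed by the feature:

```properties
# Camel endpoint URI where the alerts are sent
# (property name is an assumption; see the file shipped by decanter-sla-camel)
alert.destination.uri=direct-vm:decanter-alert
```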


Now, let's decrease the thread range in the etc/org.apache.karaf.decanter.sla.checker.cfg configuration file to throw some alerts:

ThreadCount.warn=range:[0,60]
Now, in the log, we can see the alerts.

From the SLA log alerter:

2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:6234 | alertLevel:warn | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:193150000000 | PeakThreadCount:225 | AllThreadIds:[J@28f0ef87 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:201484424892 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:222 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/warn | 

but also from the SLA Camel alerter:

2015-07-28 22:39:15,293 | INFO  | Thread-41        | alert                            | 114 - org.apache.camel.camel-core - 2.13.2 | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {hostName=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root, alertPattern=range:[0,60], ThreadAllocatedMemorySupported=true, ThreadContentionMonitoringEnabled=false, TotalStartedThreadCount=6236, alertLevel=warn, CurrentThreadCpuTimeSupported=true, CurrentThreadUserTime=193940000000, PeakThreadCount=225, AllThreadIds=[J@408db39f, type=jmx-local, ThreadAllocatedMemoryEnabled=true, CurrentThreadCpuTime=202296849879, ObjectName=java.lang:type=Threading, ThreadContentionMonitoringSupported=true, ThreadCpuTimeSupported=true, ThreadCount=222, event.topics=decanter/alert/warn, ThreadCpuTimeEnabled=true, ObjectMonitorUsageSupported=true, SynchronizerUsageSupported=true, alertAttribute=ThreadCount, DaemonThreadCount=198}]

Decanter also provides the SLA e-mail alerter to send the alerts by e-mail.

Now, you can play with the SLA checker, and add the checks on the attributes that you need. The Decanter Kibana dashboards help a lot there: in the "Event Monitoring" table, you can see all raw harvested data, allowing you to find the attributes.

What's next

It's just the first Decanter release, but I think it's an interesting one.

Now, we are in the process of adding:

  • a new Decanter CXF interceptor collector: thanks to this collector, you will be able to send details about the requests/responses on CXF endpoints (SOAP requests, SOAP responses, REST messages, etc.)
  • a new Decanter Redis appender, to send the harvested data to Redis
  • a new Decanter Cassandra appender, to send the harvested data to Cassandra
  • a Decanter WebConsole, allowing you to easily manipulate the SLA
  • improvements to the SLA support, with "recovery" support to send only one alert when the check fails, and another alert when the value "recovers"

Anyway, if you have ideas and want to see new features in Decanter, please let us know.

I hope you like Decanter and see interest in this new Karaf project !

Apache Karaf Christmas gifts: docker.io, profiles, and decanter

December 15, 2014 Posted by jbonofre

We are heading to Christmas time, and the Karaf team wanted to prepare some gifts for you 😉

Of course, we are working hard in the preparation of the new Karaf releases. A bunch of bug fixes and improvements will be available in the coming releases: Karaf 2.4.1, Karaf 3.0.3, and Karaf 4.0.0.M2.

Some sub-project releases are also in preparation, especially Cellar. We completely refactored Cellar internals, to provide a more reliable, predictable, and stable behavior. New sync policies are available, new properties, new commands, and also interesting new features like HTTP session replication, or HTTP load balancing. I will prepare a blog about this very soon.

But, we’re also preparing brand-new features.

Karaf docker.io


I heard some people saying: “why do I need Karaf when I have docker.io ?”.

Honestly, I don’t understand this, as the purposes are not the same: actually, Karaf on docker.io brings great value.

First, docker.io concepts are not new. They are more or less new on Linux, but the same kind of features has existed for a long time on other systems:

  • zones on Solaris
  • jail on FreeBSD
  • xen on Linux, in the past

So, there is nothing revolutionary in docker.io; however, it’s a very convenient way to host multiple images/pseudo-systems on the same machine.

However, docker.io (like the other systems) focuses on the OS: it doesn’t cover the application container by itself. For that, you have to prepare an image with the OS plus the application container. For instance, if you want to deploy your war file, you have to bootstrap a docker.io image with an OS and Tomcat (or Karaf ;)).

Moreover, remember the cool features provided by Karaf: ConfigAdmin and dynamic configuration, hot deployment, features, etc.

You want to deploy your Camel routes, your ActiveMQ broker, your CXF webservices, your application: just use the docker.io image providing a Karaf instance!

And that’s what the Karaf docker.io feature provides. Actually, it provides two things:

  • a set of ready-to-use Karaf docker.io images: ubuntu/centos images with ready-to-use Karaf instances (using different combinations)
  • a set of shell commands and Karaf commands to easily bootstrap the images from a Karaf instance. It’s actually a good alternative to the Karaf child instances (which are only local to the machine).

Basically, docker.io doesn’t replace Karaf. However, Karaf on docker.io provides a very flexible infrastructure, allowing you to easily bootstrap Karaf instances. Associated with Cellar, you can bootstrap a Karaf cluster very easily as well.

I will prepare the donation and I will blog about the docker.io feature very soon. Stay tuned !!!

Karaf Profiles

A new feature comes in Karaf 4: the Karaf profiles. The purpose is to apply a ready to use set of configurations and provisioning to a Karaf instance.

Thanks to that you can prepare a complete profile containing your configuration and your application (features) and apply multiple profiles to easily create a ready-to-go Karaf instance.

It’s a great complement to the Karaf docker.io feature: the docker.io feature bootstraps the Karaf image, on which you can apply your profiles, all in a row.

Some profiles description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201412.mbox/%3CCAA66TpodJWHVpOqDz2j1QfkPchhBepK_Mwdx0orz7dEVaw8tPQ%40mail.gmail.com%3E.

I’m working on the storage of profiles on Karaf Cave, the application of profiles on running/existing Karaf instances, support of cluster profiles in Cellar, etc.

Again, I will create a specific blog post about profiles soon. Stay tuned again !! :)

Karaf Decanter

As a fully enterprise-ready container, Karaf has to provide monitoring and management features. We already provide a bunch of metrics via JMX (we have multiple MBeans for Karaf, Camel, ActiveMQ, CXF, etc).

However, we should provide:

  • storage of metrics and messages to be able to have an activity timeline
  • SLA definition of the metrics and messages, raising alerts when some metrics are not in the expected value range or when the messages contain a pattern
  • dashboard to configure the SLA, display messages, and graph the metrics

As always in Karaf, installing this kind of feature should be very simple, with integration of the supported third parties.

That’s why we started to work on Karaf Decanter, a complete and flexible monitoring solution for Karaf and the applications hosted by Karaf (Camel, ActiveMQ, CXF, etc).

The Decanter proposal and description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201410.mbox/%3C543D3D62.6050608%40nanthrax.net%3E.

The current codebase is also available: https://github.com/jbonofre/karaf-decanter.

I’m preparing the donation (some cleansing/polishing in progress).

Again, I will blog about Karaf Decanter asap. Stay tuned again again !! :)


As you can see, as always, the Karaf team is committed and dedicated to providing very convenient and flexible features. A lot of those features come from your ideas, discussions, and proposals. So, keep on discussing with us, we love our users 😉

We hope you will enjoy those new features. We will document and blog about these Christmas gifts soon.

Enjoy Karaf, and Happy Christmas !

Encrypt ConfigAdmin properties values in Apache Karaf

October 3, 2014 Posted by jbonofre

Apache Karaf loads all the configuration from etc/*.cfg files by default, using a mix of Felix FileInstall and Felix ConfigAdmin.

These files are regular properties files looking like:

key=value

Some values may be critical, and so should not be stored in plain text. It could be critical business data (credit card numbers, etc), or technical data (passwords to different systems, like a database for instance).

We want to encrypt this kind of data in the etc/*.cfg files, while still being able to use it regularly in the application.

Karaf provides a nice feature for that: jasypt-encryption.

It’s very easy to use especially with Blueprint.

The jasypt-encryption feature is an optional feature, so it means that you have to install it first:

karaf@root()> feature:install jasypt-encryption

This feature provides:

  • jasypt bundle
  • a namespace handler (enc:*) for blueprint

Now, we can create a cfg file containing encrypted values. The encrypted value is “wrapped” in an ENC() function.

For instance, we can create etc/my.cfg file containing:
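
For example (the values are placeholders, and ENC(xxxxx) stands for the actual value produced by Jasypt):

```properties
mydb.url=localhost:9999
mydb.username=sa
mydb.password=ENC(xxxxx)
```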


In the Blueprint descriptor of our application (like a Camel route Blueprint XML for instance), we use the “regular” cm namespace (to load ConfigAdmin), but we add a Jasypt configuration using the enc namespace.

For instance, the blueprint XML could look like:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
           xmlns:enc="http://karaf.apache.org/xmlns/jasypt/v1.0.0">

  <cm:property-placeholder persistent-id="my" update-strategy="reload">
      <cm:default-properties>
          <cm:property name="mydb.url" value="localhost:9999"/>
          <cm:property name="mydb.username" value="sa"/>
          <cm:property name="mydb.password" value="ENC(xxxxx)"/>
      </cm:default-properties>
  </cm:property-placeholder>

  <enc:property-placeholder>
    <enc:encryptor class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
      <property name="config">
        <bean class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
          <property name="algorithm" value="PBEWithMD5AndDES"/>
          <property name="passwordEnvName" value="ENCRYPTION_PASSWORD"/>
        </bean>
      </property>
    </enc:encryptor>
  </enc:property-placeholder>

  <bean id="dbbean" class="...">
    <property name="url" value="${mydb.url}"/>
    <property name="username" value="${mydb.username}"/>
    <property name="password" value="${mydb.password}"/>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <from uri="..."/>
      <process ref="dbbean"/>
    </route>
  </camelContext>

</blueprint>

It’s also possible to use encryption outside of ConfigAdmin, by directly loading an “external” properties file using the ext blueprint namespace:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
           xmlns:enc="http://karaf.apache.org/xmlns/jasypt/v1.0.0">

    <ext:property-placeholder>
        <ext:location>file:etc/db.properties</ext:location>
    </ext:property-placeholder>

    <enc:property-placeholder>
        <enc:encryptor class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
          <property name="config">
            <bean class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
              <property name="algorithm" value="PBEWithMD5AndDES"/>
              <property name="passwordEnvName" value="ENCRYPTION_PASSWORD"/>
            </bean>
          </property>
        </enc:encryptor>
    </enc:property-placeholder>

</blueprint>


where etc/db.properties looks like:
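
For illustration, it could look like this (keys and values are placeholders; ENC(xxxxx) stands for the actual Jasypt-encrypted value):

```properties
db.url=localhost:9999
db.username=sa
db.password=ENC(xxxxx)
```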


It’s also possible to use ConfigAdmin directly in code. In that case, you have to create the Jasypt configuration programmatically:

StandardPBEStringEncryptor enc = new StandardPBEStringEncryptor();
EnvironmentStringPBEConfig env = new EnvironmentStringPBEConfig();
env.setAlgorithm("PBEWithMD5AndDES");
env.setPasswordEnvName("ENCRYPTION_PASSWORD");
enc.setConfig(env);

MDC logging with Apache Karaf and Camel

August 31, 2014 Posted by jbonofre

MDC (Mapped Diagnostic Context) logging is an interesting feature to log contextual messages.

It’s common to want to log contextual messages in your application. For instance, we want to log the actions performed by a user (identified by a username or user ID). As you have a lot of simultaneous users on your application, this makes it easier to “follow” the log.

MDC is supported by several logging frameworks, like log4j or slf4j, and so by Karaf (thanks to pax-logging) as well.
The approach is pretty simple:

  1. You define the context using a key ID and a value for the key:
    MDC.put("userid", "user1");
  2. You use the logger as usual; the log messages sent to this logger will be contextual to the current context:
    logger.debug("my message");
  3. After that, we can change the context by overriding the key:
    MDC.put("userid", "user2");
    logger.debug("another message");

    Or you can remove the key, removing the context; the log will then be “global” (not bound to a context):

    MDC.remove("userid"); // or MDC.clear() to remove all
    logger.debug("my global message");
  4. In the configuration, we can use a pattern with %X{key} to log the context. A pattern like %X{userid} - %m%n will result in a log file looking like:
    user1 - my message
    user2 - another message
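
Under the hood, an MDC is essentially a per-thread map; a minimal sketch of the mechanism (not the actual slf4j/log4j implementation) looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of how an MDC works internally: a per-thread map.
// This is an illustration, not the slf4j implementation.
public class MiniMdc {

    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    public static String get(String key) {
        return CONTEXT.get().get(key);
    }

    public static void remove(String key) {
        CONTEXT.get().remove(key);
    }

    public static void main(String[] args) {
        MiniMdc.put("userid", "user1");
        // a %X{userid} pattern reads the value from the current thread's map
        System.out.println(MiniMdc.get("userid")); // prints "user1"
        MiniMdc.remove("userid");
        System.out.println(MiniMdc.get("userid")); // prints "null"
    }
}
```

Because the map is thread-local, two threads can use the same key with different values without interfering, which is exactly why MDC suits per-user or per-thread log context.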

In this blog, we will see how to use MDC in different cases (directly in your bundle, generic Karaf OSGi, and in Camel routes).

The source code of this blog post is available on my github: http://github.com/jbonofre/blog-mdc.

Using MDC in your application/bundle

The purpose here is to use slf4j MDC in our bundle and configure Karaf to create one log file per context.

To illustrate this, we will create multiple threads in the bundle, giving a different context key to each thread:

package net.nanthrax.blog.mdc;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExampleBean {

    private Logger logger = LoggerFactory.getLogger(MdcExampleBean.class);

    public void init() throws Exception {
        CycleThread thread1 = new CycleThread("thread1");
        CycleThread thread2 = new CycleThread("thread2");
        CycleThread thread3 = new CycleThread("thread3");
        thread1.start();
        thread2.start();
        thread3.start();
    }

    class CycleThread extends Thread {
        private String mdcContext;
        public CycleThread(String mdcContext) {
            this.mdcContext = mdcContext;
        }
        public void run() {
            MDC.put("threadId", mdcContext);
            for (int i = 0; i < 20; i++) {
                logger.info("Cycle {}", i);
            }
        }
    }

}

After deploying this bundle in Karaf 3.0.1, we can see the log messages:

karaf@root()> bundle:install mvn:net.nanthrax.blog/mdc-bundle/1.0-SNAPSHOT
karaf@root()> log:display
2014-08-30 09:44:25,594 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 17
2014-08-30 09:44:25,594 | INFO  | Thread-13        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 19
2014-08-30 09:44:25,594 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 18
2014-08-30 09:44:25,595 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 19

Now, we can set up the Karaf etc/org.ops4j.pax.logging.cfg file to use our MDC. For that, we add a MDCSiftingAppender, providing the threadId as MDC key, and displaying the threadId in the log message pattern. We will create one log file per key (threadId in our case), and finally, we add this appender to the rootLogger:

log4j.rootLogger=INFO, out, mdc-bundle, osgi:*
# MDC Bundle appender
log4j.appender.mdc-bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.mdc-bundle.key=threadId
log4j.appender.mdc-bundle.default=unknown
log4j.appender.mdc-bundle.appender=org.apache.log4j.FileAppender
log4j.appender.mdc-bundle.appender.file=${karaf.data}/log/mdc-bundle-$\\{threadId\\}.log
log4j.appender.mdc-bundle.appender.append=true
log4j.appender.mdc-bundle.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.mdc-bundle.appender.layout.ConversionPattern=%d | %-5.5p | %X{threadId} | %m%n

Now, in the Karaf data/log folder, we can see:

mdc-bundle-thread1.log
mdc-bundle-thread2.log
mdc-bundle-thread3.log
each file containing the log messages contextual to the thread:

$ cat data/log/mdc-bundle-thread1.log
2014-08-30 09:54:48,287 | INFO  | thread1 | Cycle 0
2014-08-30 09:54:48,298 | INFO  | thread1 | Cycle 1
2014-08-30 09:54:48,298 | INFO  | thread1 | Cycle 2
2014-08-30 09:54:48,299 | INFO  | thread1 | Cycle 3
2014-08-30 09:54:48,299 | INFO  | thread1 | Cycle 4
$ cat data/log/mdc-bundle-thread2.log
2014-08-30 09:54:48,287 | INFO  | thread2 | Cycle 0
2014-08-30 09:54:48,298 | INFO  | thread2 | Cycle 1
2014-08-30 09:54:48,298 | INFO  | thread2 | Cycle 2
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 3
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 4
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 5

In addition, Karaf “natively” provides OSGi MDC data that we can use.

Using Karaf OSGi MDC

So, in Karaf, you can use directly some OSGi headers for MDC logging, especially the bundle name.

We can use this MDC key to create one log file per bundle.

Karaf already provides a pre-defined appender configuration in etc/org.ops4j.pax.logging.cfg:

# Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=karaf
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.file=${karaf.data}/log/$\\{bundle.name\\}.log
log4j.appender.sift.appender.append=true
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %m%n

The only thing that we have to do is to add this appender to the rootLogger:

log4j.rootLogger=INFO, out, sift, osgi:*

Now, in the Karaf data/log folder, we can see one file per bundle:

data/log$ ls -1

In particular, we can see our mdc-bundle log file, containing the log messages “local” to the bundle.

However, while this approach works great, it doesn’t always create interesting log files. For instance, when you use Camel, using OSGi headers for MDC logging will gather most of the log messages into the camel-core bundle log file, so it is not really contextual to anything, or easy to read/seek.

The good news is that Camel also provides MDC logging support.

Using Camel MDC

While Camel provides MDC logging support, it’s not enabled by default. It’s up to you to enable it on the Camel context.

Once enabled, Camel provides the following MDC logging properties:

  • camel.exchangeId providing the exchange ID
  • camel.messageId providing the message ID
  • camel.routeId providing the route ID
  • camel.contextId providing the Camel Context ID
  • camel.breadcrumbId providing a unique ID used for tracking messages across transports
  • camel.correlationId providing the correlation ID of the exchange (if it’s correlated, for instance like in the Splitter EIP)
  • camel.transactionKey providing the ID of the transaction (for transacted exchanges).

To enable the MDC logging, you have to:

  • if you use the Blueprint or Spring XML DSL:
    <camelContext xmlns="http://camel.apache.org/schema/blueprint" useMDCLogging="true">
  • if you use the Java DSL:
    CamelContext context = ...
    context.setUseMDCLogging(true);
  • using the Talend ESB studio, you have to use a cConfig component from the palette:

So, let’s say we create the following route using the Blueprint DSL:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint" useMDCLogging="true">
        <route id="my-route">
            <from uri="timer:fire?period=5000"/>
            <setBody>
                <constant>Hello Blog</constant>
            </setBody>
            <to uri="log:net.nanthrax.blog?level=INFO"/>
        </route>
    </camelContext>

</blueprint>

We want to create one log file per route (using the routeId). So, we update the Karaf etc/org.ops4j.pax.logging.cfg file to add a MDC sifting appender using the Camel MDC properties, and we add this appender to the rootLogger:

log4j.rootLogger=INFO, out, camel-mdc, osgi:*
# Camel MDC appender
log4j.appender.camel-mdc=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.camel-mdc.key=camel.routeId
log4j.appender.camel-mdc.default=camel
log4j.appender.camel-mdc.appender=org.apache.log4j.FileAppender
log4j.appender.camel-mdc.appender.file=${karaf.data}/log/camel-$\\{camel.routeId\\}.log
log4j.appender.camel-mdc.appender.append=true
log4j.appender.camel-mdc.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.camel-mdc.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{camel.exchangeId} | %m%n

The camel-mdc appender will create one log file per route (named camel-(routeId).log). The log messages will contain the exchange ID.

We start Karaf, and after the installation of the camel-blueprint feature, we can drop our route.xml directly in the deploy folder:

karaf@root()> feature:repo-add camel 2.12.1
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.12.1/xml/features
karaf@root()> feature:install camel-blueprint
cp route.xml apache-karaf-3.0.1/deploy/

Using the log:display command in Karaf, we can see the messages for our route:

karaf@root()> log:display

2014-08-31 08:58:24,176 | INFO  | 0 - timer://fire | blog                             | 85 - org.apache.camel.camel-core - 2.12.1 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:29,176 | INFO  | 0 - timer://fire | blog                             | 85 - org.apache.camel.camel-core - 2.12.1 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]

Now, if we go into the Karaf data/log folder, we can see the log file for our route:

$ ls -1 data/log

If we take a look in the camel-my-route.log file, we can see the messages contextual to the route, including the exchange ID:

2014-08-31 08:58:19,196 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-2 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:24,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-4 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:29,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-6 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:34,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-8 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]

Testing (utest and itest) Apache Camel Blueprint route

August 28, 2014 Posted by jbonofre

In any integration project, testing is vital for multiple reasons:

  • to guarantee that the integration logic matches the expectations
  • to quickly identify some regression issues
  • to test some special cases, like the errors for instance
  • to validate the successful provisioning (deployment) on a runtime as close as possible to the target platform

We distinguish two kinds of tests:

  • the unit tests (utest) aim to test the behaviors of integration logic, and define the expectations that the logic has to match
  • the integration tests (itest) aim to provision the integration logic artifact to a runtime, and check the behaviors on the actual platform

Camel is THE framework to implement your integration logic (mediation).

It provides the Camel Test Kit, based on JUnit, to implement utests. In combination with Karaf and Pax Exam, we can cover both utests and itests.

In this blog, we will:

  • create an OSGi service
  • create a Camel route using the Blueprint DSL, using the previously created OSGi service
  • implement the utest using the Camel Blueprint Test
  • implement the itest using Pax Exam and Karaf

You can find the whole source code used for this blog post on my github: https://github.com/jbonofre/blog-camel-blueprint.

Blueprint Camel route and features

We create a project (using Maven) containing the following modules:

  • my-service is the OSGi bundle providing the service that we will use in the Camel route
  • my-route is the OSGi bundle providing the Camel route, using the Blueprint DSL. This route uses the OSGi service provided by my-service. It’s where we will implement the utest.
  • features packages the OSGi bundles as a Karaf features XML, ready to be deployed.
  • itests contains the integration test (itest) leveraging Karaf and Pax Exam.

It means we have the following parent pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">


    <name>Nanthrax Blog :: Camel :: Blueprint</name>



OSGi service

The my-service Maven module provides an OSGi bundle providing an echo service.

It uses the following Maven POM:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">



    <name>Nanthrax Blog :: Camel :: Blueprint :: Service</name>




The echo service is described by the net.nanthrax.blog.service.EchoService interface:

package net.nanthrax.blog.service;

public interface EchoService {

    public String echo(String message);

}


We expose the package containing this interface using the OSGi Export-Package header.

The implementation of the EchoService is hidden using the OSGi Private-Package header. This implementation is very simple: it gets a message and returns the same message with the “Echoing ” prefix:

package net.nanthrax.blog.service.internal;

import net.nanthrax.blog.service.EchoService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EchoServiceImpl implements EchoService {

    private final static Logger LOGGER = LoggerFactory.getLogger(EchoServiceImpl.class);

    public String echo(String message) {
        return "Echoing " + message;
    }

}


To expose this service in OSGi, we use blueprint. We create the blueprint descriptor in src/main/resources/OSGI-INF/blueprint/blueprint.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <service interface="net.nanthrax.blog.service.EchoService">
        <bean class="net.nanthrax.blog.service.internal.EchoServiceImpl"/>
    </service>

</blueprint>


The Camel route will use this Echo service.

Camel route and utest

We use the Camel Blueprint DSL to design the route.

The route is packaged as an OSGi bundle, in the my-route Maven module, using the following pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">



    <name>Nanthrax Blog :: Camel :: Blueprint :: My Route</name>




The src/main/resources/OSGI-INF/blueprint/route.xml contains the route definition:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="myService" interface="net.nanthrax.blog.service.EchoService"/>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="my-route">
            <from uri="timer:fire?period=5000"/>
            <setBody>
                <constant>Hello Blog</constant>
            </setBody>
            <to uri="bean:myService"/>
            <to uri="log:net.nanthrax.blog.route"/>
            <to uri="file:camel-output"/>
        </route>
    </camelContext>

</blueprint>


This route:

  • creates an exchange every 5 seconds, using a Camel timer
  • we set the body of the “in” message in the exchange to “Hello Blog”
  • the message is sent to the EchoService, which prefixes the message with “Echoing”, resulting in an updated message containing “Echoing Hello Blog”
  • we log the exchange
  • we create a file for each exchange, in the camel-output folder, using the Camel file component

We can now create the utest for this route.

As this route uses Blueprint, and Blueprint is an OSGi specific technology, normally, we would have to deploy the route on Karaf to test it.

However, thanks to Camel Blueprint Test and the use of PojoSR, we can test the Blueprint route “outside” of OSGi. Camel Blueprint Test also supports a mock of the OSGi service registry, allowing us to mock the OSGi service as well.

Basically, in the unit test, we:

  • load the route Blueprint XML by overriding the getBlueprintDescriptor() method
  • mock the timer and file endpoints by overriding the isMockEndpointsAndSkip() method (skip means that we don’t send the message to the actual endpoint)
  • mock the Echo OSGi service by overriding the addServicesOnStartup() method
  • finally implement a test in the testMyRoute() method

The test itself gets the mocked file endpoint, and defines the expectations on this endpoint: we expect one message containing “Echoing Hello Blog” on the file endpoint.
Instead of using the actual timer endpoint, we mock it and we use the producer template to send an exchange (in order to control the number of created exchanges).
Finally, we check if the expectations are satisfied on the mocked file endpoint.

package net.nanthrax.blog;

import net.nanthrax.blog.service.EchoService;
import net.nanthrax.blog.service.internal.EchoServiceImpl;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.model.language.ConstantExpression;
import org.apache.camel.test.blueprint.CamelBlueprintTestSupport;
import org.apache.camel.util.KeyValueHolder;
import org.junit.Test;

import java.util.Dictionary;
import java.util.Map;

public class MyRouteTest extends CamelBlueprintTestSupport {

    @Override
    protected String getBlueprintDescriptor() {
        return "OSGI-INF/blueprint/route.xml";
    }

    @Override
    public String isMockEndpointsAndSkip() {
        return "((file)|(timer)):(.*)";
    }

    @Override
    protected void addServicesOnStartup(Map<String, KeyValueHolder<Object, Dictionary>> services) {
        KeyValueHolder serviceHolder = new KeyValueHolder(new EchoServiceImpl(), null);
        services.put(EchoService.class.getName(), serviceHolder);
    }

    @Test
    public void testMyRoute() throws Exception {

        // mock the file endpoint and define the expectation
        MockEndpoint mockEndpoint = getMockEndpoint("mock:file:camel-output");
        mockEndpoint.expectedBodiesReceived("Echoing Hello Blog");

        // send a message at the timer endpoint level
        template.sendBody("mock:timer:fire", "empty");

        // check if the expectation is satisfied
        mockEndpoint.assertIsSatisfied();
    }

}

We can see that we mock the Echo OSGi service using the actual EchoServiceImpl. However, it’s of course possible to use your own local test implementation of the EchoService, which is useful to test specific use cases or to simulate errors.

We can note that we use a regex (((file)|(timer)):(.*)) to mock both timer and file endpoints.
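To make the pattern concrete, here is a standalone sketch (outside the test, using only java.util.regex) checking which endpoint URIs the expression matches, assuming, as Camel does, that the pattern is applied to the endpoint URI:

```java
import java.util.regex.Pattern;

public class EndpointPatternDemo {
    public static void main(String[] args) {
        // the same pattern returned by isMockEndpointsAndSkip()
        Pattern pattern = Pattern.compile("((file)|(timer)):(.*)");

        // the two endpoints of the route are matched, so both get mocked and skipped
        System.out.println(pattern.matcher("timer://fire?period=5000").matches()); // true
        System.out.println(pattern.matcher("file://camel-output").matches());      // true

        // any other endpoint is left untouched
        System.out.println(pattern.matcher("direct:start").matches());             // false
    }
}
```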

We load the route.xml blueprint descriptor directly from the bundle location (OSGI-INF/blueprint/route.xml).

We can run mvn to test the route:

my-route$ mvn clean install
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Nanthrax Blog :: Camel :: Blueprint :: My Route 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 1 resource
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] No sources to compile
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/resources
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Changes detected - recompiling the module!
[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 1 source file to /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/test-classes
[WARNING] /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/java/net/nanthrax/blog/MyRouteTest.java: /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/java/net/nanthrax/blog/MyRouteTest.java uses unchecked or unsafe operations.
[WARNING] /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/java/net/nanthrax/blog/MyRouteTest.java: Recompile with -Xlint:unchecked for details.
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Surefire report directory: /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/surefire-reports

 T E S T S
Running net.nanthrax.blog.MyRouteTest
[main] INFO org.apache.camel.test.blueprint.CamelBlueprintHelper - Using Blueprint XML file: /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/classes/OSGI-INF/blueprint/route.xml
Aug 28, 2014 2:57:43 PM org.ops4j.pax.swissbox.tinybundles.core.metadata.RawBuilder run
INFO: Copy thread finished.
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator starting
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator started
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - No quiesce support is available, so blueprint components will not participate in quiesce operations
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO net.nanthrax.blog.MyRouteTest - Testing: testMyRoute(net.nanthrax.blog.MyRouteTest)
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO net.nanthrax.blog.MyRouteTest - Skipping starting CamelContext as system property skipStartingCamelContext is set to be true.
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) is starting
[main] INFO org.apache.camel.management.DefaultManagementStrategy - JMX is disabled
[main] INFO org.apache.camel.impl.InterceptSendToMockEndpointStrategy - Adviced endpoint [timer://fire?period=5000] with mock endpoint [mock:timer:fire]
[main] INFO org.apache.camel.impl.InterceptSendToMockEndpointStrategy - Adviced endpoint [file://camel-output] with mock endpoint [mock:file:camel-output]
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Route: route1 started and consuming from: Endpoint[timer://fire?period=5000]
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Total 1 routes, of which 1 is started.
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) started in 0.069 seconds
[main] INFO org.apache.camel.component.mock.MockEndpoint - Asserting: Endpoint[mock://file:camel-output] is satisfied
[Camel (23-camel-3) thread #0 - timer://fire] INFO net.nanthrax.blog.route - Exchange[ExchangePattern: InOnly, BodyType: String, Body: Echoing Hello Blog]
[main] INFO org.apache.camel.component.mock.MockEndpoint - Asserting: Endpoint[mock://timer:fire] is satisfied
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO net.nanthrax.blog.MyRouteTest - Testing done: testMyRoute(net.nanthrax.blog.MyRouteTest)
[main] INFO net.nanthrax.blog.MyRouteTest - Took: 1.094 seconds (1094 millis)
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) is shutting down
[main] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 10 seconds)
[Camel (23-camel-3) thread #1 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: Endpoint[timer://fire?period=5000]
[main] INFO org.apache.camel.impl.DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 0 seconds
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) uptime 1.117 seconds
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) is shutdown in 0.021 seconds
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle MyRouteTest
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle net.nanthrax.blog.camel.route.blueprint.service
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle org.apache.aries.blueprint
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle org.apache.camel.camel-blueprint
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator stopping
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator stopped
[main] INFO org.apache.camel.test.blueprint.CamelBlueprintHelper - Deleting work directory target/bundles/1409230663118
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.581 sec - in net.nanthrax.blog.MyRouteTest

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] --- maven-bundle-plugin:2.4.0:bundle (default-bundle) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[WARNING] Bundle net.nanthrax.blog:net.nanthrax.blog.camel.route.blueprint.myroute:bundle:1.0-SNAPSHOT : Unused Private-Package instructions, no such package(s) on the class path: [!*]
[INFO] --- maven-install-plugin:2.5.1:install (default-install) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.jar to /home/jbonofre/.m2/repository/net/nanthrax/blog/net.nanthrax.blog.camel.route.blueprint.myroute/1.0-SNAPSHOT/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.jar
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/my-route/pom.xml to /home/jbonofre/.m2/repository/net/nanthrax/blog/net.nanthrax.blog.camel.route.blueprint.myroute/1.0-SNAPSHOT/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.pom
[INFO] --- maven-bundle-plugin:2.4.0:install (default-install) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Installing net/nanthrax/blog/net.nanthrax.blog.camel.route.blueprint.myroute/1.0-SNAPSHOT/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.jar
[INFO] Writing OBR metadata
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.906s
[INFO] Finished at: Thu Aug 28 14:57:47 CEST 2014
[INFO] Final Memory: 22M/557M
[INFO] ------------------------------------------------------------------------

Again, the purpose of the utest is to test the behavior of the route: check if the message content is what we expect, if the message arrives on the expected endpoint, etc.

Karaf features and itests

The purpose of the itest is not really to test the behavior of the route: it’s more to test whether the provisioning (deployment) of the route is OK, whether the route starts without problems, and, when possible, whether the “default” behavior is what we expect.

While it’s possible to deploy the bundles one by one (first the one providing the Echo service, then the one providing the route), with Karaf it’s much easier to create a features XML.

That’s what we do in the features Maven module, grouping the bundles in two features, as shown in the following features XML:

<?xml version="1.0" encoding="UTF-8"?>
<features name="blog-camel-blueprint" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">

    <feature name="blog-camel-blueprint-service" version="${project.version}">
        <!-- bundle location assumed from the module artifactId -->
        <bundle>mvn:net.nanthrax.blog/net.nanthrax.blog.camel.route.blueprint.service/${project.version}</bundle>
    </feature>

    <feature name="blog-camel-blueprint-route" version="${project.version}">
        <feature>blog-camel-blueprint-service</feature>
        <!-- bundle location assumed from the module artifactId -->
        <bundle>mvn:net.nanthrax.blog/net.nanthrax.blog.camel.route.blueprint.myroute/${project.version}</bundle>
    </feature>

</features>

Now, we can use Pax Exam to implement our itests, by:

  • bootstrapping a Karaf container, where we deploy camel-blueprint and our features
  • testing if the provisioning is OK
  • creating a local route to test the output of my-route

We do that in the itests Maven module, where we define the Pax Exam dependency:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>net.nanthrax.blog</groupId>
    <artifactId>itests</artifactId>
    <version>1.0-SNAPSHOT</version>

    <!-- dependencies reconstructed from the build output (Pax Exam 3.4.0, Camel 2.12.1, Karaf 2.3.6); only the essential ones are shown -->
    <dependencies>
        <!-- Pax Exam -->
        <dependency>
            <groupId>org.ops4j.pax.exam</groupId>
            <artifactId>pax-exam-container-karaf</artifactId>
            <version>3.4.0</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.ops4j.pax.exam</groupId>
            <artifactId>pax-exam-junit4</artifactId>
            <version>3.4.0</version>
            <scope>test</scope>
        </dependency>
        <!-- Camel Test -->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-test</artifactId>
            <version>2.12.1</version>
            <scope>test</scope>
        </dependency>
        <!-- Karaf -->
        <dependency>
            <groupId>org.apache.karaf</groupId>
            <artifactId>apache-karaf</artifactId>
            <version>2.3.6</version>
            <type>tar.gz</type>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>

We create MyRouteTest in src/test/java/net/nanthrax/blog/itests:

package net.nanthrax.blog.itests;

import static org.ops4j.pax.exam.CoreOptions.maven;
import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.*;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.model.language.ConstantExpression;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.apache.karaf.features.FeaturesService;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.PaxExam;
import org.ops4j.pax.exam.karaf.options.LogLevelOption;
import org.osgi.framework.BundleContext;

import javax.inject.Inject;

import java.io.File;

@RunWith(PaxExam.class)
public class MyRouteTest extends CamelTestSupport {

    @Inject
    protected FeaturesService featuresService;

    @Inject
    protected BundleContext bundleContext;

    @Configuration
    public static Option[] configure() throws Exception {
        return new Option[] {
                // Karaf distribution to use (version taken from the test output)
                karafDistributionConfiguration()
                        .frameworkUrl(maven().groupId("org.apache.karaf").artifactId("apache-karaf").type("tar.gz").version("2.3.6"))
                        .karafVersion("2.3.6")
                        .name("Apache Karaf")
                        .unpackDirectory(new File("target/paxexam/unpack")),
                logLevel(LogLevelOption.LogLevel.WARN),
                features(maven().groupId("org.apache.camel.karaf").artifactId("apache-camel").type("xml").classifier("features").version("2.12.1"), "camel-blueprint", "camel-test"),
                features(maven().groupId("net.nanthrax.blog").artifactId("camel-blueprint").type("xml").classifier("features").version("1.0-SNAPSHOT"), "blog-camel-blueprint-route"),
        };
    }

    @Test
    public void testProvisioning() throws Exception {
        // first check that the features are installed
        assertTrue(featuresService.isInstalled(featuresService.getFeature("camel-blueprint")));
        assertTrue(featuresService.isInstalled(featuresService.getFeature("blog-camel-blueprint-route")));

        // now we check if the OSGi services corresponding to the camel context and route are there
        assertNotNull(bundleContext.getServiceReference("org.apache.camel.CamelContext"));
    }

    @Test
    public void testMyRoute() throws Exception {
        MockEndpoint itestMock = getMockEndpoint("mock:itest");
        // we expect at least 3 messages within a 20 second period
        itestMock.expectedMinimumMessageCount(3);
        itestMock.setResultWaitTime(20000);
        itestMock.whenAnyExchangeReceived(new Processor() {
            public void process(Exchange exchange) {
                // display the content of the message
                System.out.println(exchange.getIn().getBody(String.class));
            }
        });
        assertMockEndpointsSatisfied();
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // consume the files created by my-route and send them to the itest mock endpoint
                from("file:camel-output").to("mock:itest");
            }
        };
    }

}

In this test class, we can see:

  • the configure() method where we define the Karaf distribution to use, the log level, the Camel features XML location and the Camel features that we want to install (camel-blueprint and camel-test), the location of our features XML and the feature that we want to install (blog-camel-blueprint-route)
  • the testProvisioning() method where we check if the features have been correctly installed
  • the createRouteBuilder() method where we programmatically create a new route (using the Java DSL here) consuming the files created by my-route and sending to a mock endpoint
  • the testMyRoute() method gets the itest mock endpoint (from the route created by the createRouteBuilder() method) and checks that it receives at least 3 messages within a 20 second period (it also displays the content of each message)

Running mvn bootstraps a Karaf instance, installs the features, deploys our test bundle, and checks the execution:

itests$ mvn clean install
 T E S T S
Running net.nanthrax.blog.itests.MyRouteTest
[org.ops4j.pax.exam.spi.DefaultExamSystem] : Pax Exam System (Version: 3.4.0) created.
[org.ops4j.store.intern.TemporaryStore] : Storage Area is /tmp/1409248259083-0
[org.ops4j.pax.exam.junit.impl.ProbeRunner] : creating PaxExam runner for class net.nanthrax.blog.itests.MyRouteTest
[org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer] : Test Container started in 3 millis
[org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer] : Wait for test container to finish its initialization [ RelativeTimeout value = 180000 ]
[org.ops4j.pax.exam.rbc.client.RemoteBundleContextClient] : Waiting for remote bundle context.. on 21414 name: 7cd8df34-0ed2-4449-8d60-d51f395cfa1d timout: [ RelativeTimeout value = 180000 ]
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (2.3.6)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'osgi:shutdown' or 'logout' to shutdown Karaf.

karaf@root> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[org.ops4j.pax.exam.rbc.client.RemoteBundleContextClient] : Remote bundle context found after 5774 millis
[org.ops4j.pax.tinybundles.core.intern.RawBuilder] : make()
[org.ops4j.store.intern.TemporaryStore] : Enter store()
[org.ops4j.pax.tinybundles.core.intern.RawBuilder] : Creating manifest from added headers.
[org.ops4j.pax.exam.container.remote.RBCRemoteTarget] : Installed bundle (from stream) as ID: 102
[org.ops4j.pax.exam.container.remote.RBCRemoteTarget] : call [[TestAddress:PaxExam-d7899c82-74e1-445e-9fcb-ab9b18e286b4 root:PaxExam-5dfb0f4b-96d9-4226-bdea-5b057e7e7335]]
Echoing Hello Blog
Echoing Hello Blog
Echoing Hello Blog
Echoing Hello Blog
Results :

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

[INFO] --- maven-jar-plugin:2.3.2:jar (default-jar) @ itests ---
[WARNING] JAR will be empty - no content was marked for inclusion!
[INFO] Building jar: /home/jbonofre/Workspace/blog-camel-blueprint/itests/target/itests-1.0-SNAPSHOT.jar
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ itests ---
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/itests/target/itests-1.0-SNAPSHOT.jar to /home/jbonofre/.m2/repository/net/nanthrax/blog/itests/1.0-SNAPSHOT/itests-1.0-SNAPSHOT.jar
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/itests/pom.xml to /home/jbonofre/.m2/repository/net/nanthrax/blog/itests/1.0-SNAPSHOT/itests-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35.904s
[INFO] Finished at: Thu Aug 28 19:51:32 CEST 2014
[INFO] Final Memory: 28M/430M
[INFO] ------------------------------------------------------------------------

Integration in Jenkins

We can now integrate our project in Jenkins CI. We now have a complete CI chain covering the build of the service, the packaging of the route, the utests on the route, and the itests of the service and route in Karaf.




Apache JMeter to test Apache ActiveMQ on CI with Maven/Jenkins

August 27, 2014 Posted by jbonofre

Apache JMeter is a great tool for testing, especially performance testing.
It provides a lot of samplers that you can use to test your web services, web applications, etc.

It also includes a couple of samplers for JMS that we can use with ActiveMQ.

The source code of this blog post is https://github.com/jbonofre/blog-jmeter.

Preparing JMeter for ActiveMQ

For this article, I downloaded JMeter 2.10 from http://jmeter.apache.org.

We uncompress jmeter in a folder:

$ tar zxvf apache-jmeter-2.10.tgz

We are going to create a test plan for ActiveMQ. After downloading ActiveMQ 5.9.0 from http://activemq.apache.org, we install and start an ActiveMQ broker on the machine.

$ tar zxvf apache-activemq-5.9.0-bin.tar.gz
$ cd apache-activemq-5.9.0/bin
$ ./activemq console
 INFO | Apache ActiveMQ 5.9.0 (localhost, ID:latitude-45782-1409139630277-0:1) started

In order to use ActiveMQ with JMeter, we have to copy the activemq-all-5.9.0.jar file provided in the ActiveMQ distribution into the JMeter lib folder:

$ cp apache-activemq-5.9.0/activemq-all-5.9.0.jar apache-jmeter-2.10/lib/

We can now start jmeter and start to create our ActiveMQ test plan:

$ cd apache-jmeter-2.10/bin
$ ./jmeter.sh

In the default test plan, we add a thread group to simulate 5 JMS clients that will perform the samplers 10 times:


In this thread group, we add a JMS Publisher sampler that will produce a message in ActiveMQ:


We can note the ActiveMQ configuration:

  • the sampler uses the ActiveMQ JNDI initial context factory (org.apache.activemq.jndi.ActiveMQInitialContextFactory)
  • the Provider URL is the ActiveMQ connection URL (tcp://localhost:61616 in my case). You can use any kind of ActiveMQ URL here, for instance failover:(tcp://host1:61616,tcp://host2:61616).
  • the connection factory is simply the default one provided by ActiveMQ: ConnectionFactory.
  • the destination is the name of the JMS queue where we want to produce the message, prefixed with dynamicQueues: dynamicQueues/MyQueue.
  • by default, ActiveMQ 5.9.0 uses the authorization plugin, so the client has to authenticate to be able to produce a message. The default ActiveMQ username is admin, and the default password is admin.
  • finally, we set the body of the message as static using the textarea: JMeter message ...
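These sampler fields map one-to-one onto a standard JNDI environment. The following sketch shows the equivalent client-side setup; the lookup itself is only shown in comments, since it needs the activemq-all jar on the classpath and a running broker:

```java
import java.util.Properties;

public class JmsSamplerEnvironment {
    public static void main(String[] args) {
        // JNDI environment equivalent to the JMS Publisher sampler configuration
        Properties env = new Properties();
        env.setProperty("java.naming.factory.initial",
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        env.setProperty("java.naming.provider.url", "tcp://localhost:61616");

        // with these properties, a JMS client would then do:
        //   Context ctx = new InitialContext(env);
        //   ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        //   Destination queue = (Destination) ctx.lookup("dynamicQueues/MyQueue");
        //   Connection connection = cf.createConnection("admin", "admin");

        System.out.println(env.getProperty("java.naming.provider.url"));
    }
}
```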

Now, we save the plan in a file named activemq.jmx.

For a quick test, we can add a Graph Results listener to the thread group and run the plan:


We can check in the ActiveMQ console (pointing a browser to http://localhost:8161/admin) that the queue MyQueue contains the messages sent by JMeter:



Our test plan is working, and we have some metrics about the execution in the graph (it’s really fast on my laptop ;)).

This approach is great to easily implement performance benchmarks and to create some load on ActiveMQ (to test some tuning and configuration, for instance).

It can make sense to do it in a continuous integration process. So, let’s see how we can run JMeter with Maven and integrate it in Jenkins.

Using jmeter maven plugin

We have two ways to call JMeter with Maven:

  • we can call the local JMeter instance using the exec-maven-plugin. JMeter can be called in “batch mode” (without the GUI) using the following command:
    $ apache-jmeter-2.10/bin/jmeter.sh -n -t activemq.jmx -l activemq.jtl -j activemq.jmx.log

    We use the options:

    • -n to disable the GUI
    • -t to specify the location of the test plan file (.jmx)
    • -l to specify the location of the test plan execution results
    • -j to specify the location of the test plan execution log
  • we can use the JMeter Maven plugin. It’s the one that I will use in this blog post.

The JMeter Maven plugin allows you to run a JMeter test plan directly from Maven. It doesn’t require a local JMeter instance: the plugin downloads and bootstraps a JMeter instance.

The plugin will look for JMeter JMX files in the src/test/jmeter folder by default.

We create a POM to run JMeter:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <!-- groupId assumed; artifactId, version, and plugin coordinates taken from the build output -->
    <groupId>net.nanthrax.blog</groupId>
    <artifactId>jmeter</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>com.lazerycode.jmeter</groupId>
                <artifactId>jmeter-maven-plugin</artifactId>
                <version>1.9.1</version>
                <executions>
                    <execution>
                        <id>jmeter-test</id>
                        <phase>verify</phase>
                        <goals>
                            <goal>jmeter</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

We can now run the JMeter test plan:

$ mvn clean verify
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building jmeter 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] --- jmeter-maven-plugin:1.9.1:jmeter (jmeter-test) @ jmeter ---
[INFO] -------------------------------------------------------
[INFO]  P E R F O R M A N C E    T E S T S
[INFO] -------------------------------------------------------
[debug] JMeter is called with the following command line arguments: -n -t /home/jbonofre/Workspace/jmeter/src/test/jmeter/activemq.jmx -l /home/jbonofre/Workspace/jmeter/target/jmeter/results/20140827-activemq.jtl -d /home/jbonofre/Workspace/jmeter/target/jmeter -j /home/jbonofre/Workspace/jmeter/target/jmeter/logs/activemq.jmx.log
[info] Executing test: activemq.jmx
[info] Completed Test: activemq.jmx
[INFO] Test Results:
[INFO] Tests Run: 1, Failures: 0
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.077s
[INFO] Finished at: Wed Aug 27 14:58:09 CEST 2014
[INFO] Final Memory: 14M/303M
[INFO] ------------------------------------------------------------------------

We can see in the ActiveMQ console that the JMeter messages have been sent.

We are now ready to integrate this build in Jenkins:



We have now included the performance tests in our Jenkins CI.

I would advise running the performance tests in a dedicated module or profile, and configuring the Jenkins job to execute them once per week for instance, or linking them to a release.

That way, we keep our development-oriented nightly builds, and we can periodically execute the performance tests, as well as run them for a release.
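One way to separate these runs (a sketch; the profile id is arbitrary) is to guard the JMeter plugin behind a Maven profile that the weekly Jenkins job activates explicitly:

<profiles>
    <profile>
        <id>performance</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>com.lazerycode.jmeter</groupId>
                    <artifactId>jmeter-maven-plugin</artifactId>
                    <version>1.9.1</version>
                    <executions>
                        <execution>
                            <id>jmeter-test</id>
                            <phase>verify</phase>
                            <goals>
                                <goal>jmeter</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

The nightly job then runs a plain mvn clean install, while the weekly performance job runs mvn clean verify -Pperformance.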

Webex on Ubuntu 14.04

August 22, 2014 Posted by jbonofre

Webex is a great tool but, unfortunately, it doesn’t work “out of the box” on Ubuntu 14.04 (as with previous Ubuntu releases).

For instance, the webex applet starts but doesn’t refresh correctly, or sharing the desktop or an application doesn’t work.

Actually, the issue is due to:

  • some libraries required by webex are missing from the Ubuntu installation
  • webex expects to run on an i386 (not amd64) platform, so even if you have the libraries installed, you have to install their i386 versions.

To find the required libraries, go into the $HOME/.webex/1324 and $HOME/.webex/1424 folders and check the libraries with:

ldd *.so|grep -i not

For all missing libraries (marked “not found”), you have to find the package providing the library using:

apt-file search xxxxxx.so

Once you have found the package providing the library, install it for both amd64 (the default if your machine is 64-bit) and i386. For instance:

aptitude install libpangox-1.0-0
aptitude install libpangox-1.0-0:i386

For instance, on my laptop, I had to install:


Apache Syncope backend with Apache Karaf

August 17, 2014 Posted by jbonofre

Apache Syncope is an identity manager (IdM). It comes with a web console where you can manage users, attributes, roles, etc.
It also comes with a REST API allowing you to integrate it with other applications.

By default, Syncope has its own database, but it can also “façade” another backend (LDAP, Active Directory, JDBC) by using ConnId.

In the coming releases (4.0.0, 3.0.2, 2.4.0, and 2.3.7), Karaf provides (by default) a SyncopeLoginModule allowing you to use Syncope as the backend for users and roles.

This blog introduces this new feature and explains how to configure and use it.

Installing Apache Syncope

The easiest way to start with Syncope is to use the Syncope standalone distribution. It comes with an Apache Tomcat instance already installed with the different Syncope modules.

You can download the Syncope standalone distribution archive from http://www.apache.org/dyn/closer.cgi/syncope/1.1.8/syncope-standalone-1.1.8-distribution.zip.

Uncompress the distribution in the directory of your choice:

$ unzip syncope-standalone-1.1.8-distribution.zip

You can find the ready-to-use Tomcat instance in the extracted directory. We can start Tomcat:

$ cd syncope-standalone-1.1.8
$ cd apache-tomcat-7.0.54
$ bin/startup.sh

The Tomcat instance runs on port 9080.

You can access the Syncope console by pointing your browser on http://localhost:9080/syncope-console.

The default admin username is “admin”, and password is “password”.

The Syncope REST API is available on http://localhost:9080/syncope/cxf.

The purpose is to use Syncope as the backend for Karaf users and roles (in the default “karaf” security realm).
So, first, we create the “admin” role in Syncope:




Now, we can create a user of our choice, let’s say “myuser” with “myuser01” as password.


As we want “myuser” as Karaf administrator, we define the “admin” role for “myuser”.


“myuser” has been created.



Syncope is now ready to be used by Karaf (including users and roles).

Karaf SyncopeLoginModule

Karaf provides a complete security framework allowing you to use JAAS in an OSGi-compliant way.

Karaf itself uses a realm named “karaf”: it’s the one used by SSH, JMX, and the WebConsole by default.

By default, Karaf uses two login modules for the “karaf” realm:

  • the PropertiesLoginModule uses etc/users.properties as storage for users and roles (with user passwords)
  • the PublickeyLoginModule uses etc/keys.properties as storage for users and roles (with user public keys)

In the coming Karaf versions (3.0.2, 2.4.0, 2.3.7), a new login module is available: the SyncopeLoginModule.

To enable the SyncopeLoginModule, we just create a blueprint descriptor that we drop into the deploy folder. The configuration of the Syncope login module is pretty simple: it just requires the address of the Syncope REST API:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0">

    <jaas:config name="karaf" rank="1">
        <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                     flags="required">
            address=http://localhost:9080/syncope/cxf
        </jaas:module>
    </jaas:config>

</blueprint>


You can see that the login module is enabled for the “karaf” realm using the jaas:realm-list command:

karaf@root()> jaas:realm-list 
Index | Realm Name | Login Module Class Name                                 
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule

We can now log in over SSH using “myuser”, which is configured in Syncope:

~$ ssh -p 8101 myuser@localhost
The authenticity of host '[localhost]:8101 ([]:8101)' can't be established.
DSA key fingerprint is b3:4a:57:0e:b8:2c:7e:e6:1c:f1:e2:88:dc:bf:f9:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:8101' (DSA) to the list of known hosts.
Password authentication
        __ __                  ____      
       / //_/____ __________ _/ __/      
      / ,<  / __ `/ ___/ __ `/ /_        
     / /| |/ /_/ / /  / /_/ / __/        
    /_/ |_|\__,_/_/   \__,_/_/         

  Apache Karaf (4.0.0-SNAPSHOT)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.


Our Karaf instance now uses Syncope for users and roles.

Karaf SyncopeBackendEngine

In addition to the login module, Karaf also ships a SyncopeBackendEngine. The purpose of the Syncope backend engine is to manipulate the users and roles of the backend directly from Karaf. Thanks to the backend engine, you can list the users, add a new user, etc., directly from Karaf.

However, for security and consistency reasons, the SyncopeBackendEngine only supports listing the users and roles defined in Syncope: creating or deleting a user or role directly from Karaf is disabled, as those operations should be performed from the Syncope console.

To enable the Syncope backend engine, you have to register the backend engine as an OSGi service. Moreover, the SyncopeBackendEngine requires two additional options on the login module: admin.user and admin.password, corresponding to a Syncope admin user.

We have to update the blueprint descriptor like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0">

    <jaas:config name="karaf" rank="5">
        <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                     flags="required">
            address=http://localhost:9080/syncope/cxf
            admin.user=admin
            admin.password=password
        </jaas:module>
    </jaas:config>

    <service interface="org.apache.karaf.jaas.modules.BackingEngineFactory">
        <bean class="org.apache.karaf.jaas.modules.syncope.SyncopeBackingEngineFactory"/>
    </service>

</blueprint>


With the SyncopeBackingEngineFactory registered as an OSGi service, we can, for instance, list the users (and their roles) defined in Syncope.

To do it, we can use the jaas:user-list command:

myuser@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule
myuser@root()> jaas:realm-manage --index 1
myuser@root()> jaas:user-list
User Name | Group | Role
rossini   |       | root
rossini   |       | otherchild
verdi     |       | root
verdi     |       | child
verdi     |       | citizen
vivaldi   |       |
bellini   |       | managingDirector
puccini   |       | artDirector
myuser    |       | admin

We can see all the users and roles defined in Syncope, including our “myuser” with our “admin” role.

Using Karaf JAAS realms

In Karaf, you can create any number of JAAS realms that you want.
It means that existing applications or your own applications can directly use a realm to delegate authentication and authorization.

For instance, Apache CXF provides a JAASLoginInterceptor allowing you to add authentication by configuration. The following Spring or Blueprint snippet shows how to use the “karaf” JAAS realm:

<jaxws:endpoint address="/service">
    <jaxws:inInterceptors>
        <ref bean="authenticationInterceptor"/>
    </jaxws:inInterceptors>
</jaxws:endpoint>

<bean id="authenticationInterceptor" class="org.apache.cxf.interceptor.security.JAASLoginInterceptor">
    <property name="contextName" value="karaf"/>
</bean>

The same configuration can be applied to a jaxrs endpoint instead of a jaxws endpoint.

As Pax Web leverages Jetty, you can also define your Jetty security configuration in your web application.
For instance, in the META-INF/spring/jetty-security.xml of your application, you can define the security constraints:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="loginService" class="org.eclipse.jetty.plus.jaas.JAASLoginService">
    <property name="name" value="karaf"/>
    <property name="loginModuleName" value="karaf"/>
  </bean>

  <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC"/>
    <property name="roles" value="user"/>
    <property name="authenticate" value="true"/>
  </bean>

  <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping">
    <property name="constraint" ref="constraint"/>
    <property name="pathSpec" value="/*"/>
  </bean>

  <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
    <property name="authenticator">
      <bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator"/>
    </property>
    <property name="constraintMappings">
      <list>
        <ref bean="constraintMapping"/>
      </list>
    </property>
    <property name="loginService" ref="loginService"/>
    <property name="strict" value="false"/>
  </bean>

</beans>

We can link the security constraint in the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>All files</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>user</role-name>
        </auth-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>karaf</realm-name>
    </login-config>
    <security-role>
        <role-name>user</role-name>
    </security-role>
</web-app>

Thanks to that, your web application will use the “karaf” JAAS realm, which can delegate the storage of users and roles to Syncope.

Thanks to the Syncope Login Module, Karaf becomes even more flexible for user authentication and authorization, as the users/roles backend doesn’t have to be embedded in Karaf itself (as it is with the PropertiesLoginModule): Karaf can delegate to Syncope, which can act as a façade over many different actual backends.