Posts Tagged: ‘karaf’

Talend ESB: query a database directly in the mediation perspective

August 3, 2015 Posted by jbonofre

When exposing a database as a REST or SOAP service, many users use either:

  • the integration perspective to create a service, without leveraging Camel
  • the mediation perspective to create a route containing a cTalendJob

However, there’s an easy alternative.

Using the mediation perspective, we can directly use a DataSource exposed in the runtime (Karaf) as an OSGi service, together with native Camel components.

The advantages of this approach are:

  • The same DataSource is shared by different routes and services. It means that we can use a PooledDataSource and optimize the connections to the database.
  • We don’t use any Talend job, and directly leverage Camel native components.

Route design in the studio

We create a route in the mediation perspective of the studio.


First, in the Spring tab, we add the DataSource OSGi service lookup. To do so, we add the spring-osgi namespace and use the osgi:reference element:

<beans ....
   xmlns:osgi="http://www.springframework.org/schema/osgi"
   xsi:schemaLocation="
      ...
      http://www.springframework.org/schema/osgi http://www.springframework.org/schema/osgi/spring-osgi.xsd
      ...">

  <osgi:reference id="demoDS" interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=jdbc/demo)"/>

</beans>


You can use the same mechanism to load a JMS connection factory: reference a JMS ConnectionFactory OSGi service, and then either use the camel-jms component in a cMessagingEndpoint, or use the cJMS component with an empty custom cJMSConnectionFactory.
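For instance, the following Spring snippet (a sketch, assuming a connection factory registered with the JNDI service name jms/demo) looks up a ConnectionFactory the same way:

<osgi:reference id="demoCF" interface="javax.jms.ConnectionFactory" filter="(osgi.jndi.service.name=jms/demo)"/>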

Now, we can start the actual design of our route.

As we want to expose a REST service, the route starts with a CXFRS endpoint.

The CXFRS endpoint property is set to "/demo". It's not an absolute URL: I recommend using a relative URL, as it will bind relative to the CXF servlet in the runtime, and so leverage the CXF and Jetty configuration of the runtime.

We also create an API mapping, getMembers, producing JSON output.


NB: the methodName (here getMembers) is available as a header. You can use a Content Based Router just after the CXFRS endpoint to route the different methods/REST actions to different sub-routes/endpoints. Here, as we have only one method/action (getMembers), I route directly to a unique endpoint; a sketch of such a router follows below.
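As a sketch, the content-based routing placed after the CXFRS endpoint could look like this in Camel XML (assuming the method name is exposed in the operationName header; the direct endpoints are invented for illustration):

<choice>
  <when>
    <simple>${header.operationName} == 'getMembers'</simple>
    <to uri="direct:getMembers"/>
  </when>
  <otherwise>
    <to uri="direct:unsupported"/>
  </otherwise>
</choice>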

We now add a cMessagingEndpoint to use the camel-sql component. In the URI, we set the SQL query and the datasource reference:

"sql:select * from members?dataSource=demoDS"


The dataSource property corresponds to the reference id as defined in the Spring tab.

In the advanced settings, we define the sql component.


The SQL endpoint populates the body of the in message with the query result, as a List<Map<String, Object>>. I transform this into JSON using a marshaler (which uses camel-xstream by default).

For that, I add a cJavaDSLProcessor which does:

.marshal().json()


For the demo, I directly marshal the list of maps as JSON. If you want more control over the generated JSON, you can use a cProcessor before the cJavaDSLProcessor (marshaler). In this cProcessor, you create a simple instance of a POJO which will be marshaled as JSON, generating exactly the JSON you want, as sketched below.
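As a sketch (the MemberProcessor class, the Member POJO, and the uppercase column keys are assumptions based on our members table), such a processor could convert the raw rows into POJOs before marshaling:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class MemberProcessor implements Processor {

    // simple POJO matching the members table (id, name)
    public static class Member {
        public int id;
        public String name;
    }

    public void process(Exchange exchange) throws Exception {
        // the SQL endpoint populates the body with a List<Map<String, Object>>
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> rows = exchange.getIn().getBody(List.class);
        List<Member> members = new ArrayList<Member>();
        for (Map<String, Object> row : rows) {
            Member member = new Member();
            member.id = ((Number) row.get("ID")).intValue();
            member.name = (String) row.get("NAME");
            members.add(member);
        }
        // the marshaler then serializes the POJOs instead of the raw maps
        exchange.getIn().setBody(members);
    }
}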

Our route is now ready: let's create a kar file.


We are now ready to deploy in the runtime (Karaf).

Deployment in the runtime

Let’s start a runtime:

$ bin/trun
karaf@trun>

First, we install the jdbc feature to easily create a datasource:

karaf@trun> features:install jdbc

Now, we can create the demo datasource (as expected in our route):

karaf@trun> jdbc:create -i -t derby demo

We now have the demo datasource ready:

karaf@trun> jdbc:datasources
                Name         Product               Version              URL  Status
    jdbc/demoxa, 418    Apache Derby  10.8.2.2 - (1181258)  jdbc:derby:demo      OK
      jdbc/demo, 419    Apache Derby  10.8.2.2 - (1181258)  jdbc:derby:demo      OK

We create the members table using the jdbc:execute command:

karaf@trun> jdbc:execute jdbc/demo "create table members(id int, name varchar(256))"

Now, let’s insert a record in the members table:

karaf@trun> jdbc:execute jdbc/demo "insert into members values(1, 'jbonofre')"
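If your runtime provides the jdbc:query command, you can quickly verify the table content (the output layout below is approximate):

karaf@trun> jdbc:query jdbc/demo "select * from members"
ID   | NAME
1    | jbonofre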

As our route uses a JSON marshaler, we need the camel-xstream component:

karaf@trun> features:install camel-xstream

We are now ready to deploy our route. We drop the kar file in the deploy folder. We can see in the log:

2015-08-03 10:02:09,895 | INFO  | pool-10-thread-1 | OsgiSpringCamelContext           | e.camel.impl.DefaultCamelContext 1673 | 170 - org.apache.camel.camel-core - 2.13.2 | Apache Camel 2.13.2 (CamelContext: BlogDataSourceRoute-ctx) started in 0.302 seconds

If we access http://localhost:8040/services, we can see our REST service listed.


The generated WADL is also available.


Using a browser, you can call the REST service at http://localhost:8040/services/demo/. We get the generated JSON containing the database data.


Conclusion

When you want to manipulate a database, you don’t have to use a Talend Service or a cTalendJob. Camel provides components:

  • camel-sql
  • camel-jdbc
  • camel-jpa

Using the DataSource as an OSGi service, you can share a unique datasource across different routes, services, applications, and bundles, leveraging the pooling.

It's an efficient way to leverage runtime features, while keeping the design in the studio.

Monitoring and alerting with Apache Karaf Decanter

July 28, 2015 Posted by jbonofre

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list.

Today, the first Apache Karaf Decanter release, 1.0.0, is under vote.

It's a good time for a presentation 😉

Overview

Apache Karaf Decanter is a complete monitoring and alerting solution for Karaf and the applications running on it.

It's very flexible, providing ready-to-use features, and is also very easy to extend.

Decanter 1.0.0 release works with any Karaf version, and can also be used to monitor applications outside of Karaf.

Decanter provides collectors, appenders, and SLA checking.

Collectors

Decanter Collectors are responsible for harvesting the monitoring data.

Basically, a collector harvests the data, creates an OSGi EventAdmin Event, and sends it to a decanter/collect/* topic.

A Collector can be:

  • Event Driven, meaning that it will automatically react to an internal event
  • Polled, meaning that it’s periodically executed by the Decanter Scheduler

You can install multiple Decanter Collectors at the same time. In the 1.0.0 release, Decanter provides the following collectors:

  • log is an event-driven collector. It's actually a Pax Logging PaxAppender that listens for any log message and sends the log details to the EventAdmin topic.
  • jmx is a polled collector. Periodically, the Decanter Scheduler executes this collector. It retrieves all attributes of all MBeans in the MBeanServer, and sends the JMX metrics to the EventAdmin topic.
  • camel (jmx) is a specific JMX collector configuration, that retrieves the metrics only for the Camel routes MBeans.
  • activemq (jmx) is a specific JMX collector configuration, that retrieves the metrics only for the ActiveMQ MBeans.
  • camel-tracer is a Camel Tracer TraceEventHandler. In your Camel route definition, you can set this trace event handler on the default Camel tracer. Thanks to that, all tracing details (from URI, to URI, exchange with headers, body, etc.) will be sent to the EventAdmin topic.

Appenders

The Decanter Appenders receive the data harvested by the collectors. They consume OSGi EventAdmin Events from the decanter/collect/* topics.

They are responsible for storing the monitoring data into a backend.

You can install multiple Decanter Appenders at the same time. In the 1.0.0 release, Decanter provides the following appenders:

  • log creates a log message with the monitoring data
  • elasticsearch stores the monitoring data into an Elasticsearch instance
  • jdbc stores the monitoring data into a database
  • jms sends the monitoring data to a JMS broker
  • camel sends the monitoring data to a Camel route

SLA and alerters

Decanter also provides an alerting system for when some data doesn't meet an SLA.

For instance, you can define the maximum acceptable number of threads running in Karaf. If the current number of threads is over the limit, Decanter calls alerters.

Decanter Alerters are a special kind of appender, consuming events from the OSGi EventAdmin decanter/alert/* topics.

As for the appenders, you can have multiple alerters active at the same time. The Decanter 1.0.0 release provides the following alerters:

  • log to create a log message for each alert
  • e-mail to send an e-mail for each alert
  • camel to execute a Camel route for each alert

Let's see Decanter in action, with the details of how to install and use it!

Quick start

Decanter is pretty easy to install and provides turnkey functionality.

The first thing to do is to register the Decanter features repository in the Karaf instance:

karaf@root()> feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features

NB: for the next Karaf releases, I will add the Decanter features repository to etc/org.apache.karaf.features.repos.cfg, allowing you to register the Decanter features simply by using feature:repo-add decanter 1.0.0.

We now have the Decanter features available:

karaf@root()> feature:list |grep -i decanter
decanter-common                 | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter API                                
decanter-simple-scheduler       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Simple Scheduler                   
decanter-collector-log          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Messages Collector             
decanter-collector-jmx          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMX Collector                      
decanter-collector-camel        | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Collector                    
decanter-collector-activemq     | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter ActiveMQ Collector                 
decanter-collector-camel-tracer | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Tracer Collector             
decanter-collector-system       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter OS Collector                       
decanter-appender-log           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Appender                       
decanter-appender-elasticsearch | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Elasticsearch Appender             
decanter-appender-jdbc          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JDBC Appender                      
decanter-appender-jms           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMS Appender                       
decanter-appender-camel         | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Appender                     
decanter-sla                    | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA support                        
decanter-sla-log                | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA log alerter                    
decanter-sla-email              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA email alerter                  
decanter-sla-camel              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA Camel alerter                  
elasticsearch                   | 1.6.0            |           | karaf-decanter-1.0.0     | Embedded Elasticsearch node                       
kibana                          | 3.1.1            |           | karaf-decanter-1.0.0     | Embedded Kibana dashboard

For a quick start, we will use the embedded elasticsearch to store the monitoring data. Decanter provides a ready-to-use elasticsearch feature, starting an embedded elasticsearch node:

karaf@root()> feature:install elasticsearch

The elasticsearch feature installs the elasticsearch configuration: etc/elasticsearch.yml.

We now have a ready to use elasticsearch node, where we will store the monitoring data.

Decanter also provides a kibana feature, providing a ready-to-use set of Kibana dashboards:

karaf@root()> feature:install kibana

We can now install the Decanter Elasticsearch appender: this appender will get the data harvested by the collectors, and store it in elasticsearch:

karaf@root()> feature:install decanter-appender-elasticsearch

The decanter-appender-elasticsearch feature also installs the etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file. There, you can configure the location of the Elasticsearch node. By default, it uses a local elasticsearch node, in particular the embedded one that we installed with the elasticsearch feature.

The etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file contains hostname, port and clusterName of the elasticsearch instance to use:

################################################
# Decanter Elasticsearch Appender Configuration
################################################

# Hostname of the elasticsearch instance
host=localhost
# Port number of the elasticsearch instance
port=9300
# Name of the elasticsearch cluster
clusterName=elasticsearch

Now, our Decanter appender and elasticsearch node are ready.

It's now time to install some collectors to harvest the data.

Karaf monitoring

First, we install the log collector:

karaf@root()> feature:install decanter-collector-log 

This collector is event-driven and will automatically listen for log events and send them to the EventAdmin collect topic.

We install a second collector: the JMX collector.

karaf@root()> feature:install decanter-collector-jmx

The JMX collector is a polled collector. So, it also installs and starts the Decanter Scheduler.

You can define the polling period of the scheduler in the etc/org.apache.karaf.decanter.scheduler.simple.cfg configuration file. By default, the Decanter Scheduler calls the polled collectors every 5 seconds.
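For instance, to poll every 10 seconds instead, you could change the period in that file. A sketch (the property name below is an assumption, so check the file installed by the feature):

# etc/org.apache.karaf.decanter.scheduler.simple.cfg
# polling interval used by the Decanter Scheduler, in milliseconds
period=10000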

The JMX collector is able to retrieve all metrics (attributes) from multiple MBeanServers.

By default, it uses the etc/org.apache.karaf.decanter.collector.jmx-local.cfg configuration file, which polls the local MBeanServer.

You can create new configuration files (for instance etc/org.apache.karaf.decanter.collector.jmx-mystuff.cfg configuration file), to poll other remote or local MBeanServers.

The etc/org.apache.karaf.decanter.collector.jmx-*.cfg configuration file contains:

type=jmx-mystuff
url=service:jmx:rmi:///jndi/rmi://hostname:1099/karaf-root
username=karaf
password=karaf
object.name=*.*:*

The type property is a free field allowing you to identify the source of the metrics.

The url property allows you to define the JMX URL. You can also use the local keyword to poll the local MBeanServer.
The username and password properties define the credentials used to connect to the MBeanServer.

The object.name property is optional. By default, the collector harvests all the MBeans in the server. But you can filter it to harvest only some MBeans (for instance org.apache.camel:context=*,type=routes,name=* to harvest only the Camel route metrics).
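For instance, a dedicated configuration file (the jmx-camel file name here is just an illustration) limited to the local Camel route MBeans could look like:

# etc/org.apache.karaf.decanter.collector.jmx-camel.cfg
type=jmx-camel
url=local
object.name=org.apache.camel:context=*,type=routes,name=*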

Now, we can go to Decanter Kibana to see the dashboards built on the harvested data.

You can access Decanter Kibana at http://localhost:8181/kibana.

You land on the Decanter Kibana welcome page.


Decanter provides ready-to-use dashboards. Let's look at the Karaf dashboard.


The dashboard histograms use the metrics harvested by the JMX collector.

You can also see the log details harvested by the log collector.


As Kibana uses Lucene, you can extract exactly the data that you need using filtering or queries.

You can also define the time range to get the metrics and logs.

For instance, you can create the following query to filter only the messages coming from Elasticsearch:

loggerName:org.elasticsearch*

Camel monitoring and tracing

We can also use Decanter to monitor the Camel routes that you deploy in Karaf.

For instance, we add Camel in our Karaf instance:

karaf@root()> feature:repo-add camel 2.13.2
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.13.2/xml/features
karaf@root()> feature:install camel-blueprint

In the deploy folder, we create the following very simple route (using a route.xml file):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana, we can go to the Camel dashboard.


The histograms there use the JMX metrics retrieved from the Camel MBeans (in particular, we can see, for our route, the completed and failed exchanges, the last processing time, etc.).

You can also see the log messages related to Camel.

Another feature provided by Decanter is the Camel Tracer collector: you can enable the Decanter Camel Tracer to log all exchange states in the backend.

For that, we install the Decanter Camel Tracer feature:

karaf@root()> feature:install decanter-collector-camel-tracer

We update our route.xml in the deploy folder like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="eventAdmin" interface="org.osgi.service.event.EventAdmin"/>

    <bean id="traceHandler" class="org.apache.karaf.decanter.collector.camel.DecanterTraceEventHandler">
        <property name="eventAdmin" ref="eventAdmin"/>
    </bean>

    <bean id="tracer" class="org.apache.camel.processor.interceptor.Tracer">
        <property name="traceHandler" ref="traceHandler"/>
        <property name="enabled" value="true"/>
        <property name="traceOutExchanges" value="true"/>
        <property name="logLevel" value="OFF"/>
    </bean>

    <camelContext trace="true" xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in the Decanter Kibana Camel dashboard, you can see the details in the tracer panel.


Decanter Kibana also provides a ready-to-use ActiveMQ dashboard, using the JMX metrics retrieved from an ActiveMQ broker.

SLA and alerting

Another Decanter feature is the SLA (Service Level Agreement) checking.

The purpose is to check whether harvested data meets a check condition. If not, an alert is created and sent to the SLA alerters.

We want to send the alerts to two alerters:

  • log to create a log message for each alert (warn log level for serious alerts, error log level for critical alerts)
  • camel to call a Camel route for each alert.

First, we install the decanter-sla-log feature:

karaf@root()> feature:install decanter-sla-log

The SLA checker uses the etc/org.apache.karaf.decanter.sla.checker.cfg configuration file.

Here, we want to throw an alert when the number of threads in Karaf is greater than 60. So, in the checker configuration file, we set:

ThreadCount.error=range:[0,60]

The syntax in this file is:

attribute.level=check

where:

  • attribute is the name of the attribute in the harvested data (coming from the collectors).
  • level is the alert level. The two possible values are: warn or error.
  • check is the check expression.

The check expression can be:

  • range for numeric attributes, like range:[x,y]. The alert is thrown if the attribute is out of the range.
  • equal for numeric attributes, like equal:x. The alert is thrown if the attribute is not equal to the value.
  • notequal for numeric attributes, like notequal:x. The alert is thrown if the attribute is equal to the value.
  • match for String attributes, like match:regex. The alert is thrown if the attribute doesn't match the regex.
  • notmatch for String attributes, like notmatch:regex. The alert is thrown if the attribute matches the regex.
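For completeness, here is a sketch exercising the other check types on hypothetical attributes (MyCount and MyStatus are invented for illustration):

# alert (error) if MyCount is not equal to 0
MyCount.error=equal:0
# alert (warn) if MyStatus is equal to -1
MyStatus.warn=notequal:-1
# alert (warn) if the loggerName matches the org.elasticsearch prefix
loggerName.warn=notmatch:org.elasticsearch.*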

So, in our case, if the number of threads is greater than 60 (which is probably the case ;)), we can see the following messages in the log:

2015-07-28 22:17:11,950 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:17:11,951 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:5639 | alertLevel:error | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:22000000000 | PeakThreadCount:225 | AllThreadIds:[J@6d9ad2c5 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:22911917003 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:221 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/error | 

Let's now extend the range, add a warn-level check on the thread count, and add a new check to throw alerts when we have errors in the log:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,300]
loggerLevel.error=match:ERROR

Now, we want to call a Camel route to deal with the alerts.

We create the following Camel route, in the deploy/alert.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
                <route id="alerter">
                        <from uri="direct-vm:decanter-alert"/>
                        <to uri="log:alert"/>
                </route>
        </camelContext>

</blueprint>

Now, we can install the decanter-sla-camel feature:

karaf@root()> feature:install decanter-sla-camel

This feature also installs an etc/org.apache.karaf.decanter.sla.camel.cfg configuration file. In this file, you can define the Camel endpoint URI where you want to send the alerts:

alert.destination.uri=direct-vm:decanter-alert

Now, let's decrease the thread range in etc/org.apache.karaf.decanter.sla.checker.cfg configuration file to throw some alerts:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,60]
loggerLevel.error=match:ERROR

Now, in the log, we can see the alerts.

From the SLA log alerter:

2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:6234 | alertLevel:warn | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:193150000000 | PeakThreadCount:225 | AllThreadIds:[J@28f0ef87 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:201484424892 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:222 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/warn | 

but also from the SLA Camel alerter:

2015-07-28 22:39:15,293 | INFO  | Thread-41        | alert                            | 114 - org.apache.camel.camel-core - 2.13.2 | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {hostName=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root, alertPattern=range:[0,60], ThreadAllocatedMemorySupported=true, ThreadContentionMonitoringEnabled=false, TotalStartedThreadCount=6236, alertLevel=warn, CurrentThreadCpuTimeSupported=true, CurrentThreadUserTime=193940000000, PeakThreadCount=225, AllThreadIds=[J@408db39f, type=jmx-local, ThreadAllocatedMemoryEnabled=true, CurrentThreadCpuTime=202296849879, ObjectName=java.lang:type=Threading, ThreadContentionMonitoringSupported=true, ThreadCpuTimeSupported=true, ThreadCount=222, event.topics=decanter/alert/warn, ThreadCpuTimeEnabled=true, ObjectMonitorUsageSupported=true, SynchronizerUsageSupported=true, alertAttribute=ThreadCount, DaemonThreadCount=198}]

Decanter also provides the SLA e-mail alerter to send the alerts by e-mail.

Now, you can play with the SLA checker, adding checks on the attributes that you need. The Decanter Kibana dashboards help a lot there: in the "Event Monitoring" table, you can see all the raw harvested data, which lets you find the attribute names.

What's next

It's just the first Decanter release, but I think it's an interesting one.

Now, we are in the process of adding:

  • a new Decanter CXF interceptor collector: thanks to this collector, you will be able to send details about the requests/responses on CXF endpoints (SOAP request, SOAP response, REST message, etc.)
  • a new Decanter Redis appender, to send the harvested data to Redis
  • a new Decanter Cassandra appender, to send the harvested data to Cassandra
  • a Decanter WebConsole, allowing you to easily manage the SLA checks
  • improvement of the SLA support with "recovery": sending only one alert when a check fails, and another alert when the value "recovers"

Anyway, if you have ideas and want to see new features in Decanter, please let us know.

I hope you like Decanter and find this new Karaf project interesting!

Apache Karaf Christmas gifts: docker.io, profiles, and decanter

December 15, 2014 Posted by jbonofre

We are heading to Christmas time, and the Karaf team wanted to prepare some gifts for you 😉

Of course, we are working hard in the preparation of the new Karaf releases. A bunch of bug fixes and improvements will be available in the coming releases: Karaf 2.4.1, Karaf 3.0.3, and Karaf 4.0.0.M2.

Some sub-project releases are also in preparation, especially Cellar. We completely refactored Cellar internals, to provide a more reliable, predictable, and stable behavior. New sync policies are available, new properties, new commands, and also interesting new features like HTTP session replication, or HTTP load balancing. I will prepare a blog about this very soon.

But, we’re also preparing brand-new features.

Docker.io

I heard some people saying: “why do I need Karaf when I have docker.io ?”.

Honestly, I don't understand this, as the purpose is not the same: actually, Karaf on docker.io provides great value.

First, the docker.io concepts are not new. They're more or less new on Linux, but the same kind of feature has existed for a long time on other systems:

  • zones on Solaris
  • jail on FreeBSD
  • xen on Linux, in the past

So, there's nothing revolutionary in docker.io; however, it's a very convenient way to host multiple images/pseudo-systems on the same machine.

However, docker.io (like the other systems) is focused on the OS: it doesn't cover the application container by itself. For that, you have to prepare an image with the OS plus the application container. For instance, if you want to deploy your war file, you have to bootstrap a docker.io image with the OS and Tomcat (or Karaf ;)).

Moreover, remember the cool features provided by Karaf: ConfigAdmin and dynamic configuration, hot deployment, features, etc.

You want to deploy your Camel routes, your ActiveMQ broker, your CXF webservices, your application: just use the docker.io image providing a Karaf instance!

And that's what the Karaf docker.io feature provides. Actually, it provides two things:

  • a set of ready-to-use Karaf docker.io images: ubuntu/centos images with ready-to-use Karaf instances (in different combinations)
  • a set of shell commands and Karaf commands to easily bootstrap the images from a Karaf instance. It's actually a good alternative to the Karaf child instances (which are local to the machine only).

Basically, docker.io doesn’t replace Karaf. However, Karaf on docker.io provides a very flexible infrastructure, allowing you to easily bootstrap Karaf instances. Associated with Cellar, you can bootstrap a Karaf cluster very easily as well.
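To give a rough idea, bootstrapping such an image with the plain docker CLI could look like the sketch below (the image name is hypothetical, as the donation is not done yet):

$ docker run -d --name karaf-node1 -p 8101:8101 -p 8181:8181 jbonofre/karaf
$ ssh -p 8101 karaf@localhost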

I will prepare the donation and I will blog about the docker.io feature very soon. Stay tuned !!!

Karaf Profiles

A new feature comes with Karaf 4: Karaf profiles. The purpose is to apply a ready-to-use set of configurations and provisioning to a Karaf instance.

Thanks to that, you can prepare a complete profile containing your configuration and your application (features), and apply multiple profiles to easily create a ready-to-go Karaf instance.

It's a great complement to the Karaf docker.io feature: the docker.io feature bootstraps the Karaf image, on which you can apply your profiles, all in a row.

A description of the profiles is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201412.mbox/%3CCAA66TpodJWHVpOqDz2j1QfkPchhBepK_Mwdx0orz7dEVaw8tPQ%40mail.gmail.com%3E.

I’m working on the storage of profiles on Karaf Cave, the application of profiles on running/existing Karaf instances, support of cluster profiles in Cellar, etc.

Again, I will create a specific blog post about profiles soon. Stay tuned again !! 🙂

Karaf Decanter

As a fully enterprise-ready container, Karaf has to provide monitoring and management features. We already provide a bunch of metrics via JMX (we have multiple MBeans for Karaf, Camel, ActiveMQ, CXF, etc.).

However, we should provide:

  • storage of metrics and messages to be able to have an activity timeline
  • SLA definition of the metrics and messages, raising alerts when some metrics are not in the expected value range or when the messages contain a pattern
  • dashboard to configure the SLA, display messages, and graph the metrics

As always in Karaf, it should be very simple to install such a feature, with integration of the supported third parties.

That’s why we started to work on Karaf Decanter, a complete and flexible monitoring solution for Karaf and the applications hosted by Karaf (Camel, ActiveMQ, CXF, etc).

The Decanter proposal and description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201410.mbox/%3C543D3D62.6050608%40nanthrax.net%3E.

The current codebase is also available: https://github.com/jbonofre/karaf-decanter.

I’m preparing the donation (some cleansing/polishing in progress).

Again, I will blog about Karaf Decanter asap. Stay tuned again again !! 🙂

Conclusion

As you can see, as always, the Karaf team is committed and dedicated to providing very convenient and flexible features. Many of those features come from your ideas, discussions, and proposals. So, keep on discussing with us: we love our users 😉

We hope you will enjoy those new features. We will document and blog about these Christmas gifts soon.

Enjoy Karaf, and Happy Christmas !

Encrypt ConfigAdmin properties values in Apache Karaf

October 3, 2014 Posted by jbonofre

Apache Karaf loads all the configuration from etc/*.cfg files by default, using a mix of Felix FileInstall and Felix ConfigAdmin.

These files are regular properties files looking like:

key=value

Some values may be critical, and so should not be stored in plain text. It could be critical business data (credit card numbers, etc.), or technical data (passwords to different systems, like a database for instance).

We want to encrypt such data in the etc/*.cfg files, while still being able to use it regularly in the application.

Karaf provides a nice feature for that: jasypt-encryption.

It’s very easy to use especially with Blueprint.

The jasypt-encryption feature is an optional feature, so it means that you have to install it first:

karaf@root()> feature:install jasypt-encryption

This feature provides:

  • jasypt bundle
  • a namespace handler (enc:*) for blueprint

Now, we can create a cfg file containing encrypted values. An encrypted value is “wrapped” in an ENC() function.

For instance, we can create an etc/my.cfg file containing:

mydb.url=host:port
mydb.username=username
mydb.password=ENC(zRM7Pb/NiKyCalroBz8CKw==)
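The encrypted value itself can be produced with the Jasypt API. As a sketch, matching the Blueprint configuration shown below (the encryption password is read from the ENCRYPTION_PASSWORD environment variable, and "mysecretpassword" is just an example):

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig;

public class Encrypt {
    public static void main(String[] args) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        EnvironmentStringPBEConfig config = new EnvironmentStringPBEConfig();
        config.setAlgorithm("PBEWithMD5AndDES");
        // read the encryption password from the ENCRYPTION_PASSWORD environment variable
        config.setPasswordEnvName("ENCRYPTION_PASSWORD");
        encryptor.setConfig(config);
        // print the value to wrap in ENC(...)
        System.out.println(encryptor.encrypt("mysecretpassword"));
    }
}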

In the Blueprint descriptor of our application (like a Camel route Blueprint XML for instance), we use the “regular” cm namespace (to load ConfigAdmin), but we add a Jasypt configuration using the enc namespace.

For instance, the blueprint XML could look like:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
           xmlns:enc="http://karaf.apache.org/xmlns/jasypt/v1.0.0">

  <cm:property-placeholder persistent-id="my" update-strategy="reload">
    <cm:default-properties>
      <cm:property name="mydb.url" value="localhost:9999"/>
      <cm:property name="mydb.username" value="sa"/>
      <cm:property name="mydb.password" value="ENC(xxxxx)"/>
    </cm:default-properties>
  </cm:property-placeholder>

  <enc:property-placeholder>
    <enc:encryptor class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
      <property name="config">
        <bean class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
          <property name="algorithm" value="PBEWithMD5AndDES"/>
          <property name="passwordEnvName" value="ENCRYPTION_PASSWORD"/>
        </bean>
      </property>
    </enc:encryptor>
  </enc:property-placeholder>

  <bean id="dbbean" class="...">
    <property name="url" value="${mydb.url}"/>
    <property name="username" value="${mydb.username}"/>
    <property name="password" value="${mydb.password}"/>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
     <route>
        ...
        <process ref="dbbean"/>
        ...
     </route>
  </camelContext>

</blueprint>

It's also possible to use encryption outside of ConfigAdmin, by directly loading an “external” properties file using the ext blueprint namespace:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
           xmlns:enc="http://karaf.apache.org/xmlns/jasypt/v1.0.0">

  <ext:property-placeholder>
    <ext:location>file:etc/db.properties</ext:location>
  </ext:property-placeholder>

  <enc:property-placeholder>
    <enc:encryptor class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
      <property name="config">
        <bean class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
          <property name="algorithm" value="PBEWithMD5AndDES"/>
          <property name="passwordEnvName" value="ENCRYPTION_PASSWORD"/>
        </bean>
      </property>
    </enc:encryptor>
  </enc:property-placeholder>

  ...

</blueprint>

where etc/db.properties looks like:

mydb.url=host:port
mydb.username=username
mydb.password=ENC(zRM7Pb/NiKyCalroBz8CKw==)

It's also possible to use ConfigAdmin directly in code. In that case, you have to create the Jasypt configuration programmatically:

StandardPBEStringEncryptor enc = new StandardPBEStringEncryptor();
EnvironmentStringPBEConfig env = new EnvironmentStringPBEConfig();
env.setAlgorithm("PBEWithMD5AndDES");
env.setPasswordEnvName("ENCRYPTION_PASSWORD");
enc.setConfig(env);
...
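Then, when reading a property (for instance from a ConfigAdmin Dictionary, named props here for illustration), you strip the ENC() wrapper and decrypt the value yourself. A sketch:

// value as read from ConfigAdmin, e.g. "ENC(zRM7Pb/NiKyCalroBz8CKw==)"
String value = (String) props.get("mydb.password");
if (value != null && value.startsWith("ENC(") && value.endsWith(")")) {
    // decrypt using the encryptor configured above
    value = enc.decrypt(value.substring(4, value.length() - 1));
}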

MDC logging with Apache Karaf and Camel

August 31, 2014 Posted by jbonofre

MDC (Mapped Diagnostic Context) logging is an interesting feature to log contextual messages.

It's classic to want to log contextual messages in your application. For instance, we want to log the actions performed by a user (identified by a username or user id). As you have a lot of simultaneous users on your application, contextual logging makes it easier to “follow” the log.

MDC is supported by several logging frameworks, like log4j or slf4j, and so by Karaf (thanks to pax-logging) as well.
The approach is pretty simple:

  1. You define the context using a key ID and a value for the key:
    MDC.put("userid", "user1");
    
  2. You use the logger as usual; the log messages sent to this logger will be contextual to the context:
    logger.debug("my message");
    
  3. After that, we can change the context by overriding the key:
    MDC.put("userid", "user2");
    logger.debug("another message");
    

    Or you can remove the key, and so remove the context; the log will then be “global” (not local to a context):

    MDC.remove("userid"); // or MDC.clear() to remove all
    logger.debug("my global message");
    
  4. In the configuration, we can use a pattern with %X{key} to log the context. A pattern like %X{userid} - %m%n will result in a log file looking like:
    user1 - my message
    user2 - another message
    

In this blog, we will see how to use MDC in different cases (directly in your bundle, with generic Karaf OSGi data, and in Camel routes).

The source code of this blog post is available on my github: http://github.com/jbonofre/blog-mdc.

Using MDC in your application/bundle

The purpose here is to use slf4j MDC in our bundle and configure Karaf to create one log file per context.

To illustrate this, we will create multiple threads in the bundle, giving a different context key to each thread:

package net.nanthrax.blog.mdc;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExampleBean {

    private Logger logger = LoggerFactory.getLogger(MdcExampleBean.class);

    public void init() throws Exception {
        CycleThread thread1 = new CycleThread("thread1");
        CycleThread thread2 = new CycleThread("thread2");
        CycleThread thread3 = new CycleThread("thread3");
        thread1.start();
        thread2.start();
        thread3.start();
    }

    class CycleThread extends Thread {
        private String mdcContext;
        public CycleThread(String mdcContext) {
            this.mdcContext = mdcContext;
        }
        public void run() {
            MDC.put("threadId", mdcContext);
            for (int i = 0; i < 20; i++) {
                logger.info("Cycle {}", i);
            }
        }
    }

}

After deploying this bundle in Karaf 3.0.1, we can see the log messages:

karaf@root()> bundle:install mvn:net.nanthrax.blog/mdc-bundle/1.0-SNAPSHOT
karaf@root()> log:display
...
2014-08-30 09:44:25,594 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 17
2014-08-30 09:44:25,594 | INFO  | Thread-13        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 19
2014-08-30 09:44:25,594 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 18
2014-08-30 09:44:25,595 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 19

Now, we can set up the Karaf etc/org.ops4j.pax.logging.cfg file to use our MDC key. For that, we add an MDCSiftingAppender, providing the threadId as the MDC key, and displaying the threadId in the log message pattern. We will create one log file per key (threadId in our case), and finally, we add this appender to the rootLogger:

...
log4j.rootLogger=INFO, out, mdc-bundle, osgi:*
...
# MDC Bundle appender
log4j.appender.mdc-bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.mdc-bundle.key=threadId
log4j.appender.mdc-bundle.default=unknown
log4j.appender.mdc-bundle.appender=org.apache.log4j.FileAppender
log4j.appender.mdc-bundle.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.mdc-bundle.appender.layout.ConversionPattern=%d | %-5.5p | %X{threadId} | %m%n
log4j.appender.mdc-bundle.appender.file=${karaf.data}/log/mdc-bundle-$\\{threadId\\}.log
log4j.appender.mdc-bundle.appender.append=true
...

Now, in the Karaf data/log folder, we can see:

mdc-bundle-thread1.log
mdc-bundle-thread2.log
mdc-bundle-thread3.log

each file containing the log messages contextual to the thread:

$ cat data/log/mdc-bundle-thread1.log
2014-08-30 09:54:48,287 | INFO  | thread1 | Cycle 0
2014-08-30 09:54:48,298 | INFO  | thread1 | Cycle 1
2014-08-30 09:54:48,298 | INFO  | thread1 | Cycle 2
2014-08-30 09:54:48,299 | INFO  | thread1 | Cycle 3
2014-08-30 09:54:48,299 | INFO  | thread1 | Cycle 4
...
$ cat data/log/mdc-bundle-thread2.log
2014-08-30 09:54:48,287 | INFO  | thread2 | Cycle 0
2014-08-30 09:54:48,298 | INFO  | thread2 | Cycle 1
2014-08-30 09:54:48,298 | INFO  | thread2 | Cycle 2
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 3
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 4
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 5
...

In addition, Karaf “natively” provides OSGi MDC data that we can use.

Using Karaf OSGi MDC

So, in Karaf, you can use directly some OSGi headers for MDC logging, especially the bundle name.

We can use this MDC key to create one log file per bundle.

Karaf already provides a pre-defined appender configuration in etc/org.ops4j.pax.logging.cfg:

...
# Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=karaf
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %m%n
log4j.appender.sift.appender.file=${karaf.data}/log/$\\{bundle.name\\}.log
log4j.appender.sift.appender.append=true
...

The only thing that we have to do is to add this appender to the rootLogger:

log4j.rootLogger=INFO, out, sift, osgi:*

Now, in the Karaf data/log folder, we can see one file per bundle:

data/log$ ls -1
karaf.log
net.nanthrax.blog.mdc-bundle.log
org.apache.aries.blueprint.core.log
org.apache.aries.jmx.core.log
org.apache.felix.fileinstall.log
org.apache.karaf.features.core.log
org.apache.karaf.region.persist.log
org.apache.karaf.shell.console.log
org.apache.sshd.core.log

In particular, we can see our mdc-bundle log, containing the log messages “local” to the bundle.

However, while this approach works great, it doesn't always create interesting log files. For instance, when you use Camel, using OSGi headers for MDC logging will gather most of the log messages into the camel-core bundle log file, so they're not really contextual to anything or easy to read/seek.

The good news is that Camel also provides MDC logging support.

Using Camel MDC

While Camel provides MDC logging support, it's not enabled by default. It's up to you to enable it on the Camel context.

Once enabled, Camel provides the following MDC logging properties:

  • camel.exchangeId providing the exchange ID
  • camel.messageId providing the message ID
  • camel.routeId providing the route ID
  • camel.contextId providing the Camel Context ID
  • camel.breadcrumbId providing a unique id used for tracking messages across transports
  • camel.correlationId providing the correlation ID of the exchange (if it’s correlated, for instance like in Splitter EIP)
  • camel.transactionKey providing the ID of the transaction (for transacted exchanges).

To enable the MDC logging, you have to:

  • if you use the Blueprint or Spring XML DSL:
    <camelContext xmlns="http://camel.apache.org/schema/blueprint" useMDCLogging="true">
    
  • if you use the Java DSL:
    CamelContext context = ...
    context.setUseMDCLogging(true);
    
  • using the Talend ESB studio, you have to use a cConfig component from the palette.

So, let's say we create the following route using the Blueprint DSL:

<?xml version="1.0" encoding="UTF-8"?> 
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> 

   <camelContext xmlns="http://camel.apache.org/schema/blueprint" useMDCLogging="true"> 
      <route id="my-route"> 
         <from uri="timer:fire?period=5000"/> 
         <setBody> 
            <constant>Hello Blog</constant> 
         </setBody> 
         <to uri="log:net.nanthrax.blog?level=INFO"/>
      </route>
   </camelContext>
 
</blueprint>

We want to create one log file per route (using the routeId). So, we update the Karaf etc/org.ops4j.pax.logging.cfg file to add a MDC sifting appender using the Camel MDC properties, and we add this appender to the rootLogger:

...
log4j.rootLogger=INFO, out, camel-mdc, osgi:*
...
# Camel MDC appender
log4j.appender.camel-mdc=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.camel-mdc.key=camel.routeId
log4j.appender.camel-mdc.default=unknown 
log4j.appender.camel-mdc.appender=org.apache.log4j.FileAppender
log4j.appender.camel-mdc.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.camel-mdc.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{camel.exchangeId} | %m%n
log4j.appender.camel-mdc.appender.file=${karaf.data}/log/camel-$\\{camel.routeId\\}.log
log4j.appender.camel-mdc.appender.append=true
...

The camel-mdc appender will create one log file per route (named camel-(routeId).log). The log messages will contain the exchange ID.

We start Karaf, and after the installation of the camel-blueprint feature, we can drop our route.xml directly in the deploy folder:

karaf@root()> feature:repo-add camel 2.12.1
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.12.1/xml/features
karaf@root()> feature:install camel-blueprint
cp route.xml apache-karaf-3.0.1/deploy/

Using the log:display command in Karaf, we can see the messages for our route:

karaf@root()> log:display

2014-08-31 08:58:24,176 | INFO | 0 - timer://fire | blog | 85 - org.apache.camel.camel-core - 2.12.1 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:29,176 | INFO | 0 - timer://fire | blog | 85 - org.apache.camel.camel-core - 2.12.1 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]

Now, if we go into the Karaf data/log folder, we can see the log file for our route:

$ ls -1 data/log
camel-my-route.log
...

If we take a look at the camel-my-route.log file, we can see the messages contextual to the route, including the exchange ID:

2014-08-31 08:58:19,196 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-2 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:24,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-4 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:29,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-6 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:34,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-8 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]

Testing (utest and itest) Apache Camel Blueprint route

August 28, 2014 Posted by jbonofre

In any integration project, testing is vital for multiple reasons:

  • to guarantee that the integration logic matches the expectations
  • to quickly identify some regression issues
  • to test some special cases, like the errors for instance
  • to validate the successful provisioning (deployment) on a runtime as close as possible to the target platform

We distinguish two kinds of tests:

  • the unit tests (utest) aim to test the behavior of the integration logic, and define the expectations that the logic has to match
  • the integration tests (itest) aim to provision the integration logic artifact to a runtime, and check the behavior on the actual platform

Camel is THE framework to implement your integration logic (mediation).

It provides the Camel Test Kit, based on JUnit, to implement utests. In combination with Karaf and Pax Exam, we can cover both utest and itest.

In this blog, we will:

  • create an OSGi service
  • create a Camel route using the Blueprint DSL, using the previously created OSGi service
  • implement the utest using the Camel Blueprint Test
  • implement the itest using Pax Exam and Karaf

You can find the whole source code used for this blog post on my github: https://github.com/jbonofre/blog-camel-blueprint.

Blueprint Camel route and features

We create a project (using Maven) containing the following modules:

  • my-service is the OSGi bundle providing the service that we will use in the Camel route
  • my-route is the OSGi bundle providing the Camel route, using the Blueprint DSL. This route uses the OSGi service provided by my-service. It’s where we will implement the utest.
  • features packages the OSGi bundles as a Karaf features XML, ready to be deployed.
  • itests contains the integration test (itest) leveraging Karaf and Pax Exam.

It means we have the following parent pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>net.nanthrax.blog</groupId>
    <artifactId>net.nanthrax.blog.camel.route.blueprint</artifactId>
    <name>Nanthrax Blog :: Camel :: Blueprint</name>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <modules>
        <module>my-service</module>
        <module>my-route</module>
        <module>features</module>
        <module>itests</module>
    </modules>

</project>

OSGi service

The my-service Maven module provides an OSGi bundle providing an echo service.

It uses the following Maven POM:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>net.nanthrax.blog</groupId>
        <artifactId>net.nanthrax.blog.camel.route.blueprint</artifactId>
        <version>1.0-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>

    <artifactId>net.nanthrax.blog.camel.route.blueprint.service</artifactId>
    <name>Nanthrax Blog :: Camel :: Blueprint :: Service</name>
    <packaging>bundle</packaging>

    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.5</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <version>2.4.0</version>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Export-Package>
                            net.nanthrax.blog.service
                        </Export-Package>
                        <Import-Package>
                            org.slf4j*;resolution:=optional
                        </Import-Package>
                        <Private-Package>
                            net.nanthrax.blog.service.internal
                        </Private-Package>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

The echo service is described by the net.nanthrax.blog.service.EchoService interface:

package net.nanthrax.blog.service;

public interface EchoService {

    public String echo(String message);

}

We expose the package containing this interface using OSGi Export-Package header.

The implementation of the EchoService is hidden using the OSGi Private-Package header. This implementation is very simple: it gets a message and returns the same message with an “Echoing ” prefix:

package net.nanthrax.blog.service.internal;

import net.nanthrax.blog.service.EchoService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EchoServiceImpl implements EchoService {

    private final static Logger LOGGER = LoggerFactory.getLogger(EchoServiceImpl.class);

    public String echo(String message) {
        return "Echoing " + message;
    }

}

To expose this service in OSGi, we use blueprint. We create the blueprint descriptor in src/main/resources/OSGI-INF/blueprint/blueprint.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <service interface="net.nanthrax.blog.service.EchoService">
        <bean class="net.nanthrax.blog.service.internal.EchoServiceImpl"/>
    </service>

</blueprint>

The Camel route will use this Echo service.

Camel route and utest

We use the Camel Blueprint DSL to design the route.

The route is packaged as an OSGi bundle, in the my-route Maven module, using the following pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>net.nanthrax.blog</groupId>
        <artifactId>net.nanthrax.blog.camel.route.blueprint</artifactId>
        <version>1.0-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>

    <artifactId>net.nanthrax.blog.camel.route.blueprint.myroute</artifactId>
    <name>Nanthrax Blog :: Camel :: Blueprint :: My Route</name>
    <packaging>bundle</packaging>

    <dependencies>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-test-blueprint</artifactId>
            <version>2.12.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.5</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>net.nanthrax.blog</groupId>
            <artifactId>net.nanthrax.blog.camel.route.blueprint.service</artifactId>
            <version>1.0-SNAPSHOT</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <version>2.4.0</version>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Import-Package>
                            net.nanthrax.blog.service
                        </Import-Package>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

The src/main/resources/OSGI-INF/blueprint/route.xml contains the route definition:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="myService" interface="net.nanthrax.blog.service.EchoService"/>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route>
            <from uri="timer:fire?period=5000"/>
            <setBody>
                <constant>Hello Blog</constant>
            </setBody>
            <to uri="bean:myService"/>
            <to uri="log:net.nanthrax.blog.route"/>
            <to uri="file:camel-output"/>
        </route>
    </camelContext>

</blueprint>

This route:

  • creates an exchange every 5 seconds, using a Camel timer
  • sets the body of the “in” message in the exchange to “Hello Blog”
  • sends the message to the EchoService, which prefixes it with “Echoing ”, resulting in an updated message containing “Echoing Hello Blog”
  • logs the exchange
  • creates a file for each exchange, in the camel-output folder, using the Camel file component

We are now ready to create the utest for this route.

As this route uses Blueprint, and Blueprint is an OSGi-specific technology, we would normally have to deploy the route on Karaf to test it.

However, thanks to Camel Blueprint Test and the use of PojoSR, we can test the Blueprint route “outside” of OSGi. Camel Blueprint Test also supports a mock of the OSGi service registry, allowing us to mock OSGi services as well.

Basically, in the unit test, we:

  • load the route Blueprint XML by overriding the getBlueprintDescriptor() method
  • mock the timer and file endpoints by overriding the isMockEndpointsAndSkip() method (skip means that we don’t send the message to the actual endpoint)
  • mock the Echo OSGi service by overriding the addServicesOnStartup() method
  • finally implement a test in the testMyRoute() method

The test itself gets the mocked file endpoint and defines the expectations on this endpoint: we expect one message containing “Echoing Hello Blog” on the file endpoint.
Instead of using the actual timer endpoint, we mock it and use the producer template to send an exchange (in order to control the number of created exchanges).
Finally, we check if the expectations are satisfied on the mocked file endpoint.

package net.nanthrax.blog;

import net.nanthrax.blog.service.EchoService;
import net.nanthrax.blog.service.internal.EchoServiceImpl;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.model.language.ConstantExpression;
import org.apache.camel.test.blueprint.CamelBlueprintTestSupport;
import org.apache.camel.util.KeyValueHolder;
import org.junit.Test;

import java.util.Dictionary;
import java.util.Map;

public class MyRouteTest extends CamelBlueprintTestSupport {

    @Override
    protected String getBlueprintDescriptor() {
        return "OSGI-INF/blueprint/route.xml";
    }

    @Override
    public String isMockEndpointsAndSkip() {
        return "((file)|(timer)):(.*)";
    }

    @Override
    protected void addServicesOnStartup(Map<String, KeyValueHolder<Object, Dictionary>> services) {
        KeyValueHolder serviceHolder = new KeyValueHolder(new EchoServiceImpl(), null);
        services.put(EchoService.class.getName(), serviceHolder);
    }

    @Test
    public void testMyRoute() throws Exception {

        // mocking the file endpoint and define the expectation
        MockEndpoint mockEndpoint = getMockEndpoint("mock:file:camel-output");
        mockEndpoint.expectedMessageCount(1);
        mockEndpoint.expectedBodiesReceived("Echoing Hello Blog");

        // send a message at the timer endpoint level
        template.sendBody("mock:timer:fire", "empty");

        // check if the expectation is satisfied
        assertMockEndpointsSatisfied();
    }

}

We can see that we mock the Echo OSGi service using the actual EchoServiceImpl. Of course, it’s also possible to use your own local test implementation of the EchoService, which is useful to test specific use cases or to simulate errors.
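
For instance, still assuming the echo() method sketched earlier, a local test implementation simulating an error could be registered in addServicesOnStartup():

import net.nanthrax.blog.service.EchoService;

public class FailingEchoService implements EchoService {

    // simulate a backend failure to test the route error handling
    public String echo(String message) {
        throw new IllegalStateException("simulated EchoService failure");
    }

}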

We can note that we use a regex (((file)|(timer)):(.*)) to mock both timer and file endpoints.

We load the route.xml blueprint descriptor directly from the bundle location (OSGI-INF/blueprint/route.xml).

We can run mvn to test the route:

my-route$ mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Nanthrax Blog :: Camel :: Blueprint :: My Route 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Changes detected - recompiling the module!
[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 1 source file to /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/test-classes
[WARNING] /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/java/net/nanthrax/blog/MyRouteTest.java: /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/java/net/nanthrax/blog/MyRouteTest.java uses unchecked or unsafe operations.
[WARNING] /home/jbonofre/Workspace/blog-camel-blueprint/my-route/src/test/java/net/nanthrax/blog/MyRouteTest.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Surefire report directory: /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running net.nanthrax.blog.MyRouteTest
[main] INFO org.apache.camel.test.blueprint.CamelBlueprintHelper - Using Blueprint XML file: /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/classes/OSGI-INF/blueprint/route.xml
Aug 28, 2014 2:57:43 PM org.ops4j.pax.swissbox.tinybundles.core.metadata.RawBuilder run
INFO: Copy thread finished.
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator starting
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator started
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - No quiesce support is available, so blueprint components will not participate in quiesce operations
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO net.nanthrax.blog.MyRouteTest - Testing: testMyRoute(net.nanthrax.blog.MyRouteTest)
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO net.nanthrax.blog.MyRouteTest - Skipping starting CamelContext as system property skipStartingCamelContext is set to be true.
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) is starting
[main] INFO org.apache.camel.management.DefaultManagementStrategy - JMX is disabled
[main] INFO org.apache.camel.impl.InterceptSendToMockEndpointStrategy - Adviced endpoint [timer://fire?period=5000] with mock endpoint [mock:timer:fire]
[main] INFO org.apache.camel.impl.InterceptSendToMockEndpointStrategy - Adviced endpoint [file://camel-output] with mock endpoint [mock:file:camel-output]
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Route: route1 started and consuming from: Endpoint[timer://fire?period=5000]
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Total 1 routes, of which 1 is started.
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) started in 0.069 seconds
[main] INFO org.apache.camel.component.mock.MockEndpoint - Asserting: Endpoint[mock://file:camel-output] is satisfied
[Camel (23-camel-3) thread #0 - timer://fire] INFO net.nanthrax.blog.route - Exchange[ExchangePattern: InOnly, BodyType: String, Body: Echoing Hello Blog]
[main] INFO org.apache.camel.component.mock.MockEndpoint - Asserting: Endpoint[mock://timer:fire] is satisfied
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO net.nanthrax.blog.MyRouteTest - Testing done: testMyRoute(net.nanthrax.blog.MyRouteTest)
[main] INFO net.nanthrax.blog.MyRouteTest - Took: 1.094 seconds (1094 millis)
[main] INFO net.nanthrax.blog.MyRouteTest - ********************************************************************************
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) is shutting down
[main] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 10 seconds)
[Camel (23-camel-3) thread #1 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: Endpoint[timer://fire?period=5000]
[main] INFO org.apache.camel.impl.DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 0 seconds
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) uptime 1.117 seconds
[main] INFO org.apache.camel.blueprint.BlueprintCamelContext - Apache Camel 2.12.1 (CamelContext: 23-camel-3) is shutdown in 0.021 seconds
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle MyRouteTest
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle net.nanthrax.blog.camel.route.blueprint.service
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle org.apache.aries.blueprint
[main] INFO org.apache.aries.blueprint.container.BlueprintExtender - Destroying BlueprintContainer for bundle org.apache.camel.camel-blueprint
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator stopping
[main] INFO org.apache.camel.impl.osgi.Activator - Camel activator stopped
[main] INFO org.apache.camel.test.blueprint.CamelBlueprintHelper - Deleting work directory target/bundles/1409230663118
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.581 sec - in net.nanthrax.blog.MyRouteTest

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-bundle-plugin:2.4.0:bundle (default-bundle) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[WARNING] Bundle net.nanthrax.blog:net.nanthrax.blog.camel.route.blueprint.myroute:bundle:1.0-SNAPSHOT : Unused Private-Package instructions, no such package(s) on the class path: [!*]
[INFO] 
[INFO] --- maven-install-plugin:2.5.1:install (default-install) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/my-route/target/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.jar to /home/jbonofre/.m2/repository/net/nanthrax/blog/net.nanthrax.blog.camel.route.blueprint.myroute/1.0-SNAPSHOT/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.jar
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/my-route/pom.xml to /home/jbonofre/.m2/repository/net/nanthrax/blog/net.nanthrax.blog.camel.route.blueprint.myroute/1.0-SNAPSHOT/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-bundle-plugin:2.4.0:install (default-install) @ net.nanthrax.blog.camel.route.blueprint.myroute ---
[INFO] Installing net/nanthrax/blog/net.nanthrax.blog.camel.route.blueprint.myroute/1.0-SNAPSHOT/net.nanthrax.blog.camel.route.blueprint.myroute-1.0-SNAPSHOT.jar
[INFO] Writing OBR metadata
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.906s
[INFO] Finished at: Thu Aug 28 14:57:47 CEST 2014
[INFO] Final Memory: 22M/557M
[INFO] ------------------------------------------------------------------------

Again, the purpose of the utest is to test the behavior of the route: checking that the message content is what we expect, that the message arrives on the expected endpoint, etc.

Karaf features and itests

The purpose of the itest is not really to test the behavior of the route: it’s more to test if the provisioning (deployment) of the route is OK, if the route starts without problems, and, when possible, if the “default” behavior is what we expect.

While it’s possible to deploy bundle by bundle (first the one providing the Echo service, then the one providing the route), with Karaf it’s much easier to create a features XML.

It’s what we do in the features Maven module, grouping the bundles in two features as shown in the following features XML:

<?xml version="1.0" encoding="UTF-8"?>
<features name="blog-camel-blueprint" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">

    <feature name="blog-camel-blueprint-service" version="${project.version}">
        <bundle>mvn:net.nanthrax.blog/net.nanthrax.blog.camel.route.blueprint.service/${project.version}</bundle>
    </feature>

    <feature name="blog-camel-blueprint-route" version="${project.version}">
        <feature>blog-camel-blueprint-service</feature>
        <bundle>mvn:net.nanthrax.blog/net.nanthrax.blog.camel.route.blueprint.myroute/${project.version}</bundle>
    </feature>

</features>

Now, we can use Pax Exam to implement our itests, by:

  • bootstrapping a Karaf container, where we deploy camel-blueprint and our features
  • testing that the provisioning is OK
  • creating a local route to test the output of my-route

We do that in the itests Maven module, where we define the Pax Exam dependency:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>net.nanthrax.blog</groupId>
        <artifactId>net.nanthrax.blog.camel.route.blueprint</artifactId>
        <version>1.0-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>

    <artifactId>itests</artifactId>

    <dependencies>

        <dependency>
            <groupId>net.nanthrax.blog</groupId>
            <artifactId>camel-blueprint</artifactId>
            <version>1.0-SNAPSHOT</version>
            <classifier>features</classifier>
            <type>xml</type>
            <scope>test</scope>
        </dependency>

        <!-- Pax Exam -->
        <dependency>
            <groupId>org.ops4j.pax.exam</groupId>
            <artifactId>pax-exam-container-karaf</artifactId>
            <version>3.4.0</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.ops4j.pax.exam</groupId>
            <artifactId>pax-exam-junit4</artifactId>
            <version>3.4.0</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.ops4j.pax.exam</groupId>
            <artifactId>pax-exam-inject</artifactId>
            <version>3.4.0</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.geronimo.specs</groupId>
            <artifactId>geronimo-atinject_1.0_spec</artifactId>
            <version>1.0</version>
            <scope>test</scope>
        </dependency>

        <!-- Camel Test -->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-test</artifactId>
            <version>2.12.1</version>
            <scope>test</scope>
        </dependency>

        <!-- Karaf -->
        <dependency>
            <groupId>org.apache.karaf</groupId>
            <artifactId>apache-karaf</artifactId>
            <version>2.3.6</version>
            <type>tar.gz</type>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.karaf</groupId>
                    <artifactId>org.apache.karaf.client</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

    </dependencies>

</project>

We create MyRouteTest in src/test/java/net/nanthrax/blog/itests:

package net.nanthrax.blog.itests;

import static org.ops4j.pax.exam.CoreOptions.maven;
import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.*;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.model.language.ConstantExpression;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.apache.karaf.features.FeaturesService;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.PaxExam;
import org.ops4j.pax.exam.karaf.options.LogLevelOption;
import org.osgi.framework.BundleContext;

import javax.inject.Inject;

import java.io.File;

@RunWith(PaxExam.class)
public class MyRouteTest extends CamelTestSupport {

    @Inject
    protected FeaturesService featuresService;

    @Inject
    protected BundleContext bundleContext;

    @Configuration
    public static Option[] configure() throws Exception {
        return new Option[] {
                karafDistributionConfiguration()
                        .frameworkUrl(maven().groupId("org.apache.karaf").artifactId("apache-karaf").type("tar.gz").version("2.3.6"))
                        .karafVersion("2.3.6")
                        .useDeployFolder(false)
                        .unpackDirectory(new File("target/paxexam/unpack")),
                logLevel(LogLevelOption.LogLevel.WARN),
                features(maven().groupId("org.apache.camel.karaf").artifactId("apache-camel").type("xml").classifier("features").version("2.12.1"), "camel-blueprint", "camel-test"),
                features(maven().groupId("net.nanthrax.blog").artifactId("camel-blueprint").type("xml").classifier("features").version("1.0-SNAPSHOT"), "blog-camel-blueprint-route"),
                keepRuntimeFolder()
        };
    }

    @Test
    public void testProvisioning() throws Exception {
        // first check that the features are installed
        assertTrue(featuresService.isInstalled(featuresService.getFeature("camel-blueprint")));
        assertTrue(featuresService.isInstalled(featuresService.getFeature("blog-camel-blueprint-route")));

        // now we check if the OSGi services corresponding to the camel context and route are there
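        // for instance (assuming camel-blueprint registers the CamelContext
        // as an OSGi service, which it does by default):
        assertNotNull(bundleContext.getServiceReferences("org.apache.camel.CamelContext", null));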

    }

    @Test
    public void testMyRoute() throws Exception {
        MockEndpoint itestMock = getMockEndpoint("mock:itest");
        itestMock.expectedMinimumMessageCount(3);
        itestMock.whenAnyExchangeReceived(new Processor() {
            public void process(Exchange exchange) {
                System.out.println(exchange.getIn().getBody(String.class));
            }
        });

        template.start();

        Thread.sleep(20000);

        assertMockEndpointsSatisfied();
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                from("file:camel-output").to("mock:itest");
            }
        };
    }

}

In this test class, we can see:

  • the configure() method where we define the Karaf distribution to use, the log level, the Camel features XML location and the Camel features that we want to install (camel-blueprint and camel-test), and the location of our features XML with the feature that we want to install (blog-camel-blueprint-route)
  • the testProvisioning() method where we check if the features have been correctly installed
  • the createRouteBuilder() method where we programmatically create a new route (using the Java DSL here) consuming the files created by my-route and sending them to a mock endpoint
  • the testMyRoute() method which gets the itest mock endpoint (fed by the route created in createRouteBuilder()) and checks that it receives at least 3 messages during a period of 20 seconds (it also displays the content of each message)

Running mvn bootstraps a Karaf instance, installs the features, deploys our test bundle, and checks the execution:

itests$ mvn clean install
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running net.nanthrax.blog.itests.MyRouteTest
[org.ops4j.pax.exam.spi.DefaultExamSystem] : Pax Exam System (Version: 3.4.0) created.
[org.ops4j.store.intern.TemporaryStore] : Storage Area is /tmp/1409248259083-0
[org.ops4j.pax.exam.junit.impl.ProbeRunner] : creating PaxExam runner for class net.nanthrax.blog.itests.MyRouteTest
...
[org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer] : Test Container started in 3 millis
[org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer] : Wait for test container to finish its initialization [ RelativeTimeout value = 180000 ]
[org.ops4j.pax.exam.rbc.client.RemoteBundleContextClient] : Waiting for remote bundle context.. on 21414 name: 7cd8df34-0ed2-4449-8d60-d51f395cfa1d timout: [ RelativeTimeout value = 180000 ]
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (2.3.6)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'osgi:shutdown' or 'logout' to shutdown Karaf.

karaf@root> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[org.ops4j.pax.exam.rbc.client.RemoteBundleContextClient] : Remote bundle context found after 5774 millis
[org.ops4j.pax.tinybundles.core.intern.RawBuilder] : make()
[org.ops4j.store.intern.TemporaryStore] : Enter store()
[org.ops4j.pax.tinybundles.core.intern.RawBuilder] : Creating manifest from added headers.
...
[org.ops4j.pax.exam.container.remote.RBCRemoteTarget] : Installed bundle (from stream) as ID: 102
[org.ops4j.pax.exam.container.remote.RBCRemoteTarget] : call [[TestAddress:PaxExam-d7899c82-74e1-445e-9fcb-ab9b18e286b4 root:PaxExam-5dfb0f4b-96d9-4226-bdea-5b057e7e7335]]
Echoing Hello Blog
Echoing Hello Blog
Echoing Hello Blog
Echoing Hello Blog
...
Results :

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO] --- maven-jar-plugin:2.3.2:jar (default-jar) @ itests ---
[WARNING] JAR will be empty - no content was marked for inclusion!
[INFO] Building jar: /home/jbonofre/Workspace/blog-camel-blueprint/itests/target/itests-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ itests ---
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/itests/target/itests-1.0-SNAPSHOT.jar to /home/jbonofre/.m2/repository/net/nanthrax/blog/itests/1.0-SNAPSHOT/itests-1.0-SNAPSHOT.jar
[INFO] Installing /home/jbonofre/Workspace/blog-camel-blueprint/itests/pom.xml to /home/jbonofre/.m2/repository/net/nanthrax/blog/itests/1.0-SNAPSHOT/itests-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35.904s
[INFO] Finished at: Thu Aug 28 19:51:32 CEST 2014
[INFO] Final Memory: 28M/430M
[INFO] ------------------------------------------------------------------------

Integration in Jenkins

We can now integrate our project in Jenkins CI. We then have a complete CI covering: build of the service, packaging of the route, utest of the route, and itest of the service and route in Karaf.

jenkins1

jenkins2

jenkins3

Apache Syncope backend with Apache Karaf

August 17, 2014 Posted by jbonofre

Apache Syncope is an identity manager (IdM). It comes with a web console where you can manage users, attributes, roles, etc.
It also comes with a REST API allowing integration with other applications.

By default, Syncope has its own database, but it can also “façade” another backend (LDAP, ActiveDirectory, JDBC) by using ConnId.

In the next releases (4.0.0, 3.0.2, 2.4.0, and 2.3.7), Karaf provides (by default) a SyncopeLoginModule allowing you to use Syncope as the backend for users and roles.

This blog introduces this new feature and explains how to configure and use it.

Installing Apache Syncope

The easiest way to start with Syncope is to use the Syncope standalone distribution. It comes with an Apache Tomcat instance already installed with the different Syncope modules.

You can download the Syncope standalone distribution archive from http://www.apache.org/dyn/closer.cgi/syncope/1.1.8/syncope-standalone-1.1.8-distribution.zip.

Uncompress the distribution in the directory of your choice:

$ unzip syncope-standalone-1.1.8-distribution.zip

You can find the ready-to-use Tomcat instance in the extracted directory. We can start Tomcat:

$ cd syncope-standalone-1.1.8
$ cd apache-tomcat-7.0.54
$ bin/startup.sh

The Tomcat instance runs on port 9080.

You can access the Syncope console by pointing your browser on http://localhost:9080/syncope-console.

The default admin username is “admin”, and password is “password”.

The Syncope REST API is available on http://localhost:9080/syncope/cxf.

The purpose is to use Syncope as the backend for Karaf users and roles (in the “karaf” default security realm).
So, first, we create the “admin” role in Syncope:

screen1

screen2

screen3

Now, we can create a user of our choice, let’s say “myuser” with “myuser01” as password.

screen4

As we want “myuser” as Karaf administrator, we define the “admin” role for “myuser”.

screen5

“myuser” has been created.

screen6

Syncope is now ready to be used by Karaf (including users and roles).

Karaf SyncopeLoginModule

Karaf provides a complete security framework allowing you to use JAAS in an OSGi compliant way.

Karaf itself uses a realm named “karaf”: it’s the one used by SSH, JMX, WebConsole by default.

By default, Karaf uses two login modules for the “karaf” realm:

  • the PropertiesLoginModule uses the etc/users.properties as storage for users and roles (with user password)
  • the PublickeyLoginModule uses the etc/keys.properties as storage for users and roles (with user public key)

In the coming Karaf versions (3.0.2, 2.4.0, 2.3.7), a new login module is available: the SyncopeLoginModule.

To enable the SyncopeLoginModule, we just create a blueprint descriptor that we drop into the deploy folder. The configuration of the Syncope login module is pretty simple: it just requires the address of the Syncope REST API:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

    <jaas:config name="karaf" rank="1">
        <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                     flags="required">
           address=http://localhost:9080/syncope/cxf
        </jaas:module>
    </jaas:config>

</blueprint>

You can see that the login module is enabled for the “karaf” realm using the jaas:realm-list command:

karaf@root()> jaas:realm-list 
Index | Realm Name | Login Module Class Name                                 
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule

We can now login via SSH using “myuser”, which is configured in Syncope:

~$ ssh -p 8101 myuser@localhost
The authenticity of host '[localhost]:8101 ([127.0.0.1]:8101)' can't be established.
DSA key fingerprint is b3:4a:57:0e:b8:2c:7e:e6:1c:f1:e2:88:dc:bf:f9:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:8101' (DSA) to the list of known hosts.
Password authentication
Password:
        __ __                  ____      
       / //_/____ __________ _/ __/      
      / ,<  / __ `/ ___/ __ `/ /_        
     / /| |/ /_/ / /  / /_/ / __/        
    /_/ |_|\__,_/_/   \__,_/_/         

  Apache Karaf (4.0.0-SNAPSHOT)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.

myuser@root()> 

Our Karaf instance now uses Syncope for users and roles.
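
An application deployed in Karaf can also authenticate programmatically against the “karaf” realm using plain JAAS. Here is a minimal sketch (assuming it runs inside Karaf, where the realm and the SyncopeLoginModule are registered):

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;

public class SyncopeLoginCheck {

    public void authenticate() throws Exception {
        LoginContext context = new LoginContext("karaf", new CallbackHandler() {
            public void handle(Callback[] callbacks) {
                for (Callback callback : callbacks) {
                    if (callback instanceof NameCallback) {
                        ((NameCallback) callback).setName("myuser");
                    } else if (callback instanceof PasswordCallback) {
                        ((PasswordCallback) callback).setPassword("myuser01".toCharArray());
                    }
                }
            }
        });
        // delegates to the SyncopeLoginModule, which calls the Syncope REST API
        context.login();
    }

}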

Karaf SyncopeBackendEngine

In addition to the login module, Karaf also ships a SyncopeBackendEngine. The purpose of the Syncope backend engine is to manipulate users and roles directly from Karaf: thanks to the backend engine, you can list the users, add a new user, etc., directly from Karaf.

However, for security reasons and consistency, the SyncopeBackendEngine only supports listing the users and roles defined in Syncope: the creation/deletion of a user or role directly from Karaf is disabled, as those operations should be performed from the Syncope console.

To enable the Syncope backend engine, you have to register the backend engine as an OSGi service. Moreover, the SyncopeBackendEngine requires two additional options on the login module: admin.user and admin.password, corresponding to a Syncope admin user.

We have to update the blueprint descriptor like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

    <jaas:config name="karaf" rank="5">
        <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                     flags="required">
           address=http://localhost:9080/syncope/cxf
           admin.user=admin
           admin.password=password
        </jaas:module>
    </jaas:config>

    <service interface="org.apache.karaf.jaas.modules.BackingEngineFactory">
        <bean class="org.apache.karaf.jaas.modules.syncope.SyncopeBackingEngineFactory"/>
    </service>

</blueprint>

With the SyncopeBackingEngineFactory registered as an OSGi service, we can, for instance, list the users (and their roles) defined in Syncope.

To do it, we can use the jaas:user-list command:

myuser@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule
myuser@root()> jaas:realm-manage --index 1
myuser@root()> jaas:user-list
User Name | Group | Role
------------------------------------
rossini   |       | root
rossini   |       | otherchild
verdi     |       | root
verdi     |       | child
verdi     |       | citizen
vivaldi   |       |
bellini   |       | managingDirector
puccini   |       | artDirector
myuser    |       | admin

We can see all the users and roles defined in Syncope, including our “myuser” with our “admin” role.

Using Karaf JAAS realms

In Karaf, you can create any number of JAAS realms that you want.
It means that existing applications or your own applications can directly use a realm to delegate authentication and authorization.

For instance, Apache CXF provides a JAASLoginInterceptor allowing you to add authentication by configuration. The following Spring or Blueprint snippet shows how to use the “karaf” JAAS realm:

<jaxws:endpoint address="/service">
 <jaxws:inInterceptors>
   <ref bean="authenticationInterceptor"/>
 </jaxws:inInterceptors>
</jaxws:endpoint>
 
<bean id="authenticationInterceptor" class="org.apache.cxf.interceptor.security.JAASLoginInterceptor">
   <property name="contextName" value="karaf"/>
</bean>

The same configuration can be applied to a jaxrs endpoint instead of a jaxws endpoint.

As Pax Web uses Jetty, you can also define your Jetty security configuration in your web application.
For instance, in the META-INF/spring/jetty-security.xml of your application, you can define the security constraints:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="loginService" class="org.eclipse.jetty.plus.jaas.JAASLoginService">
        <property name="name" value="karaf" />
        <property name="loginModuleName" value="karaf" />
    </bean>

    <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint">
        <property name="name" value="BASIC"/>
        <property name="roles" value="user"/>
        <property name="authenticate" value="true"/>
    </bean>

    <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping">
        <property name="constraint" ref="constraint"/>
        <property name="pathSpec" value="/*"/>
    </bean>

    <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
        <property name="authenticator">
            <bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator"/>
        </property>
        <property name="constraintMappings">
            <list>
                <ref bean="constraintMapping"/>
            </list>
        </property>
        <property name="loginService" ref="loginService" />
        <property name="strict" value="false" />
    </bean>

</beans>

We can link the security constraint in the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <display-name>example_application</display-name>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <security-constraint>
        <display-name>authenticated</display-name>
        <web-resource-collection>
            <web-resource-name>All files</web-resource-name>
            <description/>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <description/>
            <role-name>user</role-name>
        </auth-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>karaf</realm-name>
    </login-config>
    <security-role>
        <description/>
        <role-name>user</role-name>
    </security-role>
</web-app>

Thanks to that, your web application will use the “karaf” JAAS realm, which can delegate the storage of users and roles to Syncope.

Thanks to the Syncope login module, Karaf becomes even more flexible for the authentication and authorization of users, as the users/roles backend doesn’t have to be embedded in Karaf itself (as with the PropertiesLoginModule): Karaf can delegate to Syncope, which is able to façade a lot of different actual backends.

Coming in Karaf 3.0.0: new enterprise JPA (OpenJPA, Hibernate) and CDI (OpenWebBeans, JBoss Weld) features

December 20, 2013 Posted by jbonofre

Apache Karaf 3.0.0 is now mostly ready (I’m just polishing the documentation).

In a previous post, I introduced new enterprise features like JNDI, JDBC, and JMS.

As I said, the purpose is to provide a fully flexible, enterprise-ready container, easy to use and extend for the users.

Easy to use means that a simple command will extend your container with features that can help you a lot.

JPA

Previous Karaf versions already provided a jpa feature. However, this feature “only” installs the Aries JPA bundles, allowing the EntityManager to be exposed as an OSGi service. It doesn’t install any JPA engine, which means that users previously had to install all the bundles required to have a persistence engine.

As OpenJPA and Hibernate are very popular persistence engines, Karaf 3.0.0 provides two ready-to-use features:

karaf@root()> feature:install openjpa

The openjpa feature brings Apache OpenJPA in Apache Karaf.

karaf@root()> feature:install hibernate

The hibernate feature brings Hibernate in Apache Karaf.
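
Once one of these engines is installed, regular JPA code applies. For instance, a minimal entity sketch (this class is purely illustrative, it’s not something shipped by Karaf):

import javax.persistence.Entity;
import javax.persistence.Id;

// a simple entity that could be managed by OpenJPA or Hibernate through Aries JPA
@Entity
public class Person {

    @Id
    private String nick;

    private String name;

    // getters and setters omitted

}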

CDI

Karaf 3.0.0 now integrates Pax CDI. It means that you can install the pax-cdi* features in Apache Karaf.

However, Pax CDI doesn’t install any CDI container; it’s up to the users to install all the bundles required to have a CDI container.

As OpenWebBeans and Weld are very popular CDI containers, Karaf 3.0.0 provides two ready-to-use features:

karaf@root()> feature:repo-add pax-cdi
karaf@root()> feature:install openwebbeans

The openwebbeans feature brings Apache OpenWebBeans CDI container in Apache Karaf.

karaf@root()> feature:repo-add pax-cdi
karaf@root()> feature:install weld

The weld feature brings JBoss Weld CDI container in Apache Karaf.
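
With one of these containers installed, your pax-cdi bundles can contain standard CDI beans. For instance, a minimal sketch using only standard CDI annotations (the class is purely illustrative):

import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;

// a simple bean instantiated by the CDI container (OpenWebBeans or Weld)
@ApplicationScoped
public class GreetingBean {

    @PostConstruct
    public void init() {
        System.out.println("GreetingBean started by the CDI container");
    }

}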

EJB

As a reminder, while waiting to have KarafEE back in Karaf directly (as an ejb feature; I plan to work on it next week), you can install Apache OpenEJB in Apache Karaf:

karaf@root()> feature:repo-add openejb
karaf@root()> feature:install openejb-core
karaf@root()> feature:install openejb-server

Coming in Karaf 3.0.0: new enterprise JMS feature

December 19, 2013 Posted by jbonofre

In my previous post, I introduced the new enterprise JDBC feature.

To follow the same purpose, we introduced the new enterprise JMS feature.

JMS feature

Like the JDBC feature, the JMS feature is an optional one. It means that you have to install it first:

karaf@root()> feature:install jms

The jms feature installs the JMS service, which is mostly a JMS “client”. It doesn’t install any broker.

For the rest of this post, I’m using an ActiveMQ broker embedded in my Karaf:

karaf@root()> feature:repo-add activemq 5.9.0
karaf@root()> feature:install activemq-broker

Like the JDBC feature, the JMS feature provides:

  • an OSGi service
  • jms:* commands
  • a JMX JMS MBean

The OSGi service provides a set of operations to create JMS connection factories, send JMS messages, browse a JMS queue, etc.

The commands and MBean manipulate the OSGi service.

Commands

The jms:create command allows you to create a JMS connection factory.

This command automatically creates a connectionfactory-[name].xml blueprint file in the deploy folder.

However, it doesn’t install any bundle or feature providing the JMS connection factory classes. It’s up to you to install beforehand the jar files, bundles, or features providing the actual JMS connection factory.

The jms:create command expects one argument and two options:

karaf@root()> jms:create --help
DESCRIPTION
        jms:create

        Create a JMS connection factory.

SYNTAX
        jms:create [options] name 

ARGUMENTS
        name
                The JMS connection factory name

OPTIONS
        -u, --url
                The JMS URL. NB: for WebsphereMQ type, the URL is hostname/port/queuemanager/channel
        --help
                Display this help message
        -t, --type
                The JMS connection factory type (ActiveMQ or WebsphereMQ)
  • the name argument is the JMS connection factory name. It’s used in the JNDI name given to the connection factory (e.g. /jms/[name]) and in the blueprint file name in the deploy folder.
  • the -t (--type) option is the JMS connection factory type. For now, the command supports two kinds of connection factory: ActiveMQ or WebsphereMQ. If you want to use another kind of connection factory, you can create the connection factory file yourself (using an ActiveMQ file created by the jms:create command as a template).
  • the -u (--url) option is the URL used by the connection factory. For instance, for the ActiveMQ type, the URL looks like tcp://localhost:61616. For the WebSphereMQ type, the URL looks like host/port/queuemanager/channel.

As I installed the activemq-broker feature in my Karaf, I can create the JMS connection factory for this broker:

karaf@root()> jms:create -t activemq -u tcp://localhost:61616 default

We can see the JMS connection factory file correctly deployed:

karaf@root()> la
...
151 | Active   |  80 | 0.0.0                 | connectionfactory-default.xml

The connectionfactory-default.xml file has been created in the deploy folder and contains:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="activemqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616" />
    </bean>

    <bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
        <property name="maxConnections" value="8" />
        <property name="connectionFactory" ref="activemqConnectionFactory" />
    </bean>

    <bean id="resourceManager" class="org.apache.activemq.pool.ActiveMQResourceManager" init-method="recoverResource">
        <property name="transactionManager" ref="transactionManager" />
        <property name="connectionFactory" ref="activemqConnectionFactory" />
        <property name="resourceName" value="activemq.localhost" />
    </bean>

    <reference id="transactionManager" interface="javax.transaction.TransactionManager" />

    <service ref="pooledConnectionFactory" interface="javax.jms.ConnectionFactory">
        <service-properties>
            <entry key="name" value="default" />
            <entry key="osgi.jndi.service.name" value="/jms/default" />
        </service-properties>
    </service>

</blueprint>

We can see the JMS connection factories available in Karaf (created by the jms:create command, or by hand) using the jms:connectionfactories command:

karaf@root()> jms:connectionfactories 
JMS Connection Factory
----------------------
/jms/default    

The jms:info command gives details about a JMS connection factory:

karaf@root()> jms:info /jms/default 
Property | Value   
-------------------
product  | ActiveMQ
version  | 5.9.0  

We are now ready to manipulate the JMS broker.

Let’s start by sending some messages to a queue in the JMS broker. We can use the jms:send command to do that:

karaf@root()> jms:send /jms/default MyQueue "Hello World"
karaf@root()> jms:send /jms/default MyQueue "Hello Karaf"
karaf@root()> jms:send /jms/default MyQueue "Hello ActiveMQ"

The jms:count command counts the number of messages in a JMS queue. We can check if we have our messages:

karaf@root()> jms:count /jms/default MyQueue
Messages Count
--------------
3             

When using ActiveMQ, the jms:queues and jms:topics commands can list the queues and topics available in the JMS broker:

karaf@root()> jms:queues /jms/default 
JMS Queues
----------
MyQueue   

We can see the MyQueue queue where we sent our messages.

We can also browse the messages in a queue using the jms:browse command, which shows the details of the messages:

karaf@root()> jms:browse /jms/default MyQueue
Message ID                              | Content        | Charset | Type | Correlation ID | Delivery Mode | Destination     | Expiration | Priority | Redelivered | ReplyTo | Timestamp                   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID:vostro-33323-1387464670760-3:2:1:1:1 | Hello World    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:06 CET 2013
ID:vostro-33323-1387464670760-3:3:1:1:1 | Hello Karaf    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:10 CET 2013
ID:vostro-33323-1387464670760-3:4:1:1:1 | Hello ActiveMQ | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:14 CET 2013

By default, the jms:browse command displays all messages in the given queue. You can specify a selector with the -s (--selector) option to select the messages to browse.
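
For instance, to browse only the messages with the default priority (using a standard JMS selector expression):

karaf@root()> jms:browse -s "JMSPriority = 4" /jms/default MyQueue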

The jms:consume command consumes messages from a queue. Consuming means that the messages are removed from the queue.

To consume/remove the messages in MyQueue queue, we can use:

karaf@root()> jms:consume /jms/default MyQueue
3 message(s) consumed

JMX JMS MBean

All actions that we did using the jms:* commands can be performed using the JMS MBean (the object name is org.apache.karaf:type=jms,name=*).

Moreover, if you want to perform JMS operations programmatically, you can use the org.apache.karaf.jms.JmsService OSGi service.
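
Alternatively, since the connection factory created earlier is exposed as a javax.jms.ConnectionFactory OSGi service (with the /jms/default JNDI name), any bundle can use the plain JMS API directly. Here is a minimal sketch, leaving the service lookup or injection aside:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class Sender {

    // send a text message to MyQueue using the pooled connection factory
    public void send(ConnectionFactory connectionFactory) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("MyQueue"));
            producer.send(session.createTextMessage("Hello World"));
        } finally {
            connection.close();
        }
    }

}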

Coming in Karaf 3.0.0: new enterprise JDBC feature

December 16, 2013 Posted by jbonofre

Some weeks (months ;)) ago, my colleague Christian (Schneider) did a good job by creating some useful commands to manipulate databases directly in Karaf.

We discussed together where to put those commands. We decided to submit a patch at ServiceMix because we didn’t really think about Karaf 3.0.0 at that time.

Finally, I decided to refactor those commands into an even more “useful” Karaf feature and prepare it for Karaf 3.0.0.

JDBC feature

By refactoring, I mean that it’s no longer only commands: I created a complete JDBC feature, providing an OSGi service, a set of commands, and an MBean.
The different modules are provided by the jdbc feature.

Like most of the other enterprise features, the jdbc feature is not installed by default. To enable it, you have to install it first:

karaf@root()> feature:install jdbc

This feature provides:

  • a JdbcService OSGi service
  • a set of jdbc:* commands
  • a JDBC MBean (org.apache.karaf:type=jdbc,name=*)

The OSGi service provides a set of operations to create a datasource, execute SQL queries on a datasource, get details about a datasource, etc.

The commands and MBean manipulate the OSGi service.

Commands

The first command that you can use is jdbc:create.

This command automatically creates a JDBC datasource file in the deploy folder. It can also try to automatically install the bundles providing the JDBC driver.

The jdbc:create command requires the datasource name and type. The type can be: generic (DBCP), Derby, Oracle, MySQL, Postgres, H2, HSQL.

For instance, if you want to create an embedded Apache Derby database and the corresponding datasource, you can do:

karaf@root()> jdbc:create -t derby -u test -i test

The -t derby option defines a datasource of type derby. The -u test option defines the datasource username. The -i option indicates to try to install the bundles providing the Derby JDBC driver. Finally, test is the datasource name.

Now, we can see several things.

First, the command automatically installed the JDBC driver and created the datasource:

karaf@root()> la
...
87 | Active   |  80 | 10.8.2000002.1181258  | Apache Derby 10.8                                                 
88 | Active   |  80 | 0.0.0                 | datasource-test.xml  

We can see the datasource blueprint file created in the deploy folder:

/opt/apache-karaf/target/apache-karaf-3.0.0$ ls -l deploy/
total 8
-rw-r--r-- 1 jbonofre jbonofre 841 Dec 16 14:10 datasource-test.xml

The datasource-test.xml file has been created using a set of templates depending on the datasource type that we provided.
The content is a blueprint XML definition:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           default-activation="eager">

    <bean id="dataSource" class="org.apache.derby.jdbc.EmbeddedXADataSource">
        <property name="databaseName" value="test"/>
        <property name="createDatabase" value="create" />
    </bean>

    <service ref="dataSource" interface="javax.sql.DataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="/jdbc/test"/>
        </service-properties>
    </service>

    <service ref="dataSource" interface="javax.sql.XADataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="/jdbc/testxa"/>
        </service-properties>
    </service>

</blueprint>

You can use the jdbc:delete command to delete an existing datasource.

The jdbc:datasources command provides the list of available JDBC datasources:

karaf@root()> jdbc:datasources 
Name       | Product      | Version              | URL            
------------------------------------------------------------------
/jdbc/test | Apache Derby | 10.8.2.2 - (1181258) | jdbc:derby:test

If you want to have more details about a JDBC datasource, you can use the jdbc:info command:

karaf@root()> jdbc:info /jdbc/test 
Property       | Value                            
--------------------------------------------------
driver.version | 10.8.2.2 - (1181258)             
username       | APP                              
db.version     | 10.8.2.2 - (1181258)             
db.product     | Apache Derby                     
driver.name    | Apache Derby Embedded JDBC Driver
url            | jdbc:derby:test  

You can execute SQL commands and queries on a given JDBC datasource.

For instance, we can create a table directly in our Derby database using the jdbc:execute command:

karaf@root()> jdbc:execute /jdbc/test "create table person(name varchar(100), nick varchar(100))"

The jdbc:tables command displays all tables available on a given datasource. In our case, we can see our person table:

karaf@root()> jdbc:tables /jdbc/test 
REF_GENERATION | TYPE_NAME | TABLE_NAME       | TYPE_CAT | REMARKS | TYPE_SCHEM | TABLE_TYPE   | TABLE_SCHEM | TABLE_CAT | SELF_REFERENCING_COL_NAME
----------------------------------------------------------------------------------------------------------------------------------------------------
               |           | SYSALIASES       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCHECKS        |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCOLPERMS      |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCOLUMNS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCONGLOMERATES |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCONSTRAINTS   |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSDEPENDS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSFILES         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSFOREIGNKEYS   |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSKEYS          |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSPERMS         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSROLES         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSROUTINEPERMS  |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSCHEMAS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSEQUENCES     |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSTATEMENTS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSTATISTICS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTABLEPERMS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTABLES        |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTRIGGERS      |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSVIEWS         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSDUMMY1        |          |         |            | SYSTEM TABLE | SYSIBM      |           |                          
               |           | PERSON           |          |         |            | TABLE        | APP         |           |     

Now, also using the jdbc:execute command, we can insert some records in our person table:

karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('foo','bar')"
karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('Christian Schneider','cschneider')"
karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('JB Onofre','jbonofre')"

The jdbc:query command executes SQL queries returning results. For instance, we can select the records in the person table:

karaf@root()> jdbc:query /jdbc/test "select * from person"
NICK       | NAME               
--------------------------------
bar        | foo                
cschneider | Christian Schneider
jbonofre   | JB Onofre     

JDBC MBean and JDBC OSGi Service

All actions that we did using the jdbc:* commands can be performed using the JDBC MBean.

Moreover, if you want to perform JDBC operations or manipulate datasources programmatically, you can use the org.apache.karaf.jdbc.JdbcService OSGi service.
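
Alternatively, since a datasource created by jdbc:create is exposed as a javax.sql.DataSource OSGi service (with the /jdbc/test JNDI name in our example), application code can simply use the plain JDBC API. Here is a minimal sketch, leaving the service lookup or injection aside:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class PersonDao {

    // list the records of the person table created above
    public void listPersons(DataSource dataSource) throws Exception {
        Connection connection = dataSource.getConnection();
        try {
            PreparedStatement statement = connection.prepareStatement("select name, nick from person");
            ResultSet resultSet = statement.executeQuery();
            while (resultSet.next()) {
                System.out.println(resultSet.getString("name") + " (" + resultSet.getString("nick") + ")");
            }
        } finally {
            connection.close();
        }
    }

}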

Next one: jms

Following the same idea, I prepared a new jms feature providing an OSGi service, commands, and an MBean to manipulate JMS connection factories, get information about the broker, send/consume messages, etc.