Apache Syncope backend with Apache Karaf

August 17, 2014 Posted by jbonofre

Apache Syncope is an identity manager (IdM). It comes with a web console where you can manage users, attributes, roles, etc.
It also comes with a REST API allowing you to integrate it with other applications.

By default, Syncope has its own database, but it can also “façade” another backend (LDAP, ActiveDirectory, JDBC) by using ConnId.

In the next releases (4.0.0, 3.0.2, 2.4.0, and 2.3.7), Karaf provides (by default) a SyncopeLoginModule allowing you to use Syncope as a backend for users and roles.

This blog introduces this new feature and explains how to configure and use it.

Installing Apache Syncope

The easiest way to start with Syncope is to use the Syncope standalone distribution. It comes with an Apache Tomcat instance already installed with the different Syncope modules.

You can download the Syncope standalone distribution archive from http://www.apache.org/dyn/closer.cgi/syncope/1.1.8/syncope-standalone-1.1.8-distribution.zip.

Uncompress the distribution in the directory of your choice:

$ unzip syncope-standalone-1.1.8-distribution.zip

You can find the ready-to-use Tomcat instance in the apache-tomcat-7.0.54 directory. We can start Tomcat:

$ cd syncope-standalone-1.1.8
$ cd apache-tomcat-7.0.54
$ bin/startup.sh

The Tomcat instance runs on port 9080.

You can access the Syncope console by pointing your browser at http://localhost:9080/syncope-console.

The default admin username is “admin”, and password is “password”.

The Syncope REST API is available at http://localhost:9080/syncope/cxf.

The purpose is to use Syncope as the backend for Karaf users and roles (in the “karaf” default security realm).
So, first, we create the “admin” role in the Syncope console.

Now, we can create a user of our choice, let's say “myuser” with “myuser01” as the password.

As we want “myuser” to be a Karaf administrator, we assign the “admin” role to “myuser”.

“myuser” has been created.

Syncope is now ready to be used by Karaf (including users and roles).

Karaf SyncopeLoginModule

Karaf provides a complete security framework allowing you to use JAAS in an OSGi-compliant way.

Karaf itself uses a realm named “karaf”: it's the one used by default by SSH, JMX, and the WebConsole.

By default, Karaf uses two login modules for the “karaf” realm:

  • the PropertiesLoginModule uses etc/users.properties as storage for users and roles (with user passwords)
  • the PublickeyLoginModule uses etc/keys.properties as storage for users and roles (with user public keys)

In the coming Karaf versions (3.0.2, 2.4.0, 2.3.7), a new login module is available: the SyncopeLoginModule.

To enable the SyncopeLoginModule, we just create a blueprint descriptor that we drop into the deploy folder. The configuration of the Syncope login module is pretty simple: it just requires the address of the Syncope REST API:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

    <jaas:config name="karaf" rank="1">
        <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                     flags="required">
           address=http://localhost:9080/syncope/cxf
        </jaas:module>
    </jaas:config>

</blueprint>

You can see that the login module is enabled for the “karaf” realm using the jaas:realm-list command:

karaf@root()> jaas:realm-list 
Index | Realm Name | Login Module Class Name                                 
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule

We can now log in via SSH using “myuser”, which is defined in Syncope:

~$ ssh -p 8101 myuser@localhost
The authenticity of host '[localhost]:8101 ([127.0.0.1]:8101)' can't be established.
DSA key fingerprint is b3:4a:57:0e:b8:2c:7e:e6:1c:f1:e2:88:dc:bf:f9:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:8101' (DSA) to the list of known hosts.
Password authentication
Password:
        __ __                  ____      
       / //_/____ __________ _/ __/      
      / ,<  / __ `/ ___/ __ `/ /_        
     / /| |/ /_/ / /  / /_/ / __/        
    /_/ |_|\__,_/_/   \__,_/_/         

  Apache Karaf (4.0.0-SNAPSHOT)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.

myuser@root()> 

Our Karaf instance now uses Syncope for users and roles.
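
To give an idea of what happens behind the scene, here's a minimal Java sketch of the kind of check the login module performs. It's an assumption based on the Syncope 1.1 REST API (a GET on the /users/self resource with HTTP basic authentication succeeds only for valid credentials), not the actual SyncopeLoginModule source:

import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.DatatypeConverter;

public class SyncopeAuthSketch {

    // Returns true if Syncope accepts the credentials. The response body
    // (user attributes and memberships) is what a login module would parse
    // to populate the user roles.
    public static boolean authenticate(String address, String user, String password) throws Exception {
        URL url = new URL(address + "/users/self");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        String token = DatatypeConverter.printBase64Binary((user + ":" + password).getBytes("UTF-8"));
        connection.setRequestProperty("Authorization", "Basic " + token);
        return connection.getResponseCode() == 200;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(authenticate("http://localhost:9080/syncope/cxf", "myuser", "myuser01"));
    }
}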

Karaf SyncopeBackendEngine

In addition to the login module, Karaf also ships a SyncopeBackendEngine. The purpose of the Syncope backend engine is to manipulate the users and roles directly from Karaf: thanks to the backend engine, you can, for instance, list the users and their roles directly from Karaf.

However, for security reasons and consistency, the SyncopeBackendEngine only supports the listing of users and roles defined in Syncope: the creation/deletion of a user or role directly from Karaf is disabled, as those operations should be performed from the Syncope console.

To enable the Syncope backend engine, you have to register the backend engine as an OSGi service. Moreover, the SyncopeBackendEngine requires two additional options on the login module: admin.user and admin.password, corresponding to a Syncope admin user.

We have to update the blueprint descriptor like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

    <jaas:config name="karaf" rank="5">
        <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                     flags="required">
           address=http://localhost:9080/syncope/cxf
           admin.user=admin
           admin.password=password
        </jaas:module>
    </jaas:config>

    <service interface="org.apache.karaf.jaas.modules.BackingEngineFactory">
        <bean class="org.apache.karaf.jaas.modules.syncope.SyncopeBackingEngineFactory"/>
    </service>

</blueprint>

With the SyncopeBackingEngineFactory registered as an OSGi service, we can, for instance, list the users (and their roles) defined in Syncope.

To do it, we can use the jaas:user-list command:

myuser@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule
myuser@root()> jaas:realm-manage --index 1
myuser@root()> jaas:user-list
User Name | Group | Role
------------------------------------
rossini   |       | root
rossini   |       | otherchild
verdi     |       | root
verdi     |       | child
verdi     |       | citizen
vivaldi   |       |
bellini   |       | managingDirector
puccini   |       | artDirector
myuser    |       | admin

We can see all the users and roles defined in Syncope, including our “myuser” with our “admin” role.

Using Karaf JAAS realms

In Karaf, you can create any number of JAAS realms that you want.
It means that existing applications or your own applications can directly use a realm to delegate authentication and authorization.
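
For instance, here's a minimal sketch of programmatic authentication against the “karaf” realm using plain JAAS (assuming the code runs inside Karaf, where the realm is registered; the hardcoded callback handler is just for illustration):

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;

public class KarafRealmAuth {

    public static void main(String[] args) throws Exception {
        LoginContext context = new LoginContext("karaf", new CallbackHandler() {
            public void handle(Callback[] callbacks) {
                for (Callback callback : callbacks) {
                    if (callback instanceof NameCallback) {
                        ((NameCallback) callback).setName("myuser");
                    } else if (callback instanceof PasswordCallback) {
                        ((PasswordCallback) callback).setPassword("myuser01".toCharArray());
                    }
                }
            }
        });
        // login() delegates to the login modules of the "karaf" realm,
        // so to the SyncopeLoginModule in our case
        context.login();
        System.out.println("Principals: " + context.getSubject().getPrincipals());
        context.logout();
    }
}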

For instance, Apache CXF provides a JAASLoginInterceptor allowing you to add authentication by configuration. The following Spring or Blueprint snippet shows how to use the “karaf” JAAS realm:

<jaxws:endpoint address="/service">
 <jaxws:inInterceptors>
   <ref bean="authenticationInterceptor"/>
 </jaxws:inInterceptors>
</jaxws:endpoint>
 
<bean id="authenticationInterceptor" class="org.apache.cxf.interceptor.security.JAASLoginInterceptor">
   <property name="contextName" value="karaf"/>
</bean>

The same configuration can be applied to a jaxrs endpoint instead of a jaxws endpoint.
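
As a sketch, the jaxrs equivalent in Java could rely on CXF's JAASAuthenticationFilter (MyRestService is a hypothetical JAX-RS resource class here):

import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.jaxrs.security.JAASAuthenticationFilter;

public class RestServiceServer {

    public static void main(String[] args) {
        // authenticate incoming requests against the "karaf" JAAS realm
        JAASAuthenticationFilter authFilter = new JAASAuthenticationFilter();
        authFilter.setContextName("karaf");

        JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
        factory.setAddress("http://localhost:8181/service");
        factory.setServiceBean(new MyRestService()); // hypothetical resource bean
        factory.setProvider(authFilter);
        factory.create();
    }
}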

As Pax Web leverages Jetty, you can also define your Jetty security configuration in your web application.
For instance, in the META-INF/spring/jetty-security.xml of your application, you can define the security constraints:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
  <bean id="loginService" class="org.eclipse.jetty.plus.jaas.JAASLoginService">
    <property name="name" value="karaf" />
    <property name="loginModuleName" value="karaf" />
  </bean>
  <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC"/>
    <property name="roles" value="user"/>
    <property name="authenticate" value="true"/>
  </bean>
  <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping">
    <property name="constraint" ref="constraint"/>
    <property name="pathSpec" value="/*"/>
  </bean>
  <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
    <property name="authenticator">
      <bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator"/>
    </property>
    <property name="constraintMappings">
      <list>
        <ref bean="constraintMapping"/>
      </list>
    </property>
    <property name="loginService" ref="loginService" />
    <property name="strict" value="false" />
  </bean>
</beans>

We can then define the matching security constraints in the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <display-name>example_application</display-name>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <security-constraint>
        <display-name>authenticated</display-name>
        <web-resource-collection>
            <web-resource-name>All files</web-resource-name>
            <description/>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <description/>
            <role-name>user</role-name>
        </auth-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>karaf</realm-name>
    </login-config>
    <security-role>
        <description/>
        <role-name>user</role-name>
    </security-role>
</web-app>

Thanks to that, your web application will use the “karaf” JAAS realm, which can delegate the storage of users and roles to Syncope.

Thanks to the Syncope Login Module, Karaf becomes even more flexible for the authentication and authorization of users, as the users/roles backend doesn't have to be embedded in Karaf itself (as with the PropertiesLoginModule): Karaf can delegate to Syncope, which is able to façade a lot of different actual backends.

Hadoop CDC and processes notification with Apache Falcon, Apache ActiveMQ, and Apache Camel

March 19, 2014 Posted by jbonofre

Some weeks (months? ;)) ago, I started to work on Apache Falcon. First of all, I would like to thank all the Falcon guys: they are really awesome and do a great job (special thanks to Srikanth, Venkatesh, Swetha).

This blog post is a preparation for a set of “recipes documentation” that I will propose in Apache Falcon.

Falcon is in incubation at Apache. The purpose is to provide a data processing and management solution for Hadoop designed for data motion, coordination of data pipelines, lifecycle management, and data discovery. Falcon enables end consumers to quickly onboard their data and its associated processing and management tasks on Hadoop clusters.

An interesting feature provided by Falcon is the notification of activities in the Hadoop cluster “outside” of the cluster ;)
In this article, we will see how to get two kinds of notifications in Camel routes “outside” of the Hadoop cluster:

  • a Camel route will be notified and triggered when a process is executed in the Hadoop cluster
  • a Camel route will be notified and triggered when a HDFS location changes (a first CDC feature)

Requirements

If you already have your Hadoop cluster, or you know how to install/prepare one, you can skip this step.

In this section, I will create a “real fake” Hadoop cluster on one machine. It's not really pseudo-distributed mode, as I will use multiple datanodes and tasktrackers, but all on one machine (of course, it doesn't make sense, but it's just for demo purposes ;)).

In addition to the common Hadoop components (HDFS namenode/datanodes and M/R jobtracker/tasktrackers), Falcon requires Oozie (for scheduling) and ActiveMQ.

By default, Falcon embeds ActiveMQ, but for the demo (and to provide screenshots of the ActiveMQ WebConsole), I will use a standalone ActiveMQ instance.

Hadoop “fake” cluster preparation

For the demo, I will “fake” three machines.

I create a demo folder on my machine, uncompress the hadoop-1.1.2-bin.tar.gz tarball there, and copy it to node1, node2, and node3 folders:

$ mkdir demo
$ cd demo
$ tar zxvf ~/hadoop-1.1.2-bin.tar.gz
$ cp -r hadoop-1.1.2 node1
$ cp -r hadoop-1.1.2 node2
$ cp -r hadoop-1.1.2 node3
$ mkdir storage

I also create a storage folder where I will put the nodes' files. This folder is just for convenience, as it makes it easier to restart from scratch, just by deleting the storage folder content.

Node1 will host:

  • the HDFS namenode
  • a HDFS datanode
  • the M/R jobtracker
  • a M/R tasktracker

So, the node1/conf/core-site.xml file contains the location of the namenode:

<?xml version="1.0">
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/core-site.xml -->
<configuration>
  <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost</value>
  </property>
</configuration>

In the node1/conf/hdfs-site.xml file, we define the storage location for the namenode and the datanode (in the storage folder), and the default replication:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/jbonofre/demo/storage/node1/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/jbonofre/demo/storage/node1/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

Finally, in node1/conf/mapred-site.xml file, we define the location of the job tracker:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>

Node1 is now ready.

Node2 hosts a datanode and a tasktracker. As for node1, the node2/conf/core-site.xml file contains the location of the namenode:

<?xml version="1.0">
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node2 conf/core-site.xml -->
<configuration>
  <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost</value>
  </property>
</configuration>

The node2/conf/hdfs-site.xml file contains:

  • the storage location of the datanode
  • the network location of the namenode (from node1)
  • the port numbers used by the datanode (data transfer, IPC, and HTTP, in order to be able to start multiple datanodes on the same machine)

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node2 conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/jbonofre/demo/storage/node2/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:50110</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>localhost:50120</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50175</value>
  </property>
</configuration>

The node2/conf/mapred-site.xml file contains the network location of the jobtracker, and the HTTP port number used by the tasktracker (in order to be able to run multiple tasktrackers on the same machine):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node2 conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>localhost:50160</value>
  </property>
</configuration>

Node3 is very similar to node2: it hosts a datanode and a tasktracker. So the configuration is very similar to node2's (just the storage location and the datanode and tasktracker port numbers are different).

Here’s the node3/conf/core-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node3 conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
</configuration>

Here’s the node3/conf/hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node3 conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/jbonofre/demo/storage/node3/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:50210</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>localhost:50220</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50275</value>
  </property>
</configuration>

Here’s the node3/conf/mapred-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node3 conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>localhost:50260</value>
  </property>
</configuration>

Our “fake” cluster configuration is now ready.

We can format the namenode on node1:

$ cd node1/bin
$ ./hadoop namenode -format
14/03/06 17:26:38 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = vostro/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
14/03/06 17:26:39 INFO util.GSet: VM type       = 64-bit
14/03/06 17:26:39 INFO util.GSet: 2% max memory = 17.78 MB
14/03/06 17:26:39 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/03/06 17:26:39 INFO util.GSet: recommended=2097152, actual=2097152
14/03/06 17:26:39 INFO namenode.FSNamesystem: fsOwner=jbonofre
14/03/06 17:26:39 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/06 17:26:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/06 17:26:39 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/06 17:26:39 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/06 17:26:39 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/03/06 17:26:40 INFO common.Storage: Image file of size 114 saved in 0 seconds.
14/03/06 17:26:40 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/jbonofre/demo/storage/node1/namenode/current/edits
14/03/06 17:26:40 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/jbonofre/demo/storage/node1/namenode/current/edits
14/03/06 17:26:40 INFO common.Storage: Storage directory /home/jbonofre/demo/storage/node1/namenode has been successfully formatted.
14/03/06 17:26:40 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vostro/127.0.0.1
************************************************************/

We are now ready to start the namenode on node1:

$ cd node1/bin
$ ./hadoop namenode &

We start the datanode on node1:

$ cd node1/bin
$ ./hadoop datanode &

We start the jobtracker on node1:

$ cd node1/bin
$ ./hadoop jobtracker &

We start the tasktracker on node1:

$ cd node1/bin
$ ./hadoop tasktracker &

Node1 is fully started with the namenode, a datanode, the jobtracker, and a tasktracker.

We start a datanode and a tasktracker on node2:

$ cd node2/bin
$ ./hadoop datanode &
$ ./hadoop tasktracker &

And finally, we start a datanode and a tasktracker on node3:

$ cd node3/bin
$ ./hadoop datanode &
$ ./hadoop tasktracker &

We access the HDFS web console (http://localhost:50070) to verify that the namenode is able to see the 3 live datanodes.
We also access the MapReduce web console (http://localhost:50030) to verify that the jobtracker is able to see the 3 live tasktrackers.

Oozie

Falcon delegates the scheduling of jobs (planning, re-execution, etc.) to Oozie.

Oozie is a workflow scheduler system to manage hadoop jobs, using Quartz internally.

Falcon uses a “custom” Oozie distribution: Falcon adds some additional EL extensions on top of a “regular” Oozie.

Falcon provides a script to create the Falcon custom Oozie distribution: we provide the Hadoop and Oozie versions that we need.

We can clone Falcon sources from git and call the src/bin/package.sh with the Hadoop and Oozie target versions that we want:

$ git clone https://git-wip-us.apache.org/repos/asf/incubator-falcon falcon
$ cd falcon
$ src/bin/package.sh 1.1.2 4.0.0

The package.sh script creates target/oozie-4.0.0-distro.tar.gz in the Falcon sources folder.

In the demo folder, I uncompress the oozie-4.0.0-distro.tar.gz tarball:

$ cp ~/oozie-4.0.0-distro.tar.gz .
$ tar zxvf oozie-4.0.0-distro.tar.gz

We now have an oozie-4.0.0-falcon folder.

Oozie requires a special configuration on the namenode (so on node1). We have to update the node1/conf/core-site.xml file to define the system user “proxied” by Oozie:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- node1 conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
  <property>
    <name>hadoop.proxyuser.jbonofre.hosts</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hadoop.proxyuser.jbonofre.groups</name>
    <value>localhost</value>
  </property>
</configuration>

NB: don’t forget to restart the namenode to include these changes.

Now, we can prepare the Oozie web application. Due to license restrictions, it's up to you to add the ExtJS library for the Oozie webconsole. To enable it, first, we create an oozie-4.0.0-falcon/libext folder and put the ext-2.2.zip archive there:

$ cd oozie-4.0.0-falcon
$ mkdir libext
$ cd libext
$ wget "http://extjs.com/deploy/ext-2.2.zip"

We have to populate the libext folder with different additional jar files:

  • the Hadoop jar files:

    $ cp node1/hadoop-core-1.1.2.jar oozie-4.0.0-falcon/libext
    $ cp node1/hadoop-client-1.1.2.jar oozie-4.0.0-falcon/libext
    $ cp node1/hadoop-tools-1.1.2.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-beanutils-1.7.0.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-beanutils-core-1.8.0.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-codec-1.4.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-collections-3.2.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-configuration-1.6.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-digester-1.8.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-el-1.0.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-io-2.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-lang-2.4.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-logging-api-1.0.4.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-logging-1.1.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-math-2.1.jar oozie-4.0.0-falcon/libext
    $ cp node1/lib/commons-net-3.1.jar oozie-4.0.0-falcon/libext
    
  • the Falcon Oozie extender:

    $ cp falcon-0.5-incubating-SNAPSHOT/oozie/libext/falcon-oozie-el-extension-0.5-incubating-SNAPSHOT.jar oozie-4.0.0-falcon/libext
    
  • jar files required for hcatalog, pig from oozie-sharelib-4.0.0-falcon.tar.gz:

    $ cd oozie-4.0.0-falcon
    $ tar zxvf oozie-sharelib-4.0.0-falcon.tar.gz
    $ cp share/lib/hcatalog/hcatalog-core-0.5.0-incubating.jar libext
    $ cp share/lib/hcatalog/hive-* libext
    $ cp share/lib/pig/hsqldb-1.8.0.7.jar libext
    $ cp share/lib/pig/jackson-* libext
    $ cp share/lib/hcatalog/libfb303-0.7.0.jar libext
    $ cp share/lib/hive/log4j-1.2.16.jar libext
    $ cp libtools/oro-2.0.8.jar libext
    $ cp share/lib/hcatalog/webhcat-java-client-0.5.0-incubating.jar libext
    $ cp share/lib/pig/xmlenc-0.52.jar libext
    $ cp share/lib/pig/guava-11.0.2.jar libext
    $ cp share/lib/hcatalog/oozie-* libext
    

We are now ready to setup Oozie.

First, we “assemble” the Oozie web application (war):

$ cd oozie-4.0.0-falcon/bin
$ ./oozie-setup.sh prepare-war
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

INFO: Adding extension: /home/jbonofre/demo/oozie-4.0.0-falcon/libext/ant-1.6.5.jar
...

New Oozie WAR file with added 'ExtJS library, JARs' at /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/webapps/oozie.war


INFO: Oozie is ready to be started

Now, we “upload” the Oozie shared libraries to our HDFS, including the Falcon shared lib:

$ cd oozie-4.0.0-falcon/bin
$ ./oozie-setup.sh sharelib create -fs hdfs://localhost -locallib ../oozie-sharelib-4.0.0-falcon.tar.gz
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
the destination path for sharelib is: /user/jbonofre/share/lib

If we browse the HDFS, we can see the folders created by Oozie.

Finally, we create the Oozie database (where it stores the job definitions, etc.):

$ cd oozie-4.0.0-falcon/bin
$ ./oozie-setup.sh db create -run
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

Validate DB Connection
DONE
Check DB schema does not exist
DONE
Check OOZIE_SYS table does not exist
DONE
Create SQL schema
DONE
Create OOZIE_SYS table
DONE

Oozie DB has been created for Oozie version '4.0.0'


The SQL commands have been written to: /tmp/ooziedb-4527318150729236810.sql

The Oozie configuration is done; we can start it:

$ cd oozie-4.0.0-falcon/bin
$ ./oozied.sh start

Setting OOZIE_HOME:          /home/jbonofre/demo/oozie-4.0.0-falcon
Setting OOZIE_CONFIG:        /home/jbonofre/demo/oozie-4.0.0-falcon/conf
Sourcing:                    /home/jbonofre/demo/oozie-4.0.0-falcon/conf/oozie-env.sh
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
Setting OOZIE_CONFIG_FILE:   oozie-site.xml
Setting OOZIE_DATA:          /home/jbonofre/demo/oozie-4.0.0-falcon/data
Setting OOZIE_LOG:           /home/jbonofre/demo/oozie-4.0.0-falcon/logs
Setting OOZIE_LOG4J_FILE:    oozie-log4j.properties
Setting OOZIE_LOG4J_RELOAD:  10
Setting OOZIE_HTTP_HOSTNAME: vostro.nanthrax.net
Setting OOZIE_HTTP_PORT:     11000
Setting OOZIE_ADMIN_PORT:     11001
Setting OOZIE_HTTPS_PORT:     11443
Setting OOZIE_BASE_URL:      http://vostro.nanthrax.net:11000/oozie
Setting CATALINA_BASE:       /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server
Setting OOZIE_HTTPS_KEYSTORE_FILE:     /home/jbonofre/.keystore
Setting OOZIE_HTTPS_KEYSTORE_PASS:     password
Setting CATALINA_OUT:        /home/jbonofre/demo/oozie-4.0.0-falcon/logs/catalina.out
Setting CATALINA_PID:        /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/temp/oozie.pid

Using   CATALINA_OPTS:        -Xmx1024m -Dderby.stream.error.file=/home/jbonofre/demo/oozie-4.0.0-falcon/logs/derby.log
Adding to CATALINA_OPTS:     -Doozie.home.dir=/home/jbonofre/demo/oozie-4.0.0-falcon -Doozie.config.dir=/home/jbonofre/demo/oozie-4.0.0-falcon/conf -Doozie.log.dir=/home/jbonofre/demo/oozie-4.0.0-falcon/logs -Doozie.data.dir=/home/jbonofre/demo/oozie-4.0.0-falcon/data -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=vostro.nanthrax.net -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://vostro.nanthrax.net:11000/oozie -Doozie.https.keystore.file=/home/jbonofre/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=

Using CATALINA_BASE:   /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server
Using CATALINA_HOME:   /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server
Using CATALINA_TMPDIR: /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/temp
Using JRE_HOME:        /opt/jdk/1.7.0_51
Using CLASSPATH:       /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/bin/bootstrap.jar
Using CATALINA_PID:    /home/jbonofre/demo/oozie-4.0.0-falcon/oozie-server/temp/oozie.pid

We can access the Oozie webconsole at http://localhost:11000/oozie/.

ActiveMQ

By default, Falcon embeds ActiveMQ, so generally speaking, you don't have to install ActiveMQ. However, for the demo, I would like to show how to use an external and standalone ActiveMQ.

I uncompress the apache-activemq-5.7.0-bin.tar.gz tarball in the demo folder:

$ cd demo
$ tar zxvf ~/apache-activemq-5.7.0-bin.tar.gz

The default ActiveMQ configuration is fine; we can just start the broker on the default port (61616):

$ cd demo/apache-activemq-5.7.0/bin
$ ./activemq console

All the Falcon prerequisites are now ready.

Falcon installation

Falcon can be deployed:

  • standalone: it's the “regular” deployment mode when you have only one Hadoop cluster. It's the deployment mode that I will use for this CDC demo.
  • distributed: it's the deployment mode to use when you have multiple Hadoop clusters, especially if you want to use the Falcon replication feature.

For the installation, we uncompress the falcon-0.5-incubating-SNAPSHOT-bin.tar.gz tarball in the demo folder:

$ cd demo
$ tar zxvf ~/falcon-0.5-incubating-SNAPSHOT-bin.tar.gz

Before starting Falcon, we disable the default embedded ActiveMQ broker in the conf/falcon-env.sh file:

# conf/falcon-env.sh
...
export FALCON_OPTS="-Dfalcon.embeddedmq=false"
...

We start the falcon server:

$ cd falcon-0.5-incubating-SNAPSHOT/bin
$ ./falcon-start 
Could not find installed hadoop and HADOOP_HOME is not set.
Using the default jars bundled in /home/jbonofre/demo/falcon-0.5-incubating-SNAPSHOT/hadooplibs/
/home/jbonofre/demo/falcon-0.5-incubating-SNAPSHOT/bin
falcon started using hadoop version:  Hadoop 1.1.2

The Falcon server actually starts a Jetty container with Jersey to expose the Falcon REST API.

You can check if the falcon server started correctly using bin/falcon-status or bin/falcon:

$ bin/falcon-status
Falcon server is running (on http://localhost:15000/)
$ bin/falcon admin -status
Falcon server is running (on http://localhost:15000/)
$ bin/falcon admin -version
Falcon server build version: {"properties":[{"key":"Version","value":"0.5-incubating-SNAPSHOT-r5445e109bc7fbfea9295f3411a994485b65d1477"},{"key":"Mode","value":"embedded"}]}
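
As the REST API is plain HTTP, you can also do this check from code. Here's a small Java sketch (assuming the “admin -version” command maps to the /api/admin/version resource):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FalconStatusCheck {

    public static void main(String[] args) throws Exception {
        // same information as "bin/falcon admin -version", via the REST API
        URL url = new URL("http://localhost:15000/api/admin/version");
        BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            reader.close();
        }
    }
}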

Falcon usage: the entities

In Falcon, the configuration is defined by “entities”. Falcon supports three types of entities:

  • the cluster entity defines the Hadoop cluster (location of the namenode, location of the jobtracker), the related Falcon modules (Oozie, ActiveMQ), and the location of the Falcon working directories (on HDFS)
  • the feed entity defines a location on HDFS
  • the process entity defines a Hadoop job scheduled by Oozie

An entity is described using XML. You can perform different actions on an entity:

  • Submit: registers an entity in Falcon. Submitted entities are not scheduled; they are simply stored in the Falcon configuration store.
  • List: provides the list of all entities registered in the Falcon configuration store.
  • Dependency: provides the dependencies of an entity. For example, a feed would show the processes that depend on the feed and the clusters that it depends on.
  • Schedule: feeds or processes that are already submitted and present in the configuration store can be scheduled. Upon schedule, Falcon wraps the required repeatable action as a bundle of Oozie coordinators and executes them on the Oozie scheduler.
  • Suspend: this action is applicable only on a scheduled entity. It triggers suspend on the Oozie bundle that was scheduled earlier through the schedule action. No further instances are executed on a suspended process/feed.
  • Resume: puts a suspended process/feed back to active, which in turn resumes the applicable Oozie bundle.
  • Status: displays the current status of an entity.
  • Definition: dumps the entity definition from the configuration store.
  • Delete: removes an entity from the Falcon configuration store.
  • Update: allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed. Feed update can cause a cascading update to all the processes already scheduled. The following set of actions is performed in Oozie to realize an update:
    • Suspend the previously scheduled Oozie coordinator. This prevents any new action from being triggered.
    • Update the coordinator to set the end time to “now”.
    • Resume the suspended coordinators.
    • Schedule as per the new process/feed definition with the start time as “now”.

Cluster

The cluster entity defines the configuration of the hadoop cluster and components used by Falcon.

We will store the entity descriptors in the entity folder:

$ mkdir entity

For the cluster, we create entity/local.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<cluster colo="local" description="Local cluster" name="local" xmlns="uri:falcon:cluster:0.1">
  <interfaces>
    <interface type="readonly" endpoint="hftp://localhost:50010" version="1.1.2"/>
    <interface type="write" endpoint="hdfs://localhost:8020" version="1.1.2"/>
    <interface type="execute" endpoint="localhost:8021" version="1.1.2"/>
    <interface type="workflow" endpoint="http://localhost:11000/oozie/" version="4.0.0"/>
    <interface type="messaging" endpoint="tcp://localhost:61616" version="5.7.0"/>
  </interfaces>
  <locations>
    <location name="staging" path="/falcon/staging"/>
    <location name="temp" path="/falcon/temp"/>
    <location name="working" path="/falcon/working"/>
  </locations>
  <properties></properties>
</cluster>

A cluster contains the different interfaces and locations used by Falcon. A cluster is referenced by feed and process entities (using the cluster name). A cluster can't be scheduled (it doesn't make sense).

The colo specifies a kind of cluster grouping. It's used in the distributed deployment mode, so it's not useful in our demo (as we have only one cluster).
The readonly interface specifies Hadoop's HFTP protocol, only used in the case of feed replication between clusters (again, not used in our demo).
The write interface specifies the write access to HDFS, containing the fs.default.name value. Falcon uses this interface to write system data to HDFS, and feeds referencing this cluster are written to HDFS using this interface.
The execute interface specifies the location of the jobtracker, containing the mapred.job.tracker value. Falcon uses this interface to submit the processes as jobs to the jobtracker defined here.
The workflow interface specifies the interface of the workflow engine (the Oozie URL). Falcon uses this interface to schedule the processes referencing this cluster on the workflow engine defined here.
Optionally, you can have a registry interface (defining a thrift URL) to specify the metadata catalog, such as Hive Metastore (or HCatalog). We don't use it in our demo.
The messaging interface specifies the interface for sending feed availability messages. It's the URL of the ActiveMQ broker.

A cluster has a list of locations with a name (working, temp, staging) and a path on HDFS. Falcon uses these locations to do intermediate processing of entities in HDFS, and hence Falcon should have read/write/execute permissions on these locations.

Optionally, a cluster may have a list of properties. It’s a list of key-value pairs used in Falcon and propagated to the workflow engine. For instance, you can specify the JMS broker connection factory:

<property name="brokerImplClass" value="org.apache.activemq.ActiveMQConnectionFactory" />

Now that we have the XML description, we can register our cluster in Falcon. We use the Falcon client command line to submit our cluster definition:

$ cd falcon-0.5-incubating-SNAPSHOT
$ bin/falcon entity -submit -type cluster -file ~/demo/entity/local.xml
default/Submit successful (cluster) local

We can check that our local cluster is actually present in the Falcon configuration store:

$ bin/falcon entity -list -type cluster
(cluster) local(null)

We can see our cluster “local”, for now without any dependency (null).

If we take a look at HDFS, we can see that the falcon directory has been created:

$ cd node1
$ bin/hadoop fs -ls /
Found 3 items
drwxr-xr-x   - jbonofre supergroup          0 2014-03-08 07:48 /falcon
drwxr-xr-x   - jbonofre supergroup          0 2014-03-06 17:32 /tmp
drwxr-xr-x   - jbonofre supergroup          0 2014-03-06 18:05 /user

Feed

A feed entity is a location on the cluster. It also defines additional attributes like frequency, late-arrival handling, and retention policies. A feed can be scheduled, meaning that Falcon will create processes to deal with retention and replication on the cluster.

Like other entities, a feed is described using XML. We create the entity/output.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<feed description="RandomProcess output feed" name="output" xmlns="uri:falcon:feed:0.1">
  <group>output</group>
 
  <frequency>minutes(1)</frequency>
  <timezone>UTC</timezone>
  <late-arrival cut-off="minutes(5)"/>

  <clusters>
    <cluster name="local">
       <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
       <retention limit="hours(10)" action="delete"/>
    </cluster>
  </clusters>

  <locations>
    <location type="data" path="/data/output"/>
  </locations>

  <ACL owner="jbonofre" group="supergroup" permission="0x644"/>

  <schema location="none" provider="none"/>
</feed>

The locations element defines the feed storage: paths on HDFS or table names for Hive. A location is defined on a cluster, identified by name. In our example, we use the “local” cluster that we submitted before.

The group element defines a list of comma-separated groups. A group is a logical grouping of feeds: a group is said to be available if all the feeds belonging to the group are available. The frequency of all the feeds belonging to the same group must be the same.

The frequency element specifies the frequency by which this feed is generated (for instance, it can be generated every hour, every 5 minutes, daily, weekly, etc.). Falcon uses this frequency to check if the feed has changed or not (i.e., whether the size has changed). In our example, we define a frequency of every minute. Falcon creates a job in Oozie to monitor the feed.
Falcon can handle late arrival of input data and appropriately re-trigger processing for the affected instance. From the perspective of late handling, two main configuration parameters are central: the late-arrival cut-off in the feed definition and the late-inputs section in the process definition. These configurations govern how and when the late processing happens. In the current (Oozie-based) implementation, the late handling is very simple and basic: Falcon looks at all the dependent input feeds for a process and computes the max late cut-off period. It then uses a scheduled messaging framework, like the one available in Apache ActiveMQ, to schedule a message with the cut-off period. After the cut-off period, the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a late data file by Falcon's “record-size” action. If it detects any changes, the workflow is rerun with the new set of feed data.

The retention element specifies how long the feed is retained on the cluster and the action to be taken on the feed after the expiration of the retention period. In our example, we delete the feed after a retention of 10 hours.

The validity of a feed on a cluster specifies the duration for which this feed is valid on this cluster (and so is considered for scheduling by Falcon).

The ACL defines the permission on the feed (owner/group/permission).

The schema allows you to specify the “format” of the feed (for instance, csv). In our case, we don't define any schema.

We can now submit the feed (register the feed) into Falcon:

$ cd falcon-0.5-incubating-SNAPSHOT
$ bin/falcon entity -submit -type feed -file ~/demo/entity/output.xml
default/Submit successful (feed) output

Process

A process entity defines a job in the cluster.

Like other entities, a process is described with XML (entity/process.xml):

<?xml version="1.0" encoding="UTF-8"?>
<process name="my-process" xmlns="uri:falcon:process:0.1">
    <clusters>
        <cluster name="local">
            <validity start="2013-11-15T00:05Z" end="2030-11-15T01:05Z"/>
        </cluster>
    </clusters>

    <parallel>1</parallel>
    <order>FIFO</order>
    <frequency>minutes(5)</frequency>
    <timezone>UTC</timezone>

    <inputs>
        <!-- In the workflow, the input paths will be available in a variable 'inpaths' -->
        <input name="inpaths" feed="input" start="now(0,-5)" end="now(0,-1)"/>
    </inputs>

    <outputs>
        <!-- In the workflow, the output path will be available in a variable 'outpath' -->
        <output name="outpath" feed="output" instance="now(0,0)"/>
    </outputs>

    <properties>
        <!-- In the workflow, these properties will be available with variable - key -->
        <property name="queueName" value="default"/>
        <!-- The schedule time available as a property in workflow -->
        <property name="time" value="${instanceTime()}"/>
    </properties>

    <workflow engine="oozie" path="/app/mr"/>

    <late-process policy="periodic" delay="minutes(1)">
       <late-input input="inpaths" workflow-path="/app/mr"/>
    </late-process>

</process>

The cluster element defines where the process will be executed. Each cluster has a validity period, telling the times between which the job should run on the cluster. For the demo, we set a large validity period.

The parallel element defines how many instances of the process can run concurrently. We set a value of 1 here to ensure that only one instance of the process can run at a time.

The order element defines the order in which the ready instances are picked up. The possible values are FIFO (First In First Out), LIFO (Last In First Out), and ONLYLAST (Last Only). It's not really used in our case.

The frequency element defines how frequently the process should run. In our case, minutes(5) means that the job will run every 5 minutes.

The inputs element defines the input data for the process. The process job starts executing only after the schedule time and when all the inputs are available. There can be 0 or more inputs, and each input maps to a feed. The path and frequency of the input data are picked up from the feed definition. Each input should also define start and end instances in terms of EL expressions, and can optionally specify the specific partition of input that the process requires. The components in a partition should be a subset of the partitions defined in the feed. For instance, in our process, start="now(0,-5)" and end="now(0,-1)" mean that the process takes the feed instances from 5 minutes ago up to 1 minute ago as input.
For each input, Falcon creates a property with the input name that contains the comma-separated list of input paths. This property can be used in process actions like pig scripts and so on.

The outputs element defines the output data generated by the process. A process can define 0 or more outputs. Each output is mapped to a feed, and the output path is picked up from the feed definition. The output instance that should be generated is specified in terms of an EL expression.
For each output, Falcon creates a property with the output name that contains the path of the output data. This can be used in workflows to store data in that path.

The properties element contains key value pairs that are passed to the process. These properties are optional and can be used to parameterize the process.

The workflow element defines the workflow engine that should be used and the path to the workflow on HDFS. The workflow definition on HDFS contains the actual job that should run, and it should conform to the workflow specification of the engine specified. The libraries required by the workflow should be in the lib folder inside the workflow path.
The properties defined in the cluster entity and the cluster properties (nameNode and jobTracker) will also be available to the workflow.
Currently, Falcon supports three workflow engines:

  • oozie enables users to provide an Oozie workflow definition (in XML).
  • pig enables users to embed a Pig script as a process.
  • hive enables users to embed a Hive script as a process. This enables users to create materialized queries in a declarative way.

NB: I proposed to support a new type of workflow: MapReduce, to be able to directly execute a MapReduce job.

In this demo, we use the oozie workflow engine.

We create an Oozie workflow.xml:

<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${outpath}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapred.mapper.class</name>
                    <value>org.apache.hadoop.mapred.lib.IdentityMapper</value>
                </property>
                <property>
                    <name>mapred.reducer.class</name>
                    <value>org.apache.hadoop.mapred.lib.IdentityReducer</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>1</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>${inpaths}</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>${outpath}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

This workflow is very simple: it uses the IdentityMapper and IdentityReducer (provided in Hadoop core) to copy the input data as output data.
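
Of course, in a real use case, you would plug in your own mapper/reducer. As a sketch, here's a hypothetical mapper (using the old mapred API, matching Hadoop 1.1.2) that could replace the IdentityMapper to do some actual processing:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class UpperCaseMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, LongWritable, Text> {

    // transform each input line to upper case instead of copying it verbatim
    public void map(LongWritable key, Text value,
                    OutputCollector<LongWritable, Text> output, Reporter reporter)
            throws IOException {
        output.collect(key, new Text(value.toString().toUpperCase()));
    }
}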

We upload this workflow.xml on HDFS (in the location specified in the Falcon process workflow element):

$ cd node1
$ bin/hadoop fs -mkdir /app/mr
$ bin/hadoop fs -put ~/demo/workflow.xml /app/mr

The late-process element allows the process to react to input feed changes and trigger an action (here, we re-execute the Oozie workflow).

We are now ready to submit the process in Falcon:

$ cd falcon-*
$ bin/falcon entity -submit -type process -file ~/demo/entity/process.xml

The process is ready to be scheduled.

Before scheduling the process, we create the input data. The input data is a simple file (containing a string) that we upload to HDFS:

$ cat > file1
This is a test file
$ node1/bin/hadoop fs -mkdir /data/input
$ node1/bin/hadoop fs -put file1 /data/input

We can now trigger the process:

$ cd falcon*
$ bin/falcon entity -schedule -type process -name my-process
default/my-process(process) scheduled successfully

We can see the different jobs (the bundle, coordinator, and workflow jobs) in Oozie (accessing http://localhost:11000/oozie).

On the other hand, we can see new topics and queues created in ActiveMQ (visible in the ActiveMQ webconsole).

In particular, in ActiveMQ, we have two topics:

  • Falcon publishes messages in the FALCON.my-process topic for each execution of the process
  • Falcon publishes messages in the FALCON.ENTITY.TOPIC topic for each change on the feeds

This is where our Camel routes will subscribe.

Camel routes in Karaf

Now that we have our Falcon platform ready, we just have to create Camel routes (hosted in a Karaf container), subscribing to the Falcon topics in ActiveMQ.

We uncompress a Karaf container, and install the Camel features (camel-spring, activemq-camel):

$ tar zxvf apache-karaf-2.3.1.tar.gz
$ cd apache-karaf-2.3.1
$ bin/karaf
karaf@root> features:chooseurl camel
adding feature url mvn:org.apache.camel.karaf/apache-camel/LATEST/xml/features
karaf@root> features:install camel-spring
karaf@root> features:chooseurl activemq
karaf@root> features:install activemq-camel

We create a falcon-routes.xml file containing the Camel routes (using the Spring DSL):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <route id="process-listener">
       <from uri="jms:topic:FALCON.my-process"/>
       <to uri="log:process-listener"/>
    </route>
    <route id="feed-listener">
       <from uri="jms:topic:FALCON.ENTITY.TOPIC"/>
       <to uri="log:feed-listener"/>
    </route>
  </camelContext>

  <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory">
      <bean class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
      </bean>
    </property>
  </bean>

</beans>

In the Camel context, we create two routes, both connecting to the ActiveMQ broker, and listening on the two topics.
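
For reference, the same two routes could be written using the Camel Java DSL (a sketch, assuming the "jms" component is configured as in the Spring file above):

import org.apache.camel.builder.RouteBuilder;

public class FalconRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // listen to the process execution notifications
        from("jms:topic:FALCON.my-process").routeId("process-listener")
            .to("log:process-listener");
        // listen to the feed change notifications
        from("jms:topic:FALCON.ENTITY.TOPIC").routeId("feed-listener")
            .to("log:feed-listener");
    }
}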

We drop the falcon-routes.xml in the deploy folder, and we can see it active:

karaf@root> la|grep -i falcon
[ 114] [Active     ] [            ] [Started] [   80] falcon-routes.xml (0.0.0)
karaf@root> camel:route-list 
 Context        Route              Status   
 -------        -----              ------   
 camel          feed-listener      Started  
 camel          process-listener   Started 

The routes subscribe to the topics and just send the messages to the log (it's very, very simple).

So, we just have to take a look at the log (log:tail):

2014-03-19 11:25:43,273 | INFO  | LCON.my-process] | process-listener                 | rg.apache.camel.util.CamelLogger  176 | 74 - org.apache.camel.camel-core - 2.13.0.SNAPSHOT | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {brokerUrl=tcp://localhost:61616, timeStamp=2014-03-19T10:24Z, status=SUCCEEDED, logFile=hdfs://localhost:8020/falcon/staging/falcon/workflows/process/my-process/logs/instancePaths-2013-11-15-06-05.csv, feedNames=output, runId=0, entityType=process, nominalTime=2013-11-15T06:05Z, brokerTTL=4320, workflowUser=null, entityName=my-process, feedInstancePaths=hdfs://localhost:8020/data/output, operation=GENERATE, logDir=null, workflowId=0000026-140319105443372-oozie-jbon-W, cluster=local, brokerImplClass=org.apache.activemq.ActiveMQConnectionFactory, topicName=FALCON.my-process}]
2014-03-19 11:25:43,693 | INFO  | ON.ENTITY.TOPIC] | feed-listener                    | rg.apache.camel.util.CamelLogger  176 | 74 - org.apache.camel.camel-core - 2.13.0.SNAPSHOT | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {brokerUrl=tcp://localhost:61616, timeStamp=2014-03-19T10:24Z, status=SUCCEEDED, logFile=hdfs://localhost:8020/falcon/staging/falcon/workflows/process/my-process/logs/instancePaths-2013-11-15-06-05.csv, feedNames=output, runId=0, entityType=process, nominalTime=2013-11-15T06:05Z, brokerTTL=4320, workflowUser=jbonofre, entityName=my-process, feedInstancePaths=hdfs://localhost:8020/data/output, operation=GENERATE, logDir=hdfs://localhost:8020/falcon/staging/falcon/workflows/process/my-process/logs/job-2013-11-15-06-05/, workflowId=0000026-140319105443372-oozie-jbon-W, cluster=local, brokerImplClass=org.apache.activemq.ActiveMQConnectionFactory, topicName=FALCON.ENTITY.TOPIC}]

And we can see our notifications:

  • on the process-listener logger, we can see that my-process (entityName) has been executed with the SUCCEEDED status at 2014-03-19T10:24Z (timeStamp). We also have the location of the job execution log on HDFS.
  • on the feed-listener logger, we can see pretty much the same message. This message comes from the late-arrival handling, so it means that the input feed changed.

For sure, the Camel routes are very simple for now (just a log), but there is no limit: you can bring all the power of the ESB and BigData worlds together.
Once the Camel routes get the messages coming from Falcon on ActiveMQ, you can implement the integration process of your choice (sending e-mails, using Camel EIPs, calling beans, etc.).
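
For instance, here's a sketch of a route alerting by e-mail when a process execution fails (assuming the camel-mail component is installed; the SMTP endpoint is hypothetical, and the map keys are the ones visible in the message bodies logged above):

import org.apache.camel.builder.RouteBuilder;

public class FalconAlertRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("jms:topic:FALCON.my-process")
            // the Falcon message body is a map, so we can filter on its keys
            .filter(simple("${body[status]} != 'SUCCEEDED'"))
            .setBody(simple("Falcon process ${body[entityName]} failed at ${body[timeStamp]}"))
            .to("smtp://mail.example.com?to=ops@example.com&subject=Falcon alert");
    }
}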

What’s next ?

I’m working on different enhancements on the late-arrival/CDC feature:

  1. The late-arrival messages in the FALCON.ENTITY.TOPIC should be improved: the message should contain the feed that changed, the location of the feed, and eventually the size gap.
  2. We should provide a more straightforward CDC feature which doesn't require a process to monitor a feed. Just scheduling a feed should be enough, with the late cut-off.
  3. In addition to the oozie, pig, and hive workflow engines, we should provide a “pure” MapReduce jar workflow engine.
  4. The package.sh script should be improved to provide a more “ready to use” Falcon Oozie custom distribution.

I'm working on these different enhancements and improvements.

On the other hand, I will propose a set of documentation improvements, especially some kind of “recipe documentation” like this one.

Stay tuned, I’m preparing a new blog about Falcon, this time about the replication between two Hadoop clusters.

Apache Karaf, Cellar, Camel, ActiveMQ monitoring with ELK (ElasticSearch, Logstash, and Kibana)

March 17, 2014 Posted by jbonofre

Apache Karaf, Cellar, Camel, and ActiveMQ provide a lot of information via JMX.

Moreover, another very useful source of information is the log files.

While these two sources are very interesting, for “real life” monitoring we need some additional features:

  • The JMX information and log messages should be stored in order to be requested later, keeping a history. For instance, using jconsole, you can request all the JMX attributes to get the numbers, but these numbers have to be stored somewhere. It's quite the same for the logs: most of the time, you define a log file rotation, or you periodically clean up the logs, so the log messages should be stored as well to be requested later.
  • Numbers are good, graphics are even better. Once the JMX “numbers” are stored somewhere, a good feature is to use these numbers to create some charts. And also, we can define some kind of SLA: at some point, if a number is not “acceptable” (for instance, greater than a “watermark” value), we should raise an alert.
  • For high availability and scalability, most production systems use multiple Karaf instances (synchronized with Cellar for instance). It means that the log files are spread on different machines. In that case, it's really helpful to “centralize” the log messages.

Of course, there are already open source solutions (zabbix, nagios, etc) or commercial solutions (dynatrace, etc) to cover these needs.

In this blog, I just introduce a possible solution leveraging “big data” tools: we will see how to use the ELK (Elasticsearch, Logstash, and Kibana) stack.

Topology

For this example, let's say we have the following architecture:

  • node1 is a machine hosting a Karaf container with a set of Camel routes.
  • node2 is a machine hosting a Karaf container with another set of Camel routes.
  • node3 is a machine hosting an ActiveMQ broker (used by the Camel routes from node1 and node2).
  • monitor is a machine hosting the monitoring platform.

Local to node1, node2, and node3, we install and configure logstash with both file and JMX input plugins. This logstash will get the log messages, poll JMX MBean attributes, and send them to a “central” Redis server (using the redis output plugin).

On monitor, we install:

  • redis server to receive the messages and events coming from logstash installed on node1, node2, and node3
  • elasticsearch to store the messages and events
  • a first logstash acting as an indexer to take the messages/events from redis and store them into elasticsearch (including the update of indexes, etc)
  • a second logstash providing the kibana web console

Redis and Elasticsearch

Redis

Redis is a key-value store. But it can also act as a broker to receive the messages/events from the different logstash instances (node1, node2, and node3).

For the demo, I use Redis 2.8.7 (that you can download from http://download.redis.io/releases/redis-2.8.7.tar.gz).

We uncompress the redis tarball in the /opt/monitor folder:

cp redis-2.8.7.tar.gz /opt/monitor
tar zxvf redis-2.8.7.tar.gz

Now, we have to compile Redis server on the machine. To do so, we have to execute make in the Redis src folder:

cd redis-2.8.7/src
make

NB: this step requires make and gcc installed on the machine.

make created a redis-server binary in the src folder. It’s the binary that we use to start Redis:

./redis-server --loglevel verbose
[12130] 16 Mar 21:04:28.387 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 3984.
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 2.8.7 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 12130
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

[12130] 16 Mar 21:04:28.388 # Server started, Redis version 2.8.7
[12130] 16 Mar 21:04:28.388 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[12130] 16 Mar 21:04:28.388 * The server is now ready to accept connections on port 6379
[12130] 16 Mar 21:04:28.389 - 0 clients connected (0 slaves), 443376 bytes in use

The redis server is now ready to accept connections coming from the “remote” logstash instances.

Elasticsearch

We use elasticsearch as storage backend for all messages and events. For this demo, I use elasticsearch 1.0.1, that you can download from https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.0.1.tar.gz.

We uncompress the elasticsearch tarball in the /opt/monitor folder:

cp elasticsearch-1.0.1.tar.gz /opt/monitor
tar zxvf elasticsearch-1.0.1.tar.gz

We start elasticsearch with the bin/elasticsearch binary (the default configuration is OK):

cd elasticsearch-1.0.1
bin/elasticsearch
[2014-03-16 21:16:13,783][INFO ][node                     ] [Solarr] version[1.0.1], pid[12466], build[5c03844/2014-02-25T15:52:53Z]
[2014-03-16 21:16:13,783][INFO ][node                     ] [Solarr] initializing ...
[2014-03-16 21:16:13,786][INFO ][plugins                  ] [Solarr] loaded [], sites []
[2014-03-16 21:16:15,763][INFO ][node                     ] [Solarr] initialized
[2014-03-16 21:16:15,764][INFO ][node                     ] [Solarr] starting ...
[2014-03-16 21:16:15,902][INFO ][transport                ] [Solarr] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.134.11:9300]}
[2014-03-16 21:16:18,990][INFO ][cluster.service          ] [Solarr] new_master [Solarr][V9GO0DiaT4SFmRmxgwYv0A][vostro][inet[/192.168.134.11:9300]], reason: zen-disco-join (elected_as_master)
[2014-03-16 21:16:19,010][INFO ][discovery                ] [Solarr] elasticsearch/V9GO0DiaT4SFmRmxgwYv0A
[2014-03-16 21:16:19,021][INFO ][http                     ] [Solarr] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.134.11:9200]}
[2014-03-16 21:16:19,072][INFO ][gateway                  ] [Solarr] recovered [0] indices into cluster_state
[2014-03-16 21:16:19,072][INFO ][node                     ] [Solarr] started

Logstash

Logstash is a tool for managing events and logs.

It works with a chain of inputs, filters, and outputs.

On node1, node2, and node3, we will setup logstash with:

  • a file input plugin to read the log files
  • a jmx input plugin to read the different MBeans attributes
  • a redis output to send the messages and events to the monitor machine.

For this blog, I use logstash 1.4 SNAPSHOT with a contribution that I made. You can find my modified plugin on my github: https://github.com/jbonofre/logstash-contrib.

The first thing is to check out the latest logstash codebase and build it:

git clone https://github.com/elasticsearch/logstash/
cd logstash
make tarball

It will create the logstash distribution tarball in the build folder.

We can install logstash in a folder (for instance /opt/monitor/logstash):

mkdir /opt/monitor
cp build/logstash-1.4.0.rc1.tar.gz /opt/monitor
cd /opt/monitor
tar zxvf logstash-1.4.0.rc1.tar.gz
rm logstash-1.4.0.rc1.tar.gz

JMX is not a “standard” logstash plugin. It’s a plugin from the logstash-contrib project. As I modified the logstash JMX plugin (to work “smoothly” with the Karaf MBeanServer), while waiting for my pull request to be integrated in logstash-contrib (I hope ;)), you have to clone my github fork:

git clone https://github.com/jbonofre/logstash-contrib/
cd logstash-contrib
make tarball

We can add the contrib plugins into our logstash installation (in /opt/monitor/logstash-1.4.0.rc1 folder):

cd build
tar zxvf logstash-contrib-1.4.0.beta2.tar.gz
cd logstash-contrib-1.4.0.beta2
cp -r * /opt/monitor/logstash-1.4.0.rc1

Our logstash installation is now ready, including the logstash-contrib plugins.

It means that on node1, node2, node3 and monitor, you should have the /opt/monitor/logstash-1.4.0.rc1 folder with the installation (you can use scp or rsync to install logstash on the machines).

Indexer

On the monitor machine, we have a logstash instance acting as an indexer: it gets the messages from redis and stores them in elasticsearch.

We create the /opt/monitor/logstash-1.4.0.rc1/conf/indexer.conf file containing:

input {
  redis {
    host => "localhost"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}

We can start logstash using this configuration file:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash -f conf/indexer.conf

Collector

On node1, node2, and node3, logstash will act as a collector:

  • the file input plugin will read the messages from the log files (you can configure multiple log files)
  • the jmx input plugin will periodically poll MBean attributes

Both will send messages to the redis server using the redis output plugin.

We create a folder /opt/monitor/logstash-1.4.0.rc1/conf. It’s where we store the logstash configuration. In this folder, we create a collector.conf file.

For node1 and node2 (both hosting a karaf container with camel routes), the collector.conf file contains:

input {
  file {
    type => "log"
    path => ["/opt/karaf/data/log/*.log"]
  }
  jmx {
    path => "/opt/monitor/logstash-1.4.0.rc1/conf/jmx"
    polling_frequency => 10
    type => "jmx"
    nb_thread => 4
  }
}
output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}

On node3 (hosting an ActiveMQ broker), the collector.conf is the same, just the location of the log file is different:

input {
  file {
    type => "log"
    path => ["/opt/activemq/data/*.log"]
  }
  jmx {
    path => "/opt/monitor/logstash-1.4.0.rc1/conf/jmx"
    polling_frequency => 10
    type => "jmx"
    nb_thread => 4
  }
}
output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}

The redis output plugin sends the messages/events to the redis server located on the “monitor” machine.

These messages and events come from two input plugins:

  • the file input plugin takes the path of the log files (using a glob)
  • the jmx input plugin takes a folder. This folder contains JSON files (see below) with the MBean queries. The plugin executes the queries every 10 seconds (polling_frequency).

So, the jmx input plugin reads all files located in the /opt/monitor/logstash-1.4.0.rc1/conf/jmx folder.

On node1 and node2 (again hosting a Karaf container with Camel routes), for instance, we want to monitor the number of threads on the Karaf instance (using the Threading MBean), and a route named “route1” (using the Camel route MBean).
We specify this in /opt/monitor/logstash-1.4.0.rc1/conf/jmx/karaf file:

{
  "host" : "localhost",
  "port" : 1099,
  "url" : "service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root",
  "username" : "karaf",
  "password" : "karaf",
  "alias" : "node1",
  "queries" : [
    {
      "object_name" : "java.lang:type=Threading",
      "object_alias" : "Threading"
    }, {
      "object_name" : "org.apache.camel:context=*,type=routes,name=\"route1\"",
      "object_alias" : "Route1"
    }
   ]
}

On node3, we will have a different JMX configuration file (for instance /opt/monitor/logstash-1.4.0.rc1/conf/jmx/activemq) containing the ActiveMQ MBeans that we want to query.

Now, we can start the logstash “collector”:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash -f conf/collector.conf

We can see the clients connected in the redis log:

[12130] 17 Mar 14:33:27.041 - Accepted 127.0.0.1:46598
[12130] 17 Mar 14:33:31.267 - 2 clients connected (0 slaves), 484992 bytes in use

and the data populated in the elasticsearch log:

[2014-03-17 14:21:59,539][INFO ][cluster.service          ] [Solarr] added {[logstash-vostro-32001-2010][dhXgnFLwTHmbdsawAEJbyg][vostro][inet[/192.168.134.11:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-vostro-32001-2010][dhXgnFLwTHmbdsawAEJbyg][vostro][inet[/192.168.134.11:9301]]{client=true, data=false}])
[2014-03-17 14:30:59,584][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] creating index, cause [auto(bulk api)], shards [5]/[1], mappings [_default_]
[2014-03-17 14:31:00,665][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] update_mapping [log] (dynamic)
[2014-03-17 14:33:28,247][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] update_mapping [jmx] (dynamic)

Now, we have JMX data and log messages from different containers, brokers, etc stored in one centralized place (the monitor machine).

We can now add a web application to read the data and create charts using the data.

Kibana

Kibana is the web application provided with logstash. The default configuration uses the elasticsearch default port. So, we just have to start Kibana on the monitor machine:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash-web

We access Kibana at http://monitor:9292.

On the welcome page, we click on the “Logstash dashboard” link, and we arrive on a console looking like this:
logstash1

It’s time to configure Kibana.

We remove the default histogram and add a custom one to chart the thread count.

First, we create a query to isolate the thread count for node1. Kibana uses the Apache Lucene query syntax.
Our query here is very simple: metric_path:"node1.Threading.ThreadCount".

Now, we can create a histogram using this query, getting the metric_value_number:
kibana2

Now, we want to chart the lastProcessingTime on the Camel route (to see for instance if the route takes more time at some point).
We create a new query to isolate the route1 lastProcessingTime on node1: metric_path:"node1.Route1.LastProcessingTime".

We can now create a histogram using this query, getting the metric_value_number:
kibana3

For the demo, we can create a histogram chart to display the exchanges completed and failed for route1 on node1. We create two queries:

  • metric_path:"node1.Route1.ExchangesFailed"
  • metric_path:"node1.Route1.ExchangesCompleted"

We create a new chart in the same row:
kibana4

We clean up the events panel a bit. We create a query to display only the log messages (not the JMX queries): type:"log".
We configure the log event panel to change the name and use the log query:
kibana6

We have now a kibana console looking like:

kibana5

With this very simple kibana configuration, we have:
- a chart of the thread count on node1
- a chart of the last processing time for route1 (on node1)
- a chart of the exchanges (failed/completed) for route1 (on node1)
- a view of all log messages

You can now play with Kibana and add a lot of new charts leveraging all the information that you have in elasticsearch (both log messages and JMX data).

Next

I’m working on some new Karaf, Cellar, ActiveMQ, Camel features providing “native” and “enhanced” support for logstash. The purpose is to just type feature:install monitoring to get:

  • jmx:* commands in Karaf
  • broadcast of events into elasticsearch
  • integration of redis, elasticsearch, and logstash in Karaf (to avoid installing them “externally” from Karaf) and ready-to-use configuration (pre-configured logstash jmx input plugin, pre-configured kibana console/charts, …).

If you have other ideas to enhance and improve monitoring in Karaf, Cellar, Camel, and ActiveMQ, don’t hesitate to propose them on the mailing lists! Any idea is welcome.

Coming in Karaf 3.0.0: new enterprise JPA (OpenJPA, Hibernate) and CDI (OpenWebBeans, JBoss Weld) features

December 20, 2013 Posted by jbonofre

Apache Karaf 3.0.0 is now mostly ready (I’m just polishing the documentation).

In a previous post, I introduced new enterprise features like JNDI, JDBC, and JMS.

As I said, the purpose is to provide a fully flexible, enterprise-ready container, easy to use and extend for the users.

Easy to use means that a simple command will extend your container with features that can help you a lot.

JPA

Previous Karaf versions already provided a jpa feature. However, this feature “only” installs the Aries JPA bundles, allowing you to expose the EntityManager as an OSGi service. It doesn’t install any JPA engine: it means that, previously, users had to install all the bundles required to have a persistence engine.

As OpenJPA and Hibernate are very popular persistence engines, Karaf 3.0.0 provides two ready-to-use features:

karaf@root()> feature:install openjpa

The openjpa feature brings Apache OpenJPA in Apache Karaf.

karaf@root()> feature:install hibernate

The hibernate feature brings Hibernate in Apache Karaf.
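
Once a persistence engine is installed, Aries JPA registers the EntityManagerFactory of each persistence unit as an OSGi service (with the osgi.unit.name service property). As a minimal sketch, assuming a persistence unit named “my-unit” (a hypothetical name), you can retrieve it programmatically like this:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class JpaLookup {
    // A sketch: look up the EntityManagerFactory registered by Aries JPA for
    // the "my-unit" persistence unit (a hypothetical unit name).
    // NB: error handling (null/empty references) is omitted in this sketch.
    public EntityManager getEntityManager(BundleContext context) throws InvalidSyntaxException {
        ServiceReference<?>[] refs = context.getServiceReferences(
                EntityManagerFactory.class.getName(), "(osgi.unit.name=my-unit)");
        EntityManagerFactory emf = (EntityManagerFactory) context.getService(refs[0]);
        return emf.createEntityManager();
    }
}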

CDI

Karaf 3.0.0 now refers to Pax CDI. It means that you can install the pax-cdi* features in Apache Karaf.

However, Pax CDI doesn’t install any CDI container: it’s up to the users to install all the bundles required to have a CDI container.

As OpenWebBeans and Weld are very popular CDI containers, Karaf 3.0.0 provides two ready-to-use features:

karaf@root()> feature:repo-add pax-cdi
karaf@root()> feature:install openwebbeans

The openwebbeans feature brings Apache OpenWebBeans CDI container in Apache Karaf.

karaf@root()> feature:repo-add pax-cdi
karaf@root()> feature:install weld

The weld feature brings JBoss Weld CDI container in Apache Karaf.
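
As a quick illustration, here is a minimal sketch of a CDI bean that could live in a pax-cdi enabled bundle; both classes are hypothetical, and the injection is performed by the CDI container you installed (OpenWebBeans or Weld):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

// A hypothetical bean of your own, managed by the CDI container.
@ApplicationScoped
class GreetingService {
    String hello(String name) {
        return "Hello " + name;
    }
}

// A minimal sketch: the CDI container (OpenWebBeans or Weld) injects the
// GreetingService instance into this bean.
@ApplicationScoped
public class Greeter {
    @Inject
    private GreetingService service;

    public String greet(String name) {
        return service.hello(name);
    }
}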

EJB

As a reminder, while waiting to have KarafEE back in Karaf directly (as an ejb feature; I plan to work on it next week), you can install Apache OpenEJB in Apache Karaf:

karaf@root()> feature:repo-add openejb
karaf@root()> feature:install openejb-core
karaf@root()> feature:install openejb-server

Coming in Karaf 3.0.0: new enterprise JMS feature

December 19, 2013 Posted by jbonofre

In my previous post, I introduced the new enterprise JDBC feature.

Following the same purpose, we introduced the new enterprise JMS feature.

JMS feature

Like the JDBC feature, the JMS feature is an optional one. It means that you have to install it first:

karaf@root()> feature:install jms

The jms feature installs the JMS service which is mostly a JMS “client”. It doesn’t install any broker.

For the rest of this post, I’m using an ActiveMQ broker embedded in my Karaf:

karaf@root()> feature:repo-add activemq 5.9.0
karaf@root()> feature:install activemq-broker

Like the JDBC feature, the JMS feature provides:

  • an OSGi service
  • jms:* commands
  • a JMX JMS MBean

The OSGi service provides a set of operations to create JMS connection factories, send JMS messages, browse a JMS queue, etc.

The commands and MBean manipulate the OSGi service.

Commands

The jms:create command allows you to create a JMS connection factory.

This command automatically creates a connectionfactory-[name].xml blueprint file in the deploy folder.

However, it doesn’t install any bundle or feature providing the JMS connection factory classes. It’s up to you to install beforehand the jar files, bundles, or features providing the actual JMS connection factory.

The jms:create command expects one argument and two options:

karaf@root()> jms:create --help
DESCRIPTION
        jms:create

        Create a JMS connection factory.

SYNTAX
        jms:create [options] name 

ARGUMENTS
        name
                The JMS connection factory name

OPTIONS
        -u, --url
                The JMS URL. NB: for WebsphereMQ type, the URL is hostname/port/queuemanager/channel
        --help
                Display this help message
        -t, --type
                The JMS connection factory type (ActiveMQ or WebsphereMQ)
  • The name argument is the JMS connection factory name. It’s used in the JNDI name given to the connection factory (e.g. /jms/[name]) and in the blueprint file name in the deploy folder.
  • The -t (--type) option is the JMS connection factory type. For now, the command supports two kinds of connection factory: ActiveMQ or WebsphereMQ. If you want to use another kind of connection factory, you can create the connection factory file yourself (using an ActiveMQ file created by the jms:create command as a template).
  • The -u (--url) option is the URL used by the connection factory. For instance, for the ActiveMQ type, the URL looks like tcp://localhost:61616. For the WebSphereMQ type, the URL looks like host/port/queuemanager/channel.

As I installed the activemq-broker feature in my Karaf, I can create the JMS connection factory for this broker:

karaf@root()> jms:create -t activemq -u tcp://localhost:61616 default

We can see the JMS connection factory file correctly deployed:

karaf@root()> la
...
151 | Active   |  80 | 0.0.0                 | connectionfactory-default.xml

The connectionfactory-default.xml file has been created in the deploy folder and contains:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="activemqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616" />
    </bean>

    <bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
        <property name="maxConnections" value="8" />
        <property name="connectionFactory" ref="activemqConnectionFactory" />
    </bean>

    <bean id="resourceManager" class="org.apache.activemq.pool.ActiveMQResourceManager" init-method="recoverResource">
        <property name="transactionManager" ref="transactionManager" />
        <property name="connectionFactory" ref="activemqConnectionFactory" />
        <property name="resourceName" value="activemq.localhost" />
    </bean>

    <reference id="transactionManager" interface="javax.transaction.TransactionManager" />

    <service ref="pooledConnectionFactory" interface="javax.jms.ConnectionFactory">
        <service-properties>
            <entry key="name" value="default" />
            <entry key="osgi.jndi.service.name" value="/jms/default" />
        </service-properties>
    </service>

</blueprint>

We can see the JMS connection factories available in Karaf (created by the jms:create command, or by hand) using the jms:connectionfactories command:

karaf@root()> jms:connectionfactories 
JMS Connection Factory
----------------------
/jms/default    

The jms:info command gives details about a JMS connection factory:

karaf@root()> jms:info /jms/default 
Property | Value   
-------------------
product  | ActiveMQ
version  | 5.9.0  

We are now ready to manipulate the JMS broker.

Let’s start by sending some messages to a queue in the JMS broker. We can use the jms:send command to do that:

karaf@root()> jms:send /jms/default MyQueue "Hello World"
karaf@root()> jms:send /jms/default MyQueue "Hello Karaf"
karaf@root()> jms:send /jms/default MyQueue "Hello ActiveMQ"

The jms:count command counts the number of messages in a JMS queue. We can check if we have our messages:

karaf@root()> jms:count /jms/default MyQueue
Messages Count
--------------
3             

When using ActiveMQ, the jms:queues and jms:topics commands can list the queues and topics available in the JMS broker:

karaf@root()> jms:queues /jms/default 
JMS Queues
----------
MyQueue   

We can see the MyQueue queue where we sent our messages.

We can also browse the messages in a queue using the jms:browse command, which gives us the details of the messages:

karaf@root()> jms:browse /jms/default MyQueue
Message ID                              | Content        | Charset | Type | Correlation ID | Delivery Mode | Destination     | Expiration | Priority | Redelivered | ReplyTo | Timestamp                   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID:vostro-33323-1387464670760-3:2:1:1:1 | Hello World    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:06 CET 2013
ID:vostro-33323-1387464670760-3:3:1:1:1 | Hello Karaf    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:10 CET 2013
ID:vostro-33323-1387464670760-3:4:1:1:1 | Hello ActiveMQ | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:14 CET 2013

By default, the jms:browse command displays all messages in the given queue. You can specify a selector with the -s (--selector) option to select the messages to browse.

The jms:consume command consumes messages from a queue. Here, consuming means removing the messages from the queue.

To consume/remove the messages in MyQueue queue, we can use:

karaf@root()> jms:consume /jms/default MyQueue
3 message(s) consumed

JMX JMS MBean

All actions that we did using the jms:* commands can be performed using the JMS MBean (the object name is org.apache.karaf:type=jms,name=*).

Moreover, if you want to perform JMS operations programmatically, you can use the org.apache.karaf.jms.JmsService OSGi service.
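
To give an idea, a minimal sketch of using this service could look like the following. NB: the send() and count() calls mirror the jms:send and jms:count commands, but their exact signatures are assumptions to verify against the JmsService interface:

import org.apache.karaf.jms.JmsService;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class JmsUsage {
    // A sketch: send a message and count the messages in a queue through the
    // Karaf JmsService. The send() and count() signatures are assumptions
    // mirroring the jms:send and jms:count commands.
    public void sendAndCount(BundleContext context) throws Exception {
        ServiceReference<JmsService> ref = context.getServiceReference(JmsService.class);
        JmsService jms = context.getService(ref);
        try {
            jms.send("/jms/default", "MyQueue", "Hello World", null);
            System.out.println(jms.count("/jms/default", "MyQueue") + " message(s) in MyQueue");
        } finally {
            context.ungetService(ref);
        }
    }
}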

Coming in Karaf 3.0.0: new enterprise JDBC feature

December 16, 2013 Posted by jbonofre

Some weeks (months ;)) ago, my colleague Christian (Schneider) did a good job by creating some useful commands to manipulate databases directly in Karaf.

We discussed together where to put those commands. We decided to submit a patch at ServiceMix because we didn’t really think about Karaf 3.0.0 at that time.

Finally, I decided to refactor those commands as an even more “useful” Karaf feature and prepare it for Karaf 3.0.0.

JDBC feature

By refactoring, I mean that it’s no longer only commands: I created a complete JDBC feature, providing an OSGi service, a set of commands, and an MBean.
The different modules are provided by the jdbc feature.

Like most of the other enterprise features, the jdbc feature is not installed by default. To enable it, you have to install the jdbc feature first:

karaf@root()> feature:install jdbc

This feature provides:

  • a JdbcService OSGi service
  • a set of jdbc:* commands
  • a JDBC MBean (org.apache.karaf:type=jdbc,name=*)

The OSGi service provides a set of operations to create a datasource, execute SQL queries on a datasource, get details about a datasource, etc.

The commands and MBean manipulate the OSGi service.

Commands

The first command that you can use is jdbc:create.

This command automatically creates a JDBC datasource file in the deploy folder. It can also try to automatically install the bundles providing the JDBC driver.

The jdbc:create command requires the datasource name and type. The type can be: generic (DBCP), Derby, Oracle, MySQL, Postgres, H2, HSQL.

For instance, if you want to create an embedded Apache Derby database and the corresponding datasource, you can do:

karaf@root()> jdbc:create -t derby -u test -i test

The -t derby option defines a datasource of Derby type. The -u test option defines the datasource username. The -i option indicates to try to install the bundles providing the Derby JDBC driver. Finally, test is the datasource name.

Now, we can see several things.

First, the command automatically installed the JDBC driver and created the datasource:

karaf@root()> la
...
87 | Active   |  80 | 10.8.2000002.1181258  | Apache Derby 10.8                                                 
88 | Active   |  80 | 0.0.0                 | datasource-test.xml  

We can see the datasource blueprint file created in the deploy folder:

/opt/apache-karaf/target/apache-karaf-3.0.0$ ls -l deploy/
total 8
-rw-r--r-- 1 jbonofre jbonofre 841 Dec 16 14:10 datasource-test.xml

The datasource-test.xml file has been created using a set of templates depending on the datasource type that we provided.
The content is a blueprint XML definition:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           default-activation="eager">

    <bean id="dataSource" class="org.apache.derby.jdbc.EmbeddedXADataSource">
        <property name="databaseName" value="test"/>
        <property name="createDatabase" value="create" />
    </bean>

    <service ref="dataSource" interface="javax.sql.DataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="/jdbc/test"/>
        </service-properties>
    </service>

    <service ref="dataSource" interface="javax.sql.XADataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="/jdbc/testxa"/>
        </service-properties>
    </service>

</blueprint>

You can use the jdbc:delete command to delete an existing datasource.

The jdbc:datasources command provides the list of available JDBC datasources:

karaf@root()> jdbc:datasources 
Name       | Product      | Version              | URL            
------------------------------------------------------------------
/jdbc/test | Apache Derby | 10.8.2.2 - (1181258) | jdbc:derby:test

If you want to have more details about a JDBC datasource, you can use the jdbc:info command:

karaf@root()> jdbc:info /jdbc/test 
Property       | Value                            
--------------------------------------------------
driver.version | 10.8.2.2 - (1181258)             
username       | APP                              
db.version     | 10.8.2.2 - (1181258)             
db.product     | Apache Derby                     
driver.name    | Apache Derby Embedded JDBC Driver
url            | jdbc:derby:test  

You can execute SQL commands and queries on a given JDBC datasource.

For instance, we can create a table directly in our Derby database using the jdbc:execute command:

karaf@root()> jdbc:execute /jdbc/test "create table person(name varchar(100), nick varchar(100))"

The jdbc:tables command displays all tables available on a given datasource. In our case, we can see our person table:

karaf@root()> jdbc:tables /jdbc/test 
REF_GENERATION | TYPE_NAME | TABLE_NAME       | TYPE_CAT | REMARKS | TYPE_SCHEM | TABLE_TYPE   | TABLE_SCHEM | TABLE_CAT | SELF_REFERENCING_COL_NAME
----------------------------------------------------------------------------------------------------------------------------------------------------
               |           | SYSALIASES       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCHECKS        |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCOLPERMS      |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCOLUMNS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCONGLOMERATES |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCONSTRAINTS   |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSDEPENDS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSFILES         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSFOREIGNKEYS   |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSKEYS          |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSPERMS         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSROLES         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSROUTINEPERMS  |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSCHEMAS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSEQUENCES     |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSTATEMENTS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSTATISTICS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTABLEPERMS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTABLES        |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTRIGGERS      |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSVIEWS         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSDUMMY1        |          |         |            | SYSTEM TABLE | SYSIBM      |           |                          
               |           | PERSON           |          |         |            | TABLE        | APP         |           |     

Now, also using the jdbc:execute command, we can insert some records in our person table:

karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('foo','bar')"
karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('Christian Schneider','cschneider')"
karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('JB Onofre','jbonofre')"

The jdbc:query command executes SQL queries returning results. For instance, we can select the records in the person table:

karaf@root()> jdbc:query /jdbc/test "select * from person"
NICK       | NAME               
--------------------------------
bar        | foo                
cschneider | Christian Schneider
jbonofre   | JB Onofre     

JDBC MBean and JDBC OSGi Service

All actions that we did using the jdbc:* commands can be performed using the JDBC MBean.

Moreover, if you want to perform JDBC operations or manipulate datasources programmatically, you can use the org.apache.karaf.jdbc.JdbcService OSGi service.
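
For instance, a minimal sketch of using this service could look like the following. NB: the query() call mirrors the jdbc:query command, but its exact signature and return type are assumptions to verify against the JdbcService interface:

import java.util.List;
import java.util.Map;

import org.apache.karaf.jdbc.JdbcService;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class JdbcUsage {
    // A sketch: execute a query on the /jdbc/test datasource through the
    // Karaf JdbcService. The query() signature and return type are
    // assumptions mirroring the jdbc:query command.
    public void listPersons(BundleContext context) throws Exception {
        ServiceReference<JdbcService> ref = context.getServiceReference(JdbcService.class);
        JdbcService jdbc = context.getService(ref);
        try {
            Map<String, List<String>> result = jdbc.query("/jdbc/test", "select * from person");
            System.out.println(result);
        } finally {
            context.ungetService(ref);
        }
    }
}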

Next one: jms

Following the same idea, I prepared a new jms feature providing an OSGi service, commands, and an MBean to manipulate JMS connection factories, get information on the broker, send/consume messages, etc.

Coming in Karaf 3.0.0: new enterprise JNDI feature

December 13, 2013 Posted by jbonofre

In previous Karaf versions (2.x), the JNDI support was “basic”.
We just leveraged Aries JNDI to support the osgi:service JNDI scheme to reference the OSGi services using a JNDI name.

However, we didn’t provide a fully functional JNDI initial context, nor any tooling around JNDI.

As part of the new enterprise features coming with Karaf 3.0.0, the JNDI support is now more “complete”.

Add JNDI support

Like most of the other enterprise features, the JNDI feature is an optional one. It means that you have to install the jndi feature first:

karaf@root()> feature:install jndi

The jndi feature installs several parts.

Ready to use initial context

Like in previous versions, Karaf provides a fully compliant implementation of the OSGi Alliance JNDI Service Specification. This specification details how to advertise InitialContextFactory and ObjectFactories in an OSGi environment. It also defines how to obtain services from the service registry via JNDI.

Now, it’s possible to use the JNDI initial context directly. Karaf provides a fully functional initial context where you can lookup both the osgi:service scheme and a regular JNDI name.

You can do:

Context context = new InitialContext();
MyBean myBean = (MyBean) context.lookup("my/bean/name");

You can use the osgi:service scheme to access the OSGi service registry using JNDI:

Context context = new InitialContext();
MyBean myBean = (MyBean) context.lookup("osgi:service/mybean");

JNDI Service, Commands and MBean

Karaf 3.0.0 provides an OSGi service dedicated to JNDI.

The interface of this JNDI service is org.apache.karaf.jndi.JndiService and it’s registered when installing the jndi feature.
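
If you want to use this service programmatically, a minimal sketch could look like the following. NB: the names() call mirrors the jndi:names command, but its exact signature and return type are assumptions to verify against the JndiService interface:

import java.util.Map;

import org.apache.karaf.jndi.JndiService;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class JndiUsage {
    // A sketch: list the JNDI bindings through the Karaf JndiService.
    // The names() signature is an assumption mirroring the jndi:names command.
    public void listNames(BundleContext context) throws Exception {
        ServiceReference<JndiService> ref = context.getServiceReference(JndiService.class);
        JndiService jndi = context.getService(ref);
        try {
            for (Map.Entry<String, String> entry : jndi.names().entrySet()) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        } finally {
            context.ungetService(ref);
        }
    }
}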

You can manipulate the JNDI service using shell commands.

You can list the JNDI names using jndi:names:

karaf@root()> jndi:names
JNDI Name         | Class Name                                    
------------------------------------------------------------------
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can create a new JNDI name from another one (a kind of alias) using the jndi:alias command:

karaf@root()> jndi:alias osgi:service/jndi local/service/jndi
karaf@root()> jndi:names
JNDI Name          | Class Name                                    
-------------------------------------------------------------------
osgi:service/jndi  | org.apache.karaf.jndi.internal.JndiServiceImpl
local/service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

For instance, here, we bind a name from the “special” osgi:service scheme as a “regular” JNDI name.

You can directly bind an OSGi service (identified by its service.id) with a JNDI name:

karaf@root()> jndi:bind 344 local/service/kar
karaf@root()> jndi:names
JNDI Name         | Class Name                                    
------------------------------------------------------------------
local/service/kar | org.apache.karaf.kar.internal.KarServiceImpl  
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can alias the local/service/kar name directly as service/kar:

karaf@root()> jndi:alias local/service/kar service/kar
karaf@root()> jndi:names
JNDI Name         | Class Name                                    
------------------------------------------------------------------
local/service/kar | org.apache.karaf.kar.internal.KarServiceImpl  
service/kar       | org.apache.karaf.kar.internal.KarServiceImpl  
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can unbind the service/kar name:

karaf@root()> jndi:unbind service/kar
karaf@root()> jndi:names
JNDI Name         | Class Name                                    
------------------------------------------------------------------
local/service/kar | org.apache.karaf.kar.internal.KarServiceImpl  
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can get all JNDI names, and manipulate the JNDI service using a new JMX JNDI MBean. The object name to use is org.apache.karaf:type=jndi,name=*.

Conclusion

One of our purposes for Karaf 3.0.0 is to provide more services, commands, and MBeans to make Karaf a more complete, full enterprise OSGi container.

While we already provide a bunch of features, a lot of them are not really “visible” to the end users due to some “missing” commands or MBeans.

It’s a key point for Karaf 3.x releases.

Coming in Karaf 3.0.0: RBAC support for OSGi services and console commands

December 12, 2013 Posted by jbonofre

In a previous post, we saw a new Karaf feature: support of user groups and Role-Based Access Control (RBAC) for the JMX layer.

We extended the RBAC support to the OSGi services and, by side effect, to the console commands (as a console command is also an OSGi service).

RBAC for OSGi services

The JMX RBAC support uses an MBeanServerBuilder. The KarafMBeanServerBuilder “intercepts” the calls to the MBeans, checks the definitions (in the etc/jmx.acl.*.cfg configuration files), and decides if the call can be performed or not.

Regarding the RBAC support for OSGi services, we use a similar mechanism.

The Karaf Service Guard provides a service listener which intercepts the service calls and checks if the call to the service can be performed or not.

The list of “secured” OSGi services is defined in the karaf.secured.services property in etc/system.properties (using an LDAP syntax filter).

By default, we only “intercept” (and so secure) the command OSGi services:

karaf.secured.services = (&(osgi.command.scope=*)(osgi.command.function=*))
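
If you want to secure additional OSGi services, you can extend this filter. For instance, to also intercept a hypothetical service interface of your own (com.example.MyService is an assumption for illustration):

karaf.secured.services = (|(&(osgi.command.scope=*)(osgi.command.function=*))(objectClass=com.example.MyService))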

The RBAC definitions themselves are stored in the etc/org.apache.karaf.service.acl.*.cfg configuration files, similar to the etc/jmx.acl*.cfg configuration files used for JMX. The syntax in these files is the same.

RBAC for console commands

As the console commands are actually OSGi services, the direct application of the OSGi services RBAC support is to secure the console commands.

By default, we secure only the OSGi services associated with the console commands (as explained earlier with the karaf.secured.services property).

The RBAC definitions for the console commands are defined in the etc/org.apache.karaf.command.acl.*.cfg configuration files.

You can define one configuration file per command scope. For instance, the etc/org.apache.karaf.command.acl.bundle.cfg configuration file defines the RBAC for the bundle:* commands.

For instance, in the etc/org.apache.karaf.command.acl.bundle.cfg configuration file, we can define:

install = admin
refresh[/.*[-][f].*/] = admin
refresh = manager
restart[/.*[-][f].*/] = admin
restart = manager
start[/.*[-][f].*/] = admin
start = manager
stop[/.*[-][f].*/] = admin
stop = manager
uninstall[/.*[-][f].*/] = admin
uninstall = manager
update[/.*[-][f].*/] = admin
update = manager
watch = admin

The format is command[option]=role.

For instance, in this file we:

  • limit the bundle:install and bundle:watch commands to users with the admin role only
  • limit the bundle:refresh, bundle:restart, bundle:start, bundle:stop, bundle:uninstall, and bundle:update commands with the -f option (meaning executing these commands on “system” bundles) to users with the admin role only
  • allow all other commands (not matching the two previously defined rules) to be executed by users with the manager role

By default, we define RBAC for:

  • bundle:* commands (in the etc/org.apache.karaf.command.acl.bundle.cfg configuration file)
  • config:* commands (in the etc/org.apache.karaf.command.acl.config.cfg configuration file)
  • feature:* commands (in the etc/org.apache.karaf.command.acl.feature.cfg configuration file)
  • jaas:* commands (in the etc/org.apache.karaf.command.acl.jaas.cfg configuration file)
  • kar:* commands (in the etc/org.apache.karaf.command.acl.kar.cfg configuration file)
  • shell:* commands (in the etc/org.apache.karaf.command.acl.shell.cfg configuration file)
  • system:* commands (in the etc/org.apache.karaf.command.acl.system.cfg configuration file)

These RBAC rules apply to both the “local” console and the remote SSH console.

As you don’t really log on to the “local” console, we have to define the “roles” used by the “local” console.

These “local” roles are defined in the karaf.local.roles property in the etc/system.properties configuration file:

karaf.local.roles = admin,manager,viewer

We can see that, when we use the “local” console, the “implicit local user” will have the admin, manager, and viewer roles.

Some book reviews: Instant Apache Camel Messaging System, Learning Apache Karaf, and Instant Apache ServiceMix How-To

November 21, 2013 Posted by jbonofre

I’m pleased to be a reviewer of new books published by Packt.

I received “hard” copies from Packt (thanks for that), and I’m now able to do the reviews.

Instant Apache Camel Messaging System, by Evgeniy Sharapov. Published by Packt publishing in September 2013

This book is a good introduction to Camel. It covers Camel fundamentals.

What is Apache Camel

It’s a quick introduction to Camel, in only four pages. We have a good overview of the Camel basics: what is a component, routes, contexts, EIPs, etc.

We have to take it for what it is: just a quick introduction. Don’t expect a lot of details about the Camel basics; it just provides a very high level overview.

Installation

To be honest, I don’t like this part. It focuses mostly on using Maven with Camel: how to use Camel with Maven, integrate Camel in your IDE (Eclipse or IntelliJ), usage of the archetypes.

I think it’s too restrictive. I would have preferred a quick listing of the different ways to install and use Camel: in a Karaf/ServiceMix container, in a Spring application context, in Tomcat or another application server, etc.

I’m afraid that some users will take “bad habits” reading this part.

Quickstart

This part goes a bit deeper into the CamelContext and RouteBuilder. It’s a good chapter, but I would have focused a bit more on the DSLs (at least Java, Spring, and Blueprint).

The example used is interesting as it uses different components, transformations, predicates, and expressions.

It’s a really good introduction.

Conclusion

It’s a good introduction book, only for new Camel users. If you already know Camel, I’m afraid that you will be disappointed and you won’t learn a lot.

If you are a Camel rookie rider and you want to move forward quickly, with a “ready to use” example, this book is a good one.

I would have expected more details on some key Camel features, especially the EIPs, and some real use cases of EIPs with some components.

Learning Apache Karaf, by Jamie Goodyear, Johan Edstrom, Heath Kesler. Published by Packt publishing in October 2013

I helped a lot on this book and I would like to congratulate my friends Jamie Goodyear, Johan Edstrom, and Heath Kesler. You did a great job, guys!

It’s the perfect book to start with Apache Karaf. All Karaf features are introduced, and more, like Karaf Cellar.

It’s based on Karaf 2.x (an update will be required for Karaf 3.0.0, as a lot of commands, etc. changed).

The global content is great for beginners. If you already know Karaf, you probably know most of the content; however, the book can be helpful to discover some features like Cellar.

Good job, guys!

Instant Apache ServiceMix How-To, by Henryk Konsek. Published by Packt publishing in June 2013

This book is a good complement to the Camel and Karaf ones. Unfortunately, some chapters are a bit redundant: you will find the same information in both books.

However, as Apache ServiceMix is powered by Karaf, starting from Learning Apache Karaf makes sense and gives you details about the core of ServiceMix (the “ServiceMix Kernel”, which is the genesis of Karaf ;)).

This book is a good entry point to ServiceMix.

I would have expected some details about the ServiceMix NMR (naming for instance) and the different distributions.

ServiceMix is more than an umbrella project gathering Karaf, Camel, CXF, ActiveMQ, etc. It also provides some interesting features of its own, like naming. It would have been great to introduce these.

Conclusion

These three books are great for beginners, especially the Karaf one.

I was really glad and pleased to review these books. It’s really a tough job to write this kind of book, and we have to congratulate the authors for their work.

It’s great work, guys!

Talend ESB Continuous Integration, part 2: Maven and commandline

October 24, 2013 Posted by jbonofre

In the first part of the “Talend ESB Continuous Integration” series, we saw how to test the Camel routes created by the studio, by leveraging the Camel Test Kit. We saw how to have automatic testing using Jenkins.

The Maven POM that we created assumes that the route has been deployed (to the local repository or to a remote repository like Apache Archiva).

But it’s not so elegant to have the Studio directly publish to the Archiva repository, especially from a continuous integration perspective.

In this second article, I will show how to use the Talend commandline with Maven, and do nightly builds using Jenkins.

Talend CommandLine

CommandLine introduction

The Talend commandline is the Talend Studio without the GUI. Thanks to the commandline, you can perform a lot of actions, like checkout, export route, publish route, and execute route. Actually, you can do all actions except the design itself ;)

You can find commandline*.sh scripts directly in your Talend Studio installation, or you can launch the commandline using:

./Talend-Studio-linux-gtk-x86_64 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace

You can use the commandline in different modes:

  • Shell Mode:

    ./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace shell
    

    Using this mode, the commandline starts a shell. You can execute the action directly in this shell. Type quit to exit from the commandline.

  • Script Mode:

    ./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace scriptFile /path/to/script
    

    Using this mode, the commandline starts and executes the actions (commands) listed in the script file.

  • Server Mode:

    ./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace startServer -p 8002
    

    Using this mode, the commandline starts a server. You can execute actions (commands) on the commandline using telnet (telnet localhost 8002). Type stopServer (optionally with --force) to exit from the commandline.

The help command provides a list of all commands that you can execute in the commandline.

The first action to perform in the commandline is to init the Talend repository (containing the metadata). The repository can be local or remote.

To init a local repository, simply execute the following command in the Talend commandline:

talend> initLocal

To init a remote repository, you have to use the initRemote command, providing the location of a Talend Administration Center:

talend> initRemote http://localhost:8080/org.talend.administrator

As the commandline performs the actions asynchronously, you can see all commands (and the status) executed by the commandline, using listCommand:

talend> listCommand -a
0:COMPLETED InitRemoteCommand initRemote

Once the repository is initialized, we can list the projects in the repository:

talend> listProject
CI (CI) java desc=[Continuous Integration Sample] storage=[Local]

If you don’t have an existing project, you can create a new one:

talend> createProject -pn "CI" -pd "Continuous Integration Sample" -pl java -pa "jbonofre@talend.com"
talend> listCommand -a
1:COMPLETED CreateProjectCommand createProject -pn 'CI' -pd 'Continuous Integration Sample' -pl 'java' -pa 'jbonofre@talend.com'  name CI description Continuous Integration Sample language java author jbonofre@talend.com

Now, you can log on to a project:

talend> logonProject -pn CI -ul "jbonofre@talend.com" [-up "password"]
talend> listCommand -a
2:COMPLETED LogonProjectCommand log on CI

Once logged on to a project, you can list the routes, jobs, and services in this project:

talend> listRoute
talend> listJob
talend> listService

If you use a remote repository, once logged on to the project, you will have all jobs, routes, and services checked out from the svn.

If you initialized a local repository, you may want to import items (jobs, routes, services) that you exported from a studio.

talend> importItem /home/jbonofre/MyRoute.zip
talend> listCommand -a
3:COMPLETED ImportItemsCommand

Now, you can see the items that you imported:

talend> listRoute
[Samples]
  MyRoute

Now, we can use the commandline to create the route kar file:

talend> exportRoute MyRoute -dd /home/jbonofre
talend> listCommand -a
4:COMPLETED ExportRouteServerCommand exportRoute 'MyRoute' -dd '/home/jbonofre'

We have the MyRoute.kar file created:

jbonofre@vostro:~$ ls -lh|grep -i kar
-rw-r--r--  1 jbonofre jbonofre 231K Oct 24 17:29 MyRoute.kar

Using the Talend Enterprise Edition, instead of creating the kar file locally, we can publish the route features (and all dependencies) directly to a Maven repository (Apache Archiva in my case):

talend> publishRoute MyRoute -pv 0.1.0-SNAPSHOT -g net.nanthrax -a MyRoute -r "http://localhost:8082/archiva/repository/repo-snapshot" -u tadmin -p foo 

We are going to use a combination of these commands in a commandline invoked by Maven.

Prepare the commandline

For our build, we use the script mode on the commandline.

To simplify, we create a commandline-script.sh in the Talend Studio installation directory. The commandline-script.sh contains:

./Talend-Studio-linux-gtk-x86_64 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace scriptFile $1

Publish script

We can now create a publish script called by the commandline-script.sh. This script performs the following actions:

  1. Initialize the repository (local or remote, for this example, I use a remote repository)
  2. Logon on the project
  3. Publish a route

This script uses properties that we will filter with Maven using the resources plugin.

We place the script in the src/scripts/commandline folder, with the name publish:

initRemote ${tac.location}
logonProject -pn ${talend.project} -ul "${tac.user}" -up ${tac.password}
publishRoute ${project.artifactId} -r "${repo.snapshot}" -u ${repo.user} -p ${repo.password} -pv ${project.version} -g ${project.groupId} -a ${project.artifactId}

We are now ready to call the commandline using Maven.

Maven deploy using commandline

To call the commandline with Maven, we use the exec-maven-plugin from codehaus.

Our Maven POM does:

  1. Disable the “default” deploy plugin.
  2. Use the maven-resources-plugin to filter the commandline scripts.
  3. Execute the commandline-script.sh at the deploy phase, using the filtered script files.

Finally, the Maven POM looks like:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>net.nanthrax</groupId>
    <artifactId>MyRoute</artifactId>
    <version>0.1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>MyRoute</name>

    <properties>
        <talend.project>MAIN</talend.project>
        <tac.location>http://localhost:8080/org.talend.administrator</tac.location>
        <tac.user>jbonofre@talend.com</tac.user>
        <tac.password>foobar</tac.password>
        <commandline.location>/opt/talend/commandline</commandline.location>
        <commandline.executable>./commandline-script.sh</commandline.executable>
        <repo.release>http://localhost:8082/archiva/repository/repo-release/</repo.release>
        <repo.snapshot>http://localhost:8082/archiva/repository/repo-snapshot/</repo.snapshot>
        <repo.user>admin</repo.user>
        <repo.password>foobar</repo.password>
    </properties>

    <repositories>
        <repository>
            <id>archiva.repo.release</id>
            <name>Archiva Artifact Repository (release)</name>
            <url>${repo.release}</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>archiva.repo.snapshot</id>
            <name>Archiva Artifact Repository (snapshot)</name>
            <url>${repo.snapshot}</url>
            <releases>
                <enabled>false</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>

    <build>
        <resources>
            <resource>
                <directory>${project.basedir}/src/scripts/commandline</directory>
                <filtering>true</filtering>
                <includes>
                    <include>**/*</include>
                </includes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-deploy-plugin</artifactId>
                <version>2.7</version>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <executions>
                    <execution>
                        <id>export</id>
                        <phase>deploy</phase>
                        <goals>
                            <goal>exec</goal>
                        </goals>
                        <configuration>
                            <executable>${commandline.executable}</executable>
                            <workingDirectory>${commandline.location}</workingDirectory>
                            <arguments>
                                <argument>${project.build.directory}/classes/publish</argument>
                            </arguments>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

By leveraging the commandline, our Maven POM does:

  1. Checkout the Talend metadata repository.
  2. Generate the code using the metadata.
  3. Compile the generated code.
  4. Create the artifact, the Karaf features XML, and deploy on the Archiva repository.

Nightly builds using Jenkins

Now that we have our Maven POM, we can create the job in Jenkins. This way, we will have nightly builds including the latest changes performed by the developers.

Of course, we can “couple” this deploy phase with the unit tests that we did in the first article. We can merge both in the same Maven POM.

It’s interesting to note that we can leverage Maven features (like pluginManagement, profiles, etc.), and especially the reactor (multiple Maven modules), allowing us to build a set of jobs, routes, or services in a row.