
Coming in Karaf 3.0.0: new enterprise JMS feature

December 19, 2013 Posted by jbonofre

In my previous post, I introduced the new enterprise JDBC feature.

Following the same approach, we introduced a new enterprise JMS feature.

JMS feature

Like the JDBC feature, the JMS feature is an optional one. It means that you have to install it first:

karaf@root()> feature:install jms

The jms feature installs the JMS service which is mostly a JMS “client”. It doesn’t install any broker.

For the rest of this post, I’m using an ActiveMQ broker embedded in my Karaf instance:

karaf@root()> feature:repo-add activemq 5.9.0
karaf@root()> feature:install activemq-broker

Like the JDBC feature, the JMS feature provides:

  • an OSGi service
  • jms:* commands
  • a JMX JMS MBean

The OSGi service provides a set of operations to create JMS connection factories, send JMS messages, browse a JMS queue, etc.

The commands and MBean manipulate the OSGi service.


The jms:create command allows you to create a JMS connection factory.

This command automatically creates a connectionfactory-[name].xml blueprint file in the deploy folder.

However, it doesn’t install any bundle or feature providing the JMS connection factory classes. It’s up to you to install beforehand the jar files, bundles, or features providing the actual JMS connection factory.

The jms:create command expects one argument and two options:

karaf@root()> jms:create --help
DESCRIPTION
        jms:create

        Create a JMS connection factory.

SYNTAX
        jms:create [options] name

ARGUMENTS
        name
                The JMS connection factory name

OPTIONS
        -u, --url
                The JMS URL. NB: for WebsphereMQ type, the URL is hostname/port/queuemanager/channel
        --help
                Display this help message
        -t, --type
                The JMS connection factory type (ActiveMQ or WebsphereMQ)
  • name argument is the JMS connection factory name. It’s used in the JNDI name given to the connection factory (e.g. /jms/[name]) and in the blueprint file name in the deploy folder.
  • -t (--type) option is the JMS connection factory type. For now, the command supports two kinds of connection factory: ActiveMQ or WebsphereMQ. If you want to use another kind of connection factory, you can create the connection factory file yourself (using an ActiveMQ file created by the jms:create command as a template).
  • -u (--url) is the URL used by the connection factory. For instance, for ActiveMQ type, the URL looks like tcp://localhost:61616. For WebSphereMQ type, the URL looks like host/port/queuemanager/channel.
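To make the WebsphereMQ URL convention above concrete, here is a small standalone sketch that splits such a URL into its four parts. The parsing code is purely illustrative (it is not taken from Karaf), and the host, queue manager, and channel names are made up:

```java
public class MqUrlDemo {

    // Split a WebsphereMQ-style URL (host/port/queuemanager/channel) into its parts.
    // This parsing is illustrative only; it is not code from Karaf itself.
    static String[] parse(String url) {
        String[] parts = url.split("/");
        if (parts.length != 4) {
            throw new IllegalArgumentException("Expected host/port/queuemanager/channel: " + url);
        }
        return parts;
    }

    public static void main(String[] args) {
        // Hypothetical values, just to show the four positional components
        String[] parts = parse("mqserver/7777/QM_TEST/SYSTEM.DEF.SVRCONN");
        System.out.println("host=" + parts[0] + " port=" + parts[1]
                + " queueManager=" + parts[2] + " channel=" + parts[3]);
    }
}
```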

As I installed the activemq-broker feature in my Karaf, I can create the JMS connection factory for this broker:

karaf@root()> jms:create -t activemq -u tcp://localhost:61616 default

We can see the JMS connection factory file correctly deployed:

karaf@root()> la
151 | Active   |  80 | 0.0.0                 | connectionfactory-default.xml

The connectionfactory-default.xml file has been created in the deploy folder and contains:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="activemqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616" />
    </bean>

    <bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
        <property name="maxConnections" value="8" />
        <property name="connectionFactory" ref="activemqConnectionFactory" />
    </bean>

    <bean id="resourceManager" class="org.apache.activemq.pool.ActiveMQResourceManager" init-method="recoverResource">
        <property name="transactionManager" ref="transactionManager" />
        <property name="connectionFactory" ref="activemqConnectionFactory" />
        <property name="resourceName" value="activemq.localhost" />
    </bean>

    <reference id="transactionManager" interface="javax.transaction.TransactionManager" />

    <service ref="pooledConnectionFactory" interface="javax.jms.ConnectionFactory">
        <service-properties>
            <entry key="name" value="default" />
            <entry key="osgi.jndi.service.name" value="/jms/default" />
        </service-properties>
    </service>

</blueprint>

We can see the JMS connection factories available in Karaf (created by the jms:create command, or by hand) using the jms:connectionfactories command:

karaf@root()> jms:connectionfactories
JMS Connection Factory
/jms/default

The jms:info command gives details about a JMS connection factory:

karaf@root()> jms:info /jms/default 
Property | Value   
product  | ActiveMQ
version  | 5.9.0  

We are now ready to manipulate the JMS broker.

Let’s start by sending some messages to a queue in the JMS broker, using the jms:send command:

karaf@root()> jms:send /jms/default MyQueue "Hello World"
karaf@root()> jms:send /jms/default MyQueue "Hello Karaf"
karaf@root()> jms:send /jms/default MyQueue "Hello ActiveMQ"

The jms:count command counts the number of messages in a JMS queue. We can check if we have our messages:

karaf@root()> jms:count /jms/default MyQueue
Messages Count
3

When using ActiveMQ, the jms:queues and jms:topics commands list the queues and topics available in the JMS broker:

karaf@root()> jms:queues /jms/default
JMS Queues
MyQueue

We can see the MyQueue queue where we sent our messages.

We can also browse the messages in a queue using the jms:browse command, which displays the details of each message:

karaf@root()> jms:browse /jms/default MyQueue
Message ID                              | Content        | Charset | Type | Correlation ID | Delivery Mode | Destination     | Expiration | Priority | Redelivered | ReplyTo | Timestamp                   
ID:vostro-33323-1387464670760-3:2:1:1:1 | Hello World    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:06 CET 2013
ID:vostro-33323-1387464670760-3:3:1:1:1 | Hello Karaf    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:10 CET 2013
ID:vostro-33323-1387464670760-3:4:1:1:1 | Hello ActiveMQ | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:57:14 CET 2013

By default, the jms:browse command displays all messages in the given queue. You can specify a selector with the -s (--selector) option to select the messages to browse.

The jms:consume command consumes messages from a queue. Consuming means that the messages are removed from the queue.

To consume/remove the messages in MyQueue queue, we can use:

karaf@root()> jms:consume /jms/default MyQueue
3 message(s) consumed


All actions that we did using the jms:* commands can be performed using the JMS MBean (the object name is org.apache.karaf:type=jms,name=*).

Moreover, if you want to perform JMS operations programmatically, you can use the org.apache.karaf.jms.JmsService OSGi service.
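To give a feel for the programmatic side, here is a minimal, self-contained sketch mirroring the jms:send / jms:count walkthrough above. The JmsService interface below is a hypothetical stand-in with guessed method names, not the actual org.apache.karaf.jms.JmsService API, and the in-memory implementation only illustrates the call pattern:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JmsServiceSketch {

    // Hypothetical subset of a JMS service interface; the real
    // org.apache.karaf.jms.JmsService signatures may differ.
    interface JmsService {
        void send(String connectionFactory, String queue, String body);
        int count(String connectionFactory, String queue);
    }

    // Trivial in-memory stand-in, just to show the call pattern.
    static class InMemoryJmsService implements JmsService {
        private final Map<String, List<String>> queues = new HashMap<>();

        public void send(String connectionFactory, String queue, String body) {
            queues.computeIfAbsent(queue, q -> new ArrayList<>()).add(body);
        }

        public int count(String connectionFactory, String queue) {
            return queues.getOrDefault(queue, Collections.<String>emptyList()).size();
        }
    }

    public static void main(String[] args) {
        JmsService jms = new InMemoryJmsService();
        jms.send("/jms/default", "MyQueue", "Hello World");
        jms.send("/jms/default", "MyQueue", "Hello Karaf");
        jms.send("/jms/default", "MyQueue", "Hello ActiveMQ");
        System.out.println(jms.count("/jms/default", "MyQueue")); // prints 3
    }
}
```

In a real Karaf runtime you would obtain the service from the OSGi service registry instead of instantiating it yourself.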

Coming in Karaf 3.0.0: new enterprise JDBC feature

December 16, 2013 Posted by jbonofre

Some weeks (months ;)) ago, my colleague Christian (Schneider) did a good job by creating some useful commands to manipulate databases directly in Karaf.

We discussed together where to put those commands. We decided to submit a patch at ServiceMix because we didn’t really think about Karaf 3.0.0 at that time.

Finally, I decided to refactor those commands into an even more useful Karaf feature and prepare it for Karaf 3.0.0.

JDBC feature

By refactoring, I mean that it’s no longer only commands: I built a complete JDBC feature, providing an OSGi service, a set of commands, and an MBean.
The different modules are provided by the jdbc feature.

Like most of the other enterprise features, the jdbc feature is not installed by default. To enable it, you have to install the jdbc feature first:

karaf@root()> feature:install jdbc

This feature provides:

  • a JdbcService OSGi service
  • a set of jdbc:* commands
  • a JDBC MBean (org.apache.karaf:type=jdbc,name=*)

The OSGi service provides a set of operations to create a datasource, execute SQL queries on a datasource, get details about a datasource, etc.

The commands and MBean manipulate the OSGi service.


The first command that you can do is jdbc:create.

This command automatically creates a JDBC datasource file in the deploy folder. It can also try to automatically install the bundles providing the JDBC driver.

The jdbc:create command requires the datasource name and type. The type can be: generic (DBCP), Derby, Oracle, MySQL, Postgres, H2, HSQL.

For instance, if you want to create an embedded Apache Derby database and the corresponding datasource, you can do:

karaf@root()> jdbc:create -t derby -u test -i test

The -t derby option defines a datasource of type derby. The -u test option defines the datasource username. The -i option indicates to try to install the bundles providing the Derby JDBC driver. Finally test is the datasource name.

Now, we can see several things.

First, the command automatically installed the JDBC driver and created the datasource:

karaf@root()> la
87 | Active   |  80 | 10.8.2000002.1181258  | Apache Derby 10.8                                                 
88 | Active   |  80 | 0.0.0                 | datasource-test.xml  

We can see the datasource blueprint file created in the deploy folder:

/opt/apache-karaf/target/apache-karaf-3.0.0$ ls -l deploy/
total 8
-rw-r--r-- 1 jbonofre jbonofre 841 Dec 16 14:10 datasource-test.xml

The datasource-test.xml file has been created using a set of templates depending on the datasource type that we provided.
The content is a blueprint XML definition:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="dataSource" class="org.apache.derby.jdbc.EmbeddedXADataSource">
        <property name="databaseName" value="test"/>
        <property name="createDatabase" value="create" />
    </bean>

    <service ref="dataSource" interface="javax.sql.DataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="/jdbc/test"/>
        </service-properties>
    </service>

    <service ref="dataSource" interface="javax.sql.XADataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="/jdbc/testxa"/>
        </service-properties>
    </service>

</blueprint>

You can use the jdbc:delete command to delete an existing datasource.

The jdbc:datasources command provides the list of available JDBC datasources:

karaf@root()> jdbc:datasources 
Name       | Product      | Version              | URL            
/jdbc/test | Apache Derby | - (1181258) | jdbc:derby:test

If you want to have more details about a JDBC datasource, you can use the jdbc:info command:

karaf@root()> jdbc:info /jdbc/test 
Property       | Value                            
driver.version | - (1181258)             
username       | APP                              
db.version     | - (1181258)             
db.product     | Apache Derby
driver.name    | Apache Derby Embedded JDBC Driver
url            | jdbc:derby:test  

You can execute SQL commands and queries on a given JDBC datasource.

For instance, we can create a table directly in our Derby database using the jdbc:execute command:

karaf@root()> jdbc:execute /jdbc/test "create table person(name varchar(100), nick varchar(100))"

The jdbc:tables command displays all tables available on a given datasource. In our case, we can see our person table:

karaf@root()> jdbc:tables /jdbc/test 
               |           | SYSALIASES       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCHECKS        |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCOLPERMS      |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCOLUMNS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCONGLOMERATES |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSCONSTRAINTS   |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSDEPENDS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSFILES         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSFOREIGNKEYS   |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSKEYS          |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSPERMS         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSROLES         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSROUTINEPERMS  |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSCHEMAS       |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSEQUENCES     |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSTATEMENTS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSSTATISTICS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTABLEPERMS    |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTABLES        |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSTRIGGERS      |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSVIEWS         |          |         |            | SYSTEM TABLE | SYS         |           |                          
               |           | SYSDUMMY1        |          |         |            | SYSTEM TABLE | SYSIBM      |           |                          
               |           | PERSON           |          |         |            | TABLE        | APP         |           |     

Now, also using the jdbc:execute command, we can insert some records in our person table:

karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('foo','bar')"
karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('Christian Schneider','cschneider')"
karaf@root()> jdbc:execute /jdbc/test "insert into person(name,nick) values('JB Onofre','jbonofre')"

The jdbc:query command executes SQL queries returning result. For instance, we can select the records in the person table:

karaf@root()> jdbc:query /jdbc/test "select * from person"
NICK       | NAME               
bar        | foo                
cschneider | Christian Schneider
jbonofre   | JB Onofre     

JDBC MBean and JDBC OSGi Service

All actions that we did using the jdbc:* commands can be performed using the JDBC MBean.

Moreover, if you want to perform JDBC operations or manipulate datasources programmatically, you can use the org.apache.karaf.jdbc.JdbcService OSGi service.

Next one: jms

Following the same idea, I prepared a new jms feature providing an OSGi service, commands, and an MBean to manipulate JMS connection factories, get information on the broker, send/consume messages, etc.

Coming in Karaf 3.0.0: new enterprise JNDI feature

December 13, 2013 Posted by jbonofre

In previous Karaf versions (2.x), the JNDI support was “basic”.
We just leveraged Aries JNDI to support the osgi:service JNDI scheme to reference OSGi services using JNDI names.

However, we didn’t provide a fully functional JNDI initial context, nor any tooling around JNDI.

As part of the new enterprise features coming with Karaf 3.0.0, the JNDI support is now more “complete”.

Add JNDI support

Like most of the other enterprise features, the JNDI feature is an optional one. It means that you have to install the jndi feature first:

karaf@root()> feature:install jndi

The jndi feature installs several parts.

Ready to use initial context

Like in previous version, Karaf provides a fully compliant implementation of the OSGi Alliance JNDI Service Specification. This specification details how to advertise InitialContextFactory and ObjectFactories in an OSGi environment. It also defines how to obtain services from services registry via JNDI.

It’s now possible to use the JNDI initial context directly. Karaf provides a fully functional initial context where you can look up both osgi:service scheme names and regular JNDI names.

You can do:

Context context = new InitialContext();
MyBean myBean = (MyBean) context.lookup("my/bean/name");

You can use the osgi:service scheme to access the OSGi service registry using JNDI:

Context context = new InitialContext();
MyBean myBean = (MyBean) context.lookup("osgi:service/mybean");

JNDI Service, Commands and MBean

Karaf 3.0.0 provides an OSGi service dedicated to JNDI.

The interface of this JNDI service is org.apache.karaf.jndi.JndiService and it’s registered when installing the jndi feature.

You can manipulate the JNDI service using shell commands.

You can list the JNDI names using jndi:names:

karaf@root()> jndi:names
JNDI Name         | Class Name                                    
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can create a new JNDI name from another one (a kind of alias) using the jndi:alias command:

karaf@root()> jndi:alias osgi:service/jndi local/service/jndi
karaf@root()> jndi:names
JNDI Name          | Class Name                                    
osgi:service/jndi  | org.apache.karaf.jndi.internal.JndiServiceImpl
local/service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

For instance, here, we bind a name from the “special” osgi:service scheme as a “regular” JNDI name.

You can directly bind an OSGi service (identified by its service ID) to a JNDI name:

karaf@root()> jndi:bind 344 local/service/kar
karaf@root()> jndi:names
JNDI Name         | Class Name                                    
local/service/kar | org.apache.karaf.kar.internal.KarServiceImpl  
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can also alias the local/service/kar name directly with service/kar:

karaf@root()> jndi:alias local/service/kar service/kar
karaf@root()> jndi:names
JNDI Name         | Class Name                                    
local/service/kar | org.apache.karaf.kar.internal.KarServiceImpl  
service/kar       | org.apache.karaf.kar.internal.KarServiceImpl  
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can unbind the service/kar name:

karaf@root()> jndi:unbind service/kar
karaf@root()> jndi:names
JNDI Name         | Class Name                                    
local/service/kar | org.apache.karaf.kar.internal.KarServiceImpl  
osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl

You can get all JNDI names, and manipulate the JNDI service using a new JMX JNDI MBean. The object name to use is org.apache.karaf:type=jndi,name=*.
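Since org.apache.karaf:type=jndi,name=* is a JMX object name pattern, a small standalone example can show how such a pattern matches concrete MBean names using the standard javax.management API (the instance name "root" below is an assumption for illustration):

```java
import javax.management.ObjectName;

public class JndiMBeanNameDemo {

    // Check a concrete MBean name against the org.apache.karaf:type=jndi,name=* pattern.
    static boolean matches(String concreteName) throws Exception {
        ObjectName pattern = new ObjectName("org.apache.karaf:type=jndi,name=*");
        // ObjectName.apply tests whether this (possibly pattern) name matches the argument
        return pattern.apply(new ObjectName(concreteName));
    }

    public static void main(String[] args) throws Exception {
        // "root" as the instance name is a hypothetical example
        System.out.println(matches("org.apache.karaf:type=jndi,name=root")); // prints true
        System.out.println(matches("org.apache.karaf:type=jmx,name=root"));  // prints false (different type)
    }
}
```

The same pattern-matching applies when querying the MBean server for the JNDI MBean from a JMX client.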


One of our purposes for Karaf 3.0.0 is to provide more services, commands, and MBeans, to make Karaf a more complete enterprise OSGi container.

While we already provide a bunch of features, a lot are not really “visible” to end users due to some “missing” commands or MBeans.

It’s a key point for Karaf 3.x releases.

Coming in Karaf 3.0.0: RBAC support for OSGi services and console commands

December 12, 2013 Posted by jbonofre

In a previous post, we saw a new Karaf feature: support of user groups and Role-Based Access Control (RBAC) for the JMX layer.

We extended the RBAC support to the OSGi services, and by side effect to the console commands (as a console command is also an OSGi service).

RBAC for OSGi services

The JMX RBAC support uses an MBeanServerBuilder. The KarafMBeanServerBuilder “intercepts” the calls to the MBeans, checks the definitions (in the etc/jmx.acl.*.cfg configuration files), and decides whether the call can be performed or not.

Regarding the RBAC support for OSGi services, we use a similar mechanism.

The Karaf Service Guard provides a service listener which intercepts the service calls, and checks whether the call to the service can be performed or not.

The list of “secured” OSGi services is defined in a property in the etc/ configuration (using an LDAP filter syntax).

By default, we only “intercept” (and so secure) the command OSGi services, with this filter: (&(osgi.command.scope=*)(osgi.command.function=*))

The RBAC definitions themselves are stored in etc/org.apache.karaf.service.acl.*.cfg configuration files, similar to the etc/jmx.acl*.cfg configuration files used for JMX. The syntax in these files is the same.

RBAC for console commands

As the console commands are actually OSGi services, the direct application of the OSGi services RBAC support is to secure the console commands.

By default, we secure only the OSGi services associated to the console commands (as explained earlier in this post).

The RBAC definitions for the console commands are stored in the etc/org.apache.karaf.command.acl.*.cfg configuration files.

You can define one configuration file per command scope. For instance, the etc/org.apache.karaf.command.acl.bundle.cfg configuration file defines the RBAC for the bundle:* commands.

For instance, in the etc/org.apache.karaf.command.acl.bundle.cfg configuration file, we can define:

install = admin
refresh[/.*[-][f].*/] = admin
refresh = manager
restart[/.*[-][f].*/] = admin
restart = manager
start[/.*[-][f].*/] = admin
start = manager
stop[/.*[-][f].*/] = admin
stop = manager
uninstall[/.*[-][f].*/] = admin
uninstall = manager
update[/.*[-][f].*/] = admin
update = manager
watch = admin

The format is command[option]=role.

For instance, in this file we:

  • limit the bundle:install and bundle:watch commands to users with the admin role
  • limit the bundle:refresh, bundle:restart, bundle:start, bundle:stop, bundle:uninstall, and bundle:update commands with the -f option (meaning executing these commands on “system” bundles) to users with the admin role
  • allow all other invocations (not matching the two previously defined rules) to users with the manager role
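The part between the slashes in a rule like refresh[/.*[-][f].*/] is a regular expression tested against the command invocation. As an illustration only, this uses plain java.util.regex, not the actual Karaf ACL machinery, and the argument strings are invented:

```java
import java.util.regex.Pattern;

public class AclPatternDemo {

    // The regex between the slashes in refresh[/.*[-][f].*/]
    static final Pattern FORCE = Pattern.compile(".*[-][f].*");

    // True when the invocation would fall under the admin-only rule
    static boolean requiresAdmin(String invocation) {
        return FORCE.matcher(invocation).matches();
    }

    public static void main(String[] args) {
        System.out.println(requiresAdmin("-f 44")); // prints true: -f requires the admin role
        System.out.println(requiresAdmin("44"));    // prints false: the manager rule applies
    }
}
```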

By default, we define RBAC for:

  • bundle:* commands (in the etc/org.apache.karaf.command.acl.bundle.cfg configuration file)
  • config:* commands (in the etc/org.apache.karaf.command.acl.config.cfg configuration file)
  • feature:* commands (in the etc/org.apache.karaf.command.acl.feature.cfg configuration file)
  • jaas:* commands (in the etc/org.apache.karaf.command.acl.jaas.cfg configuration file)
  • kar:* commands (in the etc/org.apache.karaf.command.acl.kar.cfg configuration file)
  • shell:* commands (in the etc/ configuration file)
  • system:* commands (in the etc/org.apache.karaf.command.acl.system.cfg configuration file)

These RBAC rules apply to both the “local” console and the remote SSH console.

As you don’t really log on to the “local” console, we have to define the roles used by the “local” console.

These “local” roles are defined in the karaf.local.roles property in the etc/ configuration file:

karaf.local.roles = admin,manager,viewer

We can see that, when we use the “local” console, the “implicit local user” will have the admin, manager, and viewer roles.

Some book reviews: Instant Apache Camel Messaging System, Learning Apache Karaf, and Instant Apache ServiceMix How-To

November 21, 2013 Posted by jbonofre

I’m pleased to have been a reviewer on new books published by Packt.

I received a “hard” copy from Packt (thanks for that), and I’m now able to do the review.

Instant Apache Camel Messaging System, by Evgeniy Sharapov. Published by Packt publishing in September 2013

This book is a good introduction to Camel. It covers Camel fundamentals.

What is Apache Camel

It’s a quick introduction to Camel, in only four pages. We get a good overview of the Camel basics: what a component is, routes, contexts, EIPs, etc.

We have to take it for what it is: just a quick introduction. Don’t expect a lot of details about the Camel basics; it just provides a very high-level overview.


To be honest, I don’t like this part. It focuses mostly on using Maven with Camel: how to use Camel with Maven, integrating Camel in your IDE (Eclipse or IntelliJ), and usage of the archetypes.

I think it’s too restrictive. I would have preferred a quick listing of the different ways to install and use Camel: in a Karaf/ServiceMix container, in a Spring application context, in Tomcat or another application server, etc.

I’m afraid that some users will take “bad habits” reading this part.


This part goes a bit deeper into CamelContext and RouteBuilder. It’s a good chapter, but I would have focused a bit more on the DSLs (at least Java, Spring, and Blueprint).

The example used is interesting as it uses different components, transformation, predicates and expressions.

It’s a really good introduction.


It’s a good introduction book, only for new Camel users. If you already know Camel, I’m afraid that you will be disappointed and you won’t learn a lot.

If you are a Camel rookie rider, and you want to move forward quickly with a “ready to use” example, this book is a good one.

I would have expected more details on some key Camel features, especially the EIPs, and some real use cases of EIPs with some components.

Learning Apache Karaf, by Jamie Goodyear, Johan Edstrom, Heath Kesler. Published by Packt publishing in October 2013

I helped a lot on this book and I would like to congratulate my friends Jamie Goodyear, Johan Edstrom, and Heath Kesler. You did a great job, guys!

It’s the perfect book to start with Apache Karaf. All Karaf features are introduced, and more, like Karaf Cellar.

It’s based on Karaf 2.x (an update will be required for Karaf 3.0.0, as a lot of commands, etc. changed).

The global content is great for beginners. If you already know Karaf, you probably know most of the content; however, the book can be helpful to discover some features like Cellar.

Good job guys !

Instant Apache ServiceMix How-To, by Henryk Konsek. Published by Packt publishing in June 2013

This book is a good complement to the Camel and Karaf ones. Unfortunately, some chapters are a bit redundant: you will find the same information in both books.

However, as Apache ServiceMix is powered by Karaf, starting from Learning Apache Karaf makes sense and gives you details about the core of ServiceMix (the “ServiceMix Kernel”, which is the genesis of Karaf ;)).

This book is a good jump to ServiceMix.

I would have expected some details about the ServiceMix NMR (naming, for instance) and the different distributions.

ServiceMix is more than an umbrella project gathering Karaf, Camel, CXF, ActiveMQ, etc. It also provides some interesting features like Naming, etc. It would have been great to introduce this.


These three books are great for beginners, especially the Karaf one.

I was really glad and pleased to review these books. It’s really a tough job to write this kind of book, and we have to congratulate the authors for their work.

It’s a great work guys !

Coming in Karaf 3.0.0: subshell and completion mode

October 10, 2013 Posted by jbonofre

If you are a Karaf user, you probably know that Karaf is very extensible: you can add features in Karaf to provide new functionalities.

For instance, you can install Camel, ActiveMQ, CXF, Cellar, etc in your Karaf runtime.

Most of these features provide new commands:
– Camel provides camel:* commands to manipulate the Camel Context, the routes, etc.
– CXF provides cxf:* commands to manipulate the CXF buses, endpoints, etc.
– ActiveMQ provides activemq:* commands to manipulate brokers.
– Cellar provides cluster:* commands to manipulate cluster nodes, cluster groups, etc.
– and so on

If you install some features like this, the number of commands available in the Karaf shell console is really impressive. And it’s not always easy to find the one that we need.

That’s why subshell support has been introduced.


Karaf now uses the command scope to create subshells “on the fly”: the commands are grouped by subshell. As you will see later, depending on the completion mode that you use, you will be able to see only the commands in the current subshell, and change from one subshell to another.

Let’s take an example. In Karaf itself, we have commands to manipulate bundles and commands to manipulate features, for instance:

  • bundle:list lists the bundles
  • bundle:start starts bundles
  • bundle:stop stops bundles
  • feature:list lists the Karaf features
  • feature:repo-list lists the Karaf features repositories

In previous Karaf versions, to list bundles and features, you did something like this:

karaf@root> osgi:list
karaf@root> features:list

In Karaf 3.0.0, you can still do the same (just using the new name of the commands):

karaf@root()> bundle:list
karaf@root()> feature:list

But you can also use subshell:

karaf@root()> bundle
karaf@root(bundle)> list
karaf@root(bundle)> feature
karaf@root(feature)> list


karaf@root()> bundle
karaf@root(bundle)> list
karaf@root(bundle)> exit
karaf@root()> feature
karaf@root(feature)> list

We can note several things here:

  • You have commands to go into a subshell. These commands are created on the fly by Karaf using the scope of the commands. Here, we use the bundle and feature commands to go into the bundle and feature subshell.
  • You can see your current subshell location directly in the prompt: for instance, karaf@root(bundle)> shows that we are in the bundle subshell.
  • We can switch directly from one subshell to another using the subshell name as a command:

    karaf@root(bundle)> feature
  • You have a new exit command to get out of the current subshell and return to the root level.

You have the choice between different completion modes, depending on the behaviour that you prefer.

Completion Mode

The completion mode defines the behaviour of the TAB key to complete commands.

You have three different modes available: GLOBAL, FIRST, and SUBSHELL.

You can define your default completion mode using the completionMode property in etc/ file. By default, you have:

completionMode = GLOBAL

But, you can also change the completion mode “on the fly” (while using the Karaf shell console) using a new command: shell:completion:

karaf@root()> shell:completion
karaf@root()> shell:completion FIRST
karaf@root()> shell:completion

shell:completion can inform you about the current completion mode used. You can also provide the new completion mode that you want.

GLOBAL completion mode

GLOBAL completion mode is the default one in Karaf 3.0.0 (mostly for transition purposes).

GLOBAL mode doesn’t really use subshells: it gives the same behavior as in previous Karaf versions.

When you type the TAB key, whatever subshell you are in, the completion will display all commands and all aliases:

karaf@root()> <TAB>
karaf@root()> Display all 273 possibilities? (y or n)
karaf@root()> feature
karaf@root(feature)> <TAB>
karaf@root(feature)> Display all 273 possibilities? (y or n)

FIRST completion mode

FIRST completion mode is an alternative to the GLOBAL completion mode.

If you type the TAB key on the root level subshell, the completion will display the commands and the aliases from all subshells (as in GLOBAL mode). However, if you type the TAB key when you are in a subshell, the completion will display only the commands of the current subshell:

karaf@root()> shell:completion FIRST
karaf@root()> <TAB>
karaf@root()> Display all 273 possibilities? (y or n)
karaf@root()> feature
karaf@root(feature)> <TAB>
info install list repo-add repo-list repo-remove uninstall version-list
karaf@root(feature)> exit
karaf@root()> log
karaf@root(log)> <TAB>
clear display exception-display get log set tail

SUBSHELL completion mode

SUBSHELL completion mode is the real subshell mode (to be honest, it’s my preferred one ;)).

If you type the TAB key on the root level, the completion displays the subshell commands (to go into a subshell), and the global aliases. Once you are in a subshell, if you type the TAB key, the completion displays the commands of the current subshell:

karaf@root()> shell:completion SUBSHELL
karaf@root()> <TAB>
* bundle cl config dev feature help instance jaas kar la ld lde log log:list man package region service shell ssh system
karaf@root()> bundle
karaf@root(bundle)> <TAB>
capabilities classes diag dynamic-import find-class headers info install list refresh requirements resolve restart services start start-level stop
uninstall update watch
karaf@root(bundle)> exit
karaf@root()> camel
karaf@root(camel)> <TAB>
backlog-tracer-dump backlog-tracer-info backlog-tracer-start backlog-tracer-stop context-info context-list context-start context-stop endpoint-list route-info route-list route-profile route-reset-stats
route-resume route-show route-start route-stop route-suspend


The “old” fully qualified command names are still valid. So, you don’t have to change anything in your scripts, you can use:

karaf@root()> feature:install
karaf@root()> ssh:ssh

You have the choice: use the completion mode that you prefer; you can always change the mode whenever you want using the shell:completion command.

My preference is for the SUBSHELL completion mode. Using this mode, you don’t see a bunch of commands on the root level, just the subshell switch commands. I think it’s clear and straightforward. When you “extend” your Karaf runtime with a lot of additional features, it’s interesting to have the commands grouped by subshell.

Coming in Karaf 3.0.0: JAAS users, groups, roles, and ACLs

October 4, 2013 Posted by jbonofre

This week I worked with David Bosschaert. David proposed a patch for Karaf 3.0.0 to add the notion of groups and use ACLs for JMX.

He posted a blog entry about that:

David’s blog is very detailed, mostly in terms of implementation, the usage of the interceptor, etc. This blog post is more about pure end-user usage: how to configure groups, JMX ACLs, etc.

JAAS users, groups, and roles

Karaf uses JAAS for user authentication and authorisation. By default, it uses the PropertiesLoginModule, which uses the etc/ file to store the users.

The etc/ file has the following format:

user = password,role1,role2,…

For instance:

karaf = karaf,admin

which means we have a user karaf, with password karaf, and the admin role.

Actually, the roles are not really used in Karaf: for instance, when you use ssh or JMX, Karaf checks the principal and credentials (basically the username and password) but it doesn’t really use the roles. All users have exactly the same permissions (basically all permissions): they can execute any shell command, access any MBean, and call any operation on these MBeans.

Moreover, the roles could “only” be assigned directly to users. So if two different users needed the same set of roles, the list had to be duplicated for each of them.

So, in addition to users and roles, we introduced JAAS groups.

A user can be a member of a group or have roles assigned directly (as previously).

A group typically has one or more roles assigned. A user that is part of that group gets these roles too.
Finally, a user has the union of the roles associated with his groups, together with his own roles.

Basically, the etc/ file doesn’t change in terms of format. We just introduced a prefix to identify a group: _g_. A “user” with the _g_: prefix is actually a group.
So a group is defined like a user, and it’s possible to use a group in the list of roles of a user:

# users
karaf = karaf,_g_:admingroup
manager = manager,_g_:managergroup
other = other,_g_:managergroup,otherrole

_g_\:admingroup = admin,viewer,manager
_g_\:managergroup = viewer,manager
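The role resolution described above can be sketched in plain Java. This is only an illustration of the union semantics, not Karaf’s actual PropertiesLoginModule code; the user and group names are taken from the example file above (passwords omitted):

```java
import java.util.*;

public class RoleUnion {
    static final String GROUP_PREFIX = "_g_:";

    // Effective roles = the user's own roles, plus the roles of every group he belongs to
    static Set<String> effectiveRoles(String user,
                                      Map<String, List<String>> assignments) {
        Set<String> roles = new TreeSet<>();
        for (String entry : assignments.getOrDefault(user, List.of())) {
            if (entry.startsWith(GROUP_PREFIX)) {
                // group membership: pull in all roles assigned to the group
                roles.addAll(assignments.getOrDefault(entry, List.of()));
            } else {
                roles.add(entry); // role assigned directly to the user
            }
        }
        return roles;
    }

    public static void main(String[] args) {
        Map<String, List<String>> assignments = new HashMap<>();
        assignments.put("karaf", List.of("_g_:admingroup"));
        assignments.put("other", List.of("_g_:managergroup", "otherrole"));
        assignments.put("_g_:admingroup", List.of("admin", "viewer", "manager"));
        assignments.put("_g_:managergroup", List.of("viewer", "manager"));

        System.out.println(effectiveRoles("karaf", assignments)); // [admin, manager, viewer]
        System.out.println(effectiveRoles("other", assignments)); // [manager, otherrole, viewer]
    }
}
```

Note how the user other ends up with the two roles of managergroup plus his directly assigned otherrole: exactly the union described above.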

We updated the jaas:* shell commands to be able to manage groups, roles, and users:

karaf@root> jaas:realm-manage --realm karaf
karaf@root> jaas:group-add managergroup
karaf@root> jaas:group-add --help
karaf@root> jaas:user-add joe joe
karaf@root> jaas:group-add joe managergroup
karaf@root> jaas:group-role-add managergroup manager
karaf@root> jaas:group-role-add managergroup viewer
karaf@root> jaas:update
karaf@root> jaas:realm-manage --realm karaf
karaf@root> jaas:user-list
User Name | Group | Role
karaf | admingroup | admin
karaf | admingroup | manager
karaf | admingroup | viewer
joe | managergroup | manager
joe | managergroup | viewer

Thanks to groups, it’s possible to factorise the roles, and easily share sets of roles between different users.

Define JMX ACLs based on roles

As explained before, the roles were not really used by Karaf. On the JMX layer, for instance, using jconsole with the karaf user, you were able to see all MBeans and perform all operations.

So, we introduced support for ACLs (Access Control Lists) on JMX.

Now, whenever a JMX operation is invoked, the roles of the current user are checked against the required roles for this operation.

The ACLs are defined using configuration files in the Karaf etc folder.

The ACL configuration file is prefixed with jmx.acl and completed with the MBean ObjectName that it applies to.

For example, to define the ACL on a given MBean, you create a configuration file named after its ObjectName (prefixed with jmx.acl) in the etc/ folder.
It’s possible to define more generic configuration files: one for a whole domain (applied to all MBeans in this domain), or the most generic one (jmx.acl.cfg) applied to all MBeans.

A very simple configuration file looks like:

# operation = roles
test = admin
getVal = manager,viewer

The configuration file supports different syntaxes to provide fine-grained operation ACLs:

  • Specific match for the invocation, including arguments value:

    test(int)["17"] = role1

    It means that only users with role1 assigned will be able to invoke the test operation with 17 as argument value.
  • Regex match for the invocation:

    test(int)[/[0-9]/] = role2

    It means that only users with role2 assigned will be able to invoke the test operation with an argument matching the regex (here, a single digit between 0 and 9).
  • Signature match for the invocation:

    test(int) = role3

    It means that only users with role3 assigned will be able to invoke test operation.
  • Method name match for the invocation:

    test = role4

    It means that only the users with role4 assigned will be able to invoke any test operation (whatever the argument list is).
  • A method name wildcard match:

    te* = role5

    It means that only the users with role5 assigned will be able to invoke any operation matching the te* expression.

Karaf looks for required roles using the following process:

  1. The most specific configuration file (the one named after the full MBean ObjectName) is tried first.
  2. If no matching definition is found in this specific configuration file, a more generic, domain-level configuration file is inspected.
  3. If no matching definition is found in the domain specific configuration file, the most generic configuration file is inspected: etc/jmx.acl.cfg.

The ACLs work for any kind of MBean, including the ones from the JVM itself. For instance, it’s possible to create an etc/ configuration file for the JVM Memory MBean containing:

gc = manager

It means that only the users with the manager role assigned will be able to invoke the gc operation of the JVM Memory MBean.

It’s also possible to define more advanced configurations. For instance, say we want bundles with an ID between 0 and 49 to be stoppable only by an admin, while the other bundles can be stopped by a manager. To do so, we create the corresponding etc/ configuration file containing:

stop(java.lang.String)[/([1-4])?[0-9]/] = admin
stop = manager
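To convince yourself that the regex above really covers bundle IDs 0 to 49, you can check it with a quick snippet. This is a standalone illustration using java.util.regex directly, not Karaf’s ACL machinery:

```java
import java.util.regex.Pattern;

public class AclRegexCheck {
    public static void main(String[] args) {
        // same regex as in the ACL: an optional tens digit 1-4, then a units digit
        Pattern p = Pattern.compile("([1-4])?[0-9]");
        System.out.println(p.matcher("0").matches());   // true  -> admin required
        System.out.println(p.matcher("49").matches());  // true  -> admin required
        System.out.println(p.matcher("50").matches());  // false -> falls through to "stop = manager"
    }
}
```

A bundle ID that doesn’t match the regex rule falls through to the less specific stop = manager line.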

The etc/jmx.acl.cfg configuration file is the global configuration for the invocation of any MBean that doesn’t have a more specific ACL.
By default, we define this configuration:

list* = viewer
get* = viewer
is* = viewer
set* = admin
* = admin
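The wildcard rules above can be illustrated with a small sketch that resolves an operation name to the required role. It simply checks the patterns from most to least specific, which is a simplification of Karaf’s actual matching code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DefaultJmxAcl {
    // mirror of the default jmx.acl.cfg rules, most specific patterns first
    static final Map<String, String> ACL = new LinkedHashMap<>();
    static {
        ACL.put("list*", "viewer");
        ACL.put("get*", "viewer");
        ACL.put("is*", "viewer");
        ACL.put("set*", "admin");
        ACL.put("*", "admin");
    }

    static String requiredRoles(String operation) {
        for (Map.Entry<String, String> rule : ACL.entrySet()) {
            String pattern = rule.getKey();
            // all patterns here end with '*', so a simple prefix check is enough
            String prefix = pattern.substring(0, pattern.length() - 1);
            if (operation.startsWith(prefix)) {
                return rule.getValue();
            }
        }
        return null; // unreachable: the '*' catch-all always matches
    }

    public static void main(String[] args) {
        System.out.println(requiredRoles("getStatus")); // viewer
        System.out.println(requiredRoles("setValue"));  // admin
        System.out.println(requiredRoles("restart"));   // admin (catch-all)
    }
}
```

So with the default configuration, read-only operations (list*, get*, is*) only require the viewer role, while anything that changes state falls under admin.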

We introduced a new MBean: org.apache.karaf:type=security,area=jmx.
The purpose of this MBean is to check whether the current user can access a certain MBean or invoke a specific operation on it.
This MBean can be used by management clients to decide whether to show certain MBeans or operations to the end user.

What’s next?

Now, David and I are working on ACL/RBAC for:

  • shell commands: as we have ACLs for MBeans, it makes sense to apply the same to shell commands.
  • OSGi services: the same can be applied to any OSGi service.

I would like to thank David for this great job. It’s a great addition to Karaf and a new very strong reason to promote Karaf 3 😉

Karaf and Pax Web: disabling reverse lookup

September 29, 2013 Posted by jbonofre

Karaf can be a full WebContainer just by installing the war feature:

features:install war

The war feature will install Pax Web and the Jetty web server. You can configure Pax Web using the etc/org.ops4j.pax.web.cfg configuration file. In this configuration, you can define a Jetty configuration file (like jetty.xml) using the following property:

org.ops4j.pax.web.config.file=${karaf.home}/etc/jetty.xml
Now, using the etc/jetty.xml, you have a complete access to the Jetty configuration, especially, you can define the Connector configuration.

In the “default” connector (bound to port 8181 by default), you can set “advanced” configuration.

An interesting configuration is the reverse lookup. Depending on your network, DNS resolution may not work. By default, Jetty will try to do reverse DNS resolution, and if you can’t reach a DNS server from the machine, you may encounter “bad response times”, because you have to wait for the timeout of each DNS lookup.
So, in that case, it makes sense to disable the reverse lookup. You can disable reverse lookup per Jetty connector, using etc/jetty.xml and adding the resolveNames option on the connector:

  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
        <Set name="host"><Property name="" /></Set>
        <Set name="port"><Property name="jetty.port" default="8040"/></Set>
        <Set name="maxIdleTime">300000</Set>
        <Set name="Acceptors">2</Set>
        <Set name="statsOn">false</Set>
        <Set name="confidentialPort">8443</Set>
        <Set name="lowResourcesConnections">20000</Set>
        <Set name="lowResourcesMaxIdleTime">5000</Set>
        <Set name="resolveNames">false</Set>
      </New>
    </Arg>
  </Call>

Pax Logging: loggers log level

September 29, 2013 Posted by jbonofre

As you probably know, Apache Karaf uses Pax Logging as its logging system.

Pax Logging is an OPS4j project (Open Participation Software 4 Java) which provides a fully OSGi compliant framework for logging. Pax Logging leverages a bunch of logging frameworks like slf4j, logback, log4j, Avalon, etc. It gathers all the configuration and the actual logging mechanisms in a central way. It means that, in your applications/bundles, you can use slf4j or log4j, it doesn’t matter: under the hood you will use Pax Logging.

Karaf provides a bunch of shell commands and an MBean for logging:

  • log:display to see the log
  • log:display-exception to see only the exceptions
  • log:tail to display and “follow on the fly” the log
  • log:set to change the log level of a particular logger (or the rootLogger)
  • log:get to get the current log level of a particular logger (or the rootLogger)

The default configuration is a log4j configuration described in etc/org.ops4j.pax.logging.cfg. It’s where you define the loggers with their levels, and the appenders with their conversion patterns.

However, sometimes, you may want to disable logging for a particular class or package. A typical example is when you use the Karaf webcontainer (provided by Pax Web), and you have a monitoring tool (like Nagios or Zabbix) which accesses a URL in a “bad manner”. By “bad manner”, I mean that the monitoring tool just sends a “ping” most of the time, not a complete, valid HTTP request.

In that case, you may see “WARNING” messages in the log, coming from the Jetty web server. The messages look like:

22:25:20,948 | WARN | tp2029485198-177 | pse.jetty.servlet.ServletHandler 514 | 54 - org.eclipse.jetty.util - 7.6.7.v20120910 | /system/console/bundles
    at org.ops4j.pax.web.service.internal.$Proxy10.service(Unknown Source)[71:org.ops4j.pax.web.pax-web-runtime:1.1.4]
    at org.eclipse.jetty.servlet.ServletHolder.handle([62:org.eclipse.jetty.servlet:7.6.7.v20120910]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle([62:org.eclipse.jetty.servlet:7.6.7.v20120910]

As you know the source of this warn message, you may want to “increase” the log level to ERROR (to avoid seeing the WARN messages), or to completely disable the log messages coming from the Jetty ServletHandler.

To change the log level, in etc/org.ops4j.pax.logging.cfg, you can create a new logger dedicated to Jetty, and define the log level for this logger:

log4j.logger.org.eclipse.jetty = ERROR

or you can completely disable the logging coming from the servlet handler:

log4j.logger.org.eclipse.jetty.servlet.ServletHandler = OFF

OFF is a “special” log level which disables the logging.

Another “use case” for this is the sshd server embedded in Karaf. You may know that you can access Karaf using a simple ssh client (OpenSSH on Unix, Putty on Windows, or the client provided with Karaf). By default, the Karaf sshd server logs all session connections at DEBUG level. So if you turn the rootLogger to DEBUG, you will see a lot of “noise” in the log. So, it makes sense to change the sshd server log level to INFO, just for the channel session:

log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO

Apache Hadoop and Karaf, Article 1: Karaf as HDFS client

July 8, 2013 Posted by jbonofre

Maybe some of you remember that, a couple of months ago, I posted some messages on the Hadoop mailing list about OSGi support in Hadoop.

In order to move forward on this topic, instead of an important refactoring, I started to work on standalone and atomic bundles that we can deploy in Karaf. The purpose is to avoid changing Hadoop core, while providing good Hadoop support directly in Karaf.

I worked on Hadoop trunk (3.0.0-SNAPSHOT) and prepared patches.

I also deployed bundles on my Maven repository to give users the possibility to directly deploy karaf-hadoop in a running Karaf instance.

The purpose is to explain what you can do, the value of this approach, and maybe you will vote to “include” it in Hadoop directly 😉

To explain exactly what you can do, I prepared a series of blog posts:

  • Article 1: Karaf as HDFS client. This is the first post. We will see the hadoop-karaf bundle installation, the hadoop and hdfs Karaf shell commands, and how you can use HDFS to store bundles or features using the HDFS URL handler.
  • Article 2: Karaf as MapReduce job client. We will see how to run MapReduce jobs directly from Karaf, and the “hot-deploy-and-run” of MapReduce jobs using the Hadoop deployer.
  • Article 3: Exposing Hadoop, HDFS, Yarn, and MapReduce features as OSGi services. We will see how to use Hadoop features programmatically thanks to OSGi services.
  • Article 4: Karaf as a HDFS datanode (and eventually namenode). Here, more than using Karaf as a simple HDFS client, Karaf will be part of HDFS acting as a datanode, and/or namenode.
  • Article 5: Karaf, Camel, Hadoop all together. In this article, we will use the Hadoop OSGi services now available in Karaf inside Camel routes (plus the camel-hdfs component).
  • Article 6: Karaf as complete Hadoop container. I will explain here what I did in Hadoop to add a complete support of OSGi and Karaf.

Karaf as HDFS client

Just a reminder about HDFS (Hadoop Distributed FileSystem).

HDFS is composed of:
– a namenode hosting the metadata of the filesystem (directories, block locations, file permissions or modes, …). There is only one namenode per HDFS, and the metadata is stored in memory by default.
– a set of datanodes hosting the file blocks. Files are composed of blocks (like in all filesystems). The blocks are located on different datanodes. The blocks can be replicated.

An HDFS client connects to the namenode to execute actions on the filesystem (ls, rm, mkdir, cat, …).

Preparing HDFS

The first step is to set up the HDFS filesystem.

I’m going to use a “pseudo-cluster”: an HDFS with the namenode and only one datanode on a single machine.
To do so, I configure the $HADOOP_INSTALL/etc/hadoop/core-site.xml file like this:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>
For a pseudo-cluster, we set up only one replica per block (as we have only one datanode) in the $HADOOP_INSTALL/etc/hadoop/hdfs-site.xml file:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Now, we can format the namenode:

$HADOOP_INSTALL/bin/hdfs namenode -format

and start the HDFS (both namenode and datanode):

$HADOOP_INSTALL/sbin/start-dfs.sh
Now, we can connect to the HDFS and create a first folder:

$HADOOP_INSTALL/bin/hadoop fs -mkdir /bundles
$HADOOP_INSTALL/bin/hadoop fs -ls /
Found 1 items
drwxr-xr-x - jbonofre supergroup 0 2013-07-07 22:18 /bundles

Our HDFS is up and running.

Configuration and installation of hadoop-karaf

I created the hadoop-karaf bundle as standalone. It means that it embeds a lot of dependencies internally (directly in the bundle classloader).

The purpose is to:

  1. avoid altering anything in Hadoop core. Thanks to this approach, I can provide the hadoop-karaf bundle for different Hadoop versions, and I don’t need to alter Hadoop itself.
  2. ship all dependencies in the same bundle classloader. Of course it’s not ideal in terms of OSGi, but to provide a very easy and ready-to-use bundle, I gather most of the dependencies in the hadoop-karaf bundle.

I worked on trunk directly (for now, if you are interested I can provide hadoop-karaf for existing Hadoop releases): Hadoop 3.0.0-SNAPSHOT.

Before deploying the hadoop-karaf bundle, we have to prepare the Hadoop configuration. In order to be integrated in Karaf, I implemented a mechanism to create and populate the Hadoop configuration from OSGi ConfigAdmin.
The only requirement for the user is to create an org.apache.hadoop PID in the Karaf etc folder containing the Hadoop properties. Actually, it means just creating a $KARAF_INSTALL/etc/org.apache.hadoop.cfg file containing: = hdfs://localhost/

If you don’t want to compile the hadoop-karaf bundle yourself, you can use the artifact that I deployed on my Maven repository.

To do this, you have to edit etc/org.ops4j.pax.url.mvn.cfg and add my repository to the org.ops4j.pax.url.mvn.repositories property:

org.ops4j.pax.url.mvn.repositories = \, \, \

Now, we can start Karaf as usual:

bin/karaf

NB: I use Karaf 2.3.1.

We can now install the hadoop-karaf bundle:

karaf@root> osgi:install -s mvn:org.apache.hadoop/hadoop-karaf/3.0.0-SNAPSHOT
karaf@root> la|grep -i hadoop
[ 54] [Active ] [Created ] [ 80] Apache Hadoop Karaf (3.0.0.SNAPSHOT)

hadoop:* and hdfs:* commands

The hadoop-karaf bundle comes with new Karaf shell commands.

For this first blog post, we are going to use only one command: hadoop:fs.

The hadoop:fs command allows you to use HDFS directly from Karaf (it’s a wrapper around hadoop fs):

karaf@root> hadoop:fs -ls /
Found 1 items
drwxr-xr-x - jbonofre supergroup 0 2013-07-07 22:18 /bundles
karaf@root> hadoop:fs -df
Filesystem Size Used Available Use%
hdfs://localhost 5250875392 307200 4976799744 0%

HDFS URL handler

Another thing provided by the hadoop-karaf bundle is a URL handler to directly support hdfs URLs.

It means that you can use hdfs URLs in Karaf commands, such as osgi:install, features:addurl, etc.

It also means that you can use HDFS to store your Karaf bundles, features, or configuration files.

For instance, we can copy an OSGi bundle in the HDFS:

$HADOOP_INSTALL/bin/hadoop fs -copyFromLocal ~/.m2/repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.commons-lang/2.4_6/org.apache.servicemix.bundles.commons-lang-2.4_6.jar /bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar

The commons-lang bundle is now available in the HDFS. We can check that directly in Karaf using the hadoop:fs command:

karaf@root> hadoop:fs -ls /bundles
Found 1 items
-rw-r--r-- 1 jbonofre supergroup 272039 2013-07-07 22:18 /bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar

Now, we can install the commons-lang bundle in Karaf directly from HDFS, using a hdfs URL:

karaf@root> osgi:install hdfs:/bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar
karaf@root> la|grep -i commons-lang
[ 55] [Installed ] [ ] [ 80] Apache ServiceMix :: Bundles :: commons-lang (

If we list the bundle locations, we can see the hdfs URL support:

karaf@root> la -l
[ 53] [Active ] [Created ] [ 30]
[ 54] [Active ] [Created ] [ 80] mvn:org.apache.hadoop/hadoop-karaf/3.0.0-SNAPSHOT
[ 55] [Installed ] [ ] [ 80] hdfs:/bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar


This first blog post shows how to use Karaf as an HDFS client. The big advantage is that the hadoop-karaf bundle doesn’t change anything in Hadoop core, so I can provide it for Hadoop 0.20.x, 1.x, 2.x, or trunk (3.0.0-SNAPSHOT).
In Article 3, you will see how to directly leverage HDFS as OSGi services (and so use it in your bundles, Camel routes, …).

Again, if you think this article series is interesting, and you would like to see Karaf support in Hadoop, feel free to post a comment, send a message on the Hadoop mailing list, or do whatever helps to promote it 😉