
Monitoring and alerting with Apache Karaf Decanter

July 28, 2015 Posted by jbonofre

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list.

Today, the first release of Apache Karaf Decanter, 1.0.0, is under vote.

It’s a good time for a presentation 😉

Overview

Apache Karaf Decanter is a complete monitoring and alerting solution for Karaf and the applications running on it.

It’s very flexible, providing ready-to-use features, and is also very easy to extend.

The Decanter 1.0.0 release works with any Karaf version, and can also be used to monitor applications running outside of Karaf.

Decanter provides collectors, appenders, and SLA checking.

Collectors

Decanter Collectors are responsible for harvesting the monitoring data.

Basically, a collector harvests the data, creates an OSGi EventAdmin Event, and sends it to a decanter/collect/* topic.

A Collector can be:

  • Event Driven, meaning that it will automatically react to an internal event
  • Polled, meaning that it’s periodically executed by the Decanter Scheduler

You can install multiple Decanter Collectors at the same time. In the 1.0.0 release, Decanter provides the following collectors:

  • log is an event-driven collector. It’s actually a Pax Logging PaxAppender that listens for any log message and sends the log details to the EventAdmin topic.
  • jmx is a polled collector. Periodically, the Decanter Scheduler executes this collector. It retrieves all attributes of all MBeans in the MBeanServer, and sends the JMX metrics to the EventAdmin topic.
  • camel (jmx) is a specific JMX collector configuration that retrieves metrics only for the Camel route MBeans.
  • activemq (jmx) is a specific JMX collector configuration that retrieves metrics only for the ActiveMQ MBeans.
  • camel-tracer is a Camel Tracer TraceEventHandler. In your Camel route definition, you can set this trace event handler on the default Camel tracer. Thanks to that, all tracing details (from URI, to URI, exchange with headers, body, etc) will be sent to the EventAdmin topic.

Appenders

The Decanter Appenders receive the data harvested by the collectors. They consume the OSGi EventAdmin events from the decanter/collect/* topics.

They are responsible for storing the monitoring data into a backend.

You can install multiple Decanter Appenders at the same time. In the 1.0.0 release, Decanter provides the following appenders:

  • log creates a log message with the monitoring data
  • elasticsearch stores the monitoring data into an Elasticsearch instance
  • jdbc stores the monitoring data into a database
  • jms sends the monitoring data to a JMS broker
  • camel sends the monitoring data to a Camel route
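The collector/appender decoupling described above can be sketched with a plain-Java model of the topic bus. This is purely illustrative: the real Decanter components use the OSGi EventAdmin service and its decanter/collect/* topics, not the hypothetical EventBus class below:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class DecanterModel {

    // Simplified stand-in for the OSGi EventAdmin topic bus (illustrative only).
    public static class EventBus {
        private final Map<String, List<Consumer<Map<String, Object>>>> handlers = new HashMap<>();

        // An appender (or alerter) subscribes to a topic prefix such as "decanter/collect/".
        public void subscribe(String topicPrefix, Consumer<Map<String, Object>> handler) {
            handlers.computeIfAbsent(topicPrefix, k -> new ArrayList<>()).add(handler);
        }

        // A collector posts harvested data to a topic such as "decanter/collect/jmx".
        public void post(String topic, Map<String, Object> data) {
            handlers.forEach((prefix, subscribed) -> {
                if (topic.startsWith(prefix)) {
                    subscribed.forEach(h -> h.accept(data));
                }
            });
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // "Appender": consumes events from the collect topics and stores them in a backend.
        List<Map<String, Object>> backend = new ArrayList<>();
        bus.subscribe("decanter/collect/", backend::add);

        // "Polled collector": harvests a metric and posts it to a collect topic.
        Runnable collector = () -> {
            Map<String, Object> data = new HashMap<>();
            data.put("type", "jmx-local");
            data.put("ThreadCount", Thread.activeCount());
            bus.post("decanter/collect/jmx", data);
        };

        collector.run(); // the real Decanter Scheduler would call this periodically
        System.out.println("stored events: " + backend.size()); // prints: stored events: 1
    }
}
```

The point of the model is that collectors and appenders never know about each other: they only share topic names, which is why you can mix and match any number of each.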

SLA and alerters

Decanter also provides an alerting system, triggered when some harvested data doesn’t meet an SLA.

For instance, you can define the maximum acceptable number of threads running in Karaf. If the current number of threads exceeds the limit, Decanter calls the alerters.

Decanter Alerters are a special kind of appender, consuming events from the OSGi EventAdmin decanter/alert/* topics.

As with the appenders, you can have multiple alerters active at the same time. The Decanter 1.0.0 release provides the following alerters:

  • log to create a log message for each alert
  • e-mail to send an e-mail for each alert
  • camel to execute a Camel route for each alert

Let’s see Decanter in action for the details of how to install and use it!

Quick start

Decanter is pretty easy to install and provides turnkey functionality.

The first thing to do is to register the Decanter features repository in the Karaf instance:

karaf@root()> feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features

NB: for the next Karaf releases, I will add the Decanter features repository in etc/org.apache.karaf.features.repos.cfg, allowing you to register the Decanter features simply using feature:repo-add decanter 1.0.0.

We now have the Decanter features available:

karaf@root()> feature:list |grep -i decanter
decanter-common                 | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter API                                
decanter-simple-scheduler       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Simple Scheduler                   
decanter-collector-log          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Messages Collector             
decanter-collector-jmx          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMX Collector                      
decanter-collector-camel        | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Collector                    
decanter-collector-activemq     | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter ActiveMQ Collector                 
decanter-collector-camel-tracer | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Tracer Collector             
decanter-collector-system       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter OS Collector                       
decanter-appender-log           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Appender                       
decanter-appender-elasticsearch | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Elasticsearch Appender             
decanter-appender-jdbc          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JDBC Appender                      
decanter-appender-jms           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMS Appender                       
decanter-appender-camel         | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Appender                     
decanter-sla                    | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA support                        
decanter-sla-log                | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA log alerter                    
decanter-sla-email              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA email alerter                  
decanter-sla-camel              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA Camel alerter                  
elasticsearch                   | 1.6.0            |           | karaf-decanter-1.0.0     | Embedded Elasticsearch node                       
kibana                          | 3.1.1            |           | karaf-decanter-1.0.0     | Embedded Kibana dashboard

For a quick start, we will use the embedded elasticsearch node to store the monitoring data. Decanter provides a ready-to-use elasticsearch feature that starts an embedded elasticsearch node:

karaf@root()> feature:install elasticsearch

The elasticsearch feature installs the elasticsearch configuration: etc/elasticsearch.yml.

We now have a ready to use elasticsearch node, where we will store the monitoring data.

Decanter also provides a kibana feature, providing a ready to use set of kibana dashboards:

karaf@root()> feature:install kibana 


We can now install the Decanter Elasticsearch appender: this appender gets the data harvested by the collectors and stores it in elasticsearch:


karaf@root()> feature:install decanter-appender-elasticsearch

The decanter-appender-elasticsearch feature also installs the etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file. You can configure the location of the Elasticsearch node there. By default, it uses a local elasticsearch node, in particular the embedded one that we installed with the elasticsearch feature.

The etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file contains the hostname, port, and clusterName of the elasticsearch instance to use:

################################################
# Decanter Elasticsearch Appender Configuration
################################################

# Hostname of the elasticsearch instance
host=localhost
# Port number of the elasticsearch instance
port=9300
# Name of the elasticsearch cluster
clusterName=elasticsearch

Now, our Decanter appender and elasticsearch node are ready.

It's now time to install some collectors to harvest the data.

Karaf monitoring

First, we install the log collector:

karaf@root()> feature:install decanter-collector-log 

This collector is event-driven and will automatically listen for log events and send them to the EventAdmin collect topic.

We install a second collector: the JMX collector.

karaf@root()> feature:install decanter-collector-jmx

The JMX collector is a polled collector. So, it also installs and starts the Decanter Scheduler.

You can define the scheduler execution period in the etc/org.apache.karaf.decanter.scheduler.simple.cfg configuration file. By default, the Decanter Scheduler calls the polled collectors every 5 seconds.
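For instance, the scheduler configuration could look like the following (the property name shown here is an assumption for illustration; check the file installed by the feature for the exact key):

```
# Decanter Simple Scheduler polling period, in milliseconds (assumed property name)
period=5000
```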

The JMX collector is able to retrieve all metrics (attributes) from multiple MBeanServers.

By default, it uses the etc/org.apache.karaf.decanter.collector.jmx-local.cfg configuration file, which polls the local MBeanServer.

You can create new configuration files (for instance etc/org.apache.karaf.decanter.collector.jmx-mystuff.cfg configuration file), to poll other remote or local MBeanServers.

The etc/org.apache.karaf.decanter.collector.jmx-*.cfg configuration file contains:

type=jmx-mystuff
url=service:jmx:rmi:///jndi/rmi://hostname:1099/karaf-root
username=karaf
password=karaf
object.name=*.*:*

The type property is a free field allowing you to identify the source of the metrics.

The url property allows you to define the JMX URL. You can also use the local keyword to poll the local MBeanServer.
The username and password properties allow you to define the credentials used to connect to the MBeanServer.

The object.name property is optional. By default, the collector harvests all the MBeans in the server. But you can filter to harvest only some MBeans (for instance org.apache.camel:context=*,type=routes,name=* to harvest only the Camel routes metrics).
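Conceptually, each poll cycle of the JMX collector does something like the following (a self-contained sketch using the standard JMX API against the local platform MBeanServer, not Decanter's actual code; the pattern plays the role of the object.name property):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPollSketch {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Same idea as the object.name property: restrict harvesting to a pattern.
        ObjectName pattern = new ObjectName("java.lang:type=Threading");

        for (ObjectName name : server.queryNames(pattern, null)) {
            for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
                if (!attr.isReadable()) {
                    continue;
                }
                try {
                    Object value = server.getAttribute(name, attr.getName());
                    System.out.println(name + " " + attr.getName() + " = " + value);
                } catch (Exception e) {
                    // some attributes cannot be resolved at runtime; a collector would skip them
                }
            }
        }
    }
}
```

With a wildcard pattern like org.apache.camel:context=*,type=routes,name=*, the same loop would harvest only the Camel route metrics.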

Now, we can open Decanter Kibana to see the dashboards built on the harvested data.

You can access Decanter Kibana at http://localhost:8181/kibana.

You have the Decanter Kibana welcome page:

Decanter Kibana

Decanter provides ready-to-use dashboards. Let’s look at the Karaf dashboard.

Decanter Kibana Karaf 1

These histograms use the metrics harvested by the JMX collector.

You can also see the log details harvested by the log collector:

Decanter Karaf 2

As Kibana uses Lucene, you can extract exactly the data that you need using filtering or queries.

You can also define the time range to get the metrics and logs.

For instance, you can create the following query to filter only the messages coming from Elasticsearch:

loggerName:org.elasticsearch*

Camel monitoring and tracing

We can also use Decanter to monitor the Camel routes that you deploy in Karaf.

For instance, we add Camel in our Karaf instance:

karaf@root()> feature:repo-add camel 2.13.2
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.13.2/xml/features
karaf@root()> feature:install camel-blueprint

In the deploy folder, we create the following very simple route (using a route.xml file):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana, we can go in the Camel dashboard:

Decanter Kibana Camel 1

We can see the histograms here, using the JMX metrics retrieved from the Camel MBeans (in particular, we can see, for our route, the completed and failed exchanges, the last processing time, etc).

You can also see the log messages related to Camel.

Another feature provided by Decanter is the Camel Tracer collector: you can enable the Decanter Camel Tracer to store all exchange states in the backend.

For that, we install the Decanter Camel Tracer feature:

karaf@root()> feature:install decanter-collector-camel-tracer

We update our route.xml in the deploy folder like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="eventAdmin" interface="org.osgi.service.event.EventAdmin"/>

    <bean id="traceHandler" class="org.apache.karaf.decanter.collector.camel.DecanterTraceEventHandler">
        <property name="eventAdmin" ref="eventAdmin"/>
    </bean>

    <bean id="tracer" class="org.apache.camel.processor.interceptor.Tracer">
        <property name="traceHandler" ref="traceHandler"/>
        <property name="enabled" value="true"/>
        <property name="traceOutExchanges" value="true"/>
        <property name="logLevel" value="OFF"/>
    </bean>

    <camelContext trace="true" xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana Camel dashboard, you can see the details in the tracer panel:

Decanter Kibana Camel 2

Decanter Kibana also provides a ready to use ActiveMQ dashboard, using the JMX metrics retrieved from an ActiveMQ broker.

SLA and alerting

Another Decanter feature is the SLA (Service Level Agreement) checking.

The purpose is to check whether harvested data meets a check condition. If not, an alert is created and sent to the SLA alerters.

We want to send the alerts to two alerters:

  • log to create a log message for each alert (warn log level for serious alerts, error log level for critical alerts)
  • camel to call a Camel route for each alert.

First, we install the decanter-sla-log feature:

karaf@root()> feature:install decanter-sla-log

The SLA checker uses the etc/org.apache.karaf.decanter.sla.checker.cfg configuration file.

Here, we want to throw an alert when the number of threads in Karaf is greater than 60. So in the checker configuration file, we set:

ThreadCount.error=range:[0,60]

The syntax in this file is:

attribute.level=check

where:

  • attribute is the name of the attribute in the harvested data (coming from the collectors).
  • level is the alert level. The two possible values are: warn or error.
  • check is the check expression.

The check expression can be:

  • range for a numeric attribute, like range:[x,y]. The alert is thrown if the attribute is out of the range.
  • equal for a numeric attribute, like equal:x. The alert is thrown if the attribute is not equal to the value.
  • notequal for a numeric attribute, like notequal:x. The alert is thrown if the attribute is equal to the value.
  • match for a String attribute, like match:regex. The alert is thrown if the attribute doesn't match the regex.
  • notmatch for a String attribute, like notmatch:regex. The alert is thrown if the attribute matches the regex.
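To make these semantics concrete, here is a small self-contained sketch of the check expressions described above (illustrative only, not Decanter's actual checker implementation):

```java
import java.util.regex.Pattern;

public class SlaCheckSketch {

    // Returns true when the value violates the check expression,
    // mirroring the semantics described in the post (illustrative sketch).
    static boolean violates(String check, Object value) {
        if (check.startsWith("range:")) {
            // range:[x,y] -> alert when the numeric value is out of the range
            String[] bounds = check.substring("range:[".length(), check.length() - 1).split(",");
            double v = ((Number) value).doubleValue();
            return v < Double.parseDouble(bounds[0]) || v > Double.parseDouble(bounds[1]);
        }
        if (check.startsWith("equal:")) {
            return ((Number) value).doubleValue() != Double.parseDouble(check.substring(6));
        }
        if (check.startsWith("notequal:")) {
            return ((Number) value).doubleValue() == Double.parseDouble(check.substring(9));
        }
        if (check.startsWith("match:")) {
            return !Pattern.matches(check.substring(6), value.toString());
        }
        if (check.startsWith("notmatch:")) {
            return Pattern.matches(check.substring(9), value.toString());
        }
        throw new IllegalArgumentException("unknown check: " + check);
    }

    public static void main(String[] args) {
        System.out.println(violates("range:[0,60]", 221));   // prints: true (out of range)
        System.out.println(violates("match:ERROR", "INFO")); // prints: true (doesn't match)
    }
}
```

With ThreadCount.error=range:[0,60] and a harvested ThreadCount of 221, the checker would therefore raise an error-level alert, exactly like the log output shown below.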

So, in our case, if the number of threads is greater than 60 (which is probably the case ;)), we can see the following messages in the log:

2015-07-28 22:17:11,950 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:17:11,951 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:5639 | alertLevel:error | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:22000000000 | PeakThreadCount:225 | AllThreadIds:[J@6d9ad2c5 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:22911917003 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:221 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/error | 

Let's now extend the range, add a warn-level check on the thread count, and add a new check to throw alerts when we have errors in the log:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,300]
loggerLevel.error=match:ERROR

Now, we want to call a Camel route to deal with the alerts.

We create the following Camel route, using the deploy/alert.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
                <route id="alerter">
                        <from uri="direct-vm:decanter-alert"/>
                        <to uri="log:alert"/>
                </route>
        </camelContext>

</blueprint>

Now, we can install the decanter-sla-camel feature:

karaf@root()> feature:install decanter-sla-camel

This feature also installs an etc/org.apache.karaf.decanter.sla.camel.cfg configuration file. In this file, you can define the Camel endpoint URI where you want to send the alerts:

alert.destination.uri=direct-vm:decanter-alert

Now, let's decrease the thread range in etc/org.apache.karaf.decanter.sla.checker.cfg configuration file to throw some alerts:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,60]
loggerLevel.error=match:ERROR

Now, in the log, we can see the alerts.

From the SLA log alerter:

2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:6234 | alertLevel:warn | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:193150000000 | PeakThreadCount:225 | AllThreadIds:[J@28f0ef87 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:201484424892 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:222 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/warn | 

but also from the SLA Camel alerter:

2015-07-28 22:39:15,293 | INFO  | Thread-41        | alert                            | 114 - org.apache.camel.camel-core - 2.13.2 | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {hostName=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root, alertPattern=range:[0,60], ThreadAllocatedMemorySupported=true, ThreadContentionMonitoringEnabled=false, TotalStartedThreadCount=6236, alertLevel=warn, CurrentThreadCpuTimeSupported=true, CurrentThreadUserTime=193940000000, PeakThreadCount=225, AllThreadIds=[J@408db39f, type=jmx-local, ThreadAllocatedMemoryEnabled=true, CurrentThreadCpuTime=202296849879, ObjectName=java.lang:type=Threading, ThreadContentionMonitoringSupported=true, ThreadCpuTimeSupported=true, ThreadCount=222, event.topics=decanter/alert/warn, ThreadCpuTimeEnabled=true, ObjectMonitorUsageSupported=true, SynchronizerUsageSupported=true, alertAttribute=ThreadCount, DaemonThreadCount=198}]

Decanter also provides the SLA e-mail alerter to send the alerts by e-mail.

Now, you can play with the SLA checker and add checks on the attributes that you need. The Decanter Kibana dashboards help a lot there: in the "Event Monitoring" table, you can see all the raw harvested data, allowing you to find the attribute names.

What's next

It's just the first Decanter release, but I think it's an interesting one.

Now, we are in the process of adding:

  • a new Decanter CXF interceptor collector: thanks to this collector, you will be able to send details about the requests/responses on CXF endpoints (SOAP request, SOAP response, REST message, etc)
  • a new Decanter Redis appender, to send the harvested data to Redis
  • a new Decanter Cassandra appender, to send the harvested data to Cassandra
  • a Decanter WebConsole, allowing you to easily manage the SLA checks
  • improved SLA support with "recovery" support, sending only one alert when a check fails, and another alert when the value "recovers"

Anyway, if you have ideas and want to see new features in Decanter, please let us know.

I hope you like Decanter and find this new Karaf project interesting!

Apache Karaf 2.3.0 released !

October 16, 2012 Posted by jbonofre

While waiting for Karaf 3.0.0, we worked hard in the Karaf team to provide Apache Karaf 2.3.0.

The Karaf 2.2.x branch is now only in maintenance mode: it means that no new features will be implemented in this branch, only major bug fixes.

The new “stable” branch is now Karaf 2.3.x, which is a perfect transition branch between Karaf 2.2.x (heavily used) and the future Karaf 3.x (which should arrive very soon).

What’s new in this 2.3.0 release:

* OSGi r4.3: the Karaf 2.2.x branch was powered by OSGi frameworks implementing the OSGi r4.2 specification. Karaf 2.3.0 is now powered by new OSGi r4.3 frameworks (Apache Felix 4.0.3 and Equinox 3.8.x), for both OSGi core and compendium. It provides new features like weaving, etc.
* Aries Blueprint 1.0.x: Karaf 2.3.0 uses the new Aries Blueprint version at different levels (core, JMX, etc).
* Update to ASM 4.0: in order to work with Aries proxies, we updated to the new ASM bundle. We also provide configuration that allows you to enable/disable weaving.
* OSGi Regions and SCR support: Karaf 2.3.0 provides both Regions and SCR support.
* JMX improvements: the previous MBeanRegistrer from Karaf 2.2.x has been removed and replaced by Aries JMX. It allows an easier way to integrate MBeans, by registering OSGi services. The MBeans have been improved to provide new operations and options corresponding to what you can do using the shell commands.
* Complete itest framework: Karaf 2.3.0 provides a new tool: Karaf exam. This tool provides a framework to very easily implement integration tests. It's able to download and bootstrap a Karaf version on which you can run your commands, deploy your features and bundles, etc. It allows you to run a complete integration test chain from Maven.
* Dependency upgrades: a lot of dependencies have been updated. Karaf 2.3.0 uses Pax Logging 1.7.0, including bug fixes and SLF4J 1.7.1 support, new Pax Web and Jetty versions for the web container, and new JLine, SSHD and Mina versions, which especially fix weird behavior on Windows for certain keys, etc.
* KAR improvements: while Karaf 3.x will provide a lot of enhancements around the KAR files, Karaf 2.3.0 already provides fixes in the KAR lifecycle.
* JAAS command improvements: the jaas:* commands have been enhanced to allow fine-grained management of the realms and login modules.

You can find the Karaf 2.3.0 content details on the Release Notes.

The Karaf team is proud to provide this release to you. We hope you will enjoy it !

Apache Karaf Cellar 2.2.4

May 20, 2012 Posted by jbonofre

Apache Karaf Cellar 2.2.4 has been released. It's a major release, including a bunch of bug fixes and new features.

Here’s the list of key things included in this release.

Consistent behavior

Cellar is composed of two parts:

  • the distributed resources, a data grid maintained by each cluster node and containing the current cluster status (for instance, of the bundles, features, etc)
  • the cluster events, which are broadcast from one node to the others

Cluster shell commands, cluster MBeans, synchronizers (called at startup) and listeners (called when a local event is fired, such as a feature being installed) update the distributed resources and broadcast cluster events.

To broadcast cluster events, we use an event producer. A cluster event is consumed by a consumer, which delegates the handling of the event to a handler. We have a handler for features, bundles, etc.

Now, all Cellar “producers” do:

  1. check if the cluster event producer is ON
  2. check if the resource is allowed, checking in the blacklist/whitelist configuration
  3. update the distributed resources
  4. broadcast the cluster event

Only use hazelcast.xml

The org.apache.karaf.cellar.instance.cfg file has disappeared. It's now fully replaced by hazelcast.xml.

This fixes issues around the network configuration and allows new configuration options, especially around encryption.

OSGi event support

The cellar-event feature now provides OSGi event support in Cellar. It uses the EventAdmin layer. Every local event generates a cluster event which is broadcast to the cluster, allowing remote nodes to be kept in sync.

Better shell commands

Now, all cluster:* shell commands mimic the core Karaf commands. It means that you will find roughly the same arguments and options, and similar output.

The cluster:group-* shell commands have been improved and fixed.

A new shell command has been introduced: cluster:config-propappend to append a String to a config property.

Check everywhere

We added a bunch of checks to be sure to have a consistent situation on the cluster and predictable behavior.

It means that the MBeans and shell commands check whether a cluster group exists, whether the cluster event producer is on, whether a resource is allowed on the cluster (for the given cluster group), etc.

You get clean messages informing you about the current status of your commands.

Improvement on the config layer

The Cellar config layer has been improved. It now uses a karaf.cellar.sync property to avoid infinite loops. Support for the config delete operation has been added, including the cluster:config-delete command.

Feature repositories

Previously, the feature repositories handling was hidden from the users.

Now, you have a full access to the distributed features repositories set. It means that you can see the distributed repositories for a cluster group, add a new features repository to a cluster group, and remove a features repository from a cluster group.

To do that, you have the cluster:feature-url-* shell commands.

CellarOBRMBean

Cellar provides a MBean for all parts of the cluster resources (bundles, features, config, etc).

However, if a user installed the cellar-obr feature, they got the cluster:obr-* shell commands but no corresponding MBean.

The CellarOBRMBean has been introduced and is installed with the cellar-obr feature.

Summary

Karaf Cellar 2.2.4 is really a major release, and I think it could have been named 2.3.0 given the number of bug fixes and new features: we fixed 77 Jira issues in this release and performed a lot of manual tests.

The quality has been heavily improved in this release compared to the previous one.

I encourage all Cellar users to update to Karaf Cellar 2.2.4 and I hope you will be pleased with this new release 😉

Apache Karaf Cellar and central management

February 29, 2012 Posted by jbonofre

Introduction

One of the primary purposes of Apache Karaf Cellar is to synchronize the state of each Karaf instance in the Cellar cluster.

It means that any change performed on one node (installing a feature, starting a bundle, adding a config, etc) generates a “cluster event” which is broadcast by Cellar to all other nodes. The target nodes handle the “cluster event” and perform the corresponding action (installing a feature, starting a bundle, adding a config, etc).

By default, the nodes have the same role. It means that you can perform actions on any node.

But you may prefer to have one node dedicated to management. This is what we call “central management”.

Central management

With central management, one node is identified as the manager. It means that cluster actions can be performed only on this node. The manager is the only one able to produce cluster events. The managed nodes are only able to receive and handle events, not to produce them.

With this approach, you can give access (for instance grant ssh or JMX access) to the manager node to an operator and manage the cluster (and all cluster groups) from this manager node.

We will see now how to implement central management.

We assume that we have three nodes:

  • manager
  • node1
  • node2

First, we install Cellar on the three nodes:


karaf@manager> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.4-SNAPSHOT/xml/features
karaf@manager> features:install cellar


karaf@node1> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.4-SNAPSHOT/xml/features
karaf@node1> features:install cellar


karaf@node2> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.4-SNAPSHOT/xml/features
karaf@node2> features:install cellar

It’s a default installation, nothing special here.

Now, we disable the event producing on node1 and node2.

To do that, from the manager:


karaf@manager> cluster:producer-stop node1:5702
Node Status
node1:5702 false
karaf@manager> cluster:producer-stop node2:5703
Node Status
node2:5703 false

We can check that the producers are really stopped:


karaf@manager> cluster:producer-status node1:5702
Node Status
node1:5702 false
karaf@manager> cluster:producer-status node2:5703
Node Status
node2:5703 false

Whereas the manager is still able to produce events:


karaf@manager> cluster:producer-status
Node Status
manager:5701 true

Now, for instance, we can install the eventadmin feature from the manager:


karaf@manager> cluster:features-install default eventadmin

We can see that this feature has been installed on node1:


karaf@node1> features:list|grep -i eventadmin
[installed ] [2.2.5 ] eventadmin karaf-2.2.5

and node2:


karaf@node2> features:list|grep -i eventadmin
[installed ] [2.2.5 ] eventadmin karaf-2.2.5

Now, we uninstall this feature from the manager:


karaf@manager> cluster:features-uninstall default eventadmin

We can see that the feature has been uninstalled from node1:


karaf@node1> features:list|grep -i eventadmin
[uninstalled] [2.2.5 ] eventadmin karaf-2.2.5

And now, we try to install the eventadmin feature on the cluster from node1:


karaf@node1> cluster:features-install default eventadmin

The event is not generated and the other nodes are not updated.

However, two kinds of events are always produced (even if the producer is stopped):

  • the events with the “force” flag. When you create an event programmatically, you can use setForce(true) to force the event production.
  • the “admin” events, like the events produced to stop/start a producer, consumer, or handler in the cluster.

NB: the event production handling has been largely improved in Cellar 2.2.4.

Coming in next Cellar release

Currently, the cellar feature installs the Cellar core bundles, but also the Cellar shell commands and the MBeans.

While this makes sense for the manager, the managed nodes should not “display” the cluster:* commands, as they are managed by the manager.

I will do a little refactoring of the Cellar features to split them in two parts:

  • the cellar-core feature will install hazelcast, and Cellar core bundles
  • the cellar-manager feature will install the cluster:* shell commands and the MBeans

Apache Karaf Cellar and DOSGi

November 29, 2011 Posted by jbonofre

The next version of Apache Karaf Cellar will include a DOSGi (Distributed OSGi) implementation.

There are several existing DOSGi implementations: one in Apache CXF (powered by the CXF stack), one in Fuse Fabric, etc.

The purpose of the Cellar one is to leverage the existing Cellar infrastructure (Hazelcast instance, distributed map, etc.), and to be very easy to use.

Karaf Cellar DOSGi installation

Cellar DOSGi is part of the main cellar feature. To install it:


karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.0-SNAPSHOT/xml/features
karaf@root> features:install cellar

Distributed services

You can note a new command available in Cellar:


karaf@root> cluster:list-services

It displays the list of “cluster-aware” services, i.e. services that can be used remotely from another node.

Code sample

To illustrate the Cellar DOSGi usage, we will use two bundles:

  • the provider bundle is installed on node A and “exposes” a distributed service
  • the client bundle is installed on node B and will use the provider service

Provider bundle

The provider bundle exposes an OSGi service, flagged as a distributed service.

The service is very simple: it's just an echo service.

Here’s the interface describing the service:


package org.apache.karaf.cellar.sample;

public interface EchoService {

  String process(String message);

}

and the corresponding implementation:


package org.apache.karaf.cellar.sample;

public class EchoServiceImpl implements EchoService {

  public String process(String message) {
    return "Processed by distributed service: " + message;
  }

}

Up to now, nothing special, nothing related to DOSGi or Cellar.

To expose the service in Karaf, we create a blueprint descriptor:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <bean id="echoService" class="org.apache.karaf.cellar.sample.EchoServiceImpl"/>

  <service ref="echoService" interface="org.apache.karaf.cellar.sample.EchoService">
    <service-properties>
      <entry key="service.exported.interfaces" value="*"/>
    </service-properties>
  </service>

</blueprint>

We can note that the only “special” part is that we added the service.exported.interfaces property to the echoService.

It’s just a flag defining which interfaces of the service are distributed (i.e. accessible from remote nodes).

We didn’t change the code of the service itself, just added this property. It means that it’s really easy to turn an existing service into a distributed service.
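Programmatically, the same flag is just a plain service property. A minimal sketch (the registerService call assumes a live OSGi BundleContext, so it is shown as a comment here):

```java
import java.util.Hashtable;

public class ExportProps {
    public static void main(String[] args) {
        // The DOSGi flag is an ordinary service property: setting
        // service.exported.interfaces marks the service for remote export.
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("service.exported.interfaces", "*");

        // With a BundleContext in hand, the registration would be:
        // bundleContext.registerService(EchoService.class.getName(),
        //         new EchoServiceImpl(), props);

        System.out.println(props.get("service.exported.interfaces"));
    }
}
```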

Client bundle

The client bundle will get a reference to the echoService. In fact, the reference will be a kind of proxy to the service implementation located remotely, on another node.

The client is really simple: it loops indefinitely, calling the echoService:


package org.apache.karaf.cellar.sample.client;

public class ServiceClient {

  private EchoService echoService;

  public void setEchoService(EchoService echoService) {
    this.echoService = echoService;
  }

  public EchoService getEchoService() {
    return this.echoService;
  }

  public void process() throws Exception {
    int count = 0;
    while (true) {
      System.out.println(echoService.process("Call " + count));
      Thread.sleep(5000);
      count++;
    }
  }

}

We inject the echoService using Blueprint:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <reference id="echoService" interface="org.apache.karaf.cellar.sample.EchoService"/>

  <bean id="serviceClient" class="org.apache.karaf.cellar.sample.client.ServiceClient" init-method="process">
    <property name="echoService" ref="echoService"/>
  </bean>

</blueprint>

It’s done. The serviceClient will use the echoService. If a “local” echoService exists, the OSGi framework will bind the reference to this service; otherwise Cellar will look for a distributed service (on all nodes) exporting the EchoService interface and bind a proxy to the distributed service.
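To picture what that proxy binding looks like, here is a conceptual, stdlib-only sketch using java.lang.reflect.Proxy: the handler stands in for the remote EchoService and forwards each call. In Cellar the forwarding happens over Hazelcast to the remote node; here it is simulated in-process, and the EchoService interface is redeclared locally to keep the sketch self-contained:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class DosgiProxySketch {

    // Local copy of the sample service interface.
    interface EchoService {
        String process(String message);
    }

    public static void main(String[] args) {
        // In Cellar, the handler would serialize the call, send it to the
        // remote node, and wait for the result. Here the "remote"
        // implementation is simulated in-process.
        InvocationHandler remoteCall = (proxy, method, methodArgs) -> {
            if ("process".equals(method.getName())) {
                return "Processed by distributed service: " + methodArgs[0];
            }
            throw new UnsupportedOperationException(method.getName());
        };

        // The client is bound to this proxy instead of a local bean.
        EchoService echoService = (EchoService) Proxy.newProxyInstance(
                EchoService.class.getClassLoader(),
                new Class<?>[]{EchoService.class},
                remoteCall);

        System.out.println(echoService.process("Call 0"));
    }
}
```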

Camel 2.8.0 new features for Karaf/ServiceMix

June 29, 2011 Posted by jbonofre

Camel has provided a Karaf features descriptor for quite a long time now. But Camel 2.8.0 includes new, very useful Karaf features, making Karaf and ServiceMix first-class containers to run Camel.

Install Camel in Karaf

Installing Camel in Karaf is very simple as a features descriptor is provided. It means that you can register the Camel features descriptor in your running Karaf instance:

karaf@root> features:addurl mvn:org.apache.camel.karaf/apache-camel/2.8.0/xml/features

Now, you have the Camel features available:

karaf@root> features:list|grep -i camel
[uninstalled] [2.8.0 ] camel repo-0
[uninstalled] [2.8.0 ] camel-core repo-0
[uninstalled] [2.8.0 ] camel-spring repo-0
[uninstalled] [2.8.0 ] camel-blueprint repo-0

Deploy Camel features

To start using Camel in Karaf, you have to install at least the camel feature:

root@karaf> features:install camel

Depending on your requirements, you will certainly install other Camel features.

For instance, if you use Blueprint Camel DSL, you have to install the camel-blueprint feature:

root@karaf> features:install camel-blueprint

or, if you use the stream component in an endpoint (for instance “stream:out”), you will install the camel-stream feature:

root@karaf> features:install camel-stream

Camel and OSGi

When Camel is used in an OSGi environment, it automatically exposes CamelContexts as OSGi services. It means that, when you deploy a route, the associated CamelContext is available to OSGi bundles.

You can look up for CamelContext OSGi services with this small code snippet:


import org.apache.camel.CamelContext;
import org.osgi.framework.ServiceReference;

// Look up all CamelContext services registered in the OSGi service registry.
// getServiceReferences declares InvalidSyntaxException, but a null filter never throws it.
ServiceReference[] references = bundleContext.getServiceReferences(CamelContext.class.getName(), null);
if (references != null) {
  for (ServiceReference reference : references) {
    CamelContext camelContext = (CamelContext) bundleContext.getService(reference);
    if (camelContext != null) {
      // do what you want on the CamelContext
    }
    // release the service when done with it
    bundleContext.ungetService(reference);
  }
}

Camel Karaf commands

Camel 2.8.0 provides a set of Karaf commands which allow you to get information about, start, and stop Camel contexts and routes.

  • camel:list-contexts displays the list of Camel contexts currently available in the running Karaf/ServiceMix instance
  • camel:list-routes displays the list of Camel routes currently available in the running Karaf/ServiceMix instance
  • camel:start-context starts a Camel context
  • camel:stop-context stops a Camel context
  • camel:info-context displays detailed information about a Camel context
  • camel:start-route starts a Camel route
  • camel:stop-route stops a Camel route
  • camel:show-route displays the XML rendering of a Camel route (whatever your route DSL is)
  • camel:info-route displays detailed information about a Camel route including statistics