
What’s new in Apache Karaf Cellar 4.0.0 ?

September 22, 2015 Posted by jbonofre

The Apache Karaf Cellar 4.0.0 release is now under vote and, hopefully, should be available very soon.

This is a major new release. More than just bug fixes, it brings several refactorings and new features.

It's time to take a tour of the new Cellar 4.0.0.

HTTP features

Cellar 4.0.0 brings new HTTP features.

HTTP load balancer

Cellar 4.0.0 provides a new feature: cellar-http-balancer.

The purpose is to let you use any node in a cluster group to access a web application, even if the web application is not actually deployed on the local node.

For instance, you have a cluster group containing four nodes, and you deploy a web application on two of them. To access your web application, you basically have to either:

  • specify, in your browser, the address of one of the two nodes where the web application is deployed, or
  • use a load balancer (mod_proxy_balancer, Cisco, Juniper, F5, whatever) to balance across the two nodes. The drawback is that the load balancer is a single point of failure, and adding a new node providing the web application requires updating the load balancer configuration.

The cellar-http-balancer feature installs a proxy on the nodes where the web application is not deployed. It means that you can use any node in the cluster group to access your web application, even if the application is not deployed there.

To illustrate this, let’s take a cluster with two nodes: node1 and node2.

On node1, we install the http, http-whiteboard, and cellar features:

karaf@node1()> feature:install http
karaf@node1()> feature:install http-whiteboard
karaf@node1()> feature:repo-add cellar 4.0.0
karaf@node1()> feature:install cellar

We now install the cellar-http-balancer feature on the cluster:

karaf@node1()> cluster:feature-install default cellar-http-balancer

Now, we install the webconsole only on node1:

karaf@node1()> feature:install webconsole

We can see the webconsole locally deployed using the http:list command:

karaf@node1()> http:list 
ID  | Servlet          | Servlet-Name    | State       | Alias               | Url
------------------------------------------------------------------------------------------------------
101 | KarafOsgiManager | ServletModel-2  | Undeployed  | /system/console     | [/system/console/*]
105 | InstancePlugin   | ServletModel-7  | Deployed    | /instance           | [/instance/*]
101 | ResourceServlet  | /res            | Deployed    | /system/console/res | [/system/console/res/*]
103 | GogoPlugin       | ServletModel-5  | Deployed    | /gogo               | [/gogo/*]
101 | KarafOsgiManager | ServletModel-11 | Deployed    | /system/console     | [/system/console/*]
102 | FeaturesPlugin   | ServletModel-9  | Deployed    | /features           | [/features/*]

Using a browser, we can access the webconsole using the http://localhost:8181/system/console URL.

But we can see that the webconsole is also available on the cluster group:

karaf@node1()> cluster:http-list default
Alias               | Locations
---------------------------------------------------------------
/system/console/res | http://10.0.42.1:8181/system/console/res
/gogo               | http://10.0.42.1:8181/gogo
/instance           | http://10.0.42.1:8181/instance
/system/console     | http://10.0.42.1:8181/system/console
/features           | http://10.0.42.1:8181/features

It means that I can use any member of this cluster group to access the webconsole deployed on node1 (I agree it's not really interesting here, but it's just an example; imagine that the webconsole is your own web application).

On node2, as I'm using the same machine, I have to use a port other than 8181 for the HTTP service, so I add an etc/org.ops4j.pax.web.cfg file containing:

org.osgi.service.http.port=8041

It means that the HTTP service on node2 will listen on port 8041.

Now, on node2, I install the http, http-whiteboard, and cellar features:

karaf@node2()> feature:install http
karaf@node2()> feature:install http-whiteboard
karaf@node2()> feature:repo-add cellar 4.0.0
karaf@node2()> feature:install cellar

As we installed the cellar-http-balancer feature on the default cluster group, it’s automatically installed on node2 when we enable Cellar.

Of course, on node2, we can see the HTTP applications available on the cluster, with node1 as location:

karaf@node2()> cluster:http-list default 
Alias               | Locations
---------------------------------------------------------------
/system/console/res | http://10.0.42.1:8181/system/console/res
/gogo               | http://10.0.42.1:8181/gogo
/instance           | http://10.0.42.1:8181/instance
/system/console     | http://10.0.42.1:8181/system/console
/features           | http://10.0.42.1:8181/features

Now, if we take a look at the "local" HTTP applications on node2 (using http:list), we can see:

karaf@node2()> http:list 
ID  | Servlet                    | Servlet-Name   | State       | Alias               | Url
---------------------------------------------------------------------------------------------------------------
100 | CellarBalancerProxyServlet | ServletModel-3 | Deployed    | /gogo               | [/gogo/*]
100 | CellarBalancerProxyServlet | ServletModel-2 | Deployed    | /system/console/res | [/system/console/res/*]
100 | CellarBalancerProxyServlet | ServletModel-6 | Deployed    | /features           | [/features/*]
100 | CellarBalancerProxyServlet | ServletModel-5 | Deployed    | /system/console     | [/system/console/*]
100 | CellarBalancerProxyServlet | ServletModel-4 | Deployed    | /instance           | [/instance/*]

We can see the same aliases available on node2, provided by the CellarBalancerProxyServlet. In your browser, if you access http://localhost:8041/system/console, you actually reach the webconsole deployed on node1, even though you are using node2.

It means that the CellarBalancerProxyServlet acts as a proxy. It works like this:

  1. The Cellar HTTP balancer listens for HTTP servlets on the local node. When a servlet is deployed locally on the node, it updates the servlet set on the cluster, and sends a cluster event to the other nodes in the same cluster group.
  2. When a node receives a cluster event from the HTTP balancer, and a servlet with the same alias is not already deployed locally, the Cellar HTTP balancer creates a CellarBalancerProxyServlet with the same alias.
  3. When the CellarBalancerProxyServlet receives an HTTP request, it retrieves from the cluster set the locations where the servlet is actually deployed, randomly chooses one, and proxies the request there (see the sketch after this list).
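
To make the mechanism more concrete, here is a minimal sketch of what such a proxy servlet could look like. It is not the actual Cellar implementation (the real CellarBalancerProxyServlet handles all HTTP methods, headers, and error cases); the list of locations is assumed to come from the cluster set described above.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.Random;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SimpleBalancerProxyServlet extends HttpServlet {

    // Locations (on other nodes) where the servlet is actually deployed,
    // as retrieved from the cluster set for this alias
    private final List<String> locations;
    private final Random random = new Random();

    public SimpleBalancerProxyServlet(List<String> locations) {
        this.locations = locations;
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws java.io.IOException {
        // Randomly choose one of the nodes actually hosting the servlet
        String target = locations.get(random.nextInt(locations.size()));
        String pathInfo = request.getPathInfo() != null ? request.getPathInfo() : "";
        String query = request.getQueryString() != null ? "?" + request.getQueryString() : "";
        URL url = new URL(target + pathInfo + query);

        // Proxy the request and copy the remote response back to the client
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        response.setStatus(connection.getResponseCode());
        try (InputStream in = connection.getInputStream(); OutputStream out = response.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}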

HTTP session replication

Cellar 4.0.0 also brings support for HTTP session replication.

You don't need any specific Cellar feature: just install the http, http-whiteboard, and cellar features (in this order):

karaf@node1()> feature:install http
karaf@node1()> feature:install http-whiteboard
karaf@node1()> feature:repo-add cellar 4.0.0
karaf@node1()> feature:install cellar

To be able to use HTTP session replication, your web application has to use serializable HTTP sessions: every object stored in the session must be serializable.
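
A minimal illustration of what that means, using a hypothetical ShoppingCart session attribute:

import java.io.Serializable;

// Session attribute example: it must be Serializable so that Hazelcast
// can replicate it to the other nodes.
public class ShoppingCart implements Serializable {

    private static final long serialVersionUID = 1L;

    private int itemCount;

    public void addItem() {
        itemCount++;
    }

    public int getItemCount() {
        return itemCount;
    }
}

In a servlet, you would then store it as usual with request.getSession().setAttribute("cart", new ShoppingCart()).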

Now, the only change in your application is to add a specific filter. For that, you have to update WEB-INF/web.xml like this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
        version="3.0">

        <filter>
                <filter-name>hazelcast-filter</filter-name>
                <filter-class>com.hazelcast.web.WebFilter</filter-class>
            <!--
                Name of the distributed map storing
                your web session objects
            --> 
                <init-param>
                        <param-name>map-name</param-name>
                        <param-value>my-sessions</param-value>
                </init-param>
            <!-- How is your load balancer configured? sticky-session means all requests of
                a session are routed to the node where the session was first created. This is
                 excellent for performance. If sticky-session is set to false, when a session
                 is updated on a node, the entry for this session on all other nodes is invalidated.
                 You have to know how your load balancer is configured before setting this
                 parameter. Default is true. -->
                <init-param>
                        <param-name>sticky-session</param-name>
                        <param-value>false</param-value>
                </init-param>
            <!--
                Are you debugging? Default is false.
            -->
                <init-param>
                        <param-name>debug</param-name>
                        <param-value>false</param-value>
                </init-param>
        </filter>
        <filter-mapping>
                <filter-name>hazelcast-filter</filter-name>
                <url-pattern>/*</url-pattern>
                <dispatcher>FORWARD</dispatcher>
                <dispatcher>INCLUDE</dispatcher>
                <dispatcher>REQUEST</dispatcher>
        </filter-mapping>
        <listener>
                <listener-class>com.hazelcast.web.SessionListener</listener-class>
        </listener>

</web-app>

That's all: if you deploy your web application on several nodes, the sessions will be replicated and available on all nodes. It means that your clients can transparently switch from one node to another.

Refactoring of the synchronizers

Cellar 4.0.0 also brings a refactoring of the different synchronizers.

Now the synchronizers:

  • support new sync policies
  • send cluster events to the other nodes allowing a complete sync when a node joins a cluster group

If you take a look at the etc/org.apache.karaf.cellar.groups.cfg file, you will see:

default.bundle.sync = cluster
default.config.sync = cluster
default.feature.sync = cluster
default.obr.urls.sync = cluster
default.balanced.servlet.sync = cluster

Now, the synchronizers support the following policies (a configuration example follows the list):

  • disabled means that the synchronizer doesn't do anything
  • cluster means that the synchronizer first retrieves the state from the cluster and updates the node state if needed (pull first), and then pushes the node state to the cluster and sends cluster events if needed (push after)
  • node means that the synchronizer first pushes the node state to the cluster and sends cluster events if needed (push first), and then retrieves the state from the cluster and updates the local node if needed (pull after)
  • clusterOnly means that the synchronizer only retrieves the state from the cluster and updates the local node if needed; nothing is pushed to the cluster. With this policy, the cluster acts as the "master".
  • nodeOnly means that the synchronizer only pushes the local node state to the cluster and sends cluster events if required. With this policy, the node acts as the "master".
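
For instance, a hypothetical setup where the cluster is authoritative for the features but each node is authoritative for its configuration would look like this in etc/org.apache.karaf.cellar.groups.cfg (shown only to illustrate the syntax):

default.feature.sync = clusterOnly
default.config.sync = nodeOnly
default.bundle.sync = cluster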

Karaf 4 powered and complete refactoring

Cellar 4.0.0 is a complete refactoring compared to previous versions, as it’s designed for Karaf 4.0.0:

  • Blueprint is not used anymore; Cellar modules use their own activators extending the Karaf BaseActivator, leveraging the Karaf annotations (@Services, @RequireService, @ProvideService, etc.) and the karaf-services-maven-plugin.
  • the Cellar commands use the new Karaf 4 API and annotations

It allows Cellar to install faster than before, and makes it ready to support the new Karaf features resolver, including requirements/capabilities definitions.

What’s next

Cellar 4.0.0 is the first release of the new 4.x series. I'm already planning a 4.1.0 bringing new features and enhancements (and of course bug fixes).

DOSGi refactoring and load balancing policies

I would like to refactor the DOSGi layer:

  • right now, Cellar DOSGi uses two ServiceListeners. I would like to replace the ServiceListeners with pure ServiceTrackers, and use the same design as the Cellar HTTP balancer (tracking services, sending cluster events to the other nodes, where the handler creates proxies). It will give more flexibility and an easier lifecycle/tracking of DOSGi.
  • Cellar DOSGi doesn't support cluster groups: a remote OSGi service is available on all cluster nodes, regardless of the cluster group the node belongs to. The refactoring will leverage cluster groups, making the OSGi services available per cluster group, with the proxies created on the cluster group members.
  • Cellar DOSGi will also support balancing policies. Assuming that several nodes provide the same service, the client nodes will be able to use random, round-robin, or weight-based selection of the remote node. After this refactoring, it could make sense to include the local service as part of the balancing selection (I have to think about that ;)).

New HTTP balancer policies

Right now, the Cellar HTTP balancer (in the CellarBalancerProxyServlet) only supports random balancing. For instance, if two nodes provide the same servlet, the balancer randomly chooses one of the two.

I will introduce new balancing policies, configurable using the etc/org.apache.karaf.cellar.groups.cfg file:

  • random: as we have right now, it will still be there
  • round-robin: the index of the last node used will be kept in the cluster, and the next call will use the next node in the list
  • weight-based: the user will be able to give a weight to each node (based on the node ID). It's a ratio of the number of requests each node should deal with, and the proxies will dispatch requests according to these ratios (a small illustration of weight-based selection follows this list).
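
To illustrate what a weight-based selection could look like, here is a purely illustrative Java sketch (not the future Cellar implementation): each node ID gets a weight, and nodes are picked proportionally to their weight.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class WeightedSelector {

    // Weight per node ID
    private final Map<String, Integer> weights = new LinkedHashMap<>();
    private final Random random = new Random();
    private int total;

    public void addNode(String nodeId, int weight) {
        weights.put(nodeId, weight);
        total += weight;
    }

    public String select() {
        // Draw a "ticket" and walk the cumulative weights
        int ticket = random.nextInt(total);
        int cumulative = 0;
        for (Map.Entry<String, Integer> entry : weights.entrySet()) {
            cumulative += entry.getValue();
            if (ticket < cumulative) {
                return entry.getKey();
            }
        }
        throw new IllegalStateException("no node registered");
    }

    public static void main(String[] args) {
        WeightedSelector selector = new WeightedSelector();
        selector.addNode("node1", 3); // node1 should receive ~75% of the requests
        selector.addNode("node2", 1); // node2 should receive ~25%
        System.out.println(selector.select());
    }
}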

New Cellar HTTP sessions replication

Right now, Cellar HTTP session replication directly leverages the Hazelcast WebFilter and session replication.

The only drawback is that we don’t leverage the Cellar cluster groups.

In 4.1.0, Cellar will provide its own WebFilter (extending the Hazelcast one) in order to support cluster groups: session replication can then be narrowed to the nodes that are members of the same cluster group.

It will give users more flexibility and more advanced session replication.

Conclusion

Of course, Cellar 4.0.0 also brings a lot of bug fixes. I think it's a good start for the new Cellar 4 series, leveraging Karaf 4.

I hope you will enjoy it, and let me know if you have any new ideas!

Monitoring and alerting with Apache Karaf Decanter

July 28, 2015 Posted by jbonofre

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list.

Today, the first Apache Karaf Decanter release, 1.0.0, is under vote.

It's a good time to present it 😉

Overview

Apache Karaf Decanter is a complete monitoring and alerting solution for Karaf and the applications running on it.

It's very flexible, providing ready-to-use features, and is also very easy to extend.

The Decanter 1.0.0 release works with any Karaf version, and can also be used to monitor applications outside of Karaf.

Decanter provides collectors, appenders, and SLA support.

Collectors

Decanter collectors are responsible for harvesting the monitoring data.

Basically, a collector harvests the data, creates an OSGi EventAdmin event, and sends it to a decanter/collect/* topic.
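
To give an idea of what this means in practice, here is a minimal sketch of a hypothetical custom collector (not part of Decanter itself) that harvests a single metric and posts it via EventAdmin; the topic name follows the decanter/collect/* convention described above.

import java.util.HashMap;
import java.util.Map;

import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

public class MyCustomCollector implements Runnable {

    private final EventAdmin eventAdmin;

    public MyCustomCollector(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    @Override
    public void run() {
        // Harvest some data (here, just the JVM free memory)
        Map<String, Object> data = new HashMap<>();
        data.put("type", "my-custom");
        data.put("freeMemory", Runtime.getRuntime().freeMemory());
        // Post the harvested data on a collect topic; the installed
        // appenders will receive it
        eventAdmin.postEvent(new Event("decanter/collect/my-custom", data));
    }
}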

A Collector can be:

  • Event Driven, meaning that it will automatically react to an internal event
  • Polled, meaning that it’s periodically executed by the Decanter Scheduler

You can install multiple Decanter collectors at the same time. In the 1.0.0 release, Decanter provides the following collectors:

  • log is an event-driven collector. It's actually a Pax Logging PaxAppender that listens for any log message and sends the log details to the EventAdmin topic.
  • jmx is a polled collector. Periodically, the Decanter Scheduler executes this collector. It retrieves all attributes of all MBeans in the MBeanServer and sends the JMX metrics to the EventAdmin topic.
  • camel (jmx) is a specific JMX collector configuration that retrieves the metrics only for the Camel route MBeans.
  • activemq (jmx) is a specific JMX collector configuration that retrieves the metrics only for the ActiveMQ MBeans.
  • camel-tracer is a Camel Tracer TraceEventHandler. In your Camel route definition, you can set this trace event handler on the default Camel tracer. Thanks to that, all tracing details (from URI, to URI, exchange with headers, body, etc.) will be sent to the EventAdmin topic.

Appenders

The Decanter appenders receive the data harvested by the collectors. They consume OSGi EventAdmin events from the decanter/collect/* topics.

They are responsible for storing the monitoring data into a backend.
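
Conceptually, an appender is just an OSGi EventHandler subscribed to the collect topics. A minimal hypothetical sketch (not the actual Decanter code); a real appender would be registered as an OSGi service with the event.topics property set to decanter/collect/*:

import org.osgi.service.event.Event;
import org.osgi.service.event.EventHandler;

public class ConsoleAppender implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        StringBuilder builder = new StringBuilder();
        for (String name : event.getPropertyNames()) {
            builder.append(name).append("=").append(event.getProperty(name)).append(" ");
        }
        // "Store" the harvested data; a real appender would write it to
        // Elasticsearch, a database, a JMS broker, etc.
        System.out.println(builder.toString().trim());
    }
}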

You can install multiple Decanter appenders at the same time. In the 1.0.0 release, Decanter provides the following appenders:

  • log creates a log message with the monitoring data
  • elasticsearch stores the monitoring data into an Elasticsearch instance
  • jdbc stores the monitoring data into a database
  • jms sends the monitoring data to a JMS broker
  • camel sends the monitoring data to a Camel route

SLA and alerters

Decanter also provides an alerting system, triggered when some data doesn't validate an SLA.

For instance, you can define the maximum acceptable number of threads running in Karaf. If the current number of threads is over the limit, Decanter calls alerters.

Decanter alerters are a special kind of appender, consuming events from the OSGi EventAdmin decanter/alert/* topics.

As for the appenders, you can have multiple alerters active at the same time. The Decanter 1.0.0 release provides the following alerters:

  • log to create a log message for each alert
  • e-mail to send an e-mail for each alert
  • camel to execute a Camel route for each alert

Let's see Decanter in action to get the details of how to install and use it!

Quick start

Decanter is pretty easy to install and provides "turnkey" functionality.

The first thing to do is to register the Decanter features repository in the Karaf instance:

karaf@root()> feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features

NB: for the next Karaf releases, I will add the Decanter features repository in etc/org.apache.karaf.features.repos.cfg, allowing you to register the Decanter features simply using feature:repo-add decanter 1.0.0.

We now have the Decanter features available:

karaf@root()> feature:list |grep -i decanter
decanter-common                 | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter API                                
decanter-simple-scheduler       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Simple Scheduler                   
decanter-collector-log          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Messages Collector             
decanter-collector-jmx          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMX Collector                      
decanter-collector-camel        | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Collector                    
decanter-collector-activemq     | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter ActiveMQ Collector                 
decanter-collector-camel-tracer | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Tracer Collector             
decanter-collector-system       | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter OS Collector                       
decanter-appender-log           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Log Appender                       
decanter-appender-elasticsearch | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Elasticsearch Appender             
decanter-appender-jdbc          | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JDBC Appender                      
decanter-appender-jms           | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter JMS Appender                       
decanter-appender-camel         | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter Camel Appender                     
decanter-sla                    | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA support                        
decanter-sla-log                | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA log alerter                    
decanter-sla-email              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA email alerter                  
decanter-sla-camel              | 1.0.0            |           | karaf-decanter-1.0.0     | Karaf Decanter SLA Camel alerter                  
elasticsearch                   | 1.6.0            |           | karaf-decanter-1.0.0     | Embedded Elasticsearch node                       
kibana                          | 3.1.1            |           | karaf-decanter-1.0.0     | Embedded Kibana dashboard

For a quick start, we will use the embedded Elasticsearch to store the monitoring data. Decanter provides a ready-to-use elasticsearch feature, starting an embedded Elasticsearch node:

karaf@root()> feature:install elasticsearch

The elasticsearch feature installs the elasticsearch configuration: etc/elasticsearch.yml.

We now have a ready-to-use Elasticsearch node, where we will store the monitoring data.

Decanter also provides a kibana feature, providing a ready-to-use set of Kibana dashboards:

karaf@root()> feature:install kibana

We can now install the Decanter Elasticsearch appender: this appender will get the data harvested by the collectors, and store it in Elasticsearch:

karaf@root()> feature:install decanter-appender-elasticsearch

The decanter-appender-elasticsearch feature also installs the etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file. You can configure the location of the Elasticsearch node there. By default, it uses a local Elasticsearch node, especially the embedded one that we installed with the elasticsearch feature.

The etc/org.apache.karaf.decanter.appender.elasticsearch.cfg file contains the hostname, port, and clusterName of the Elasticsearch instance to use:

################################################
# Decanter Elasticsearch Appender Configuration
################################################

# Hostname of the elasticsearch instance
host=localhost
# Port number of the elasticsearch instance
port=9300
# Name of the elasticsearch cluster
clusterName=elasticsearch

Now, our Decanter appender and elasticsearch node are ready.

It's now time to install some collectors to harvest the data.

Karaf monitoring

First, we install the log collector:

karaf@root()> feature:install decanter-collector-log 

This collector is event-driven and will automatically listen for log events and send them to the EventAdmin collect topic.

We install a second collector: the JMX collector.

karaf@root()> feature:install decanter-collector-jmx

The JMX collector is a polled collector. So, it also installs and starts the Decanter Scheduler.

You can define the polling period of the scheduler in the etc/org.apache.karaf.decanter.scheduler.simple.cfg configuration file. By default, the Decanter Scheduler calls the polled collectors every 5 seconds.
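
For instance, to poll every 30 seconds instead, the etc/org.apache.karaf.decanter.scheduler.simple.cfg file would contain something like the following (the period is expressed in milliseconds; the property name shown here is an assumption, so check the file installed by the feature for the exact key):

period=30000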

The JMX collector is able to retrieve all metrics (attributes) from multiple MBeanServers.

By default, it uses the etc/org.apache.karaf.decanter.collector.jmx-local.cfg configuration file. This file polls the local MBeanServer.

You can create new configuration files (for instance etc/org.apache.karaf.decanter.collector.jmx-mystuff.cfg configuration file), to poll other remote or local MBeanServers.

The etc/org.apache.karaf.decanter.collector.jmx-*.cfg configuration files contain:

type=jmx-mystuff
url=service:jmx:rmi:///jndi/rmi://hostname:1099/karaf-root
username=karaf
password=karaf
object.name=*.*:*

The type property is a free field allowing you to identify the source of the metrics.

The url property allows you to define the JMX URL. You can also use the local keyword to poll the local MBeanServer.
The username and password properties allow you to define the credentials used to connect to the MBeanServer.

The object.name property is optional. By default, the collector harvests all the MBeans in the server, but you can filter it to harvest only some MBeans (for instance org.apache.camel:context=*,type=routes,name=* to harvest only the Camel route metrics).

Now, we can go to the Decanter Kibana console to see the dashboards built on the harvested data.

You can access the Decanter Kibana console at http://localhost:8181/kibana.

You have the Decanter Kibana welcome page:

Decanter Kibana

Decanter provides ready-to-use dashboards. Let's see the Karaf dashboard.

Decanter Kibana Karaf 1

These histograms use the metrics harvested by the JMX collector.

You can also see the log details harvested by the log collector:

Decanter Karaf 2

As Kibana uses Lucene, you can extract exactly the data that you need using filtering or queries.

You can also define the time range to get the metrics and logs.

For instance, you can create the following query to filter only the messages coming from Elasticsearch:

loggerName:org.elasticsearch*

Camel monitoring and tracing

We can also use Decanter to monitor the Camel routes that you deploy in Karaf.

For instance, we add Camel to our Karaf instance:

karaf@root()> feature:repo-add camel 2.13.2
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.13.2/xml/features
karaf@root()> feature:install camel-blueprint

In the deploy folder, we create the following very simple route (using a route.xml file):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana, we can go to the Camel dashboard:

Decanter Kibana Camel 1

We can see the histograms here, using the JMX metrics retrieved from the Camel MBeans (in particular, for our route, we can see the completed and failed exchanges, the last processing time, etc.).

You can also see the log messages related to Camel.

Another feature provided by Decanter is the Camel Tracer collector: you can enable the Decanter Camel Tracer to log all exchange states in the backend.

For that, we install the Decanter Camel Tracer feature:

karaf@root()> feature:install decanter-collector-camel-tracer

We update our route.xml in the deploy folder like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="eventAdmin" interface="org.osgi.service.event.EventAdmin"/>

    <bean id="traceHandler" class="org.apache.karaf.decanter.collector.camel.DecanterTraceEventHandler">
        <property name="eventAdmin" ref="eventAdmin"/>
    </bean>

    <bean id="tracer" class="org.apache.camel.processor.interceptor.Tracer">
        <property name="traceHandler" ref="traceHandler"/>
        <property name="enabled" value="true"/>
        <property name="traceOutExchanges" value="true"/>
        <property name="logLevel" value="OFF"/>
    </bean>

    <camelContext trace="true" xmlns="http://camel.apache.org/schema/blueprint">
        <route id="test">
            <from uri="timer:fire?period=10000"/>
            <setBody><constant>Hello World</constant></setBody>
            <to uri="log:test"/>
        </route>
    </camelContext>

</blueprint>

Now, in Decanter Kibana Camel dashboard, you can see the details in the tracer panel:

Decanter Kibana Camel 2

Decanter Kibana also provides a ready-to-use ActiveMQ dashboard, using the JMX metrics retrieved from an ActiveMQ broker.

SLA and alerting

Another Decanter feature is the SLA (Service Level Agreement) checking.

The purpose is to check whether harvested data validates a check condition. If not, an alert is created and sent to the SLA alerters.

We want to send the alerts to two alerters:

  • log to create a log message for each alert (warn log level for serious alerts, error log level for critical alerts)
  • camel to call a Camel route for each alert.

First, we install the decanter-sla-log feature:

karaf@root()> feature:install decanter-sla-log

The SLA checker uses the etc/org.apache.karaf.decanter.sla.checker.cfg configuration file.

Here, we want to throw an alert when the number of threads in Karaf is greater than 60. So, in the checker configuration file, we set:

ThreadCount.error=range:[0,60]

The syntax in this file is:

attribute.level=check

where:

  • attribute is the name of the attribute in the harvested data (coming from the collectors).
  • level is the alert level. The two possible values are: warn or error.
  • check is the check expression.

The check expression can be:

  • range for a numeric attribute, like range:[x,y]. The alert is thrown if the attribute is out of the range (a rough sketch of how such a check can be evaluated follows this list).
  • equal for a numeric attribute, like equal:x. The alert is thrown if the attribute is not equal to the value.
  • notequal for a numeric attribute, like notequal:x. The alert is thrown if the attribute is equal to the value.
  • match for a String attribute, like match:regex. The alert is thrown if the attribute doesn't match the regex.
  • notmatch for a String attribute, like notmatch:regex. The alert is thrown if the attribute matches the regex.
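
To make the range check concrete, here is a rough Java illustration (not Decanter's actual implementation) of how a range:[x,y] expression can be evaluated against a numeric attribute:

public class RangeCheck {

    public static boolean isOutOfRange(String checkExpression, double value) {
        // e.g. checkExpression = "range:[0,60]"
        String bounds = checkExpression.substring(checkExpression.indexOf('[') + 1,
                checkExpression.indexOf(']'));
        String[] parts = bounds.split(",");
        double min = Double.parseDouble(parts[0].trim());
        double max = Double.parseDouble(parts[1].trim());
        return value < min || value > max;
    }

    public static void main(String[] args) {
        // 221 threads is outside [0,60], so an alert would be raised
        System.out.println(isOutOfRange("range:[0,60]", 221));
    }
}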

So, in our case, if the number of threads is greater than 60 (which is probably the case ;)), we can see the following messages in the log:

2015-07-28 22:17:11,950 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:17:11,951 | ERROR | Thread-44        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:5639 | alertLevel:error | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:22000000000 | PeakThreadCount:225 | AllThreadIds:[J@6d9ad2c5 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:22911917003 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:221 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/error | 

Let's now extend the range, add a new warn-level check on the thread count, and add a check to throw alerts when we have errors in the log:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,300]
loggerLevel.error=match:ERROR

Now, we want to call a Camel route to deal with the alerts.

We create the following Camel route, using a deploy/alert.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
                <route id="alerter">
                        <from uri="direct-vm:decanter-alert"/>
                        <to uri="log:alert"/>
                </route>
        </camelContext>

</blueprint>

Now, we can install the decanter-sla-camel feature:

karaf@root()> feature:install decanter-sla-camel

This feature also installs an etc/org.apache.karaf.decanter.sla.camel.cfg configuration file. In this file, you can define the Camel endpoint URI where you want to send the alerts:

alert.destination.uri=direct-vm:decanter-alert

Now, let's decrease the thread range in etc/org.apache.karaf.decanter.sla.checker.cfg configuration file to throw some alerts:

ThreadCount.error=range:[0,600]
ThreadCount.warn=range:[0,60]
loggerLevel.error=match:ERROR

Now, in the log, we can see the alerts.

From the SLA log alerter:

2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: ThreadCount out of pattern range:[0,60]
2015-07-28 22:39:09,268 | WARN  | Thread-43        | Logger                           | 119 - org.apache.karaf.decanter.sla.log - 1.0.0 | DECANTER SLA ALERT: Details: hostName:service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root | alertPattern:range:[0,60] | ThreadAllocatedMemorySupported:true | ThreadContentionMonitoringEnabled:false | TotalStartedThreadCount:6234 | alertLevel:warn | CurrentThreadCpuTimeSupported:true | CurrentThreadUserTime:193150000000 | PeakThreadCount:225 | AllThreadIds:[J@28f0ef87 | type:jmx-local | ThreadAllocatedMemoryEnabled:true | CurrentThreadCpuTime:201484424892 | ObjectName:java.lang:type=Threading | ThreadContentionMonitoringSupported:true | ThreadCpuTimeSupported:true | ThreadCount:222 | ThreadCpuTimeEnabled:true | ObjectMonitorUsageSupported:true | SynchronizerUsageSupported:true | alertAttribute:ThreadCount | DaemonThreadCount:198 | event.topics:decanter/alert/warn | 

but also from the SLA Camel alerter:

2015-07-28 22:39:15,293 | INFO  | Thread-41        | alert                            | 114 - org.apache.camel.camel-core - 2.13.2 | Exchange[ExchangePattern: InOnly, BodyType: java.util.HashMap, Body: {hostName=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root, alertPattern=range:[0,60], ThreadAllocatedMemorySupported=true, ThreadContentionMonitoringEnabled=false, TotalStartedThreadCount=6236, alertLevel=warn, CurrentThreadCpuTimeSupported=true, CurrentThreadUserTime=193940000000, PeakThreadCount=225, AllThreadIds=[J@408db39f, type=jmx-local, ThreadAllocatedMemoryEnabled=true, CurrentThreadCpuTime=202296849879, ObjectName=java.lang:type=Threading, ThreadContentionMonitoringSupported=true, ThreadCpuTimeSupported=true, ThreadCount=222, event.topics=decanter/alert/warn, ThreadCpuTimeEnabled=true, ObjectMonitorUsageSupported=true, SynchronizerUsageSupported=true, alertAttribute=ThreadCount, DaemonThreadCount=198}]

Decanter also provides the SLA e-mail alerter to send the alerts by e-mail.

Now, you can play with the SLA checker and add checks on the attributes that you need. The Decanter Kibana dashboards help a lot there: in the "Event Monitoring" table, you can see all the raw harvested data, allowing you to find the attributes.

What's next

It's just the first Decanter release, but I think it's an interesting one.

Now, we are in the process of adding:

  • a new Decanter CXF interceptor collector: thanks to this collector, you will be able to send details about the requests/responses on CXF endpoints (SOAP request, SOAP response, REST message, etc.)
  • a new Decanter Redis appender, to send the harvested data to Redis
  • a new Decanter Cassandra appender, to send the harvested data to Cassandra
  • a Decanter WebConsole, allowing you to easily manipulate the SLAs
  • improvements to the SLA support, with "recovery" support to send only one alert when a check fails, and another alert when the value "recovers"

Anyway, if you have ideas and want to see new features in Decanter, please let us know.

I hope you like Decanter and see the interest of this new Karaf project!

Apache Karaf Christmas gifts: docker.io, profiles, and decanter

December 15, 2014 Posted by jbonofre

We are heading to Christmas time, and the Karaf team wanted to prepare some gifts for you 😉

Of course, we are working hard in the preparation of the new Karaf releases. A bunch of bug fixes and improvements will be available in the coming releases: Karaf 2.4.1, Karaf 3.0.3, and Karaf 4.0.0.M2.

Some sub-project releases are also in preparation, especially Cellar. We completely refactored Cellar internals, to provide a more reliable, predictable, and stable behavior. New sync policies are available, new properties, new commands, and also interesting new features like HTTP session replication, or HTTP load balancing. I will prepare a blog about this very soon.

But, we’re also preparing brand-new features.

Docker.io

I heard some people saying: “why do I need Karaf when I have docker.io ?”.

Honestly, I don't understand this, as the purposes are not the same: actually, Karaf on docker.io provides great value.

First, the docker.io concepts are not new. They're more or less new on Linux, but the same kind of features has existed for a long time on other systems:

  • zones on Solaris
  • jail on FreeBSD
  • xen on Linux, in the past

So, there's nothing revolutionary in docker.io; however, it's a very convenient way to host multiple images/pseudo-systems on the same machine.

However, docker.io (like the other systems) is focused on the OS: it doesn't cover the application container by itself. For that, you have to prepare an image with the OS plus the application container. For instance, if you want to deploy your war file, you have to bootstrap a docker.io image with an OS and Tomcat (or Karaf ;)).

Moreover, remember the cool features provided by Karaf: ConfigAdmin and dynamic configuration, hot deployment, features, etc.

If you want to deploy your Camel routes, your ActiveMQ broker, your CXF webservices, your application: just use a docker.io image providing a Karaf instance!

And that's what the Karaf docker.io feature provides. Actually, it provides two things:

  • a set of ready-to-use Karaf docker.io images: ubuntu/centos images with ready-to-use Karaf instances (using different combinations)
  • a set of shell and Karaf commands to easily bootstrap the images from a Karaf instance. It's actually a good alternative to the Karaf child instances (which are only local to the machine).

Basically, docker.io doesn't replace Karaf. However, Karaf on docker.io provides a very flexible infrastructure, allowing you to easily bootstrap Karaf instances. Combined with Cellar, you can bootstrap a Karaf cluster very easily as well.

I will prepare the donation and I will blog about the docker.io feature very soon. Stay tuned !!!

Karaf Profiles

A new feature comes with Karaf 4: Karaf profiles. The purpose is to apply a ready-to-use set of configurations and provisioning to a Karaf instance.

Thanks to that, you can prepare a complete profile containing your configuration and your application (features), and apply multiple profiles to easily create a ready-to-go Karaf instance.

It's a great complement to the Karaf docker.io feature: the docker.io feature bootstraps the Karaf image, on which you can apply your profiles, all in a row.

A description of the profiles is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201412.mbox/%3CCAA66TpodJWHVpOqDz2j1QfkPchhBepK_Mwdx0orz7dEVaw8tPQ%40mail.gmail.com%3E.

I'm working on the storage of profiles in Karaf Cave, the application of profiles on running/existing Karaf instances, the support of cluster profiles in Cellar, etc.

Again, I will create a specific blog post about profiles soon. Stay tuned again !! 🙂

Karaf Decanter

As a fully enterprise-ready container, Karaf has to provide monitoring and management features. We already provide a bunch of metrics via JMX (we have multiple MBeans for Karaf, Camel, ActiveMQ, CXF, etc).

However, we should provide:

  • storage of the metrics and messages to be able to build an activity timeline
  • SLA definitions on the metrics and messages, raising alerts when some metrics are not in the expected value range or when the messages contain a given pattern
  • dashboards to configure the SLAs, display messages, and graph the metrics

As always in Karaf, it should be very simple to install such a feature, with integration of the supported third parties.

That’s why we started to work on Karaf Decanter, a complete and flexible monitoring solution for Karaf and the applications hosted by Karaf (Camel, ActiveMQ, CXF, etc).

The Decanter proposal and description are available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201410.mbox/%3C543D3D62.6050608%40nanthrax.net%3E.

The current codebase is also available: https://github.com/jbonofre/karaf-decanter.

I'm preparing the donation (some cleansing/polishing is in progress).

Again, I will blog about Karaf Decanter asap. Stay tuned again again !! 🙂

Conclusion

You can see that, as always, the Karaf team is committed and dedicated to providing you with very convenient and flexible features. A lot of those features come from your ideas, discussions, and proposals. So, keep on discussing with us, we love our users 😉

We hope you will enjoy those new features. We will document and blog about these Christmas gifts soon.

Enjoy Karaf, and Happy Christmas !

MDC logging with Apache Karaf and Camel

August 31, 2014 Posted by jbonofre

MDC (Mapped Diagnostic Context) logging is an interesting feature to log contextual messages.

It's classic to want to log contextual messages in your application. For instance, we want to log the actions performed by a user (identified by a username or user ID). As you have a lot of simultaneous users on your application, this makes it easier to "follow" the log.

MDC is supported by several logging frameworks, like log4j or slf4j, and so by Karaf (thanks to pax-logging) as well.
The approach is pretty simple:

  1. You define the context using a key ID and a value for the key:
    MDC.put("userid", "user1");
    
  2. You use the logger as usual, the log messages to this logger will be contextual to the context:
    logger.debug("my message");
    
  3. After that, we can change the context by overriding the key:
    MDC.put("userid", "user2");
    logger.debug("another message");
    

    Or you can remove the key, so to remove the context, and the log will be “global” (not local to a context):

    MDC.remove("userid"); // or MDC.clear() to remove all
    logger.debug("my global message");
    
  4. In the configuration, we can use a pattern with %X{key} to log the context. A pattern like %X{userid} - %m%n will result in a log file looking like:
    user1 - my message
    user2 - another message
    

In this blog post, we will see how to use MDC in different cases (directly in your bundle, using the generic Karaf OSGi MDC data, and in Camel routes).

The source code of the blog post is available on my github: http://github.com/jbonofre/blog-mdc.

Using MDC in your application/bundle

The purpose here is to use slf4j MDC in our bundle and configure Karaf to create one log file per context.

To illustrate this, we will create multiple threads in the bundle, giving a different context key to each thread:

package net.nanthrax.blog.mdc;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExampleBean {

    private Logger logger = LoggerFactory.getLogger(MdcExampleBean.class);

    public void init() throws Exception {
        CycleThread thread1 = new CycleThread("thread1");
        CycleThread thread2 = new CycleThread("thread2");
        CycleThread thread3 = new CycleThread("thread3");
        thread1.start();
        thread2.start();
        thread3.start();
    }

    class CycleThread extends Thread {
        private String mdcContext;
        public CycleThread(String mdcContext) {
            this.mdcContext = mdcContext;
        }
        public void run() {
            MDC.put("threadId", mdcContext);
            for (int i = 0; i < 20; i++) {
                logger.info("Cycle {}", i);
            }
        }
    }

}

After deploying this bundle in Karaf 3.0.1, we can see the log messages:

karaf@root()> bundle:install mvn:net.nanthrax.blog/mdc-bundle/1.0-SNAPSHOT
karaf@root()> log:display
...
2014-08-30 09:44:25,594 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 17
2014-08-30 09:44:25,594 | INFO  | Thread-13        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 19
2014-08-30 09:44:25,594 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 18
2014-08-30 09:44:25,595 | INFO  | Thread-15        | MdcExampleBean                   | 78 - net.nanthrax.blog.mdc-bundle - 1.0.0.SNAPSHOT | Cycle 19

Now, we can set up the Karaf etc/org.ops4j.pax.logging.cfg file to use our MDC. For that, we add an MDCSiftingAppender, providing the threadId as the MDC key and displaying the threadId in the log message pattern. We will create one log file per key (threadId in our case), and finally, we add this appender to the rootLogger:

...
log4j.rootLogger=INFO, out, mdc-bundle, osgi:*
...
# MDC Bundle appender
log4j.appender.mdc-bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.mdc-bundle.key=threadId
log4j.appender.mdc-bundle.default=unknown
log4j.appender.mdc-bundle.appender=org.apache.log4j.FileAppender
log4j.appender.mdc-bundle.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.mdc-bundle.appender.layout.ConversionPattern=%d | %-5.5p | %X{threadId} | %m%n
log4j.appender.mdc-bundle.appender.file=${karaf.data}/log/mdc-bundle-$\\{threadId\\}.log
log4j.appender.mdc-bundle.appender.append=true
...

Now, in the Karaf data/log folder, we can see:

mdc-bundle-thread1.log
mdc-bundle-thread2.log
mdc-bundle-thread3.log

each file containing the log messages contextual to the thread:

$ cat data/log/mdc-bundle-thread1.log
2014-08-30 09:54:48,287 | INFO  | thread1 | Cycle 0
2014-08-30 09:54:48,298 | INFO  | thread1 | Cycle 1
2014-08-30 09:54:48,298 | INFO  | thread1 | Cycle 2
2014-08-30 09:54:48,299 | INFO  | thread1 | Cycle 3
2014-08-30 09:54:48,299 | INFO  | thread1 | Cycle 4
...
$ cat data/log/mdc-bundle-thread2.log
2014-08-30 09:54:48,287 | INFO  | thread2 | Cycle 0
2014-08-30 09:54:48,298 | INFO  | thread2 | Cycle 1
2014-08-30 09:54:48,298 | INFO  | thread2 | Cycle 2
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 3
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 4
2014-08-30 09:54:48,299 | INFO  | thread2 | Cycle 5
...

In addition, Karaf “natively” provides OSGi MDC data that we can use.

Using Karaf OSGi MDC

So, in Karaf, you can directly use some OSGi headers for MDC logging, especially the bundle name.

We can use this MDC key to create one log file per bundle.

Karaf already provides a pre-defined appender configuration in etc/org.ops4j.pax.logging.cfg:

...
# Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=karaf
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %m%n
log4j.appender.sift.appender.file=${karaf.data}/log/$\\{bundle.name\\}.log
log4j.appender.sift.appender.append=true
...

The only thing that we have to do is to add this appender to the rootLogger:

log4j.rootLogger=INFO, out, sift, osgi:*

Now, in the Karaf data/log folder, we can see one file per bundle:

data/log$ ls -1
karaf.log
net.nanthrax.blog.mdc-bundle.log
org.apache.aries.blueprint.core.log
org.apache.aries.jmx.core.log
org.apache.felix.fileinstall.log
org.apache.karaf.features.core.log
org.apache.karaf.region.persist.log
org.apache.karaf.shell.console.log
org.apache.sshd.core.log

In particular, we can see our mdc-bundle log file, containing the log messages "local" to the bundle.

However, while this approach works great, it doesn't always create interesting log files. For instance, when you use Camel, using the OSGi headers for MDC logging will gather most of the log messages into the camel-core bundle log file, so it's not really contextual to anything or easy to read/seek.

The good news is that Camel also provides MDC logging support.

Using Camel MDC

While Camel provides MDC logging support, it's not enabled by default. It's up to you to enable it on the Camel context.

Once enabled, Camel provides the following MDC logging properties:

  • camel.exchangeId providing the exchange ID
  • camel.messageId providing the message ID
  • camel.routeId providing the route ID
  • camel.contextId providing the Camel context ID
  • camel.breadcrumbId providing a unique ID used for tracking messages across transports
  • camel.correlationId providing the correlation ID of the exchange (if it's correlated, for instance as in the Splitter EIP)
  • camel.transactionKey providing the ID of the transaction (for transacted exchanges).

To enable the MDC logging, you have to:

  • if you use the Blueprint or Spring XML DSL:
    <camelContext xmlns="http://camel.apache.org/schema/blueprint" useMDCLogging="true">
    
  • if you use the Java DSL:
    CamelContext context = ...
    context.setUseMDCLogging(true);
    
  • using the Talend ESB studio, you have to use a cConfig component from the palette:
    studio1

So, let's say we create the following route using the Blueprint DSL:

<?xml version="1.0" encoding="UTF-8"?> 
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> 

   <camelContext xmlns="http://camel.apache.org/schema/blueprint" useMDCLogging="true"> 
      <route id="my-route"> 
         <from uri="timer:fire?period=5000"/> 
         <setBody> 
            <constant>Hello Blog</constant> 
         </setBody> 
         <to uri="log:net.nanthrax.blog?level=INFO"/>
      </route>
   </camelContext>
 
</blueprint>

We want to create one log file per route (using the routeId). So, we update the Karaf etc/org.ops4j.pax.logging.cfg file to add an MDC sifting appender using the Camel MDC properties, and we add this appender to the rootLogger:

...
log4j.rootLogger=INFO, out, camel-mdc, osgi:*
...
# Camel MDC appender
log4j.appender.camel-mdc=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.camel-mdc.key=camel.routeId
log4j.appender.camel-mdc.default=unknown 
log4j.appender.camel-mdc.appender=org.apache.log4j.FileAppender
log4j.appender.camel-mdc.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.camel-mdc.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{camel.exchangeId} | %m%n
log4j.appender.camel-mdc.appender.file=${karaf.data}/log/camel-$\\{camel.routeId\\}.log
log4j.appender.camel-mdc.appender.append=true
...

The camel-mdc appender will create one log file per route (named camel-(routeId).log). The log messages will contain the exchange ID.

We start Karaf, and after the installation of the camel-blueprint feature, we can drop our route.xml directly in the deploy folder:

karaf@root()> feature:repo-add camel 2.12.1
Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.12.1/xml/features
karaf@root()> feature:install camel-blueprint
cp route.xml apache-karaf-3.0.1/deploy/

Using the log:display command in Karaf, we can see the messages for our route:

karaf@root()> log:display

2014-08-31 08:58:24,176 | INFO | 0 – timer://fire | blog | 85 – org.apache.camel.camel-core – 2.12.1 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:29,176 | INFO | 0 – timer://fire | blog | 85 – org.apache.camel.camel-core – 2.12.1 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]

Now, if we go into the Karaf data/log folder, we can see the log file for our route:

$ ls -1 data/log
camel-my-route.log
...

If we take a look at the camel-my-route.log file, we can see the messages contextual to the route, including the exchange ID:

2014-08-31 08:58:19,196 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-2 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:24,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-4 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:29,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-6 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]
2014-08-31 08:58:34,176 | INFO  | 0 - timer://fire | blog                             | ID-latitude-57336-1409468297774-0-8 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Blog]

Apache Karaf, Cellar, Camel, ActiveMQ monitoring with ELK (ElasticSearch, Logstash, and Kibana)

March 17, 2014 Posted by jbonofre

Apache Karaf, Cellar, Camel, and ActiveMQ provide a lot of information via JMX.

Moreover, another very useful source of information is the log files.

While these two sources are very interesting, for "real life" monitoring we need some additional features:

  • The JMX information and log messages should be stored in order to be queried later and to build a history. For instance, using jconsole, you can request all the JMX attributes to get the numbers, but these numbers have to be stored somewhere. It's quite the same for the logs: most of the time, you define a log file rotation, or you periodically clean up the logs, so the log messages should be stored as well to be queried later.
  • Numbers are good, graphics are even better. Once the JMX "numbers" are stored somewhere, a good feature is to use these numbers to create some charts. We can also define some kind of SLA: at some point, if a number is not "acceptable" (for instance greater than a "watermark" value), we should raise an alert.
  • For high availability and scalability, most production systems use multiple Karaf instances (synchronized with Cellar for instance). It means that the log files are spread over different machines. In that case, it's really helpful to "centralize" the log messages.

Of course, there are already open source solutions (zabbix, nagios, etc) or commercial solutions (dynatrace, etc) to cover these needs.

In this blog post, I just introduce a possible solution leveraging "big data" tools: we will see how to use the ELK (Elasticsearch, Logstash, and Kibana) stack.

Topology

For this example, let's say we have the following architecture:

  • node1 is a machine hosting a Karaf container with a set of Camel routes.
  • node2 is a machine hosting a Karaf container with another set of Camel routes.
  • node3 is a machine hosting an ActiveMQ broker (used by the Camel routes from node1 and node2).
  • monitor is a machine hosting the monitoring platform.

Locally on node1, node2, and node3, we install and configure logstash with both the file and JMX input plugins. This logstash instance will get the log messages and poll the JMX MBean attributes, and send them to a "central" Redis server (using the redis output plugin).
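
As a rough idea of what the node-side configuration looks like, here is a minimal logstash configuration sketch with a file input and a redis output (the log path and the monitor hostname are assumptions for this example; the jmx input comes from my contrib plugin and its configuration is not shown here):

input {
  file {
    path => "/opt/karaf/data/log/karaf.log"
  }
}

output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}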

On monitor, we install:

  • a redis server to receive the messages and events coming from the logstash instances installed on node1, node2, and node3
  • elasticsearch to store the messages and events
  • a first logstash instance acting as an indexer, taking the messages/events from redis and storing them into elasticsearch (including the update of the indexes, etc.)
  • a second logstash instance providing the kibana web console

Redis and Elasticsearch

Redis

Redis is a key-value store. But it can also act as a broker to receive the messages/events from the different logstash instances (on node1, node2, and node3).
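
Conceptually (just for illustration, logstash does this for you), the logstash shippers push the events to a Redis list and the indexer pops them from the same list, for instance:

src/redis-cli RPUSH logstash '{"message":"test event"}'
src/redis-cli LPOP logstash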

For the demo, I use Redis 2.8.7 (which you can download from http://download.redis.io/releases/redis-2.8.7.tar.gz).

We uncompress the redis tarball in the /opt/monitor folder:

cp redis-2.8.7.tar.gz /opt/monitor
tar zxvf redis-2.8.7.tar.gz

Now, we have to compile the Redis server on the machine. To do so, we execute make in the Redis src folder:

cd redis-2.8.7/src
make

NB: this step requires make and gcc installed on the machine.

make created a redis-server binary in the src folder. It’s the binary that we use to start Redis:

./redis-server --loglevel verbose
[12130] 16 Mar 21:04:28.387 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 3984.
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 2.8.7 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 12130
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

[12130] 16 Mar 21:04:28.388 # Server started, Redis version 2.8.7
[12130] 16 Mar 21:04:28.388 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[12130] 16 Mar 21:04:28.388 * The server is now ready to accept connections on port 6379
[12130] 16 Mar 21:04:28.389 - 0 clients connected (0 slaves), 443376 bytes in use

The Redis server is now ready to accept connections coming from the “remote” logstash instances.
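
A quick way to verify that the server actually answers is to use redis-cli (also built by make in the src folder) and send a PING; it should reply with PONG:

./redis-cli ping
PONG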

Elasticsearch

We use elasticsearch as the storage backend for all messages and events. For this demo, I use elasticsearch 1.0.1, which you can download from https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.0.1.tar.gz.

We uncompress the elasticsearch tarball in the /opt/monitor folder:

cp elasticsearch-1.0.1.tar.gz /opt/monitor
cd /opt/monitor
tar zxvf elasticsearch-1.0.1.tar.gz

We start elasticsearch with the bin/elasticsearch binary (the default configuration is OK):

cd elasticsearch-1.0.1
bin/elasticsearch
[2014-03-16 21:16:13,783][INFO ][node                     ] [Solarr] version[1.0.1], pid[12466], build[5c03844/2014-02-25T15:52:53Z]
[2014-03-16 21:16:13,783][INFO ][node                     ] [Solarr] initializing ...
[2014-03-16 21:16:13,786][INFO ][plugins                  ] [Solarr] loaded [], sites []
[2014-03-16 21:16:15,763][INFO ][node                     ] [Solarr] initialized
[2014-03-16 21:16:15,764][INFO ][node                     ] [Solarr] starting ...
[2014-03-16 21:16:15,902][INFO ][transport                ] [Solarr] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.134.11:9300]}
[2014-03-16 21:16:18,990][INFO ][cluster.service          ] [Solarr] new_master [Solarr][V9GO0DiaT4SFmRmxgwYv0A][vostro][inet[/192.168.134.11:9300]], reason: zen-disco-join (elected_as_master)
[2014-03-16 21:16:19,010][INFO ][discovery                ] [Solarr] elasticsearch/V9GO0DiaT4SFmRmxgwYv0A
[2014-03-16 21:16:19,021][INFO ][http                     ] [Solarr] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.134.11:9200]}
[2014-03-16 21:16:19,072][INFO ][gateway                  ] [Solarr] recovered [0] indices into cluster_state
[2014-03-16 21:16:19,072][INFO ][node                     ] [Solarr] started
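
To quickly verify that elasticsearch is up, we can hit its REST API (listening by default on port 9200, as shown in the log above); it returns a small JSON document containing the node name and version:

curl http://localhost:9200/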

Logstash

Logstash is a tool for managing events and logs.

It works with a chain of inputs, filters, and outputs.
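
To give an idea of this chain before looking at the actual configurations, here is a minimal, hypothetical configuration that reads lines from stdin, extracts a log level with a grok filter, and prints the resulting events to stdout (the configurations used later in this post don’t need a filter section, but this shows where one would fit):

input {
  stdin { }
}
filter {
  # extract the log level (INFO, WARN, ...) from the incoming line
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel}" ]
  }
}
output {
  stdout { codec => rubydebug }
}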

On node1, node2, and node3, we will set up logstash with:

  • a file input plugin to read the log files
  • a jmx input plugin to read the different MBeans attributes
  • a redis output to send the messages and events to the monitor machine.

For this blog post, I use a logstash 1.4 SNAPSHOT with a contribution that I made. You can find my modified plugin on my github: https://github.com/jbonofre/logstash-contrib.

The first thing to do is to check out the latest logstash codebase and build it:

git clone https://github.com/elasticsearch/logstash/
cd logstash
make tarball

It will create the logstash distribution tarball in the build folder.

We can install logstash in a folder (for instance /opt/monitor/logstash):

mkdir /opt/monitor
cp build/logstash-1.4.0.rc1.tar.gz /opt/monitor
cd /opt/monitor
tar zxvf logstash-1.4.0.rc1.tar.gz
rm logstash-1.4.0.rc1.tar.gz

JMX is not a “standard” logstash plugin. It’s a plugin from the logstash-contrib project. As I modified the logstash JMX plugin (to work “smoothly” with the Karaf MBeanServer), and while waiting for my pull request to be integrated in logstash-contrib (I hope ;)), you have to clone my github fork:

git clone https://github.com/jbonofre/logstash-contrib/
cd logstash-contrib
make tarball

We can add the contrib plugins into our logstash installation (in the /opt/monitor/logstash-1.4.0.rc1 folder):

cd build
tar zxvf logstash-contrib-1.4.0.beta2.tar.gz
cd logstash-contrib-1.4.0.beta2
cp -r * /opt/monitor/logstash-1.4.0.rc1

Our logstash installation is now ready, including the logstash-contrib plugins.

It means that on node1, node2, node3 and monitor, you should have the /opt/monitor/logstash-1.4.0.rc1 folder with the installation (you can use scp or rsync to install logstash on the machines).

Indexer

On the monitor machine, we have a logstash instance acting as an indexer: it gets the messages from redis and stores them in elasticsearch.

We create the /opt/monitor/logstash-1.4.0.rc1/conf/indexer.conf file containing:

input {
  redis {
    host => "localhost"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}

We can start logstash using this configuration file:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash -f conf/indexer.conf

Collector

On node1, node2, and node3, logstash will act as a collector:

  • the file input plugin will read the messages from the log files (you can configure multiple log files)
  • the jmx input plugin will periodically poll MBean attributes

Both will send messages to the redis server using the redis output plugin.

We create a folder /opt/monitor/logstash-1.4.0.rc1/conf. It’s where we store the logstash configuration. In this folder, we create a collector.conf file.

For node1 and node2 (both hosting a karaf container with camel routes), the collector.conf file contains:

input {
  file {
    type => "log"
    path => ["/opt/karaf/data/log/*.log"]
  }
  jmx {
    path => "/opt/monitor/logstash-1.4.0.rc1/conf/jmx"
    polling_frequency => 10
    type => "jmx"
    nb_thread => 4
  }
}
output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}

On node3 (hosting an ActiveMQ broker), the collector.conf is the same; only the location of the log file is different:

input {
  file {
    type => "log"
    path => ["/opt/activemq/data/*.log"]
  }
  jmx {
    path => "/opt/monitor/logstash-1.4.0.rc1/conf/jmx"
    polling_frequency => 10
    type => "jmx"
    nb_thread => 4
  }
}
output {
  redis {
    host => "monitor"
    data_type => "list"
    key => "logstash"
  }
}

The redis output plugin sends the messages/events to the redis server located on the “monitor” machine.

These messages and events come from two input plugins:

  • the file input plugin takes the path of the log files (using a glob pattern)
  • the jmx input plugin takes a folder. This folder contains JSON files (see below) with the MBean queries. The plugin executes the queries every 10 seconds (polling_frequency).

So, the jmx input plugin reads all files located in the /opt/monitor/logstash-1.4.0.rc1/conf/jmx folder.

On node1 and node2 (again, hosting a Karaf container with Camel routes), we want, for instance, to monitor the number of threads on the Karaf instance (using the Threading MBean) and a route named “route1” (using the Camel route MBean).
We specify this in the /opt/monitor/logstash-1.4.0.rc1/conf/jmx/karaf file:

{
  "host" : "localhost",
  "port" : 1099,
  "url" : "service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root",
  "username" : "karaf",
  "password" : "karaf",
  "alias" : "node1",
  "queries" : [
    {
      "object_name" : "java.lang:type=Threading",
      "object_alias" : "Threading"
    }, {
      "object_name" : "org.apache.camel:context=*,type=routes,name=\"route1\"",
      "object_alias" : "Route1"
    }
   ]
}

On node3, we will have a different JMX configuration file (for instance /opt/monitor/logstash-1.4.0.rc1/conf/jmx/activemq) containing the ActiveMQ MBeans that we want to query.
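
As a sketch, such a file could look like the following (the JMX port, credentials, and MBean object names are assumptions here; the ActiveMQ object names in particular depend on the broker version, as the naming scheme changed in ActiveMQ 5.8):

{
  "host" : "localhost",
  "port" : 1099,
  "url" : "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi",
  "username" : "admin",
  "password" : "admin",
  "alias" : "node3",
  "queries" : [
    {
      "object_name" : "org.apache.activemq:BrokerName=*,Type=Broker",
      "object_alias" : "Broker"
    }, {
      "object_name" : "org.apache.activemq:BrokerName=*,Type=Queue,Destination=*",
      "object_alias" : "Queues"
    }
   ]
}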

Now, we can start the logstash “collector”:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash -f conf/collector.conf

We can see the clients connected in the redis log:

[12130] 17 Mar 14:33:27.041 - Accepted 127.0.0.1:46598
[12130] 17 Mar 14:33:31.267 - 2 clients connected (0 slaves), 484992 bytes in use

and the data populated in the elasticsearch log:

[2014-03-17 14:21:59,539][INFO ][cluster.service          ] [Solarr] added {[logstash-vostro-32001-2010][dhXgnFLwTHmbdsawAEJbyg][vostro][inet[/192.168.134.11:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-vostro-32001-2010][dhXgnFLwTHmbdsawAEJbyg][vostro][inet[/192.168.134.11:9301]]{client=true, data=false}])
[2014-03-17 14:30:59,584][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] creating index, cause [auto(bulk api)], shards [5]/[1], mappings [_default_]
[2014-03-17 14:31:00,665][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] update_mapping [log] (dynamic)
[2014-03-17 14:33:28,247][INFO ][cluster.metadata         ] [Solarr] [logstash-2014.03.17] update_mapping [jmx] (dynamic)

Now, we have JMX data and log messages from different containers, brokers, etc stored in one centralized place (the monitor machine).

We can now add a web application to read the data and create charts using the data.

Kibana

Kibana is the web application provided with logstash. The default configuration uses the elasticsearch default port, so we just have to start Kibana on the monitor machine:

cd /opt/monitor/logstash-1.4.0.rc1
bin/logstash-web

We access Kibana at http://monitor:9292.

On the welcome page, we click on the “Logstash dashboard” link, and we arrive on a console looking like:
[screenshot: logstash1]

It’s time to configure Kibana.

We remove the default histogram and add a custom one to chart the thread count.

First, we create a query to isolate the thread count for node1. Kibana uses the Apache Lucene query syntax.
Our query is here very simple: metric_path:"node1.Threading.ThreadCount".

Now, we can create a histogram using this query, getting the metric_value_number:
[screenshot: kibana2]

Now, we want to chart the lastProcessingTime on the Camel route (to see for instance if the route takes more time at some point).
We create a new query to isolate the route1 lastProcessingTime on node1: metric_path:"node1.Route1.LastProcessingTime".

We can now create a histogram using this query, getting the metric_value_number:
[screenshot: kibana3]

For the demo, we can create a histogram chart to display the exchanges completed and failed for route1 on node1. We create two queries:

  • metric_path:"node1.Route1.ExchangesFailed"
  • metric_path:"node1.Route1.ExchangesCompleted"

We create a new chart in the same row:
[screenshot: kibana4]

We clean up the events panel a bit. We create a query to display only the log messages (not the JMX events): type:"log".
We configure the log event panel to change the name and use the log query:
[screenshot: kibana6]

We now have a kibana console looking like this:

[screenshot: kibana5]

With this very simple kibana configuration, we have:
– a chart of the thread count on node1
– a chart of the last processing time for route1 (on node1)
– a chart of the exchanges (failed/completed) for route1 (on node1)
– a view of all log messages

You can now play with Kibana and add a lot of new charts leveraging all the information that you have in elasticsearch (both log messages and JMX data).

Next

I’m working on some new Karaf, Cellar, ActiveMQ, Camel features providing “native” and “enhanced” support for logstash. The purpose is to just type feature:install monitoring to get:

  • jmx:* commands in Karaf
  • broadcast of events to elasticsearch
  • integration of redis, elasticsearch, and logstash in Karaf (to avoid installing them “externally” to Karaf) and ready-to-use configuration (pre-configured logstash jmx input plugin, pre-configured kibana console/charts, …).

If you have other ideas to enhance and improve monitoring in Karaf, Cellar, Camel, and ActiveMQ, don’t hesitate to propose them on the mailing lists ! Any idea is welcome.

Some book reviews: Instant Apache Camel Messaging System, Learning Apache Karaf, and Instant Apache ServiceMix How-To

November 21, 2013 Posted by jbonofre

I’m pleased to have been a reviewer of three new books published by Packt, covered below.

I received a “hard” copy from Packt (thanks for that), and I’m now able to do the review.

Instant Apache Camel Messaging System, by Evgeniy Sharapov. Published by Packt Publishing in September 2013

This book is a good introduction to Camel. It covers Camel fundamentals.

What is Apache Camel

It’s a quick introduction to Camel, in only four pages. We get a good overview of the Camel basics: what a component is, routes, contexts, EIPs, etc.

We have to take it for what it is: just a quick introduction. Don’t expect a lot of details about the Camel basics; it just provides a very high-level overview.

Installation

To be honest, I don’t like this part. It focuses mostly on using Maven with Camel: how to use Camel with Maven, integrate Camel in your IDE (Eclipse or IntelliJ), and use the archetypes.

I think it’s too restrictive. I would have preferred a quick listing of the different ways to install and use Camel: in a Karaf/ServiceMix container, in a Spring application context, in Tomcat or another application server, etc.

I’m afraid that some users will pick up “bad habits” from reading this part.

Quickstart

This part goes a bit deeper into CamelContext and RouteBuilder. It’s a good chapter, but I would have focused a bit more on the DSLs (at least Java, Spring, and Blueprint).

The example used is interesting as it uses different components, transformations, predicates, and expressions.

It’s a really good introduction.

Conclusion

It’s a good introduction book, but only for new Camel users. If you already know Camel, I’m afraid you will be disappointed and you won’t learn a lot.

If you are a Camel rookie rider and you want to move forward quickly, with a “ready to use” example, this book is a good one.

I would have expected more details on some key Camel features, especially the EIPs, and some real use cases combining EIPs with some components.

Learning Apache Karaf, by Jamie Goodyear, Johan Edstrom, and Heath Kesler. Published by Packt Publishing in October 2013

I helped a lot on this book and I would like to congratulate my friends Jamie Goodyear, Johan Edstrom, and Heath Kesler. You did a great job guys !

It’s the perfect book to start with Apache Karaf. All Karaf features are introduced, and more, like Karaf Cellar.

It’s based on Karaf 2.x (an update will be required for Karaf 3.0.0, as a lot of commands, etc. have changed).

The overall content is great for beginners. If you already know Karaf, you probably know most of the content; however, the book can be helpful to discover some features like Cellar.

Good job guys !

Instant Apache ServiceMix How-To, by Henryk Konsek. Published by Packt Publishing in June 2013

This book is a good complement to the Camel and Karaf ones. Unfortunately, some chapters are a bit redundant: you will find the same information in both books.

However, as Apache ServiceMix is powered by Karaf, starting from Learning Apache Karaf makes sense and gives you details about the core of ServiceMix (the “ServiceMix Kernel”, which is the genesis of Karaf ;)).

This book is a good entry point to ServiceMix.

I would have expected some details about the ServiceMix NMR (naming, for instance) and the different distributions.

ServiceMix is more than an umbrella project gathering Karaf, Camel, CXF, ActiveMQ, etc. It also provides some interesting features of its own, like Naming. It would have been great to introduce this.

Conclusion

These three books are great for beginners, especially the Karaf one.

I was really glad and pleased to review these books. It’s really a tough job to write this kind of book, and we have to congratulate the authors for their work.

It’s a great work guys !

Apache Karaf Cellar 2.3.0 released

May 24, 2013 Posted by jbonofre

The latest Cellar release (2.2.5) didn’t work with the new Karaf branch and release: 2.3.0.

While the first purpose of Cellar 2.3.0 is to be able to work with Karaf 2.3.x, it’s actually more than that.

Let’s take a tour in the new Apache Karaf Cellar 2.3.0.

Apache Karaf 2.3.x support

Cellar 2.3.0 is fully compatible with the Karaf 2.3.x branch.

Starting from Karaf 2.3.2, Cellar can be installed “out of the box”.
If you want to use Cellar with Karaf 2.3.0 or Karaf 2.3.1, in order to avoid some Cellar bootstrap issues, you have to add the following property in etc/config.properties:


org.apache.aries.blueprint.synchronous=true

Upgrade to Hazelcast 2.5

As you may know, Cellar is a clustered provisioning tool powered by Hazelcast.

We did a big jump: from Hazelcast 1.9 to Hazelcast 2.5.

Hazelcast 2.5 brings a lot of bug fixes and interesting new features. You can find more details here: http://www.hazelcast.com/docs/2.5/manual/multi_html/ch18s04.html.

In Cellar, all the Hazelcast configuration is performed using a single file: etc/hazelcast.xml.

Hazelcast 2.5 gives you more properties to configure your cluster and the behavior of the cluster events. The default configuration is largely enough for most use cases, but thanks to this Hazelcast version, you have the possibility to perform fine tuning.
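
For instance, a trimmed-down etc/hazelcast.xml could switch from multicast discovery to an explicit TCP/IP member list (just a sketch: the real file contains more sections, and the IP addresses below are placeholders):

<hazelcast>
  <network>
    <port auto-increment="true">5701</port>
    <join>
      <!-- disable multicast and list the cluster members explicitly -->
      <multicast enabled="false">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
      </multicast>
      <tcp-ip enabled="true">
        <member>192.168.1.10</member>
        <member>192.168.1.11</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>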

Moreover, some new features are interesting for Cellar, especially:

  • IPv6 support
  • more complete backup support, when a node is disconnected from the cluster
  • better security and encryption support
  • higher tolerance to connection failures
  • parallel IO support

Cluster groups persistence

In previous Cellar versions, the cluster groups were not stored and relied only on the cluster state. It means that it was possible to lose an existing cluster group if the group didn’t have any node.

Now, each node stores the list of cluster groups and its own membership.

This way, the cluster groups are persistent: we can restart the cluster without losing the “empty” cluster groups.

Cluster event producers, consumers, handlers status persistence

A Cellar node uses different components to manage cluster events:

  • the producer (one per node) is responsible for broadcasting cluster events to the other nodes
  • the consumer (one per node) receives cluster events and delegates the handling of the event to a handler
  • the handlers (one per resource type) handle specific cluster events (features, bundles, etc.) and update the node’s local state

The user has complete control over the producer, consumer, and handlers. It means that you can stop or start the node’s producer, consumer, or handlers.

The problem was that the current state of the producer/consumer/handler was not persistent. It means that a restart of the node would reset the producer/consumer/handler to the default state (and not the previous one).
To avoid this issue, the producer/consumer/handler state is now persisted on the local node.

Smart synchronization

The synchronization of the different resources supported by Cellar is now better than before. Cellar checks the local state of the node and computes a kind of diff between the local state and the state on the cluster. If the states differ, Cellar updates the local state as described on the cluster.

For configurations especially, to avoid excessive CPU consumption, some properties are not considered during the synchronization because they are local to the node (for instance, service.factoryPid).

A new command has been introduced (cluster:sync) to “force” the synchronization of the local node with the cluster. It’s interesting when the node has been disconnected from the cluster, and you want to re-sync as soon as possible.
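
For instance, after a node re-joins the cluster, you can force a re-synchronization directly from that node’s shell:

karaf@root> cluster:sync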

Improvement on Cellar Cloud support

My friend Achim (Achim Nierbeck) did a great job on the Cellar Cloud support.
First, he fixed some issues that we had in this module.

He gave a great demo during JAX: Integration In the Cloud With Camel, Karaf and Cellar.

Improvement on the cluster:* commands and MBeans

In order to be closer to the Karaf core commands, the cluster:* commands (and MBeans) now provide exactly the same options as the Karaf core commands.

And more is coming …

The first purpose of Cellar 2.3.0 is to provide a version ready to run on Karaf 2.3.x and to ensure stability. So I postponed some new features and improvements to Cellar 2.3.1.

In the meantime, I also released Cellar 2.2.6, containing mostly bug fixes (for those who still use Karaf 2.2.x with Cellar 2.2.x).

Apache Karaf 2.3.0 released !

October 16, 2012 Posted by jbonofre

While waiting for Karaf 3.0.0, we worked hard in the Karaf team to provide Apache Karaf 2.3.0.

The Karaf 2.2.x branch is now only in maintenance mode: it means that no new features will be implemented in this branch, only major bug fixes.

The new “stable” branch is now Karaf 2.3.x, which is a perfect transition branch between Karaf 2.2.x (heavily used) and the future Karaf 3.x (which should arrive very soon).

What’s new in this 2.3.0 release:

* OSGi r4.3: the Karaf 2.2.x branch was powered by OSGi frameworks implementing the OSGi r4.2 specification. Karaf 2.3.0 is now powered by the new OSGi r4.3 frameworks (Apache Felix 4.0.3 and Equinox 3.8.x), for both OSGi core and compendium. It provides new features like weaving, etc.
* Aries Blueprint 1.0.x: Karaf 2.3.0 uses the new Aries Blueprint version at different level (core, JMX, etc).
* Update to ASM 4.0: in order to work with Aries proxies, we updated to new ASM bundle. We also provided configuration that allows you to enable/disable weaving.
* OSGi Regions and SCR support: Karaf 2.3.0 provides both Regions and SCR support.
* JMX improvement: the previous MBeanRegistrer from Karaf 2.2.x has been removed and replaced by Aries JMX. It provides an easier way to integrate MBeans by registering them as OSGi services. The MBeans have been improved to provide new operations and options corresponding to what you can do using the shell commands.
* Complete itest framework: Karaf 2.3.0 provides a new tool: Karaf exam. This tool provides a framework to very easily implement integration tests. It’s able to download and bootstrap a Karaf version on which you can run your commands, deploy your features and bundles, etc. It allows you to run a complete integration test chain from Maven.
* Dependencies upgrade: a lot of dependencies have been updated. Karaf 2.3.0 uses Pax Logging 1.7.0 including bug fixes and SLF4J 1.7.1 support, new Pax Web and Jetty version for the web container, new JLine, SSHD and Mina versions which especially fix weird behavior on Windows for certain keys, etc.
* KAR improvements: while Karaf 3.x will provide a lot of enhancements around the KAR files, Karaf 2.3.0 already provides fixes in the KAR lifecycle.
* JAAS commands improvements: the jaas:* commands have been enhanced to allow fine-grained management of the realms and login modules.

You can find the Karaf 2.3.0 content details on the Release Notes.

The Karaf team is proud to provide this release to you. We hope you will enjoy it !

Apache Karaf Cellar 2.2.4

May 20, 2012 Posted by jbonofre

Apache Karaf Cellar 2.2.4 has been released. This release is a major release, including a bunch of bug fixes and new features.

Here’s the list of key things included in this release.

Consistent behavior

Cellar is composed of two parts:

  • the distributed resources, a data grid maintained by each cluster node and containing the current cluster status (for instance, the status of the bundles, features, etc.)
  • the cluster events, which are broadcast from one node to the others

Cluster shell commands, cluster MBeans, synchronizers (called at startup), and listeners (called when a local event is fired, such as a feature being installed) update the distributed resources and broadcast cluster events.

To broadcast cluster events, we use an event producer. A cluster event is consumed by a consumer, which delegates the handling of the cluster event to a handler. We have a handler for features, bundles, etc.

Now, all Cellar “producers” do:

  1. check if the cluster event producer is ON
  2. check if the resource is allowed, checking in the blacklist/whitelist configuration
  3. update the distributed resources
  4. broadcast the cluster event

Only use hazelcast.xml

The org.apache.karaf.cellar.instance.cfg file has disappeared. It’s now fully replaced by etc/hazelcast.xml.

It fixes issues around the network configuration and allows new configuration options, especially around encryption.

OSGi event support

The cellar-event feature now provides OSGi event support in Cellar. It uses the EventAdmin layer. Every local event generates a cluster event which is broadcast to the cluster, allowing remote nodes to stay in sync.
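
For instance, a bundle posting a plain local OSGi event via the standard EventAdmin service will see that event propagated to the other nodes of the cluster group. This is just a minimal sketch (the topic and property names are made up):

import java.util.HashMap;
import java.util.Map;

import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

public class Notifier {

    // injected, for instance via a Blueprint <reference interface="org.osgi.service.event.EventAdmin"/>
    private EventAdmin eventAdmin;

    public void setEventAdmin(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    public void notifyDeployment(String artifact) {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("artifact", artifact);
        // post a local OSGi event; with cellar-event installed, it is also broadcast to the cluster nodes
        eventAdmin.postEvent(new Event("sample/deployment", properties));
    }

}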

Better shell commands

Now, all cluster:* shell commands mimic the core Karaf commands. It means that you will find mostly the same arguments and options, and similar output.

The cluster:group-* shell commands have been improved and fixed.

A new shell command has been introduced: cluster:config-propappend, to append a String to a configuration property.

Check everywhere

We added a bunch of checks to make sure the cluster is in a consistent state and behaves predictably.

It means that the MBeans and shell commands check if a cluster group exists, if a cluster event producer is on, if a resource is allowed on the cluster (for the given cluster group), etc.

You get clean messages informing you about the current status of your commands.

Improvement on the config layer

The Cellar config layer has been improved. It now uses a karaf.cellar.sync property to avoid infinite loops. Support for the config delete operation has been added, including the cluster:config-delete command.

Feature repositories

Previously, the handling of features repositories was hidden from the users.

Now, you have full access to the set of distributed features repositories. It means that you can see the distributed repositories for a cluster group, add a new features repository to a cluster group, and remove a features repository from a cluster group.

To do that, you have the cluster:feature-url-* shell commands.
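
A hypothetical session could look like this (I assume here that the commands take the cluster group name followed by the features repository URL; the Camel features URL is only an example):

karaf@root> cluster:feature-url-add default mvn:org.apache.camel.karaf/apache-camel/2.10.4/xml/features
karaf@root> cluster:feature-url-list default
karaf@root> cluster:feature-url-remove default mvn:org.apache.camel.karaf/apache-camel/2.10.4/xml/features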

CellarOBRMBean

Cellar provides an MBean for each kind of cluster resource (bundles, features, config, etc.).

However, if a user installed the cellar-obr feature, they got the cluster:obr-* shell commands but no corresponding MBean.

The CellarOBRMBean has been introduced and is installed with the cellar-obr feature.

Summary

Karaf Cellar 2.2.4 is really a major release, and I think it should have been named 2.3.0 due to the number of bug fixes and new features: we fixed 77 Jira issues in this release and performed a lot of manual tests.

The quality has been heavily improved in this release compared to the previous one.

I encourage all Cellar users to update to Karaf Cellar 2.2.4 and I hope you will be pleased with this new release 😉

Apache Karaf Cellar and DOSGi

November 29, 2011 Posted by jbonofre

The next version of Apache Karaf Cellar will include a DOSGi (Distributed OSGi) implementation.

There are several existing DOSGi implementations: in Apache CXF (powered by the CXF stack), in Fuse Fabric, etc.

The purpose of the Cellar one is to leverage the existing Cellar infrastructure (Hazelcast instance, distributed map, etc.), and to be very easy to use.

Karaf Cellar DOSGi installation

Cellar DOSGi is part of the main cellar feature. To install it:


karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.0-SNAPSHOT/xml/features
karaf@root> features:install cellar

Distributed services

You can note a new command available in Cellar:


karaf@root> cluster:list-services

It displays the list of “cluster aware” services, i.e. services that can be used remotely from another node.

Code sample

To illustrate the Cellar DOSGi usage, we will use two bundles:

  • the provider bundle is installed on node A and exposes a distributed service
  • the client bundle is installed on node B and will use the provider service

Provider bundle

The provider bundle exposes an OSGi service, flagged as a distributed service.

The service is very simple, it’s just an echo service.

Here’s the interface describing the service:


package org.apache.karaf.cellar.sample;

public interface EchoService {

  String process(String message);

}

and the corresponding implementation:


package org.apache.karaf.cellar.sample;

public class EchoServiceImpl implements EchoService {

  public String process(String message) {
    return "Processed by distributed service: " + message;
  }

}

Up to now, nothing special, nothing related to DOSGi or Cellar.

To expose the service in Karaf, we create a blueprint descriptor:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <bean id="echoService" class="org.apache.karaf.cellar.sample.EchoServiceImpl"/>

  <service ref="echoService" interface="org.apache.karaf.cellar.sample.EchoService">
    <service-properties>
      <entry key="service.exported.interfaces" value="*"/>
    </service-properties>
  </service>

</blueprint>

We can note that the only “special” part is that we added the service.exported.interfaces property to the echoService.

It’s just a flag defining which interfaces/services to expose as distributed (meaning accessible from a remote node).

We didn’t change the code of the service itself, we just added this property. It means that it’s really easy to turn an existing service into a distributed service.

Client bundle

The client bundle will get a reference to the echoService. In fact, the reference will be a kind of proxy to the service implementation located remotely, on another node.

The client is really simple: it iterates indefinitely, calling the echoService:


package org.apache.karaf.cellar.sample.client;

public class ServiceClient {

  private EchoService echoService;

  public void setEchoService(EchoService echoService) {
    this.echoService = echoService;
  }

  public EchoService getEchoService() {
    return this.echoService;
  }

  public void process() throws Exception {
    int count = 0;
    while (true) {
      System.out.println(echoService.process("Call " + count));
      Thread.sleep(5000);
      count++;
    }
  }

}

We inject the echoService using Blueprint:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <reference id="echoService" interface="org.apache.karaf.cellar.sample.EchoService"/>

  <bean id="serviceClient" class="org.apache.karaf.cellar.sample.client.ServiceClient" init-method="process">
    <property name="echoService" ref="echoService"/>
  </bean>

</blueprint>

It’s done. The serviceClient will use the echoService. If a “local” echoService exists, the OSGi framework will bind the reference to this service; otherwise, Cellar will look for a distributed service (on all nodes) exporting the EchoService interface and bind a proxy to the distributed service.