Posts Tagged: ‘cellar’

Apache Karaf Cellar 2.3.0 released

May 24, 2013 Posted by jbonofre

The latest Cellar release (2.2.5) didn’t work with the new Karaf 2.3.x branch and its 2.3.0 release.

While the first purpose of Cellar 2.3.0 is to work with Karaf 2.3.x, it actually brings more than that.

Let’s take a tour of the new Apache Karaf Cellar 2.3.0.

Apache Karaf 2.3.x support

Cellar 2.3.0 is fully compatible with the Karaf 2.3.x branch.

Starting from Karaf 2.3.2, Cellar can be installed “out of the box”.
If you want to use Cellar with Karaf 2.3.0 or Karaf 2.3.1, in order to avoid a Cellar bootstrap issue, you have to add the following property to etc/config.properties:


org.apache.aries.blueprint.synchronous=true

Upgrade to Hazelcast 2.5

As you may know, Cellar is a clustered provisioning tool powered by Hazelcast.

We did a big jump: from Hazelcast 1.9 to Hazelcast 2.5.

Hazelcast 2.5 brings a lot of bug fixes and interesting new features. You can find more details here: http://www.hazelcast.com/docs/2.5/manual/multi_html/ch18s04.html.

In Cellar, all the Hazelcast configuration is performed using a single file: etc/hazelcast.xml.

Hazelcast 2.5 gives you more properties to configure your cluster and the behavior of cluster events. The default configuration is more than enough for most use cases but, thanks to this Hazelcast version, you now have the possibility to perform fine tuning.
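For instance, here is a minimal sketch of what the network section of etc/hazelcast.xml could look like when switching from the default multicast discovery to an explicit TCP-IP member list (the IP addresses are illustrative; see the Hazelcast 2.5 documentation for the full set of options):

<hazelcast>
  <network>
    <port auto-increment="true">5701</port>
    <join>
      <!-- disable the default multicast discovery -->
      <multicast enabled="false">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
      </multicast>
      <!-- declare the cluster members explicitly -->
      <tcp-ip enabled="true">
        <member>192.168.134.3</member>
        <member>192.168.134.4</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>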

Moreover, some new features are especially interesting for Cellar:

  • IPv6 support
  • more complete backup support, when a node is disconnected from the cluster
  • better security and encryption support
  • higher tolerance to connection failures
  • parallel IO support

Cluster groups persistence

In previous Cellar versions, the cluster groups were not stored and relied only on the cluster state. It meant that it was possible to lose an existing cluster group if the group didn’t have any node.

Now, each node stores the cluster groups list, and its membership.

This way, the cluster groups are persistent: we can restart the whole cluster and we won’t lose the “empty” cluster groups.

Cluster event producers, consumers, handlers status persistence

A Cellar node uses different components to manage cluster events:

  • the producer (one per node) is responsible for broadcasting cluster events to the other nodes
  • the consumer (one per node) receives cluster events and delegates the handling of each event to a handler
  • the handlers (one per resource type) handle specific cluster events (features, bundles, etc.) and update the node’s local state

The user has complete control over the producer, consumer, and handlers: you can stop or start the node’s producer, consumer, or any handler.

The problem was that the current state of the producer/consumer/handlers was not persistent: a restart of the node reset the producer/consumer/handlers to the default state (and not the previous one).
To avoid this issue, the producer/consumer/handler state is now persisted on the local node.
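For instance, you can check or change these states from the shell (a sketch; the command names follow the cluster:producer-*/consumer-*/handler-* pattern, and without a node argument they apply to the local node):

karaf@root> cluster:producer-status
karaf@root> cluster:producer-stop
karaf@root> cluster:consumer-status
karaf@root> cluster:handler-status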

Smart synchronization

The synchronization of the different resources supported by Cellar has been improved. Cellar now checks the local state of the node and computes a kind of diff between the local state and the state on the cluster. If the states differ, Cellar updates the local state as described on the cluster.

For configurations especially, to avoid high CPU consumption, some properties are not considered during synchronization because they are local to the node (for instance, service.factoryPid).

A new command has been introduced (cluster:sync) to “force” the synchronization of the local node with the cluster. It’s useful when the node has been disconnected from the cluster and you want to re-sync it as soon as possible.
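For instance, on a node that just re-joined the cluster (a minimal sketch):

karaf@root> cluster:sync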

Improvement on Cellar Cloud support

My friend Achim (Achim Nierbeck) did a great job on the Cellar Cloud support.
First, he fixed some issues that we had in this module.

He gave a great demo during JAX: Integration In the Cloud With Camel, Karaf and Cellar.

Improvement on the cluster:* commands and MBeans

In order to be closer to the Karaf core commands, the cluster:* commands (and MBeans) now provide exactly the same options as the Karaf core commands.

And more is coming …

The first purpose of Cellar 2.3.0 is to provide a version ready to run on Karaf 2.3.x and to ensure stability. So I postponed some new features and improvements to Cellar 2.3.1.

In the meantime, I also released Cellar 2.2.6, containing mostly bug fixes (for those who still use Karaf 2.2.x with Cellar 2.2.x).

Load balancing with Apache Karaf Cellar, and mod_proxy_balancer

February 3, 2013 Posted by jbonofre

Thanks to Cellar, you can deploy your applications, CXF services, Camel routes, … on several Karaf nodes.

When you use Cellar with web applications, or CXF/HTTP endpoints, a “classic” need is to load balance the HTTP requests across the Karaf nodes.

You have different ways to do that:
– using the Camel Load Balancer EIP: it’s an interesting EIP, working with any kind of endpoint. However, it requires a Karaf instance running the load balancer routes, which is not always possible depending on the user’s security policy (for instance, placing it in a DMZ)
– using hardware appliances like F5, Juniper, Cisco: it’s a very good, “classic” solution in network teams. However, it requires expensive hardware, which is not easy to buy and set up for tests or “small” solutions.
– using Apache httpd with mod_proxy_balancer: it’s the solution that I’m going to detail. It’s a very stable, powerful, and easy to set up solution. And it costs nothing 😉

For instance, say you have three Karaf nodes exposing the following services at these addresses:
– http://192.168.134.3:8040/services
– http://192.168.134.4:8040/services
– http://192.168.134.5:8040/services

We want to load balance those three nodes.

On a dedicated server (it could also be one of the servers hosting Karaf), we just install Apache httpd:


# on Debian/Ubuntu system
aptitude install apache2


# on RHEL/CentOS/Fedora system
yum install httpd
# enable network connect on httpd
/usr/sbin/setsebool -P httpd_can_network_connect 1

Apache httpd comes with the mod_proxy, mod_proxy_http, and mod_proxy_balancer modules. Just check that those modules are loaded in the main httpd.conf.
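For instance (module paths may vary depending on the distribution):

# on Debian/Ubuntu system
a2enmod proxy proxy_http proxy_balancer

# on RHEL/CentOS/Fedora system, check for these lines in httpd.conf
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so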

You can now create a new configuration for your load balancer (directly in the main httpd.conf or by creating a conf file in etc/httpd/conf.d):


<Proxy balancer://mycluster>
  BalancerMember http://192.168.134.3:8040
  BalancerMember http://192.168.134.4:8040
  BalancerMember http://192.168.134.5:8040
</Proxy>
ProxyPass /services balancer://mycluster

The load balancer will proxy the /services requests to the different Karaf nodes.

By default, the mod_proxy_balancer module uses the byrequests algorithm: all nodes will receive the same number of requests.
You can switch to bytraffic (using lbmethod=bytraffic in the proxy configuration): in that case, all nodes will receive the same amount of traffic (in KB).
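For instance, a variant of the previous configuration using bytraffic could look like this (a sketch):

<Proxy balancer://mycluster>
  BalancerMember http://192.168.134.3:8040
  BalancerMember http://192.168.134.4:8040
  BalancerMember http://192.168.134.5:8040
  ProxySet lbmethod=bytraffic
</Proxy>
ProxyPass /services balancer://mycluster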

The mod_proxy_balancer module is able to support session “affinity” if your application needs it.
When a request is proxied to a given back-end, all following requests from the same user should be proxied to the same back-end.
For instance, you can use a cookie in the header to define the session affinity:


Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://mycluster>
  BalancerMember http://192.168.134.3:8040 route=1
  BalancerMember http://192.168.134.4:8040 route=2
ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /myapp balancer://mycluster

The mod_proxy_balancer module also provides a web manager allowing you to see whether your Karaf nodes are up or not, the number of requests received by each node, and the current lbmethod in use.

To enable this balancer manager, you just have to add a dedicated handler:


<Location /balancer-manager>
  SetHandler balancer-manager
  Order allow,deny
  Allow from all
</Location>

Point your browser to http://host/balancer-manager and you will see the manager page.

You can find more information about mod_proxy_balancer here: http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html.

Apache httpd with mod_proxy_balancer is an easy and solid HTTP load balancing solution in front of Karaf and Cellar.

Apache Karaf Cellar 2.2.4

May 20, 2012 Posted by jbonofre

Apache Karaf Cellar 2.2.4 has been released. This release is a major release, including a bunch of bug fixes and new features.

Here’s the list of key things included in this release.

Consistent behavior

Cellar is composed of two parts:

  • the distributed resources: a data grid maintained by each cluster node, containing the current cluster status (for instance, the state of bundles, features, etc.)
  • the cluster events, which are broadcast from one node to the others

Cluster shell commands, cluster MBeans, synchronizers (called at startup) and listeners (called when a local event is fired, such as a feature being installed) update the distributed resources and broadcast cluster events.

To broadcast cluster events, we use an event producer. A cluster event is consumed by a consumer, which delegates the handling of the cluster event to a handler. We have a handler for features, bundles, etc.

Now, all Cellar “producers” do:

  1. check if the cluster event producer is ON
  2. check if the resource is allowed, by checking the blacklist/whitelist configuration
  3. update the distributed resources
  4. broadcast the cluster event

Only use hazelcast.xml

The org.apache.karaf.cellar.instance.cfg file has disappeared. It’s now fully replaced by etc/hazelcast.xml.

This fixes issues around the network configuration and allows new configuration options, especially around encryption.

OSGi event support

The cellar-event feature now provides OSGi event support in Cellar. It uses the EventAdmin layer. Each local event generates a cluster event which is broadcast to the cluster, allowing the remote nodes to be synced.
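For instance, to enable it on a node:

karaf@root> features:install cellar-event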

Better shell commands

Now, all cluster:* shell commands mimic the core Karaf commands. It means that you will find much the same arguments and options, and similar output.

The cluster:group-* shell commands have been improved and fixed.

A new shell command has been introduced: cluster:config-propappend, to append a String to a config property.
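A hedged usage sketch, assuming the same argument order as cluster:config-propset (cluster group, configuration PID, key, value to append):

karaf@root> cluster:config-propappend default my.pid mykey additionalValue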

Check everywhere

We added a bunch of checks to be sure to have a consistent situation on the cluster and predictable behavior.

It means that the MBeans and shell commands check if a cluster group exists, if a cluster event producer is on, if a resource is allowed on the cluster (for the given cluster group), etc.

You get clear messages informing you about the current status of your commands.

Improvement on the config layer

The Cellar config layer has been improved. It now uses a karaf.cellar.sync property to avoid infinite loops. Support for the config delete operation has been added, including the cluster:config-delete command.
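For instance (a sketch; the arguments are assumed to be the cluster group and the configuration PID):

karaf@root> cluster:config-delete default my.pid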

Feature repositories

Previously, the handling of the features repositories was hidden from the users.

Now, you have a full access to the distributed features repositories set. It means that you can see the distributed repositories for a cluster group, add a new features repository to a cluster group, and remove a features repository from a cluster group.

To do that, you have the cluster:feature-url-* shell commands.
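For instance (a sketch; the exact command names are assumed from the cluster:feature-url-* pattern, and the Camel features URL is only an illustration):

karaf@root> cluster:feature-url-list default
karaf@root> cluster:feature-url-add default mvn:org.apache.camel.karaf/apache-camel/2.10.0/xml/features
karaf@root> cluster:feature-url-remove default mvn:org.apache.camel.karaf/apache-camel/2.10.0/xml/features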

CellarOBRMBean

Cellar provides an MBean for each part of the cluster resources (bundles, features, config, etc).

However, if a user installed the cellar-obr feature, they got the cluster:obr-* shell commands but no corresponding MBean.

The CellarOBRMBean has been introduced and is installed with the cellar-obr feature.

Summary

Karaf Cellar 2.2.4 is really a major release, and I think it should have been named 2.3.0 due to the number of bug fixes and new features: we fixed 77 Jiras in this release and performed a lot of manual tests.

The quality has been heavily improved in this release compared to the previous one.

I encourage all Cellar users to update to Karaf Cellar 2.2.4 and I hope you will be pleased with this new release 😉

Apache Karaf Cellar and central management

February 29, 2012 Posted by jbonofre

Introduction

One of the primary purposes of Apache Karaf Cellar is to synchronize the state of each Karaf instance in the Cellar cluster.

It means that any change performed on one node (install a feature, start a bundle, add a config, etc) generates a “cluster event” which is broadcast by Cellar to all other nodes. The target nodes handle the “cluster event” and perform the corresponding action (install a feature, start a bundle, add a config, etc).

By default, the nodes have the same role. It means that you can perform actions on any node.

But, you may prefer to have one node dedicated to the management. That’s what we call “central management”.

Central management

With central management, one node is identified as the manager. It means that cluster actions will be performed only on this node. The manager is the only one able to produce cluster events. The managed nodes are only able to receive and handle events, not to produce them.

With this approach, you can give access (for instance, grant SSH or JMX access) to the manager node to an operator and manage the cluster (and all cluster groups) from this manager node.

We will see now how to implement central management.

We assume that we have three nodes:

  • manager
  • node1
  • node2

First, we install Cellar on the three nodes:


karaf@manager> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.4-SNAPSHOT/xml/features
karaf@manager> features:install cellar


karaf@node1> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.4-SNAPSHOT/xml/features
karaf@node1> features:install cellar


karaf@node2> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.4-SNAPSHOT/xml/features
karaf@node2> features:install cellar

It’s a default installation, nothing special here.

Now, we disable event production on node1 and node2.

To do that, from the manager:


karaf@manager> cluster:producer-stop node1:5702
Node Status
node1:5702 false
karaf@manager> cluster:producer-stop node2:5703
Node Status
node2:5703 false

We can check that the producers are really stopped:


karaf@manager> cluster:producer-status node1:5702
Node Status
node1:5702 false
karaf@manager> cluster:producer-status node2:5703
Node Status
node2:5703 false

Whereas the manager is still able to produce events:


karaf@manager> cluster:producer-status
Node Status
manager:5701 true

Now, for instance, we can install the eventadmin feature from the manager:


karaf@manager> cluster:features-install default eventadmin

We can see that this feature has been installed on node1:


karaf@node1> features:list|grep -i eventadmin
[installed ] [2.2.5 ] eventadmin karaf-2.2.5

and node2:


karaf@node2> features:list|grep -i eventadmin
[installed ] [2.2.5 ] eventadmin karaf-2.2.5

Now, we uninstall this feature from the manager:


karaf@manager> cluster:features-uninstall default eventadmin

We can see that the feature has been uninstalled from node1:


karaf@node1> features:list|grep -i eventadmin
[uninstalled] [2.2.5 ] eventadmin karaf-2.2.5

And now, we try to install the eventadmin feature on the cluster from node1:


karaf@node1> cluster:features-install default eventadmin

The event is not generated and the other nodes are not updated.

However, two kinds of events are always produced (even if the producer is stopped):

  • the events with the “force” flag. When you create an event (programmatically), you can use setForce(true) to force the event production.
  • the “admin” events, like the events produced to stop/start a producer, consumer, handler in the cluster.

NB: the event production handling has been largely improved in Cellar 2.2.4.

Coming in next Cellar release

Currently, the cellar feature installs the Cellar core bundles, but also the Cellar shell commands and the MBeans.

While that makes sense for the manager, the managed nodes should not “display” the cluster:* commands, as they are managed by the manager.

I will make a little refactoring of the Cellar features to split them in two parts:

  • the cellar-core feature will install hazelcast, and Cellar core bundles
  • the cellar-manager feature will install the cluster:* shell commands and the MBeans

Apache Karaf Cellar 2.2.2 release

August 8, 2011 Posted by jbonofre

What’s new

About one month ago, we released Karaf Cellar 2.2.1, the first “official” release of the Karaf clustering sub-project.

This new Karaf Cellar 2.2.2 release includes bug fixes; one bug especially was a blocker, as it was not possible to install Cellar on a Karaf instance running on the Equinox OSGi framework.

But it’s not just a bug fix release: we merged two features from the Cellar trunk.

Bundle synchronization

In Karaf Cellar 2.2.1, we were able to synchronize features (including features repositories) and configuration between Karaf Cellar instances. It means that you can install a feature on one node (cluster:features-install group feature), and the feature will be installed on each Karaf node.

Karaf Cellar 2.2.2 includes the same behavior for pure OSGi bundles. You can install a bundle on one node, and the bundle will be installed on all other nodes in the same cluster group.


karaf@root> osgi:install mybundle

mybundle will be installed on all nodes in the same cluster group.

It’s a first step: as we have for features and config, we will add specific commands to manipulate bundles, something like:


karaf@root> cluster:install-bundle group mybundle

Cloud support

Cellar relies on Hazelcast in order to discover cluster nodes. This can happen either by using multicast or by unicast (specifying the IP address of each node).
Unfortunately, multicast is not allowed by most IaaS providers, and specifying all the IP addresses is not very flexible, since in most cases they are not known in advance.

Cellar solves this problem using a cloud discovery service powered by jclouds.

Cloud discovery service

Most cloud providers provide, among other services, cloud storage. Cellar uses the cloud storage, via jclouds, to store the IP address of each node so that Hazelcast can find them.
This approach is also called a blackboard: each node registers itself in a common storage, so that the other nodes know of its existence.

Installing Cellar cloud discovery service

To install the cloud discovery service, simply install the appropriate jclouds provider and then the cellar-cloud feature. For the rest of this post I will use Amazon S3 as an example, but it applies to any provider supported by jclouds.


karaf@root> features:install jclouds-aws-s3
karaf@root> features:install cellar-cloud

Once the feature is installed, it requires you to create a configuration that contains the credentials and the type of the cloud storage (aka blobstore).
To do that, add a configuration file under etc with the name org.apache.karaf.cellar.cloud-<provider>.cfg and put the following information in it:

provider=aws-s3 (this varies according to the blobstore provider)
identity="the identity of the blobstore account"
credential="the credential/password of the blobstore account"
container="the name of the bucket"
validity="the amount of time an entry is considered valid; after that time the entry is removed"
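For instance, for the aws-s3 provider, the file could look like this (a sketch with illustrative placeholder values):

# etc/org.apache.karaf.cellar.cloud-aws-s3.cfg
provider=aws-s3
identity=myAccessKeyId
credential=mySecretAccessKey
container=cellar-discovery
validity=360000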

After creating the file, the service will check for new nodes. If new nodes are found, the Hazelcast instance configuration is updated and the instance is restarted.

Apache Karaf Cellar 2.2.1 Released

July 11, 2011 Posted by jbonofre

Apache Karaf Cellar 2.2.1 has been released today.

Cellar is a Karaf sub-project which aims to provide a clustering solution for Karaf.

Quick start

To enable Cellar in a Karaf instance, you just have to install the Cellar feature.

First, register the Cellar features descriptor in your running Karaf instance:

karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.1/xml/features

Now, you can see the Cellar features available:

karaf@root> features:list|grep -i cellar
[uninstalled] [2.2.1 ] cellar Karaf clustering
[uninstalled] [2.2.1 ] cellar-webconsole Karaf Cellar Webconsole Plugin

To start Cellar, install the cellar feature:

karaf@root> features:install cellar

It’s done: your Karaf instance is Cellar cluster ready.

You can see your cluster node ID and, possibly, the other cluster nodes:

karaf@root> cluster:list-nodes
No. Host Name Port ID
* 1 node1.local 5701 node1.local:5701
2 node2.local 5702 node2.local:5702

The * indicates your local node (on which you are connected).

You can ping a given node to see how the network behaves:

karaf@root> cluster:ping node2.local:5702
Pinging node :node2.local:5702
PING 1 node2.local:5702 82ms
PING 2 node2.local:5702 11ms
PING 3 node2.local:5702 14ms

Now, if you install a feature, it will be installed on all nodes. For instance, if you install the eventadmin feature on node1:

karaf@node1> features:install eventadmin
karaf@node1> features:list|grep -i eventadmin
[installed ] [2.2.1 ] eventadmin karaf-2.2.1

you can see it installed on node2:

karaf@node2> features:list|grep -i eventadmin
[installed ] [2.2.1 ] eventadmin karaf-2.2.1

Cellar groups

In Karaf Cellar, you can define cluster groups. A cluster group allows you to select which nodes and resources are involved in the group.

By default, you have the default group:

karaf@root> cluster:group-list
Node Group
* node1.local:5701 default
node2.local:5702 default

You can create a cluster group, for instance test:

karaf@root> cluster:group-create test
Name test
Members []


For now, the group doesn’t have any member:

karaf@root> cluster:group-list
Node Group
node1.local:5701 default
* node2.local:5702 default
test

We can add a node as a member of a cluster group:

karaf@root> cluster:group-join test node1.local:5701
Node Group
node1:5701 default
* node2:5702 default
node1:5701 test

The node could be local or remote.

Cluster units

Currently, Apache Karaf Cellar is able to manage features and configuration. It’s based on an event-driven model.

It means that Cellar listens for some events (such as install/start/remove features, or set configuration) and fires the corresponding actions on all nodes.

Features

You can list the features of a given group:

karaf@root> cluster:features-list test

The feature events (install, uninstall) are synced between all members of the same cluster group.

Configuration

Cellar is able to manage configuration events, as it does for features.

You can see the configuration PIDs associated with a given cluster group:

karaf@root> cluster:config-list test
No PIDs found for group:test

You can create a configuration (with a PID) directly in a given group:

karaf@root> cluster:config-propset test my mykey myvalue

Now, we can see this configuration in the cluster memory:

karaf@root> cluster:config-list test
PIDs for group:test
PID
my

and the value in this configuration PID:

karaf@root> cluster:config-proplist test my
Property list for PID:my for group:test
Key Value
mykey myvalue

Now, you can see that the cluster group test doesn’t have any member for now:

karaf@root> cluster:group-list
Node Group
* node1.local:5701 default
test

If I list the configuration on my node1 local instance, the my PID is not known:

karaf@root> config:list "(pid=my)"

Now, node1 joins the test cluster group:

karaf@root> cluster:group-join test node1.local:5701

And now, node1.local inherits the configuration defined on the cluster:

karaf@root> config:edit my
karaf@root> proplist
service.pid = my
mykey = myvalue

Now, time to work 🙂

OK, we have a first release of Karaf Cellar, mainly providing discovery, group management, and state replication between Karaf instances. The bundle events support is also mostly implemented (it just needs to be finalized).

We are going to add JMX Cellar MBeans and logging messages with Mike Van. As Cellar is fully based on Blueprint, adding MBeans should be very easy and quick 🙂

But it’s not enough 🙂

Now, I’m sure you have a question: is it possible to have a kind of shared OSGi service registry, with proxies, and to be able (transparently or not) to use a remote service? Do we have DOSGi support?

The short answer is: not in Cellar 2.2.1.

But we are working on these two topics (I call them “distributed service registry” and “transport”) and they should be included in the next Cellar releases.

I hope you enjoy this new Karaf sub-project.