Apache Karaf Cave preview

August 25, 2011 Posted by jbonofre

During the Karaf birthday conference call, the Karaf team floated the idea of implementing an easy-to-use OBR (OSGi Bundle Repository) server, and of extending it to provide a Karaf Features Repository (KFR).

Welcome to Apache Karaf Cave 😉

Cave is an Apache Karaf sub-project.

At its core, OBR is a service (the RepositoryAdmin service) that can automatically install a bundle, together with its deployment dependencies, from a repository.

Cave is a work in progress in the Karaf Sandbox SVN. I think it's time to provide an initial overview of what we can do with Cave.

Cave already provides the following features:
– Storage: Cave includes a storage backend to host the artifacts. The default storage is filesystem based. We designed the Cave backend to be pluggable, which means that you can implement your own storage backend. For instance, I hope to provide implementations that store the bundles in a database or an LDAP server, or that directly use a Maven repository manager such as Apache Archiva.
– OBR Metadata Generation: Cave automatically looks for valid OSGi bundles and generates the OBR metadata.
– OBR Service Registration: Cave allows you to directly register a Cave Repository into the OBR RepositoryAdmin service.
– Populate Repository: you can of course upload a single artifact into a Cave Repository, but you can also grab resources from a remote repository (for instance a Maven repository) via HTTP.
– Proxy Repository: Cave is also able to generate OBR metadata locally, referencing resources present on a remote repository.
– HTTP Wrapper Service: Cave exposes OBR metadata and bundles through an embedded HTTP server, allowing an OBR client to remotely access the Cave Server resources.
– REST Service: Cave provides a REST service layer, to let you administer the Cave Server using a REST client.
– Administration: Cave provides a set of MBeans, allowing complete administration of the Cave Server using a JMX client.
– Client Proxy: Cave also provides a Cave Client. It’s an OBR RepositoryAdmin service implementation which delegates the method calls to a remote Cave Server. This means that Karaf will see a local OBR RepositoryAdmin service which is actually using a remote Cave Server.

Checkout and build Cave

The Cave sources are in the Karaf sandbox Subversion repository. You can check them out with svn:


svn co http://svn.apache.org/repos/asf/karaf/cave/trunk cave

To be able to build Cave, you have to use JDK 1.6 and Maven 3.0.3. In the directory where you checked out Cave, simply run:


mvn clean install

You should see something like:


[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Karaf :: Cave .............................. SUCCESS [1.221s]
[INFO] Apache Karaf :: Cave :: Server .................... SUCCESS [0.101s]
[INFO] Apache Karaf :: Cave :: Server :: API ............. SUCCESS [2.779s]
[INFO] Apache Karaf :: Cave :: Server :: Storage ......... SUCCESS [4.287s]
[INFO] Apache Karaf :: Cave :: Server :: Management ...... SUCCESS [0.879s]
[INFO] Apache Karaf :: Cave :: Server :: Command ......... SUCCESS [1.372s]
[INFO] Apache Karaf :: Cave :: Server :: HTTP ............ SUCCESS [0.873s]
[INFO] Apache Karaf :: Cave :: Assembly .................. SUCCESS [7.400s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19.558s
[INFO] Finished at: Thu Aug 25 19:01:33 CEST 2011
[INFO] Final Memory: 46M/531M
[INFO] ------------------------------------------------------------------------

Install Cave

To install Cave Server in a running Karaf instance, you have to register the Cave features descriptor:


karaf@root> features:addurl mvn:org.apache.karaf.cave/apache-karaf-cave/3.0.0-SNAPSHOT/xml/features

You should be able to see the cave-service feature:


karaf@root> features:list|grep -i cave
[uninstalled] [3.0.0-SNAPSHOT ] cave-server repo-0

To install and start Cave, simply install the cave-server feature:


karaf@root> features:install cave-server

NB: the installation of the cave-server feature will install other features, such as obr, http, war, and cxf. It could take several minutes depending on your Internet connection speed.

You can see the Cave bundles installed:


karaf@root> la|grep -i cave
[ 134] [Active ] [ ] [ ] [ 60] Apache Karaf :: Cave :: Server :: API (3.0.0.SNAPSHOT)
[ 135] [Active ] [Created ] [ ] [ 60] Apache Karaf :: Cave :: Server :: Storage (3.0.0.SNAPSHOT)
[ 136] [Active ] [Created ] [ ] [ 60] Apache Karaf :: Cave :: Server :: Management (3.0.0.SNAPSHOT)
[ 137] [Active ] [Created ] [ ] [ 60] Apache Karaf :: Cave :: Server :: Command (3.0.0.SNAPSHOT)
[ 138] [Active ] [ ] [ ] [ 60] Apache Karaf :: Cave :: Server :: HTTP (3.0.0.SNAPSHOT)

and the Cave commands are now available:


karaf@root> cave:
cave:create-repository cave:destroy-repository cave:list-repositories cave:populate-repository
cave:proxy-repository cave:register-repository cave:scan-repository cave:upload-artifact

Cave Repositories

There is no limit to the number of Cave Repositories you can create.

A Cave Repository is a container for:
– the OSGi bundles (jar file resources)
– the OBR metadata (repository.xml descriptor)

By default, with the filesystem storage, Cave uses the KARAF_BASE/cave directory to store the repositories.

You can change this storage location in the etc/org.apache.karaf.cave.server.storage.cfg configuration file:


# default value
storage.location=cave
# custom storage location
#storage.location=/path/to/your/storage/folder

Create a Cave Repository

The cave:create-repository command creates a Cave Repository:


karaf@root> cave:create-repository cave-repo

A Cave Repository is identified by a name.

Cave creates the repository storage in the global storage location. In our example, we can see a cave-repo directory in the default storage location:


shell$ ls cave/
cave-repo

You can use an existing directory (which may already contain artifacts) using the -l or --location option:


karaf@root> cave:create-repository -l /home/jbonofre/.m2/repository m2

By default, Cave creates the OBR metadata. If you don't want to create the OBR metadata at repository creation time, use the -nu or --no-update option:


karaf@root> cave:create-repository -nu -l /home/jbonofre/.m2/repository m2

You can control OBR registration at creation time: use the -r or --register option to register the Cave Repository in the OBR service as soon as it's created, or the -nr or --no-register option to skip the registration.

List of Cave Repositories

You can list the Cave Repositories:


karaf@root> cave:list-repositories
Name Location
[cave-repo] [/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo]
[m2] [/home/jbonofre/.m2/repository]

Remove and destroy a Cave Repository

You can remove a Cave Repository using the cave:remove-repository command:


karaf@root> cave:remove-repository cave-repo

This command only removes the Cave Repository from the repositories registry. It doesn't physically delete the OBR metadata or the artifacts, which means that you can later re-create the repository using the existing location.

If you want to destroy the Cave Repository including the artifacts and the storage directory, you have to use:


karaf@root> cave:destroy-repository cave-repo

Generate OBR metadata

At any time, you can generate/update the OBR metadata using:


karaf@root> cave:update-repository cave-repo

Cave will scan the repository storage, looking for OSGi bundles, and generate (or update) the OBR metadata.

Register Cave Repository

Once your Cave Repository contains OBR metadata, you can directly register the repository into the OBR service:


karaf@root> cave:register-repository cave-repo

As you will see later in this article, you are now ready to use OBR commands.

We will also see that your repository is remotely available using the Cave HTTP Wrapper Service or the Cave Client.

Populate a Cave Repository

Upload a single artifact

The first way to populate your Cave Repository is by uploading a single artifact:


karaf@root> cave:upload-artifact cave-repo file:/home/jbonofre/.m2/repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.asm/3.3_2/org.apache.servicemix.bundles.asm-3.3_2.jar
karaf@root> cave:upload-artifact cave-repo http://svn.apache.org/repos/asf/servicemix/m2-repo/org/apache/qpid/qpid-broker/0.8.0/qpid-broker-0.8.0.jar

As you can see, you can use file: or http: URLs. But you can also use a Maven URL (mvn:groupId/artifactId/version):


karaf@root> cave:upload-artifact cave-repo mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.ant/1.7.0_5
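These mvn: URLs follow the standard Maven repository layout. As an illustrative sketch only (this is not Cave's or the mvn: URL handler's actual code, and the mvn_to_path helper name is made up for this example), here is how such a URL maps to a repository path:

```shell
#!/bin/sh
# Illustrative sketch: map a mvn:groupId/artifactId/version URL to the
# standard Maven repository layout path (groupId dots become slashes).
mvn_to_path() {
  spec="${1#mvn:}"                                    # strip the mvn: scheme
  group=$(printf '%s' "$spec" | cut -d/ -f1 | tr . /) # dots -> slashes
  artifact=$(printf '%s' "$spec" | cut -d/ -f2)
  version=$(printf '%s' "$spec" | cut -d/ -f3)
  printf '%s/%s/%s/%s-%s.jar\n' "$group" "$artifact" "$version" "$artifact" "$version"
}

mvn_to_path mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.ant/1.7.0_5
# org/apache/servicemix/bundles/org.apache.servicemix.bundles.ant/1.7.0_5/org.apache.servicemix.bundles.ant-1.7.0_5.jar
```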

Populate from an external repository

If you have a bunch of artifacts to upload, it’s not very efficient to use the cave:upload-artifact command.

The cave:populate-repository command allows you to upload a set of artifacts from an “external” repository:


karaf@root> cave:populate-repository cave-repo file:/home/jbonofre/.m2/repository

In this example, Cave will browse the file:/home/jbonofre/.m2/repository location, looking for OSGi bundles, and will copy the artifacts in your Cave Repository storage location.

Cave supports not only file: but also http: URLs, which means that Cave is able to browse a remote repository.

For instance, you can populate your Cave Repository with all Ant ServiceMix bundles available on the Maven Central Repository:


karaf@root> cave:populate-repository cave-repo http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.ant/

You can also populate your Cave Repository using the whole Maven Central Repository:


karaf@root> cave:populate-repository cave-repo http://repo1.maven.org/maven2

*WARNING*: the Maven Central Repository is really huge; populating from it would take a very long time and a lot of hard drive space. It's just for demonstration purposes, not really usable in real life 😉

By default, Cave updates the OBR metadata at population time. If you don't want that, you can use the -nu or --no-update option:


karaf@root> cave:populate-repository -nu cave-repo http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles/

Once you have populated your Cave Repository, you don't need an Internet connection anymore, as the bundles are present in the Cave Repository storage.

Proxy a Repository

A great advantage of Cave Repository population is that the bundles are locally present on the Cave Server (in the Cave Repository storage location).

But you may also prefer to keep the bundles on a remote repository (like the Maven Central Repository) and let Cave handle only the OBR metadata.

In that case, you can use the cave:proxy-repository command. The bundles stay on the “external” repository, the Cave Repository only stores the corresponding OBR metadata for the remote artifacts:


karaf@root> cave:proxy-repository cave-repo http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.commons-lang/

*NB*: in this situation, the Cave Repository handles only the OBR metadata, it doesn’t monitor the remote repository for new artifacts or removed artifacts. It means that, if the remote repository changes (for instance, new artifacts are available), you have to re-execute the cave:proxy-repository command to update the OBR metadata.

*NB*: a best practice is to create a Cave Repository dedicated to a remote repository proxy.

OBR commands

Once you have registered your Cave Repository in the OBR service (using the cave:register-repository command), you can see it using the obr:listurl command:


karaf@root> obr:listurl
file:/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo/repository.xml

and the bundles present in your Cave Repository are available in the OBR service:


karaf@root> obr:list
[...]
slf4j.log4j12 - slf4j-log4j12 (1.6.1)
slf4j.simple - slf4j-simple (1.6.1)
org.springframework.aop - Spring AOP (3.0.5.RELEASE)
org.springframework.asm - Spring ASM (3.0.5.RELEASE)
org.springframework.beans - Spring Beans (3.0.5.RELEASE)
org.springframework.context - Spring Context (3.0.5.RELEASE)
org.springframework.core - Spring Core (3.0.5.RELEASE)
org.springframework.expression - Spring Expression Language (3.0.5.RELEASE)
org.springframework.web - Spring Web (3.0.5.RELEASE)
org.springframework.osgi.extensions.annotations - spring-osgi-annotation (1.2.1)
org.springframework.osgi.core - spring-osgi-core (1.2.1)
org.springframework.osgi.extender - spring-osgi-extender (1.2.1)
org.springframework.osgi.io - spring-osgi-io (1.2.1)
stax2-api - Stax2 API (3.1.1)
stax2-api - Stax2 API (3.0.2)
woodstox-core-asl - Woodstox XML-processor (4.1.1)
org.apache.ws.xmlschema.core - XmlSchema Core (2.0.0)

You can also get detailed information about a bundle:


karaf@root> obr:info slf4j.api,1.6.1
---------
slf4j-api
---------
id: slf4j.api/1.6.1
description: The slf4j API
symbolicname: slf4j.api
presentationname: slf4j-api
uri: file:/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo/slf4j-api-1.6.1.jar
size: 25496
version: 1.6.1
Requires:
package:(&(package=org.slf4j.impl)(version>=1.6.0))
ee:(|(ee=J2SE-1.3))
Capabilities:
bundle:{manifestversion=2, symbolicname=slf4j.api, presentationname=slf4j-api, version=1.6.1}
package:{package=org.slf4j, version=1.6.1}
package:{package=org.slf4j.spi, version=1.6.1}
package:{package=org.slf4j.helpers, version=1.6.1}

*NB*: in Karaf, the OBR entry format is symbolicname,version

You have detailed information about an OSGi bundle, especially the bundle requirements and capabilities.

The OBR service (client) is able to resolve the dependencies between bundles, based on the requirements and capabilities of each bundle.

For instance, we have the following commons-dbcp bundle details:


karaf@root> obr:info org.apache.servicemix.bundles.commons-dbcp
--------------------------------------------
Apache ServiceMix :: Bundles :: commons-dbcp
--------------------------------------------
id: org.apache.servicemix.bundles.commons-dbcp/1.4.0.1
description: This OSGi bundle wraps commons-dbcp 1.4 jar file.
documentation: http://www.apache.org/
symbolicname: org.apache.servicemix.bundles.commons-dbcp
presentationname: Apache ServiceMix :: Bundles :: commons-dbcp
license: http://www.apache.org/licenses/LICENSE-2.0.txt
uri: file:/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar
size: 159721
version: 1.4.0.1
Requires:
package:(&(package=javax.naming))
package:(&(package=javax.naming.spi))
package:(&(package=javax.sql))
package:(&(package=javax.transaction))
package:(&(package=javax.transaction.xa))
package:(&(package=org.apache.commons.pool)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.apache.commons.pool.impl)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.xml.sax))
package:(&(package=org.xml.sax.helpers))
Capabilities:
bundle:{manifestversion=2, symbolicname=org.apache.servicemix.bundles.commons-dbcp, presentationname=Apache ServiceMix :: Bundles :: commons-dbcp, version=1.4.0.1}
package:{package=org.apache.commons.dbcp.cpdsadapter, uses:=org.apache.commons.dbcp,javax.naming,javax.sql,org.apache.commons.pool.impl,org.apache.commons.pool,javax.naming.spi, version=1.4.0}
package:{package=org.apache.commons.dbcp, uses:=org.apache.commons.pool.impl,org.apache.commons.pool,javax.sql,javax.naming,javax.naming.spi,org.apache.commons.jocl,org.xml.sax, version=1.4.0}
package:{package=org.apache.commons.dbcp.managed, uses:=org.apache.commons.dbcp,javax.sql,org.apache.commons.pool.impl,javax.transaction,org.apache.commons.pool,javax.transaction.xa, version=1.4.0}
package:{package=org.apache.commons.dbcp.datasources, uses:=javax.sql,org.apache.commons.pool,javax.naming,org.apache.commons.dbcp,javax.naming.spi,org.apache.commons.pool.impl, version=1.4.0}
package:{package=org.apache.commons.jocl, uses:=org.xml.sax.helpers,org.xml.sax, version=1.4.0}

We can see that commons-dbcp requires the org.apache.commons.pool package, in the version range [1.3.0, 2.0.0).

If we take a look at the commons-pool bundle details:


karaf@root> obr:info org.apache.servicemix.bundles.commons-pool
--------------------------------------------
Apache ServiceMix :: Bundles :: commons-pool
--------------------------------------------
id: org.apache.servicemix.bundles.commons-pool/1.5.4.3
description: This OSGi bundle wraps commons-pool 1.5.4 jar file.
documentation: http://www.apache.org/
symbolicname: org.apache.servicemix.bundles.commons-pool
presentationname: Apache ServiceMix :: Bundles :: commons-pool
license: http://www.apache.org/licenses/LICENSE-2.0.txt
uri: file:/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo/org.apache.servicemix.bundles.commons-pool-1.5.4.3.jar
size: 97332
version: 1.5.4.3
Capabilities:
bundle:{manifestversion=2, symbolicname=org.apache.servicemix.bundles.commons-pool, presentationname=Apache ServiceMix :: Bundles :: commons-pool, version=1.5.4.3}
package:{package=org.apache.commons.pool.impl, uses:=org.apache.commons.pool, version=1.5.4}
package:{package=org.apache.commons.pool, version=1.5.4}

This bundle provides the org.apache.commons.pool package capability.
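The requirement filter on commons-dbcp, (&(package=org.apache.commons.pool)(version>=1.3.0)(!(version>=2.0.0))), denotes the version range [1.3.0, 2.0.0). A small sketch of that range check (illustrative only, not the OBR resolver's code; assumes GNU sort for the -V option):

```shell
#!/bin/sh
# Illustrative: check whether a version satisfies the range [1.3.0, 2.0.0),
# i.e. the filter (version>=1.3.0)(!(version>=2.0.0)). Assumes GNU sort -V.
in_range() {
  v="$1"
  # v >= 1.3.0 when 1.3.0 sorts first (or they are equal)
  [ "$(printf '%s\n' 1.3.0 "$v" | sort -V | head -n1)" = "1.3.0" ] || { echo no; return; }
  # v < 2.0.0 when v sorts first and is not equal to 2.0.0
  if [ "$v" != "2.0.0" ] && [ "$(printf '%s\n' "$v" 2.0.0 | sort -V | head -n1)" = "$v" ]; then
    echo yes
  else
    echo no
  fi
}

in_range 1.5.4   # the version exported by commons-pool: satisfied
in_range 2.1.0   # outside the range: not satisfied
```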

It means that if we deploy the commons-dbcp bundle, the OBR should also deploy the commons-pool bundle:


karaf@root> obr:deploy org.apache.servicemix.bundles.commons-dbcp
Target resource(s):
-------------------
Apache ServiceMix :: Bundles :: commons-dbcp (1.4.0.1)

Required resource(s):
---------------------
Apache ServiceMix :: Bundles :: commons-pool (1.5.4.3)

Deploying...done.

Done: the OBR has determined that commons-pool provides capabilities matching the commons-dbcp requirements, and so installed the commons-pool bundle at the same time as commons-dbcp.

*NB*: in the obr:deploy command, if we don't explicitly mention the bundle version, the OBR will take the highest version available for the bundle symbolic name.

*NB*: the obr:deploy command doesn't start the bundle, it only installs it. I will enhance this command with a -s option to start the bundles.

Cave HTTP Wrapper Service

When you install the Cave Service, it automatically starts an HTTP wrapper service.

This service allows you to access the OBR metadata and bundle artifacts via HTTP.

OBR metadata

For instance, you have the following Cave Repositories:


karaf@root> cave:list-repositories
Name Location
[cave-repo] [/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo]
[m2] [/home/jbonofre/.m2/repository]

You can access the OBR metadata using the following URL in your favorite browser:


http://localhost:8181/cave/m2-repository.xml

*NB*: the 8181 port number is the default one used by the Karaf HTTP service.

To access the OBR metadata, the URL format is:


http://[cave_server_hostname]:[http_service_port]/cave/[cave_repository_name]-repository.xml
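The same format, expressed as a tiny shell helper (the cave_obr_url function name is made up for this example):

```shell
#!/bin/sh
# Build the Cave HTTP Wrapper URL for a repository's OBR metadata.
# Usage: cave_obr_url <cave_server_hostname> <http_service_port> <cave_repository_name>
cave_obr_url() {
  printf 'http://%s:%s/cave/%s-repository.xml\n' "$1" "$2" "$3"
}

cave_obr_url localhost 8181 m2
# http://localhost:8181/cave/m2-repository.xml
```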

It means that you can register the Cave Repositories on a remote Karaf instance.

In the remote Karaf instance, you just have to install the obr feature and register the Cave HTTP Wrapper repository.xml URL:


karaf@other> features:install obr
karaf@other> obr:addurl http://cave_server:8181/cave/cave-repo-repository.xml

OSGi bundles access

Cave HTTP Wrapper Service also exposes the bundles via HTTP.

For instance, if you have registered the cave-repo Cave Repository in the OBR service using:


karaf@localhost> cave:register-repository cave-repo

you have the following bundles available in the OBR service:


karaf@localhost> obr:list
org.apache.servicemix.bundles.commons-dbcp - Apache ServiceMix :: Bundles :: commons-dbcp (1.4.0.1)
org.apache.servicemix.bundles.commons-pool - Apache ServiceMix :: Bundles :: commons-pool (1.5.4.3)

If we take a look at the commons-dbcp bundle details:


karaf@localhost> obr:info org.apache.servicemix.bundles.commons-dbcp
--------------------------------------------
Apache ServiceMix :: Bundles :: commons-dbcp
--------------------------------------------
id: org.apache.servicemix.bundles.commons-dbcp/1.4.0.1
description: This OSGi bundle wraps commons-dbcp 1.4 jar file.
documentation: http://www.apache.org/
symbolicname: org.apache.servicemix.bundles.commons-dbcp
presentationname: Apache ServiceMix :: Bundles :: commons-dbcp
license: http://www.apache.org/licenses/LICENSE-2.0.txt
uri: file:/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar
size: 159721
version: 1.4.0.1
Requires:
package:(&(package=javax.naming))
package:(&(package=javax.naming.spi))
package:(&(package=javax.sql))
package:(&(package=javax.transaction))
package:(&(package=javax.transaction.xa))
package:(&(package=org.apache.commons.pool)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.apache.commons.pool.impl)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.xml.sax))
package:(&(package=org.xml.sax.helpers))
Capabilities:
bundle:{manifestversion=2, symbolicname=org.apache.servicemix.bundles.commons-dbcp, presentationname=Apache ServiceMix :: Bundles :: commons-dbcp, version=1.4.0.1}
package:{package=org.apache.commons.dbcp.cpdsadapter, uses:=org.apache.commons.dbcp,javax.naming,javax.sql,org.apache.commons.pool.impl,org.apache.commons.pool,javax.naming.spi, version=1.4.0}
package:{package=org.apache.commons.dbcp, uses:=org.apache.commons.pool.impl,org.apache.commons.pool,javax.sql,javax.naming,javax.naming.spi,org.apache.commons.jocl,org.xml.sax, version=1.4.0}
package:{package=org.apache.commons.dbcp.managed, uses:=org.apache.commons.dbcp,javax.sql,org.apache.commons.pool.impl,javax.transaction,org.apache.commons.pool,javax.transaction.xa, version=1.4.0}
package:{package=org.apache.commons.dbcp.datasources, uses:=javax.sql,org.apache.commons.pool,javax.naming,org.apache.commons.dbcp,javax.naming.spi,org.apache.commons.pool.impl, version=1.4.0}
package:{package=org.apache.commons.jocl, uses:=org.xml.sax.helpers,org.xml.sax, version=1.4.0}

we can see that the URI is file:/home/jbonofre/apache-karaf-2.2.2/cave/cave-repo/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar.

The Cave HTTP Wrapper Service also exposes the bundle on:


http://localhost:8181/cave/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar

Cave is able to rewrite bundle URIs relative to the repository URL.
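As a sketch of that relative resolution (illustrative only, not Cave's actual code): the bundle file name is resolved against the base URL of the repository.xml it was listed in:

```shell
#!/bin/sh
# Illustrative: resolve a bundle file name relative to the base URL of the
# repository.xml that references it.
resolve_bundle_uri() {
  base="${1%/*}"   # drop the repository.xml file name, keep the base URL
  printf '%s/%s\n' "$base" "$2"
}

resolve_bundle_uri http://cave_server:8181/cave/cave-repo-repository.xml \
  org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar
# http://cave_server:8181/cave/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar
```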

It means that, if you register the cave-repo Cave Repository on a remote Karaf instance using the HTTP service:


karaf@remote> features:install obr
karaf@remote> obr:addurl http://cave_server:8181/cave/cave-repo-repository.xml

you can take a look at the commons-dbcp bundle details:


karaf@remote> obr:info org.apache.servicemix.bundles.commons-dbcp
--------------------------------------------
Apache ServiceMix :: Bundles :: commons-dbcp
--------------------------------------------
id: org.apache.servicemix.bundles.commons-dbcp/1.4.0.1
description: This OSGi bundle wraps commons-dbcp 1.4 jar file.
documentation: http://www.apache.org/
symbolicname: org.apache.servicemix.bundles.commons-dbcp
presentationname: Apache ServiceMix :: Bundles :: commons-dbcp
license: http://www.apache.org/licenses/LICENSE-2.0.txt
uri: http://cave_server:8181/cave/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar
size: 159721
version: 1.4.0.1
Requires:
package:(&(package=javax.naming))
package:(&(package=javax.naming.spi))
package:(&(package=javax.sql))
package:(&(package=javax.transaction))
package:(&(package=javax.transaction.xa))
package:(&(package=org.apache.commons.pool)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.apache.commons.pool.impl)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.xml.sax))
package:(&(package=org.xml.sax.helpers))
Capabilities:
bundle:{manifestversion=2, symbolicname=org.apache.servicemix.bundles.commons-dbcp, presentationname=Apache ServiceMix :: Bundles :: commons-dbcp, version=1.4.0.1}
package:{package=org.apache.commons.dbcp.cpdsadapter, uses:=org.apache.commons.dbcp,javax.naming,javax.sql,org.apache.commons.pool.impl,org.apache.commons.pool,javax.naming.spi, version=1.4.0}
package:{package=org.apache.commons.dbcp, uses:=org.apache.commons.pool.impl,org.apache.commons.pool,javax.sql,javax.naming,javax.naming.spi,org.apache.commons.jocl,org.xml.sax, version=1.4.0}
package:{package=org.apache.commons.dbcp.managed, uses:=org.apache.commons.dbcp,javax.sql,org.apache.commons.pool.impl,javax.transaction,org.apache.commons.pool,javax.transaction.xa, version=1.4.0}
package:{package=org.apache.commons.dbcp.datasources, uses:=javax.sql,org.apache.commons.pool,javax.naming,org.apache.commons.dbcp,javax.naming.spi,org.apache.commons.pool.impl, version=1.4.0}
package:{package=org.apache.commons.jocl, uses:=org.xml.sax.helpers,org.xml.sax, version=1.4.0}

we can see that the URI is now http://cave_server:8181/cave/org.apache.servicemix.bundles.commons-dbcp-1.4.0.1.jar.

We can use the obr:deploy command directly, as previously:


karaf@root> obr:deploy org.apache.servicemix.bundles.commons-dbcp
Target resource(s):
-------------------
Apache ServiceMix :: Bundles :: commons-dbcp (1.4.0.1)

Required resource(s):
---------------------
Apache ServiceMix :: Bundles :: commons-pool (1.5.4.3)

Deploying...done.

It’s completely transparent for the Karaf instance.

Cave REST Service

Cave Server also provides a REST service API, allowing you to manage the Cave Repositories (create, remove, destroy, etc.).

By default, the REST Service is bound on http://localhost:8181/services/cave.

I will write a dedicated blog about that.

*NB*: Cave REST Service uses CXF JAX-RS implementation.

Administrate Cave Server

When you install Cave Server, it provides a set of MBeans. These MBeans allow you to monitor and administer the Cave Server.

It means that you can manipulate the Cave Server using a simple JMX client, like jconsole.

I will write a dedicated blog about that.

Cave Client

The Cave Client is an implementation of the OBR RepositoryAdmin service which proxies all method calls to a remote Cave Server.

For the Karaf instance, the OBR service looks local, but in fact, each method call is forwarded/proxied to a remote Cave Server.

The Cave Client is still a work in progress and not yet available in this preview. However, you can already use a remote Cave Server using the Cave HTTP Wrapper Service as seen above.

TODO

I hope that this first Cave preview answers some of your questions.

The current Cave TODO list is:
– finalize the REST service
– finalize the JMX MBeans
– finalize the Cave Client (OBR RepositoryAdmin proxy)
– add the Karaf Features Repository (KFR) support, both for the server and the client
– add a plugin for the Karaf WebConsole

Feel free to post any comments, remarks, or questions, it’s exactly the purpose of this blog 😉

Use Camel, CXF and Karaf to implement batches

August 23, 2011 Posted by jbonofre

Introduction

Apache Camel was not designed for implementing batch tasks.

For instance, if your Camel route has a consumer endpoint polling files in a directory, Camel will periodically and indefinitely monitor the folder and poll any new incoming files.
That's not batch behavior: in batch mode, we want to run the file polling on demand, at a given time, launched by a batch scheduler like Control-M, $Universe, or Tivoli Workload Scheduler.

However, there are several good reasons to use Camel for batch implementations. First, Camel provides a large set of components. A lot of batches read/write files, read from JMS queues, write into JMS queues, etc. Using Camel components in a batch fashion is really valuable.
Second, Camel uses a DSL to describe the process executed by the routes. In particular, it supports "human readable" DSLs like Spring XML or Blueprint XML. This means that it's easy to review what the batch is doing and, if needed, change an endpoint definition. Most of the time, batches are "black boxes": you run them, and you only get a status code telling you whether they succeeded. With Camel, you can see inside the batch process.
Third, Camel is a highly pluggable framework, so it's easy to replace one endpoint with another. For instance, if your batch currently polls files in a folder, it's very easy to change it to poll messages from a JMS queue. You don't have to re-implement the whole batch.

Moreover, tools like Talend ESB Studio provide an IDE to create and design your Camel routes.

In this article, we are going to see how to use Camel in a “batch way”.

Design

In fact, we are going to have two Camel routes:
– the first one is called "control". This route "exposes" a REST service to start the batch. A bean in this route is responsible for starting the "batch" route.
– the second one is called "batch". It's the core implementation of our batch. It's a "standard" route, except that, at the end, a processor "stops" the route (to avoid keeping the route up indefinitely). This route is not auto-started, as it will be controlled by the first one.

It means that a simple HTTP client (like a browser or a REST client) can start the batch on demand. Most enterprise batch schedulers ship with a component to make HTTP requests.

POM

Our batch will be packaged as an OSGi bundle. It will allow us to deploy the batch in an Apache Karaf OSGi container:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>net.nanthrax.examples</groupId>
  <artifactId>camel-batch</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>bundle</packaging>

  <properties>
    <camel.version>2.8.0</camel.version>
    <cxf.version>2.4.1</cxf.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-core</artifactId>
      <version>${camel.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-spring</artifactId>
      <version>${camel.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-cxf</artifactId>
      <version>${camel.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.cxf</groupId>
      <artifactId>cxf-rt-frontend-jaxrs</artifactId>
      <version>${cxf.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.cxf</groupId>
      <artifactId>cxf-rt-transports-http</artifactId>
      <version>${cxf.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.cxf</groupId>
      <artifactId>cxf-rt-transports-http-jetty</artifactId>
      <version>${cxf.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <version>2.3.4</version>
        <extensions>true</extensions>
        <configuration>
          <instructions>
            <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
            <Require-Bundle>org.apache.cxf.bundle,org.apache.camel.camel-cxf,org.springframework.beans</Require-Bundle>
          </instructions>
        </configuration>
      </plugin>
    </plugins>
  </build>

</project>

In this POM, we can see:
– the packaging is an OSGi bundle. That’s why we use the Apache Felix maven-bundle-plugin. We name the bundle with the project artifactId, and we define Camel and CXF bundles as dependencies (Require-Bundle).
– in the dependencies, we define the Camel components that we use (camel-core, camel-spring to use the Camel Spring XML DSL, and camel-cxf to use the CXF JAX-RS implementation) and the CXF dependencies needed to create a JAX-RS server.

Control route

The first Camel route is the "control" one. This route binds a JAX-RS server, listening for HTTP requests (consumer), and starts the "batch" route on demand.

The route definition is located in the META-INF/spring/routes.xml file of our bundle. We use the Camel Spring XML DSL in this file:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:cxf="http://camel.apache.org/schema/cxf"
  xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
    http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd
  ">

  <cxf:rsServer id="rsServer" address="http://localhost:9090/batch"
    serviceClass="net.nanthrax.examples.camel.batch.impl.ControllerService"/>

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="control">
      <from uri="cxfrs:bean:rsServer"/>
      <to uri="log:net.nanthrax.examples.camel.batch"/>
      <to uri="controllerBean"/>
    </route>
  </camelContext>

  <bean id="controllerBean" class="net.nanthrax.examples.camel.batch.impl.ControllerBean">
    <property name="routeId" value="batch"/>
  </bean>

</beans>

We use the Camel CXF component to create the JAX-RS server (using <cxf:rsServer/>). This JAX-RS server listens on the local machine on port 9090, with the context path /batch.
To describe the REST service behavior, we define the ControllerService class in the serviceClass attribute.
The ControllerService class is just an “empty container”. Its purpose is only to describe the REST service, not to process the requests:


package net.nanthrax.examples.camel.batch.impl;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

/**
* REST service implementation of the Camel batch service.
*/
@Path("/")
public class ControllerService {

  @GET
  @Path("/start")
  @Produces("text/plain")
  public String startRoute() throws Exception {
    // nothing to do, it's just a wrapper
    return null;
  }

}

We can see the JAX-RS annotations:
– the ControllerService REST path is /, meaning it is bound directly to the JAX-RS server context path.
– the startRoute() method accepts the HTTP GET method on the path /start, and it produces plain text (text/plain).
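With the bundle deployed, any HTTP client can trigger the batch. Here is a hedged sketch using only the JDK; the host, port, and paths come from the rsServer address and the @Path annotations above, and the StartBatchClient class name is mine:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class StartBatchClient {

    // The rsServer address is http://localhost:9090/batch, and the
    // @Path annotations add /start for the GET operation
    public static String startUrl(String host, int port) {
        return "http://" + host + ":" + port + "/batch/start";
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(startUrl("localhost", 9090)).openConnection();
        connection.setRequestMethod("GET");
        // The control route answers in plain text (e.g. "Batch batch started.")
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            System.out.println(reader.readLine());
        }
    }
}
```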

The processing itself is performed by the controllerBean:


package net.nanthrax.examples.camel.batch.impl;

import org.apache.camel.CamelContext;
import org.apache.camel.Handler;

/**
* Camel controller bean in charge of starting the route.
*/
public class ControllerBean {

  private String routeId;

  public String getRouteId() {
    return this.routeId;
  }

  public void setRouteId(String routeId) {
    this.routeId = routeId;
  }

  @Handler
  public String startRoute(CamelContext camelContext) throws Exception {
    camelContext.startRoute(routeId);
    return "Batch " + routeId + " started.";
  }

}

We inject the route ID of the batch route: “batch”. The CamelContext is automatically injected by Camel. This bean is quite simple: it only starts the “batch” route.

Batch route

This route contains the “batch logic”. You can use any kind of routes, components, Enterprise Integration Patterns, etc. provided by Camel. The only specific parts are:
– the autoStartup attribute set to false, to avoid starting the route automatically at context bootstrap;
– the final processor, which stops the route after processing.

We gather the two routes in the same META-INF/spring/routes.xml file:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:cxf="http://camel.apache.org/schema/cxf"
  xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
    http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd
  ">

  <cxf:rsServer id="rsServer" address="http://localhost:9090/batch"
  serviceClass="net.nanthrax.examples.camel.batch.impl.ControllerService"/>

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="control">
      <from uri="cxfrs:bean:rsServer"/>
      <to uri="log:net.nanthrax.examples.camel.batch"/>
      <to uri="controllerBean"/>
    </route>
    <route id="batch" autoStartup="false">
      <from uri="file:/tmp"/>
      <to uri="file:output"/>
      <process ref="stopProcessor"/>
    </route>
  </camelContext>

  <bean id="controllerBean" class="net.nanthrax.examples.camel.batch.impl.ControllerBean">
    <property name="routeId" value="batch"/>
  </bean>

  <bean id="stopProcessor" class="net.nanthrax.examples.camel.batch.impl.StopProcessor">
    <property name="routeId" value="batch"/>
  </bean>

</beans>

In this example, the batch polls files in the /tmp folder and copies them into the output folder.

The StopProcessor is a Camel processor (i.e., it implements the Camel Processor interface). It stops the route after processing the incoming message (we inject the “batch” route ID using Spring):


package net.nanthrax.examples.camel.batch.impl;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

/**
* A Camel processor which stops a route.
*/
public class StopProcessor implements Processor {

  private String routeId;

  public String getRouteId() {
    return this.routeId;
  }

  public void setRouteId(String routeId) {
    this.routeId = routeId;
  }

  public void process(Exchange exchange) throws Exception {
    CamelContext camelContext = exchange.getContext();
    // remove myself from the in flight registry so we can stop this route without trouble
    camelContext.getInflightRepository().remove(exchange);
    // stop the route
    camelContext.stopRoute(routeId);
  }

}

Deployment and execution

Now, we can build our OSGi bundle, simply using:


mvn clean install

In a fresh Apache Karaf instance, we have first to install the CXF and Camel features:


karaf@root> features:addurl mvn:org.apache.cxf.karaf/apache-cxf/2.4.1/xml/features
karaf@root> features:install cxf
karaf@root> features:addurl mvn:org.apache.camel.karaf/apache-camel/2.8.0/xml/features
karaf@root> features:install camel-spring
karaf@root> features:install camel-cxf

Now, we can install our bundle:


karaf@root> osgi:install -s mvn:net.nanthrax.examples/camel-batch/1.0-SNAPSHOT

Our bundle appears as Active, with its Spring context Started:


karaf@root> la|grep -i batch
[ 134] [Active ] [ ] [Started] [ 60] camel-batch (1.0.0.SNAPSHOT)

Using a simple browser, we can access http://localhost:9090/batch/start. The route is started (as a batch) and we see in the browser:


Batch batch started.

Conclusion

Even if a Camel route’s primary purpose is to be up and running all the time, we can use it in a more “batch” oriented way. It allows developers to use the large set of Camel components and all the Enterprise Integration Patterns. For instance, if the batch needs to copy a file and then send an e-mail and a message to a JMS queue, it’s very easy using a recipient list. If you have to send to a target endpoint depending on the content of the message, no problem: use a Content Based Router.
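To make the Content Based Router idea concrete, here is the routing decision reduced to plain Java. This is illustrative only: the endpoint URIs are hypothetical, and in a real route you would express this with Camel’s <choice>/<when> elements rather than hand-written code:

```java
// Illustrative only: the Content Based Router EIP as a plain Java decision.
// In Camel you would use <choice>/<when> in the Spring XML DSL instead.
public class ContentBasedRouterSketch {

    public static String route(String messageBody) {
        if (messageBody.contains("urgent")) {
            return "jms:queue:priority"; // e.g. urgent messages go to a JMS queue
        }
        return "file:output";            // everything else goes to a file endpoint
    }
}
```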

You can run this kind of batch in Talend ESB. It’s an interesting addition to the Talend Data Integration products (ETL jobs, MDM, DQ, etc.).

JAX-RS services using CXF and Karaf

August 19, 2011 Posted by jbonofre

Apache CXF provides a really great layer to implement JAX-RS services. Especially, it fully supports OSGi, including Blueprint. It means that you can very easily create and deploy your REST services in an Apache Karaf container.

In this example, we will see how to list all Karaf features via a REST service.

This example is composed of three modules:
– common is an OSGi bundle containing resources shared between the JAX-RS server and the clients. Basically, it contains the service interface and the objects used in the service.
– service is an OSGi bundle providing the implementation of the service interface.
– client is a simple Main class that uses the CXF JAX-RS client.

Common bundle

This bundle contains the interface describing the behavior of the REST service. We define it in the FeaturesRestService:


package net.nanthrax.examples.jaxrs.common;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import java.util.Collection;

/**
* REST service to manipulate Karaf features
*/
@Path("/")
public interface FeaturesRestService {

  /**
  * Returns an explicit collection of all features in XML format in response to HTTP GET requests.
  * @return the collection of features
  */
  @GET
  @Path("/features")
  @Produces("application/xml")
  public Collection<FeatureWrapper> getFeatures() throws Exception;

}

In this interface, the getFeatures() method returns a collection of FeatureWrapper. FeatureWrapper is the object sent to the client; it carries JAXB and JAX-RS annotations:


package net.nanthrax.examples.jaxrs.common;

import javax.ws.rs.Path;
import javax.xml.bind.annotation.XmlRootElement;

/**
* Wrapper of a Karaf feature, including JAXB and JAX-RS annotations.
*/
@XmlRootElement(name = "Feature")
public class FeatureWrapper {

  private String name;
  private String version;

  public FeatureWrapper() { }

  public FeatureWrapper(String name, String version) {
    this.name = name;
    this.version = version;
  }

  @Path("name")
  public String getName() {
    return this.name;
  }

  public void setName(String name) {
    this.name = name;
  }

  @Path("version")
  public String getVersion() {
    return this.version;
  }

  public void setVersion(String version) {
    this.version = version;
  }

}

Now, we just need to define the Maven POM to build an OSGi bundle:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>net.nanthrax.examples</groupId>
    <artifactId>jaxrs-blueprint</artifactId>
    <version>1.0-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath>
  </parent>

  <groupId>net.nanthrax.examples.jaxrs-blueprint</groupId>
  <artifactId>net.nanthrax.examples.jaxrs-blueprint.common</artifactId>
  <packaging>bundle</packaging>

  <dependencies>
    <dependency>
      <groupId>org.apache.cxf</groupId>
      <artifactId>cxf-rt-frontend-jaxrs</artifactId>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <configuration>
          <instructions>
            <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
            <Export-Package>
              net.nanthrax.examples.jaxrs.common*;version=${project.version}
            </Export-Package>
          </instructions>
        </configuration>
      </plugin>
    </plugins>
  </build>

</project>

We can note that we have a dependency on cxf-rt-frontend-jaxrs. This artifact provides the JAX-RS annotations.
We don’t need an additional dependency for JAXB as it’s included in the JDK.

Service bundle

This bundle contains the implementation of the REST service.

We find two things in this bundle:
– the FeaturesRestServiceImpl class implementing the FeaturesRestService interface.
– the Blueprint descriptor in OSGI-INF/blueprint containing the bean definition of the FeaturesRestServiceImpl and the configuration of the JAX-RS server

The FeaturesRestServiceImpl doesn’t contain any JAX-RS annotations. It’s a pure implementation which uses the Karaf FeaturesService OSGi service to get the list of Karaf features and populates a collection of FeatureWrapper:


package net.nanthrax.examples.jaxrs.service;

import net.nanthrax.examples.jaxrs.common.FeatureWrapper;
import net.nanthrax.examples.jaxrs.common.FeaturesRestService;
import org.apache.karaf.features.Feature;
import org.apache.karaf.features.FeaturesService;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

/**
* Implementation of the Features REST service.
*/
public class FeaturesRestServiceImpl implements FeaturesRestService {

  private FeaturesService featuresService;

  public FeaturesService getFeaturesService() {
    return this.featuresService;
  }

  public void setFeaturesService(FeaturesService featuresService) {
    this.featuresService = featuresService;
  }

  @Override
  public Collection<FeatureWrapper> getFeatures() throws Exception {
    List<FeatureWrapper> featuresWrapper = new ArrayList<FeatureWrapper>();
    Feature[] features = featuresService.listFeatures();
    for (int i = 0; i < features.length; i++) {
      FeatureWrapper wrapper = new FeatureWrapper(features[i].getName(), features[i].getVersion());
      featuresWrapper.add(wrapper);
    }
    return featuresWrapper;
  }

}

The Blueprint descriptor (in OSGI-INF/blueprint/rest.xml) is responsible for:
- getting the reference to the Karaf FeaturesService OSGi service and injecting it into the FeaturesRestServiceImpl bean
- configuring the JAX-RS server and defining the FeaturesRestServiceImpl as a service bean
- optionally, enabling debug logging on the CXF internal bus


<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
  xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws"
  xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs"
  xmlns:cxf="http://cxf.apache.org/blueprint/core"
  xsi:schemaLocation="
  http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
  http://cxf.apache.org/blueprint/jaxws http://cxf.apache.org/schemas/blueprint/jaxws.xsd
  http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd
  http://cxf.apache.org/blueprint/core http://cxf.apache.org/schemas/blueprint/core.xsd
  ">

  <cxf:bus>
    <cxf:features>
      <cxf:logging/>
    </cxf:features>
  </cxf:bus>

  <jaxrs:server id="karafFeaturesService" address="/karaf">
    <jaxrs:serviceBeans>
      <ref component-id="karafFeaturesServiceBean"/>
    </jaxrs:serviceBeans>
  </jaxrs:server>

  <bean id="karafFeaturesServiceBean" class="net.nanthrax.examples.jaxrs.service.FeaturesRestServiceImpl">
    <property name="featuresService" ref="featuresService"/>
  </bean>

  <reference id="featuresService" interface="org.apache.karaf.features.FeaturesService"/>

</blueprint>

Finally, the service bundle POM creates an OSGi bundle importing packages from Karaf (for the features service) and from the common bundle (for the interface and the FeatureWrapper):


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>net.nanthrax.examples</groupId>
    <artifactId>jaxrs-blueprint</artifactId>
    <version>1.0-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath>
  </parent>

  <groupId>net.nanthrax.examples.jaxrs-blueprint</groupId>
  <artifactId>net.nanthrax.examples.jaxrs-blueprint.service</artifactId>
  <packaging>bundle</packaging>

  <dependencies>
    <dependency>
      <groupId>net.nanthrax.examples.jaxrs-blueprint</groupId>
      <artifactId>net.nanthrax.examples.jaxrs-blueprint.common</artifactId>
      <version>${project.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.karaf.features</groupId>
      <artifactId>org.apache.karaf.features.core</artifactId>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <configuration>
          <instructions>
            <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
            <Export-Package>
              net.nanthrax.examples.jaxrs.service*;version=${project.version}
            </Export-Package>
            <Import-Package>
              net.nanthrax.examples.jaxrs.common*;version=${project.version},
              org.apache.karaf.features*;version="[2,4)",
              *
            </Import-Package>
          </instructions>
        </configuration>
      </plugin>
    </plugins>
  </build>

</project>

Deployment of the REST service in Karaf

Now, we are ready to deploy our REST service into Karaf.

The first step to perform is the installation of the CXF feature:


karaf@root> features:addurl mvn:org.apache.cxf.karaf/apache-cxf/2.4.2/xml/features
karaf@root> features:install cxf

The CXF feature provides the CXF bundle (with the core engine) and the JAX-RS frontend.

Now, we can install the common bundle and the service bundle:


karaf@root> osgi:install -s mvn:net.nanthrax.examples.jaxrs-blueprint/net.nanthrax.examples.jaxrs-blueprint.common/1.0-SNAPSHOT
karaf@root> osgi:install -s mvn:net.nanthrax.examples.jaxrs-blueprint/net.nanthrax.examples.jaxrs-blueprint.service/1.0-SNAPSHOT

Our REST service is now available. The JAX-RS server uses the OSGi HTTP service of Karaf (the Karaf http feature is automatically installed by CXF). The default port number of the HTTP service (which uses Jetty as web container) is 8181.

By default, CXF frontends/servlets are bound under the cxf context root.

So if you point your browser at http://localhost:8181/cxf/karaf/features, you will see the XML formatted list of all Karaf features:


<Features>
  <Feature>
    <name>saaj-impl</name>
    <version>1.3.2</version>
  </Feature>
  <Feature>
    <name>abdera</name>
    <version>1.1.2</version>
  </Feature>
...

In detail, the URL http://localhost:8181/cxf/karaf/features comes from:
- the port (8181) is the default one of the Karaf HTTP service
- the context root (cxf) is the default one used by CXF
- the "karaf" context is defined in the JAX-RS server (in the Blueprint descriptor, by the address attribute)
- the "features" is defined in the FeaturesRestService interface (by the @Path("/features") annotation on the getFeatures() method)

REST client

CXF also provides a REST client which is very easy to use:


package net.nanthrax.examples.jaxrs.client;

import net.nanthrax.examples.jaxrs.common.FeatureWrapper;
import org.apache.cxf.jaxrs.client.WebClient;

import java.util.ArrayList;
import java.util.List;

/**
* Simple JAX-RS client.
*/
public final class Main {

  public static void main(String[] args) throws Exception {
    WebClient webClient = WebClient.create("http://localhost:8181/cxf/karaf/features/");
    List<FeatureWrapper> features = new ArrayList<FeatureWrapper>(webClient.getCollection(FeatureWrapper.class));
    for (FeatureWrapper feature : features) {
      System.out.println("Feature " + feature.getName() + "/" + feature.getVersion());
    }
  }

}

Conclusion

Writing a REST service is really easy, and Apache CXF fully supports the OSGi environment. We can use Blueprint (with OSGi service lookup) to describe and start the JAX-RS server. That's why Karaf is a great container for this kind of services (and a lot of others ;)).

You can find a bunch of other REST service examples in Talend Service Factory (TSF):

- the Talend SF runtime: http://www.talend.com/download.php?src=HomePage#AI
- the Talend SF examples: http://www.talend.com/resources/documentation.php#SF

Use a “remote” EJB in Camel routes

August 9, 2011 Posted by jbonofre

Introduction

You have an existing application, let’s say developed using J2EE, including (session) EJBs.
The application is running in a J2EE application server like JBoss, WebSphere or WebLogic.

This application “exposes” EJBs to perform some business services.

Now, you want to use these “remote” EJBs in Camel routes.

Context

We want to “expose” the EJB as a web service.

As for all EJBs, we have two interfaces for our EJB: the local and remote interfaces.
Let’s assume that we have:

* ejb.MyEjbSession
* ejb.MyEjbSessionHome

We assume that the MyEjbSession EJB provides a businessMethod() method, with a String in argument, and returning a String.

The first thing to do is to define an interface carrying the WebService annotation. This interface defines the operations and will be used to generate the WSDL on the fly:


package net.nanthrax.blog.camel;

import javax.jws.WebService;

@WebService(targetNamespace = "http://www.nanthrax.net/blog", name = "MyEjbService")
public interface MyEjbService {

    public String businessService(String message);

}

Now, we can create a bean implementing this interface:


package net.nanthrax.blog.camel;

import javax.jws.WebService;

import ejb.MyEjbSession;

@WebService(serviceName = "myEjbService", targetNamespace = "http://www.nanthrax.net/blog", endpointInterface = "net.nanthrax.blog.camel.MyEjbService")
public class MyEjbServiceImpl implements MyEjbService {

    private MyEjbSession proxy = null;

    public String businessService(String message) {
        return proxy.businessMethod(message);
    }

    public void setProxy(MyEjbSession proxy) {
        this.proxy = proxy;
    }

    public MyEjbSession getProxy() {
        return this.proxy;
    }

}

Camel routes

Now we have a bean that we can use in a route. We use the Camel Spring DSL. We also use Spring classes to connect to the J2EE application server and to inject the EJB proxy. In this example, we use the JBoss application server:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cxf="http://camel.apache.org/schema/cxf"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd
">

    <bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
    <property name="environment">
      <props>
        <prop key="java.naming.factory.initial">org.jnp.interfaces.NamingContextFactory </prop>
        <prop key="java.naming.provider.url">jnp://host:1099</prop>
      </props>
    </property>
    </bean>

    <bean id="ejbProxy" class="org.springframework.ejb.access.SimpleRemoteStatelessSessionProxyFactoryBean">
        <property name="jndiName" value="ejb/jndi/name/MyEjbSession" />
        <property name="businessInterface" value="ejb.MyEjbSession />
        <property name="homeInterface" value="ejb.MyEjbSessionHome" />
        <property name="refreshHomeOnConnectFailure" value="true" />
        <property name="cacheHome" value="true" />
        <property name="lookupHomeOnStartup" value="false" />
        <property name="resourceRef" value="false" />
        <property name="jndiTemplate" ref="jndiTemplate" />
    </bean>

    <bean id="ejbService" class="net.nanthrax.blog.camel.MyEjbServiceImpl">
        <property name="proxy" ref="ejbProxy"/>
    </bean>

    <import resource="classpath:META-INF/cxf/cxf.xml"/>
    <import resource="classpath:META-INF/cxf/cxf-extension-soap.xml"/>
    <import resource="classpath:META-INF/cxf/cxf-extension-http-jetty.xml"/>

    <cxf:cxfEndpoint id="cxfEndpoint"
serviceClass="net.nanthrax.blog.camel.MyEjbService"
address="http://0.0.0.0:9090/blog/ejb-service/"/>

    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="cxf:bean:assetServiceCxfEndpoint"/>
            <to uri="assetServiceBean"/>
        </route>
    </camelContext>

</beans>

Apache Karaf Cellar 2.2.2 release

August 8, 2011 Posted by jbonofre

What’s new

About one month ago, we released Karaf Cellar 2.2.1, the first “official” release of the Karaf clustering sub-project.

This new Karaf Cellar 2.2.2 release includes bug fixes; especially, one blocker bug made it impossible to install Cellar on a Karaf instance running on the Equinox OSGi framework.

But it’s not just a bug fix release: we merged two features from the Cellar trunk.

Bundle synchronization

In Karaf Cellar 2.2.1, we were able to synchronize features (including features repositories) and configuration between Karaf Cellar instances. It means that you can install a feature on one node (cluster:features-install group feature), and the feature will be installed on each Karaf node.

Karaf Cellar 2.2.2 includes the same behavior for pure OSGi bundles. You can install a bundle on one node, and the bundle will be installed on all other nodes in the same cluster group.


karaf@root> osgi:install mybundle

mybundle will be installed on all nodes in the same cluster group.

It’s a first step; as we have for features and config, we will add specific commands to manipulate bundles, something like:


karaf@root> cluster:install-bundle group mybundle

Cloud support

Cellar relies on Hazelcast in order to discover cluster nodes. This can happen either by using multicast or by unicast (specifying the IP address of each node).
Unfortunately, multicast is not allowed by most IaaS providers, and specifying all the IP addresses is not very flexible, since in most cases they are not known in advance.

Cellar solves this problem using a cloud discovery service powered by jclouds.

Cloud discovery service

Most cloud providers offer, among other services, cloud storage. Cellar uses the cloud storage via jclouds to put there the IP address of each node, so that Hazelcast can find them.
This approach is also called a blackboard: each node registers itself in a common storage, so that the other nodes know of its existence.
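The blackboard process can be sketched in a few lines of plain Java. This is purely illustrative: Cellar actually stores the addresses in a cloud blobstore accessed via jclouds, not in an in-memory map, and the class below is mine:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative blackboard: each node writes its own address into a shared
// storage and reads everyone else's. Here a map stands in for the blobstore.
public class Blackboard {

    private final Map<String, Long> entries = new ConcurrentHashMap<String, Long>();

    // A node registers itself (address -> registration timestamp)
    public void register(String nodeAddress) {
        entries.put(nodeAddress, System.currentTimeMillis());
    }

    // Peers discover each other, dropping entries older than the validity window
    public Set<String> discover(long validityMillis) {
        final long now = System.currentTimeMillis();
        entries.values().removeIf(timestamp -> now - timestamp > validityMillis);
        return entries.keySet();
    }
}
```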

Installing Cellar cloud discovery service

To install the cloud discovery service, simply install the appropriate jclouds provider feature and then install the cellar-cloud feature. For the rest of this article I will use Amazon S3 as an example, but it applies to any provider supported by jclouds.


karaf@root> features:install jclouds-aws-s3
karaf@root> features:install cellar-cloud

Once the feature is installed, you need to create a configuration that contains the credentials and the type of the cloud storage (aka blobstore).
To do that, add a configuration file under etc with the name org.apache.karaf.cellar.cloud-.cfg and put there the following information:

provider=aws-s3 (this varies according to the blobstore provider)
identity=<the identity of the blobstore account>
credential=<the credential/password of the blobstore account>
container=<the name of the bucket>
validity=<the amount of time an entry is considered valid; after that time the entry is removed>
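For example, for Amazon S3 the file could contain something like the following (all values are hypothetical placeholders, including the validity value):

```properties
provider=aws-s3
identity=<your AWS access key>
credential=<your AWS secret key>
container=cellar-discovery
validity=60000
```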

After creating the file, the service will check for new nodes. If new nodes are found, the Hazelcast instance configuration is updated and the instance is restarted.