Archive for: ‘December 2012’

Create custom log4j appender for Karaf and Pax Logging

December 15, 2012 Posted by jbonofre

Karaf leverages Pax Logging for the logging layer. Pax Logging provides an abstraction service on top of the most popular logging frameworks, like SLF4J, Log4j, Commons Logging, etc.

Karaf provides a default logging configuration in the etc/org.ops4j.pax.logging.cfg file.

By default, all INFO log messages (rootLogger) are sent to a file appender (in data/log/karaf.log). The file appender “maintains” one file of 1MB and stores up to 10 backup files.
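
For reference, the default file appender configuration in etc/org.ops4j.pax.logging.cfg looks roughly like the following (shown as an illustration; the exact conversion pattern and values may differ between Karaf versions):

log4j.rootLogger = INFO, out, osgi:*

# File appender
log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.out.file=${karaf.data}/log/karaf.log
log4j.appender.out.append=true
log4j.appender.out.maxFileSize=1MB
log4j.appender.out.maxBackupIndex=10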

Adding a new appender configuration, example with Syslog appender

We can add a new appender configuration to the Karaf logging configuration file.

For instance, we can add a syslog appender in etc/org.ops4j.pax.logging.cfg:


log4j.rootLogger = INFO, out, syslog, osgi:*
...
# Syslog appender
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=[%p] %c:%L - %m%n
log4j.appender.syslog.syslogHost=localhost
log4j.appender.syslog.facility=KARAF
log4j.appender.syslog.facilityPrinting=false
...

We create the syslog appender configuration, and we use this appender for the rootLogger.

Pax Logging provides all default Log4j appenders.

Creating a custom appender

It’s also possible to create your own appender.

For instance, say you want to create MyJDBCAppender, extending the standard Log4j JDBCAppender, to provide better handling of the quote character in the SQL query for a DB2 backend:


package org.apache.karaf.blog.logging.appender;

import org.apache.log4j.jdbc.JDBCAppender;
import org.apache.log4j.spi.LoggingEvent;

/**
 * Override of the Apache Log4j JDBCAppender for DB2 use (escaping of the ' char in data).
 * Requires the substitution of the ' char by {@link #SQL_APOS} when writing the log4j sql property.
 */
public class MyJDBCAppender extends JDBCAppender {

    private static final String SQL_APOS = "{sql_apos}";
    private static final String XML_APOS = "&apos;";

    /** {@inheritDoc} */
    @Override
    protected String getLogStatement(LoggingEvent event) {
        String sqlLayout = getLayout().format(event);
        // escape ' in the data as the standard entity &apos; after the layout has been applied
        sqlLayout = sqlLayout.replace("'", XML_APOS);
        // revert the {sql_apos} placeholders to ' to get the final executable SQL statement
        sqlLayout = sqlLayout.replace(SQL_APOS, "'");
        return sqlLayout;
    }

}

We put the MyJDBCAppender.java file in the src/main/java/org/apache/karaf/blog/logging/appender folder (matching the package name).

We package this appender as an OSGi bundle. This bundle is a fragment attached to the Pax Logging service bundle:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>org.apache.karaf.blog.logging.appender</groupId>
  <artifactId>org.apache.karaf.blog.logging.appender.jdbc</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>bundle</packaging>

  <dependencies>
    <dependency>
      <groupId>org.ops4j.pax.logging</groupId>
      <artifactId>pax-logging-service</artifactId>
      <version>1.6.9</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <version>2.3.7</version>
        <extensions>true</extensions>
        <configuration>
          <instructions>
            <Bundle-SymbolicName>org.apache.karaf.blog.logging.appender.jdbc</Bundle-SymbolicName>
            <Export-Package>org.apache.karaf.blog.logging.appender</Export-Package>
            <Import-Package/>
            <Private-Package>org.apache.log4j.jdbc</Private-Package>
            <Fragment-Host>org.ops4j.pax.logging.pax-logging-service</Fragment-Host>
            <_failok>true</_failok>
          </instructions>
        </configuration>
      </plugin>
    </plugins>
  </build>

</project>

We can now use our appender in the etc/org.ops4j.pax.logging.cfg file, for instance:


log4j.rootLogger = INFO, out, myappender, osgi:*
...
log4j.appender.myappender=org.apache.karaf.blog.logging.appender.MyJDBCAppender
log4j.appender.myappender.url=jdbc:db2:....
log4j.appender.myappender.driver=com.ibm.db2.jcc.DB2Driver
log4j.appender.myappender.user=username
log4j.appender.myappender.password=password
log4j.appender.myappender.sql=insert into logs values({sql_apos}%x{sql_apos}, {sql_apos}%d{sql_apos}, {sql_apos}%C{sql_apos}, {sql_apos}%p{sql_apos}, {sql_apos}%m{sql_apos})
log4j.appender.myappender.layout=org.apache.log4j.PatternLayout

In order to be loaded very early in the Karaf bootstrap, our appender bundle has to be present in the system folder and defined in etc/startup.properties.

The system folder has a “Maven repository like” structure, so you have to copy the bundle to:


system/groupId/artifactId/version/artifactId-version.jar

In our example, it means:


mkdir -p $KARAF_HOME/system/org/apache/karaf/blog/logging/appender/org.apache.karaf.blog.logging.appender.jdbc/1.0-SNAPSHOT
cp target/org.apache.karaf.blog.logging.appender.jdbc-1.0-SNAPSHOT.jar $KARAF_HOME/system/org/apache/karaf/blog/logging/appender/org.apache.karaf.blog.logging.appender.jdbc/1.0-SNAPSHOT/org.apache.karaf.blog.logging.appender.jdbc-1.0-SNAPSHOT.jar

and in etc/startup.properties, we define the appender bundle just after the pax-logging-service bundle:


...
org/ops4j/pax/logging/pax-logging-api/1.6.9/pax-logging-api-1.6.9.jar=8
org/ops4j/pax/logging/pax-logging-service/1.6.9/pax-logging-service-1.6.9.jar=8
org/apache/karaf/blog/logging/appender/org.apache.karaf.blog.logging.appender.jdbc/1.0-SNAPSHOT/org.apache.karaf.blog.logging.appender.jdbc-1.0-SNAPSHOT.jar=8
...

You can now start Karaf; it will use our new custom appender.

How to enable HTTPS certificate client auth with Karaf

December 12, 2012 Posted by jbonofre

I have received many messages from users asking how to “trust” HTTP clients in Karaf.

The purpose is to exchange certificates and allow only “trusted” clients to use the Karaf HTTP service.

Enable HTTP client auth

First of all, we have to enable the HTTP client auth support in Karaf.

When you install the http feature, Karaf leverages Pax Web to provide the HTTP OSGi service:


karaf@root> features:install http

Now, we have to add a custom etc/org.ops4j.pax.web.cfg file:


org.osgi.service.http.port=8181

org.osgi.service.http.port.secure=8443
org.osgi.service.http.secure.enabled=true
org.ops4j.pax.web.ssl.keystore=./etc/keystores/keystore.jks
org.ops4j.pax.web.ssl.password=password
org.ops4j.pax.web.ssl.keypassword=password
#org.ops4j.pax.web.ssl.clientauthwanted=false
org.ops4j.pax.web.ssl.clientauthneeded=true

NB: the clientauthwanted and clientauthneeded properties are valid for Karaf 2.2.x, which uses Pax Web 1.0.x.

Thanks to the clientauthneeded property, we “force” the client to be trusted.

Create the trusted client certificate

We are going to use keytool (provided with the JDK) to manipulate the keys and certificates.

The first step is to create two key pairs:

  • one for the server side (used for SSL)
  • one as an example for the client side (used for the “trust” relationship; this should be done for each client, on the client side)


mkdir -p etc/keystores
cd etc/keystores
keytool -genkey -keyalg RSA -validity 365 -alias serverkey -keypass password -storepass password -keystore keystore.jks
keytool -genkey -keyalg RSA -validity 365 -alias clientkey -keypass password -storepass password -keystore client.jks

NB: these keys are self-signed. In a production system, you should use a Certificate Authority (CA).
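
With a CA, the flow would look roughly like the following sketch (the certificate file names and the ca alias are illustrative and depend on your CA):

keytool -certreq -alias serverkey -keystore keystore.jks -storepass password -file server.csr
# send server.csr to the CA, then import the CA certificate and the signed reply
keytool -import -trustcacerts -alias ca -keystore keystore.jks -storepass password -file ca.cer
keytool -import -alias serverkey -keystore keystore.jks -storepass password -file server-signed.cer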

Now, we can export the client certificate to be imported in the server keystore:


keytool -export -rfc -keystore client.jks -storepass password -alias clientkey -file client.cer
keytool -import -trustcacerts -keystore keystore.jks -storepass password -alias clientkey -file client.cer

We can now check that the client certificate is trusted in our keystore:


keytool -list -v -keystore keystore.jks
...
Alias name: clientkey
Creation date: Dec 12, 2012
Entry type: trustedCertEntry
...

and we can now remove the client.cer certificate.

Start Karaf and test with WebConsole

Now we can start Karaf:


bin/karaf

and install the WebConsole feature:


karaf@root> features:install webconsole

If we try to access the WebConsole (using a simple browser) at https://localhost:8443/system/console, we get:


An error occurred during a connection to localhost:8443.

SSL peer cannot verify your certificate.

(Error code: ssl_error_bad_cert_alert)

which is expected, as the browser doesn’t present any trusted client certificate.

Now, we can add the client certificate in the browser.

Firefox supports importing PKCS12 keystores, so we are going to convert the JKS keystore into a PKCS12 keystore:


keytool -importkeystore -srckeystore client.jks -srcstoretype JKS -destkeystore client.pfx -deststoretype PKCS12
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
Entry for alias clientkey successfully imported.
Import command completed: 1 entries successfully imported, 0 entries failed or cancelled

Now, we can import the client certificate into Firefox. To do so, open the Preferences window (in the Edit menu) and click on the Advanced tab.
Go to the Encryption tab and click on the “View Certificates” button.

In “Your Certificates” tab, you can click on the Import button and choose the client.pfx keystore file.

If you try to access https://localhost:8443/system/console again, you will be granted access as a trusted client and can use the WebConsole.

Conclusion

The same applies to any kind of HTTP client that tries to use the HTTPS layer of Karaf.
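
For example, here is a minimal sketch of a Java client presenting the client certificate created above (assumptions: the client.jks and keystore.jks files from the previous steps, the WebConsole listening on localhost:8443; the class name TrustedClientExample is made up for the example, and the standard JSSE system properties are used to configure the TLS key material):

import java.net.URL;

import javax.net.ssl.HttpsURLConnection;

public class TrustedClientExample {

    public static void main(String[] args) throws Exception {
        // present our client certificate during the TLS handshake
        System.setProperty("javax.net.ssl.keyStore", "client.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "password");
        // trust the self-signed server certificate (the server keystore is reused here for simplicity)
        System.setProperty("javax.net.ssl.trustStore", "keystore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "password");

        URL url = new URL("https://localhost:8443/system/console");
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        // any HTTP status code (even a 401 from the WebConsole login) means the TLS
        // handshake, including the client certificate check, succeeded
        System.out.println("HTTP response code: " + connection.getResponseCode());
    }

}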

Now, we can disable the plain HTTP support in Karaf (to force the usage of HTTPS), and we can allow only “trusted” clients to use the HTTPS layer of Karaf.

It’s a simple mechanism to limit access to HTTP resources to trusted clients only.

Apache Karaf Cellar 2.2.5 released!

December 6, 2012 Posted by jbonofre

During ApacheCon EU, I gave a demo of Karaf and Cellar working together. During this demo, I used Cellar 2.2.5-SNAPSHOT.

Now, Cellar 2.2.5 is released! But what’s new in this version?

Groups are now persistent

In Cellar 2.2.4, empty groups disappeared after a restart.

For instance, you could create a new cluster group without any member (an empty group) with:


karaf@root> cluster:group-create foobar
karaf@root> cluster:group-list|grep -i foobar
foobar []

If you restarted Cellar (or Karaf), the empty group was lost:


karaf@root> cluster:group-list|grep -i foobar

To avoid this, in Cellar 2.2.5, the cluster groups are now persistent on each node. We introduced a new groups property in etc/org.apache.karaf.cellar.groups.cfg to store the list of groups. Cellar now reads this property at startup to populate any cluster groups not yet present on the cluster.

On the other hand, the groups property in etc/org.apache.karaf.cellar.node.cfg defines the group membership of the local node.
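
For example, after creating the foobar group above, the two files could contain something like the following (illustrative values; the rest of each file is omitted):

# etc/org.apache.karaf.cellar.groups.cfg: the list of groups defined on the cluster
groups = default, foobar

# etc/org.apache.karaf.cellar.node.cfg: the groups that the local node is a member of
groups = default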


Cluster producers, consumers, and handlers persistence

Like for groups, in Cellar 2.2.4 the status of cluster event producers, consumers, and handlers was not persistent. For instance, if you stopped the cluster event producer, it was started again after a restart, so the status set before the restart was lost.

In Cellar 2.2.5, to avoid that, the status of cluster event producers, consumers, and handlers is now persisted in etc/org.apache.karaf.cellar.node.cfg (it’s the current status on the local node). Cellar now reads the properties from this file at startup to restore the status from before the restart.
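
For illustration, the persisted status properties in etc/org.apache.karaf.cellar.node.cfg look something like this (an example only; the exact handler entries depend on the Cellar features installed on the node):

# cluster event producer and consumer status on the local node
producer = true
consumer = true
# status of the individual cluster event handlers
handler.org.apache.karaf.cellar.bundle.BundleEventHandler = true
handler.org.apache.karaf.cellar.config.ConfigurationEventHandler = true
handler.org.apache.karaf.cellar.features.FeaturesEventHandler = true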

Bundles blacklist and whitelist

In Cellar 2.2.4, the default bundle blacklist and whitelist in etc/org.apache.karaf.cellar.groups.cfg were not correct: all bundles were blocked (inbound and outbound). If you tried to install a bundle on the cluster, you saw a “Bundle xxxx is BLOCKED …” message in the log.

We changed the default setup to allow all bundle cluster events.
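
The new defaults in etc/org.apache.karaf.cellar.groups.cfg look roughly like this (shown only for the bundle resource of the default group, as an illustration):

default.bundle.whitelist.inbound=*
default.bundle.whitelist.outbound=*
default.bundle.blacklist.inbound=none
default.bundle.blacklist.outbound=none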

Config sync enhancement

In Cellar 2.2.4, to avoid an infinite loop, we introduced a karaf.cellar.sync property appended to all synchronized configuration PIDs. This property contained the timestamp of the last Cellar configuration synchronization. This mechanism had two issues:

  • it polluted the configuration PIDs (a “not usable” property can be confusing for users)
  • if a configuration change occurred between the timestamp and the Cellar configuration timeout, it was not synchronized on the cluster

We changed the configuration synchronization mechanism in Cellar 2.2.5. The karaf.cellar.sync property has been removed. Now we compare the dictionary of a configuration PID on the cluster (in the distributed map) with the local one.
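
A minimal sketch of the idea (this is not the actual Cellar code; class and method names are made up for the illustration) is to convert the local configuration Dictionary to a Map and compare it with the copy stored in the cluster distributed map, triggering a synchronization only when the two differ:

import java.util.Dictionary;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;

public class ConfigCompareSketch {

    /** Convert an OSGi configuration Dictionary to a Map for easy comparison. */
    static Map<String, Object> toMap(Dictionary<String, Object> dictionary) {
        Map<String, Object> map = new HashMap<String, Object>();
        Enumeration<String> keys = dictionary.keys();
        while (keys.hasMoreElements()) {
            String key = keys.nextElement();
            map.put(key, dictionary.get(key));
        }
        return map;
    }

    /** Return true when the local copy and the cluster copy of a configuration PID are identical. */
    static boolean inSync(Dictionary<String, Object> local, Map<String, Object> clusterCopy) {
        return toMap(local).equals(clusterCopy);
    }

}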

Bundle state, name, and symbolic name

The bundle distributed map stored only the bundle name, which was a bit restrictive.

In Cellar 2.2.5, both bundle name and symbolic name are stored in the cluster distributed map.

It allows users to select a bundle (on the cluster) using either the name or the symbolic name.

Improvements on the cluster:* commands and MBeans

In order to mimic the Karaf core commands and MBeans, the Cellar commands and MBeans have been improved.

The cluster:feature-install command (and the corresponding MBean) now supports the norefresh and noclean options, like the features:install Karaf command.

The cluster:bundle-list command supports the -l option (to display the bundle location) and the -s option (to display the bundle symbolic name), like the bundles:list/osgi:list Karaf command.

The cluster:config-list command now allows displaying the dictionary of a configuration PID directly.

A new command has been introduced in Cellar 2.2.5: cluster:sync. This command forces a synchronization of the local node with the cluster. It’s particularly interesting when a node has lost communication with the other nodes (for instance, due to a network issue): cluster:sync forces the resynchronization of the node and the cluster (in both directions).

Restart issues

Cellar uses a LocalBundleListener to listen for changes on the local bundles, and broadcasts these changes as bundle cluster events.

In Cellar 2.2.4, this listener was a simple BundleListener. The problem was that this listener received the local “bundle stop” events when the framework was stopping and broadcast them to the cluster (including to the local node). This means that the “latest” known state of the bundles was “stopped”, so at restart the OSGi framework set the bundles back to the “stopped” state (instead of “started”).

In Cellar 2.2.5, the listener has been changed to a SynchronousBundleListener. Thanks to this listener, we are able to get the stopping event from the OSGi framework: when the framework stops, Cellar disables the bundle listener in order to avoid changing the bundle states.

This way, the bundles are restarted in the correct state.
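
A minimal sketch of this pattern (using plain OSGi APIs; this is not the actual Cellar listener, and LocalBundleListenerSketch and broadcastToCluster are hypothetical names used for the illustration) could look like:

import org.osgi.framework.BundleEvent;
import org.osgi.framework.SynchronousBundleListener;

public class LocalBundleListenerSketch implements SynchronousBundleListener {

    private volatile boolean enabled = true;

    @Override
    public void bundleChanged(BundleEvent event) {
        // the system bundle (id 0) sends a STOPPING event when the framework shuts down
        if (event.getBundle().getBundleId() == 0 && event.getType() == BundleEvent.STOPPING) {
            // disable the listener so that the shutdown doesn't alter the cluster bundle states
            enabled = false;
            return;
        }
        if (!enabled) {
            return;
        }
        broadcastToCluster(event);
    }

    private void broadcastToCluster(BundleEvent event) {
        // hypothetical placeholder for the cluster event producer call
    }

}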

We hope that you will like this new Cellar 2.2.5 release. We mostly focused on bug fixes to provide a more stable clustering solution for Karaf.

Now, we are preparing Cellar 2.2.6 with new bug fixes, new features, etc. In the meantime, Cellar 2.3.0 is in preparation, supporting Karaf 2.3.x and a new Hazelcast version.