Using Apache Karaf with Kubernetes

In a previous blog post (http://blog.nanthrax.net/?p=893), I introduced the “new” docker support in Karaf and the use of Karaf static distribution.

This post follows the previous one by illustrating how to use the Karaf docker image with a Kubernetes cluster.

Preparing Karaf static docker image

As in the previous blog post, we prepare a docker image based on a Karaf static distribution.

For this blog post, I’m using the example provided in Karaf: https://github.com/apache/karaf/tree/master/examples/karaf-docker-example/karaf-docker-example-static-dist.

I’m just building the distribution as usual:

karaf-docker-example-static-dist$ mvn clean install
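Note that the build also generates a Dockerfile through the karaf-maven-plugin (used in the next steps). If needed, the corresponding goal can also be invoked directly, assuming the plugin is declared in the module's pom.xml (which is the case for this example):

$ mvn karaf:dockerfile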

Preparing a Kubernetes testing cluster

For this blog post, I’m using minikube installed locally.

The installation is pretty simple:

  1. Download minikube from https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  2. Rename minikube-linux-amd64 to minikube and make it executable
  3. Copy minikube into a directory on your PATH, for instance /usr/local/bin (see the commands below)
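For instance, on a Linux amd64 machine, these three steps roughly translate to:

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo cp minikube /usr/local/bin/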

Now that minikube is installed on the machine, we can initialize the testing Kubernetes cluster:

$ minikube start
 minikube v1.2.0 on linux (amd64)
 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
 Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
 Pulling images ...
 Launching Kubernetes ... 
 Verifying: apiserver proxy etcd scheduler controller dns
 Done! kubectl is now configured to use "minikube"

Now, we can install the kubectl command line tool on our machine (depending on your distribution, you may first need to add the Kubernetes apt repository, as shown later in this post):

$ sudo apt install kubectl

We can now interact with our Kubernetes cluster:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m49s   v1.15.0

Now, we will use the Docker daemon running inside minikube to build and host our image.

To do so, we retrieve the Docker daemon connection details from minikube and set the corresponding environment variables:

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/jbonofre/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)
$ eval $(minikube docker-env)
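A quick way to verify that the shell now talks to the minikube Docker daemon is to list the running containers; we should see the Kubernetes system containers there:

$ docker ps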

It’s possible to access the Kubernetes cluster master using minikube ssh:

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ 

It’s also possible to use the Kubernetes web console dashboard:

$ minikube dashboard
 Verifying dashboard health ...
 Launching proxy ...
 Verifying proxy health ...
 Opening http://127.0.0.1:45199/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...
[4552:4574:0625/142804.920361:ERROR:browser_process_sub_thread.cc(221)] Waited 3 ms for network service
Opening in existing browser session.

In this blog post, I’m only using the kubectl command line tool, but you can achieve exactly the same using the Kubernetes web dashboard.

Preparing the docker image

We built the karaf-docker-example-static-dist distribution previously.

Thanks to the karaf-maven-plugin, the karaf:dockerfile goal automatically creates a ready-to-use Dockerfile. Going into the karaf-docker-example-static-dist/target folder, where the Dockerfile is located, we can directly create the docker image in the minikube docker daemon:

$ cd karaf-docker-example-static-dist/target
$ docker build -t karaf .
Sending build context to Docker daemon  58.12MB
Step 1/7 : FROM openjdk:8-jre
8-jre: Pulling from library/openjdk
6f2f362378c5: Pull complete 
494c27a8a6b8: Pull complete 
7596bb83081b: Pull complete 
1e739bce2743: Pull complete 
4dde2a90460d: Pull complete 
1f5b8585072c: Pull complete 
Digest: sha256:ab3c95c9b20a238a2e62201104d54f887da6e231ba1ff1330fae5a29d5b99f5f
Status: Downloaded newer image for openjdk:8-jre
 ---> ad64853179c1
Step 2/7 : ENV KARAF_INSTALL_PATH /opt
 ---> Running in 77defab3df79
Removing intermediate container 77defab3df79
 ---> 2abcb151c984
Step 3/7 : ENV KARAF_HOME $KARAF_INSTALL_PATH/apache-karaf
 ---> Running in 90bb0624c34d
Removing intermediate container 90bb0624c34d
 ---> 1c9da3faa250
Step 4/7 : ENV PATH $PATH:$KARAF_HOME/bin
 ---> Running in 152c96b787f3
Removing intermediate container 152c96b787f3
 ---> 7574018f8973
Step 5/7 : COPY assembly $KARAF_HOME
 ---> 24c8710f2601
Step 6/7 : EXPOSE 8101 1099 44444 8181
 ---> Running in 280795e8e7b6
Removing intermediate container 280795e8e7b6
 ---> 557edaa00ad9
Step 7/7 : CMD ["karaf", "run"]
 ---> Running in eeed1d42ee76
Removing intermediate container eeed1d42ee76
 ---> 945b344ccf12
Successfully built 945b344ccf12
Successfully tagged karaf:latest

We now have a docker image ready in the minikube docker daemon:

$ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
karaf                                     latest              945b344ccf12        49 seconds ago      267MB
k8s.gcr.io/kube-proxy                     v1.15.0             d235b23c3570        5 days ago          82.4MB
k8s.gcr.io/kube-apiserver                 v1.15.0             201c7a840312        5 days ago          207MB
k8s.gcr.io/kube-scheduler                 v1.15.0             2d3813851e87        5 days ago          81.1MB
k8s.gcr.io/kube-controller-manager        v1.15.0             8328bb49b652        5 days ago          159MB
openjdk                                   8-jre               ad64853179c1        2 weeks ago         246MB
k8s.gcr.io/kube-addon-manager             v9.0                119701e77cbc        5 months ago        83.1MB
k8s.gcr.io/coredns                        1.3.1               eb516548c180        5 months ago        40.3MB
k8s.gcr.io/kubernetes-dashboard-amd64     v1.10.1             f9aed6605b81        6 months ago        122MB
k8s.gcr.io/etcd                           3.3.10              2c4adeb21b4f        6 months ago        258MB
k8s.gcr.io/k8s-dns-sidecar-amd64          1.14.13             4b2e93f0133d        9 months ago        42.9MB
k8s.gcr.io/k8s-dns-kube-dns-amd64         1.14.13             55a3c5209c5e        9 months ago        51.2MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64    1.14.13             6dc8ef8287d3        9 months ago        41.4MB
k8s.gcr.io/pause                          3.1                 da86e6ba6ca1        18 months ago       742kB
gcr.io/k8s-minikube/storage-provisioner   v1.8.1              4689081edb10        19 months ago       80.8MB

As we don’t want Kubernetes to try pulling the image from a remote registry (with the latest tag, the default image pull policy is Always, and our image only exists in the minikube Docker daemon), we tag our karaf image with an explicit version (1.0.0 in our case):

$ docker tag karaf:latest karaf:1.0.0
$ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
karaf                                     1.0.0               945b344ccf12        4 minutes ago       267MB
karaf                                     latest              945b344ccf12        4 minutes ago       267MB
...
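Alternatively, it would be possible to keep the latest tag and set the pull policy explicitly when creating the deployment (a hedged variant of the kubectl run command used below):

$ kubectl run karaf --image=karaf:latest --image-pull-policy=IfNotPresent --port=8181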

Creating deployment and pod in Kubernetes

We can now create a pod with our karaf image in the Kubernetes cluster, using the kubectl run command:

$ kubectl run karaf --image=karaf:1.0.0 --port=8181
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/karaf created

Note that we specify the port number on which the example servlet is bound in our Karaf distribution.
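As the warning above suggests, this generator is deprecated; a declarative alternative (a sketch based on the run=karaf label that kubectl run generates here, using the apps/v1 API) is a small Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: karaf
spec:
  replicas: 1
  selector:
    matchLabels:
      run: karaf
  template:
    metadata:
      labels:
        run: karaf
    spec:
      containers:
      - name: karaf
        image: karaf:1.0.0
        ports:
        - containerPort: 8181

It can then be applied with kubectl apply -f karaf-deployment.yaml (karaf-deployment.yaml being whatever file name you saved the manifest under).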

We can verify the status of our deployment and associated pods:

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
karaf   1/1     1            1           44s
$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
karaf-cc9c6bd5d-6wbzk   1/1     Running   0          57s   172.17.0.5   minikube   <none>           <none>

We can see our karaf pod running on the minikube node.

We can have more details about our pod:

$ kubectl describe pods
Name:           karaf-cc9c6bd5d-6wbzk
Namespace:      default
Priority:       0
Node:           minikube/10.0.2.15
Start Time:     Tue, 25 Jun 2019 14:52:28 +0200
Labels:         pod-template-hash=cc9c6bd5d
                run=karaf
Annotations:    <none>
Status:         Running
IP:             172.17.0.5
Controlled By:  ReplicaSet/karaf-cc9c6bd5d
Containers:
  karaf:
    Container ID:   docker://0e540e099d88245b690fec69ebaa486ea5faa91813497ed6f9bc2b70af8188bd
    Image:          karaf:1.0.0
    Image ID:       docker://sha256:945b344ccf12b7a1edf9319d3db1041f1d90b62a65e07f0da5f402da17aef5dd
    Port:           8181/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 25 Jun 2019 14:52:29 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bl7ks (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-bl7ks:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bl7ks
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned default/karaf-cc9c6bd5d-6wbzk to minikube
  Normal  Pulled     12m   kubelet, minikube  Container image "karaf:1.0.0" already present on machine
  Normal  Created    12m   kubelet, minikube  Created container karaf
  Normal  Started    12m   kubelet, minikube  Started container karaf

It’s also possible to directly see the pod logs:

$ kubectl logs karaf-cc9c6bd5d-6wbzk
karaf: Ignoring predefined value for KARAF_HOME
Jun 25, 2019 12:52:29 PM org.apache.karaf.main.Main launch
INFO: Installing and starting initial bundles
Jun 25, 2019 12:52:29 PM org.apache.karaf.main.Main launch
INFO: All initial bundles installed and set to start
Jun 25, 2019 12:52:29 PM org.apache.karaf.main.Main$KarafLockCallback lockAcquired
INFO: Lock acquired. Setting startlevel to 100
12:52:30.814 INFO  [FelixStartLevel] Logging initialized @1571ms to org.eclipse.jetty.util.log.Slf4jLog
12:52:30.837 INFO  [FelixStartLevel] EventAdmin support is not available, no servlet events will be posted!
12:52:30.839 INFO  [FelixStartLevel] LogService support enabled, log events will be created.
12:52:30.842 INFO  [FelixStartLevel] Pax Web started
12:52:31.151 INFO  [paxweb-config-1-thread-1] No ALPN class available
12:52:31.152 INFO  [paxweb-config-1-thread-1] HTTP/2 not available, creating standard ServerConnector for Http
12:52:31.189 INFO  [paxweb-config-1-thread-1] Pax Web available at [0.0.0.0]:[8181]
12:52:31.201 INFO  [paxweb-config-1-thread-1] Binding bundle: [org.apache.karaf.http.core [15]] to http service
12:52:31.217 INFO  [paxweb-config-1-thread-1] Binding bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard [47]] to http service
12:52:31.222 INFO  [paxweb-config-1-thread-1] Binding bundle: [org.apache.karaf.examples.karaf-docker-example-app [14]] to http service
12:52:31.251 INFO  [paxweb-config-1-thread-1] will add org.apache.jasper.servlet.JasperInitializer to ServletContainerInitializers
12:52:31.251 INFO  [paxweb-config-1-thread-1] Skipt org.apache.jasper.servlet.JasperInitializer, because specialized handler will be present
12:52:31.252 INFO  [paxweb-config-1-thread-1] will add org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer to ServletContainerInitializers
12:52:31.322 INFO  [paxweb-config-1-thread-1] added ServletContainerInitializer: org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer
12:52:31.323 INFO  [paxweb-config-1-thread-1] will add org.eclipse.jetty.websocket.server.NativeWebSocketServletContainerInitializer to ServletContainerInitializers
12:52:31.323 INFO  [paxweb-config-1-thread-1] added ServletContainerInitializer: org.eclipse.jetty.websocket.server.NativeWebSocketServletContainerInitializer
12:52:31.366 INFO  [paxweb-config-1-thread-1] registering context DefaultHttpContext [bundle=org.apache.karaf.examples.karaf-docker-example-app [14], contextID=default], with context-name: 
12:52:31.387 INFO  [paxweb-config-1-thread-1] registering JasperInitializer
12:52:31.446 INFO  [paxweb-config-1-thread-1] No DecoratedObjectFactory provided, using new org.eclipse.jetty.util.DecoratedObjectFactory[decorators=1]
12:52:31.559 INFO  [paxweb-config-1-thread-1] DefaultSessionIdManager workerName=node0
12:52:31.559 INFO  [paxweb-config-1-thread-1] No SessionScavenger set, using defaults
12:52:31.562 INFO  [paxweb-config-1-thread-1] node0 Scavenging every 600000ms
12:52:31.580 INFO  [paxweb-config-1-thread-1] Started HttpServiceContext{httpContext=DefaultHttpContext [bundle=org.apache.karaf.examples.karaf-docker-example-app [14], contextID=default]}
12:52:31.587 INFO  [paxweb-config-1-thread-1] jetty-9.4.18.v20190429; built: 2019-04-29T20:42:08.989Z; git: e1bc35120a6617ee3df052294e433f3a25ce7097; jvm 1.8.0_212-b04
12:52:31.631 INFO  [paxweb-config-1-thread-1] Started default@4438a8f7{HTTP/1.1,[http/1.1]}{0.0.0.0:8181}
12:52:31.632 INFO  [paxweb-config-1-thread-1] Started @2393ms

Exposing the service

Now, we want to expose the servlet example running in Karaf, making it accessible from “outside” of the Kubernetes cluster.

We expose our karaf deployment as a service, using port 8181:

$ kubectl expose deployment/karaf --type="NodePort" --port=8181
service/karaf exposed
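For reference, the declarative equivalent of this expose command (a sketch, reusing the run=karaf label) would be a NodePort Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: karaf
spec:
  type: NodePort
  selector:
    run: karaf
  ports:
  - port: 8181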

We now have a service available, directly accessible:

$ kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
karaf        NodePort    10.104.119.216   <none>        8181:31673/TCP   56s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          52m

We can see the karaf service.

Now, we can get the “external/public” URL exposed by minikube:

$ minikube service --url karaf
http://192.168.99.100:31673

It means that we can point our browser to http://192.168.99.100:31673/servlet-example and see the servlet example output.
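We can also check from the command line with curl, using the same URL:

$ curl http://192.168.99.100:31673/servlet-example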

If we display the pod logs again, we can see the client access:

$ kubectl logs karaf-cc9c6bd5d-6wbzk
...
13:13:52.506 INFO  [qtp1230440715-30] Client 172.17.0.1 request received on http://192.168.99.100:31673/servlet-example

Note the last log message, showing the client request.

Scaling

Using a concrete kubernetes cluster

minikube is great for local testing, but it’s a “limited” Kubernetes cluster (it’s not so easy to add new nodes).

Docker registry

In order to share my docker image with the nodes of my Kubernetes cluster, I’m creating a local Docker registry:

$ docker pull registry:2
$ docker run -d -p 5000:5000 --name registry registry:2

Then, I tag my Karaf docker image and push it to my fresh registry:

$ docker tag karaf:1.0.0 localhost:5000/karaf:1.0.0
$ docker push localhost:5000/karaf:1.0.0
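We can verify that the image landed in the registry using the registry HTTP API (a quick check):

$ curl http://localhost:5000/v2/_catalog
{"repositories":["karaf"]}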

VirtualBox VMs with Kubernetes

To illustrate a more concrete Kubernetes cluster, we will create a couple of Ubuntu VMs in VirtualBox.
The only setup to take care of is the network. To simplify, I’m using a “Bridged Network Adapter”, meaning that the VMs appear on my local network like other machines.

I install a regular Ubuntu OS on each VM.

On each machine, I install some tools that I’m going to use:

$ sudo apt install apt-transport-https curl docker.io

As we are going to use our new insecure Docker registry, I declare it in /etc/docker/daemon.json, with the IP address of the machine hosting the registry:

{
  "insecure-registries": ["192.168.134.110:5000"]
}
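The Docker daemon has to be restarted to take this new configuration into account (assuming a systemd based Ubuntu):

$ sudo systemctl restart docker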

Then, I install Kubernetes on the VMs:

$ sudo su -
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt update
# apt install kubelet kubeadm kubectl

Starting cluster master

Now, we can create the Kubernetes cluster master on the first VM:

$ sudo su -
# kubeadm init
...
kubeadm join 192.168.134.90:6443 --token alpia1.8cjc1yfv5ezganq7 --discovery-token-ca-cert-hash sha256:3f2da2fa1967b8e974b9097fcdd15c66e0d136db5b1f08b3db7fe45c3e2b790b

With my regular user, I initialize the kubectl configuration:

$ mkdir -p ~/.kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown jbonofre:jbonofre ~/.kube/config

Now, I remove the master taint, allowing pods to be scheduled on this node as well:

$ kubectl taint nodes --all node-role.kubernetes.io/master-
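Note that a kubeadm cluster also needs a pod network add-on before the nodes report Ready. The pod IPs shown later (192.168.x.x) suggest Calico was used here; at the time of writing, a typical install (an assumption, the manifest version may differ) looked like:

$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml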

We have one node ready:

$ kubectl get nodes
NAME   STATUS    ROLES    AGE   VERSION
node   Ready     master   2m    v1.15.0

Starting cluster worker

On the second VM, we join the Kubernetes cluster using the command provided by kubeadm init:

$ sudo su -
# kubeadm join 192.168.134.90:6443 --token alpia1.8cjc1yfv5ezganq7 --discovery-token-ca-cert-hash sha256:3f2da2fa1967b8e974b9097fcdd15c66e0d136db5b1f08b3db7fe45c3e2b790b

We now have two nodes:

$ kubectl get nodes
NAME   STATUS    ROLES    AGE   VERSION
node2  Ready     <none>   2m    v1.15.0
node   Ready     master   5m    v1.15.0

We can now scale our deployment 😉

Deployment and scaling

Let’s deploy our Karaf as we did before:

$ kubectl run karaf --image=192.168.134.110:5000/karaf:1.0.0 --port=8181

Then, we have Karaf running on one node:

$ kubectl get deployments
NAME     READY     UP-TO-DATE      AVAILABLE       AGE
karaf    1/1       1               1               32s

We can see the pod and the node where it’s running:

$ kubectl get pods -o wide
NAME              READY           STATUS          RESTARTS         AGE          IP                    NODE         NOMINATED NODE             READINESS GATES
karaf-68cb45h     1/1             Running         0                106s         192.168.167.130       node1        <none>                     <none>

So Karaf is running on node1. Now, let’s scale the deployment to use the two nodes:

$ kubectl scale deployments/karaf --replicas=2

Now, we can see that our deployment scaled to two replicas:

$ kubectl get deployments
NAME     READY     UP-TO-DATE      AVAILABLE       AGE
karaf    2/2       2               2               4m34s

We can see the pods on the two nodes:

$ kubectl get pods -o wide
NAME              READY           STATUS          RESTARTS         AGE          IP                    NODE         NOMINATED NODE             READINESS GATES
karaf-68cb45x     1/1             Running         0                74s          192.168.104.8         node2        <none>                     <none>
karaf-68cb45h     1/1             Running         0                5m30s        192.168.167.130       node1        <none>                     <none>

It’s also possible to scale down:

$ kubectl scale deployments/karaf --replicas=1
$ kubectl get deployments
NAME     READY     UP-TO-DATE      AVAILABLE       AGE
karaf    1/1       1               1               8m40s
$ kubectl get pods -o wide
NAME              READY           STATUS          RESTARTS         AGE          IP                    NODE         NOMINATED NODE             READINESS GATES
karaf-68cb45x     1/1             Terminating     0                4m21s        192.168.104.8         node2        <none>                     <none>
karaf-68cb45h     1/1             Running         0                8m45s        192.168.167.130       node1        <none>                     <none>
$ kubectl get pods -o wide
NAME              READY           STATUS          RESTARTS         AGE          IP                    NODE         NOMINATED NODE             READINESS GATES
karaf-68cb45h     1/1             Running         0                9m50s        192.168.167.130       node1        <none>                     <none>

Summary

Following the previous blog post about Karaf with Docker, this post shows that Karaf is fully ready to run on Kubernetes.

As part of the “kloud initiative” (Karaf for the Cloud), I’m preparing some tooling directly in Karaf to further simplify the use of Kubernetes. There are also some improvements coming in Karaf Cellar to better leverage Kubernetes.

In the meantime, you can already use Karaf with Kubernetes in your datacenter or with your cloud provider.

Enjoy !
