Tuesday, May 24, 2016

JBoss Fuse: dynamic Blueprint files with JEXL

In this post I’ll show how to add a little bit of inline scripting in your Apache Aries Blueprint xml files.

I wouldn’t necessarily call it a best practice, but I have always had the idea that this capability might be useful; I probably started wanting it when I was forced to use XML to simulate imperative programming structures, like when using Apache Ant.

And I have found the idea validated in projects like Gradle or Vagrant, where a full programming language is actually hiding in disguise, pretending to be a Domain Specific Language or a surprisingly flexible configuration syntax.

I have talked in the past about something similar, when showing how to use MVEL in JBoss Fuse.
This time I will limit myself to showing how to use small snippets of code that can be inlined in your otherwise static xml files, a trick that might come in handy when you need to perform simple operations like string replacement, arithmetic or anything else for which you want to avoid writing a Java class.

Let me say that I’m not inventing anything new here. I’m just showing how to use a functionality that is provided directly by the Apache Aries project but that I haven’t seen used that often out there.

The goal is to allow you to write snippets like this:



You can see that we are invoking the java.lang.String.replaceAll() method on the value of an environment variable.
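The original snippet isn’t reproduced here, but as a rough, hedged illustration of the kind of entry the post is describing (the property name and the expression are made up; the original operated on an environment variable, and what is actually visible to the expression depends on how the placeholder is configured), it would look something like:

<entry key="osgi.jndi.service.name"
       value="${ karaf.base.toUpperCase().replaceAll('/', '_') }"/>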

We can do this thanks to the Apache Aries Blueprint JEXL Evaluator, an extension to Apache Aries Blueprint that implements a custom token processor which “extends” the base functionality of Aries Blueprint.

In this specific case, it does so by delegating the token interpolation to the Apache JEXL project.

JEXL, Java Expression Language, is just a library that exposes scripting capabilities to the Java platform. It’s not unique in what it does, since you could achieve the same with the native support for JavaScript or with Groovy, for instance. But we are going to use it since the integration with Blueprint has already been written, so we can use it straight away on our Apache Karaf or JBoss Fuse instance.

The following instructions have been verified on JBoss Fuse 6.2.1:

# install JEXL bundle
install -s mvn:org.apache.commons/commons-jexl/2.1.1 
# install JEXL Blueprint integration:
install -s mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.jexl.evaluator/1.0.0

That was all the preparation we needed; now we just need to use the correct XSD version, 1.2.0, in our Blueprint file:
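The relevant part is simply the blueprint-ext namespace at version 1.2.0, something like:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.2.0">
    ...
</blueprint>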


Done that, we can leverage the functionality in this way:
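The original file isn’t reproduced here; as a hedged sketch, assuming the evaluator is registered under the name jexl and using karaf.base purely as an illustrative property (the expression and the service in the original post may well differ), it could look like:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.2.0">

    <!-- delegate placeholder interpolation to the JEXL evaluator -->
    <ext:property-placeholder evaluator="jexl"/>

    <!-- register a service whose osgi.jndi.service.name is computed by a JEXL expression -->
    <service interface="java.io.Serializable">
        <service-properties>
            <entry key="osgi.jndi.service.name"
                   value="${ karaf.base.toUpperCase().replaceAll('/', '_') }"/>
        </service-properties>
        <bean class="java.lang.String"/>
    </service>

</blueprint>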



Copy that blueprint.xml directly into the deploy/ folder, and you can check from the Karaf shell that the dynamic invocation of those inline scripts has actually happened!

JBossFuse:karaf@root> ls (id blueprint.xml) | grep osgi.jndi.service.name
osgi.jndi.service.name = /OPT/RH/JBOSS-FUSE-6.2.1.REDHAT-107___3

This might turn out to be useful in specific scenarios, when you are looking for a quick way to create dynamic configuration.

In case you are interested in implementing your own custom evaluator, this is the interface you need to provide an implementation of:
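The snippet is missing here, but from memory the Aries contract is essentially the following; double-check the exact signature and generics against the org.apache.aries.blueprint.ext.evaluator sources:

package org.apache.aries.blueprint.ext.evaluator;

import java.util.Dictionary;

public interface PropertyEvaluator {
    // expression: the token found between the placeholder prefix and suffix
    // properties: the dictionary of properties visible to the placeholder
    String evaluate(String expression, Dictionary<String, Object> properties);
}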


And this is an example of the service you need to expose to be able to refer it in your <property-placeholder> node:
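Again the original snippet is missing; a sketch of such a registration, where the evaluator name and the implementing class are made-up placeholders (the service property key is, to the best of my knowledge, the one the Aries extension looks up), could be:

<service interface="org.apache.aries.blueprint.ext.evaluator.PropertyEvaluator">
    <service-properties>
        <entry key="org.apache.aries.blueprint.ext.evaluator.name" value="myEvaluator"/>
    </service-properties>
    <bean class="com.example.MyEvaluator"/>
</service>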


Tuesday, April 26, 2016

Deploy and configure a local Docker caching proxy

Recently I was looking into caching of Docker layer downloads for the Fabric8 development environment, so that I could trash the VMs where my Docker daemon was running without having to re-download the base images every single time I recreated a VM.

As usual, I tried to hit Google first, and was pointed to these couple of pages:




Since I quite often use Jerome Petazzoni’s approach of a transparent Squid + iptables setup to cache and sniff simple http traffic (usually http invocations from Java programs), I found that the first solution made sense, so I tried that first.

It turned out that the second link was what I was looking for; but I still spent some good learning hours with the no-longer-working suggestions from the first one, learning the hard way that Squid doesn’t play that nicely with Amazon’s CloudFront CDN, used by Docker Hub. But I have to admit that it’s been fun.
Now I know how to forward calls through Squid to an intermediate interceptor that mangles query params, headers and everything else.
I couldn’t find a working combination for CloudFront, but I am now probably able to reproduce the infamous Cats Proxy Prank. =)

Anyhow, as I was saying, what I was really looking for is that second link that shows you how to setup an intermediate Docker proxy, that your Docker daemon will try to hit, before defaulting to the usual Docker Hub public servers.

Almost everything that I needed was in that page, but I found the information a little more cryptic than needed.

The main reason for that is that the example assumes I need security (TLS), which is not really my case, since the proxy is completely local.

Additionally, it shows how to configure your Docker Registry using a YAML configuration file. Again, not really complex, but still more than needed.

Yes, because what you really need to bring up the simplest local (not secured) Docker proxy is this one-liner:

docker run -p 5000:5000 -d --restart=always --name registry   \
  -e REGISTRY_PROXY_REMOTEURL=http://registry-1.docker.io \
  registry:2

The interesting part here is that the registry image supports a smart alternative way to forward configuration to it, which saves you from passing it a YAML configuration file.

The idea, described here, is that if you follow a naming convention for the environment variables that reflects the hierarchy of the YAML tree, you can turn something like:

proxy:
  remoteurl: http://registry-1.docker.io

that you would otherwise write in a .yaml file and pass to the process in this way:

docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
  registry:2

into the much more convenient -e REGISTRY_PROXY_REMOTEURL=http://registry-1.docker.io runtime environment variable!

Let’s improve the example a little, so that we also give our Docker proxy a non-volatile storage location for the cached layers, so that we don’t lose them between invocations:

docker run -p 5000:5000 -d --restart=always --name registry   \
  -e REGISTRY_PROXY_REMOTEURL=http://registry-1.docker.io \
  -v /opt/shared/docker_registry_cache:/var/lib/registry \
  registry:2

Now we have everything we need to save a good share of bandwidth each time we pull a Docker image that has already passed through our local proxy.

The only remaining bit is to tell our Docker daemon to be aware of the new proxy:

# update your docker daemon config, according to your distro
# content of my `/etc/sysconfig/docker` in Fedora 23
OPTIONS=" --registry-mirror=http://localhost:5000"

Reload (or restart) your Docker daemon and you are done! Just be aware that if you restart the daemon you might also need to restart the Registry container, if you are working on a single node.

An interesting discovery was that the Docker daemon doesn’t break if it cannot find the specified registry-mirror. So you can add the configuration and forget about it, knowing that your interaction with Docker Hub will simply benefit from possible hits on your caching proxy, whenever it is running.

You can see it working with the following tests:

docker logs -f registry

will log all the outgoing download requests, and once the set of requests that compose a single image pull has completed, you will also be able to check that the image is now completely served by your proxy with this invocation:

curl http://localhost:5000/v2/_catalog
# sample output

The article could end here, but since I feel bad showing how to disable security on the internet, here’s also a very short, fully working and tested example of how to implement the same with TLS enabled:

# generate a self-signed certificate; accept the default for every value apart from Common Name, where you have to put your box hostname
mkdir -p certs && openssl req  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key  -x509 -days 365 -out certs/domain.crt

# copy to the locally trusted ones, steps for Fedora/Centos/RHEL
sudo cp certs/domain.crt /etc/pki/ca-trust/source/anchors/

# load the newly added certificate
sudo update-ca-trust enable

# run the registry using the keys you have generated, mounting the files inside the container
docker run -p 5000:5000 --restart=always --name registry \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# now you just need to remember that you are working in https, so you need to use that protocol in your docker daemon configuration, instead of plain http; also use that when you interact with the API in curl
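For example, a quick smoke test against the TLS-enabled proxy could be (assuming the hostname you put in the certificate’s Common Name resolves to your box):

curl https://$(hostname):5000/v2/_catalog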

Monday, October 19, 2015

Debugging tip: How to simulate a slow hard disk

As a Software Engineer there are times when you’d like to have a slower system.

It doesn’t happen really often actually: usually it’s when someone reports a bug on your software that you have never seen before and that you cannot reproduce.

Most of the time, the reason for those ghost bugs is a race condition.

Race conditions are issues you might face with multithreaded programming. Imagine that your software does multiple things at the same time. Although most of the time those things happen in the expected and intuitive order, sometimes they don’t, leading to an unexpected state in your program.

They are indeed a bug. It’s the dev’s fault. But the dev doesn’t have many ways to protect himself from them. There are programming styles and technologies that push you to avoid the risk altogether, but I think that, in general, they are a condition every developer has to be familiar with.

So why a slower system helps?

Because most of the time, race conditions are masked by the fact that the operations still happen “reasonably quickly”. This ambiguous “reasonably quickly” is the main issue. There is no clear limit or number that tells you how quickly. You just have higher chances of seeing them if things are slow enough to show that they are not happening in the correct order or that they are not waiting for the correct checkpoints.

In my experience with Java applications, the main performance-related aspect while reproducing race conditions is disk access speed. More than CPU speed or the amount of RAM, I have noticed that disk speed is the biggest differentiator between otherwise similar systems.

In this post I will show how to simulate a slow hard disk on Linux to increase your chances of reproducing race conditions.

The solution will be based on nbd and trickle and it will use the network layer to regulate the i/o throughput for your virtual hardisk.

I’d like to start by adding that this isn’t anything new and that I’m not suggesting any particularly revolutionary approach. There are many blog posts out there that describe how to achieve this. But for multiple reasons, none of those that I have read worked out of the box on my Fedora 22 or CentOS 6 installations.
That is the main reason that pushed me to give back to the internet, adding what might be just another page on the subject.

Let’s start with the idea of using nbd, or Network Block Device, to simulate our hard disk.

As far as I understand, there is no official way exposed by the Linux kernel to regulate the I/O speed of generic block devices.

Over the internet you may find many suggestions, spanning from disabling read and write caches to generating real load that could make your system busy.

QoS can be enforced on the network layer though. So the idea is to emulate a block device via network.

Although this might sound very complicated (and maybe it is), the problem has already been solved by the Linux kernel with the nbd module.

Since on my Fedora 22 that module is not enabled by default, we have to install it first, and then load it:

# install nbd module
sudo yum install nbd

# load nbd module
sudo modprobe nbd

# check nbd module is really loaded
lsmod | grep nbd
nbd                    20480  0

Now that nbd is installed and the module loaded we create a configuration file for its daemon:

# run this command as root; "test" is the name of the export section
cat > /etc/nbd-server/config <<EOF
[generic]
[test]
    exportname = /home/pantinor/test_nbd
    copyonwrite = false
EOF

Where exportname is the path to a file that will represent your slow virtual hard disk.

You can create the file with this command:

# create an empty file, and reserve it 1GB of space
dd if=/dev/zero of=/home/pantinor/test_nbd bs=1G count=1

Now that the config and the destination file are in place, you can start the nbd-server daemon:

# start ndb-server daemon
sudo systemctl start nbd-server.service

# monitor the daemon start up with:
journalctl -f --unit nbd-server.service

At this point you have a server network process listening on port 10809, which any client over your network can connect to in order to mount it as a network block device.

We can mount it with this command:

# "test" corresponds to the configuration section in daemon  config file
sudo nbd-client -N test localhost 10809  /dev/nbd0
# my Centos 6 version of nbd-client needs a slightly different syntax:
#    sudo nbd-client -N test localhost  /dev/nbd0

We have now created a virtual block device called /dev/nbd0, and we can format it as if it were a normal one:

# format device
sudo mkfs /dev/nbd0 

# create folder for mounting
sudo mkdir /mnt/nbd

# mount device, sync option is important to not allow the kernel to cheat!
sudo mount -o sync /dev/nbd0 /mnt/nbd

# add write permissions to everyone
sudo chmod a+rwx /mnt/nbd

Note that we have passed the -o sync flag to the mount command. This flag has an important function: it disables an enhancement in the Linux kernel that delays the completion of write operations to devices. Without it, all write operations would look instantaneous, and the kernel would actually complete the write requests in the background. With this flag instead, every operation waits until the write has really completed.

You can check that now you are able to read and write on the mount point /mnt/nbd.
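A quick way to do that, assuming the mount succeeded, is to write a small file and read it back:

# the sync mount option makes the write hit the device immediately
dd if=/dev/zero of=/mnt/nbd/testfile bs=4k count=10
dd if=/mnt/nbd/testfile of=/dev/null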

Let’s now temporarily unmount and disconnect from nbd-server:

sudo umount /mnt/nbd

sudo nbd-client -d /dev/nbd0

And let’s introduce trickle.

Trickle is a tool you can use to wrap other processes and limit their networking bandwidth.

You can use it to limit any other program. A simple test you can perform with it is to use it with curl:

# download a sample file and limits download speed to 50 KB/s
trickle -d 50 -u 50  curl -O  http://download.thinkbroadband.com/5MB.zip

Now, as you can expect, we just need to combine trickle and nbd-server to obtain the desired behavior.

Let’s start by stopping the current nbd-server daemon to free up its default port:

sudo systemctl stop nbd-server.service

And let’s start it via trickle:

# start nbd-server limiting its network throughput
trickle -d 20 -u 20 -v nbd-server -d

-d attaches the server process to the console, so the console will be blocked and it will be freed only once you close the process or a client disconnects.
Ignore the error message: trickle: Could not reach trickled, working independently: No such file or directory

Now you can re-issue the commands to connect to nbd-server and remount it:

sudo nbd-client -N test localhost 10809  /dev/nbd0

sudo mount -o sync /dev/nbd0 /mnt/nbd

And you are done! Now you have a slow disk exposed as /dev/nbd0 and mounted on /mnt/nbd.

You can verify the slow behavior in this way:

sudo dd if=/dev/nbd0 of=/dev/null bs=65536 skip=100 count=10
10+0 records in
10+0 records out
655360 bytes (655 kB) copied, 18.8038 s, 34.9 kB/s

# when run against an nbd-server that doesn't use trickle the output is:
# 655360 bytes (655 kB) copied, 0.000723881 s, 905 MB/s

Now that you have a slow partition, you can just put your software’s files there to simulate slow I/O.

All the above steps can be converted to helper scripts that will make the process much simpler like those described here: http://philtortoise.blogspot.it/2013/09/simulating-slow-drive.html.

Monday, October 5, 2015

JBoss Fuse - Turn your static config into dynamic templates with MVEL

Recently I have rediscovered a JBoss Fuse functionality that I had forgotten about and I’ve thought that other people out there may benefit of this reminder.

This post will be focused on JBoss Fuse and Fabric8, but it might also interest all those developers that are looking for minimally invasive ways to add some degree of dynamic behavior to their static configuration files.

The idea of dynamic configuration in OSGi and in Fabric8

The OSGi framework is most often remembered for its class-loading behavior. But apart from that, it also defines other concepts and functionality that the framework has to implement.
One of them is ConfigAdmin.

ConfigAdmin is a service to define an externalized set of properties files that are logically bound to your deployment units.

The lifecycle of these external properties files is linked to the OSGi bundle lifecycle: if you modify an external property file, your bundle will be notified.
Depending on how you coded your bundle, you can decide to react to the notification and, programmatically or via different helper frameworks like Blueprint, invoke code that uses the new configuration.

This mechanism is handy and powerful, and all developers using OSGi are familiar with it.

Fabric8 builds on the idea of ConfigAdmin, and extends it.

With its provisioning capabilities, Fabric8 defines the concept of a Profile that encapsulates deployment units and configuration. It adds a layer of functionality on top of plain OSGi and allows you to manage any kind of deployment unit, not only OSGi bundles, as well as any kind of configuration or static file.

If you check the official documentation you will find the list of “extensions” that the Fabric8 layer offers, and you will learn that they are divided mainly into 2 groups: Url Handlers and Property Resolvers.

I suggest everyone that is interested in this technology to dig through the documentation; but to offer a brief summary and a short example, imagine that your Fabric profiles have the capability to resolve some values at runtime using specific placeholders.


# sample url handler usage, ResourceName is a filename relative to the namespace of the containing Profile:
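# e.g. (illustrative, not from the original post):
org.ops4j.pax.web.config.url = profile:jetty.xml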

# sample property handler, the value is read at deploy time, from the Apache Zookeeper distributed registry that is published when you run JBoss Fuse
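# e.g. (illustrative, not from the original post):
bind.address = ${zk:root/ip}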

There are multiple handlers available out of the box, covering what the developers thought were the most common use cases: Zookeeper, Profiles, Blueprint, Spring, System Properties, Managed Ports, etc.

And you might also think of extending the mechanism by defining your own extension: for example, you might want to react to performance metrics you are storing on some system; you can write an extension, with its own syntax convention, that injects values taken from your system.

The limit of all this power: static configuration files

The capabilities I have introduced above are exciting and powerful but they have an implicit limit: they are available only to .properties files or to files that Fabric is aware of.

This means that those functionalities are available if you have to manage Fabric Profiles, OSGi properties or other specific technologies that interact with them, like Camel, but they are not enabled for anything that is Fabric-unaware.

Imagine you have your custom code that reads an .xml configuration file. And imagine that your code doesn’t reference any Fabric object or service.
Your code will process that .xml file as-is. There won’t be any magic replacement of tokens or paths, because even though you are running inside Fabric, you are NOT using any directly supported technology and you are NOT notifying Fabric that you might want its services.

To solve this problem you have 3 options:

  1. You write an extension to Fabric that handles and recognises your static resources and delegates the dynamic replacement to the framework code.
  2. You alter the code contained in your deployment unit, and instead of consuming the static resources directly you ask the Fabric services to interpolate them for you.
  3. You use the mvel: url handler (and avoid touching any other code!)

What is MVEL?

MVEL is actually a programming language: https://en.wikipedia.org/wiki/MVEL .
In particular, it’s also a scripting language that you can run directly from source, skipping the compilation step.
It has multiple specific characteristics that make it interesting to embed within another application and use to define new behaviors at runtime. For all these reasons, for example, it’s also one of the supported languages for the JBoss Drools project, which works with business rules you might want to define or modify at runtime.

Why can it be useful to us? Mainly for 2 reasons:

  1. it works well as a templating language
  2. Fabric8 already has an mvel: url handler that implicitly also acts as a resource handler!

Templating language

Templating languages are that family of languages (often Domain Specific Languages) where you can alternate static portions of text, read as-is, with dynamic instructions that are processed at parsing time. I’m probably saying in a more complicated way the same idea I have already introduced above: you can have tokens in your text that will be translated following a specific convention.

This sounds exactly like the capabilities provided by the handlers we have introduced above. With an important difference: while those were context-specific handlers, MVEL is a general purpose technology. So don’t expect it to know anything about Zookeeper or Fabric profiles, but do expect it to support generic programming language concepts like loops, code invocation, reflection and so on.

Fabric supports it!

A reference to the support in Fabric can be found here: http://fabric8.io/gitbook/urlHandlers.html

But let me add a snippet of the original code that implements the functionality, since this is the part where you might find this approach interesting even outside the context of JBoss Fuse:


public InputStream getInputStream() throws IOException {
  String path = url.getPath();
  URL url = new URL(path);
  // compile the template read from the wrapped url
  CompiledTemplate compiledTemplate = TemplateCompiler.compileTemplate(url.openStream());
  // context variables exposed to the template
  Map<String, Object> data = new HashMap<String, Object>();
  Profile overlayProfile = fabricService.get().getCurrentContainer().getOverlayProfile();
  data.put("profile", Profiles.getEffectiveProfile(fabricService.get(), overlayProfile));
  data.put("runtime", runtimeProperties.get());
  // run the template engine and return the resulting static content
  String content = TemplateRuntime.execute(compiledTemplate, data).toString();
  return new ByteArrayInputStream(content.getBytes());
}

What’s happening here?

First, since it’s not shown in the snippet, remember that this is a url handler. This means that the behavior gets triggered for files that are referred to via a specific uri. In this case it’s mvel:. For example a valid path might be mvel:jetty.xml.

The other interesting and relatively simple thing to notice is the interaction with the MVEL interpreter.
Like in most templating technologies, even the simplest ones you could implement yourself, you usually have:

  • an engine/compiler, here it’s TemplateCompiler
  • a variable that contains your template, here it’s url
  • a variable that represent your context, that is the set of variables you want to expose to the engine, here data

Put them all together, asking the engine to do its job, here with TemplateRuntime.execute(...), and what you get in output is a static String. No more templating instructions: all the logic your template was defining has been applied and, eventually, augmented with some of the additional input values taken from the context.

An example

I hope my explanation has been simple enough, but probably an example is the best way to express the concept.

Let’s use jetty.xml, contained in the JBoss Fuse default.profile, which is a static resource that JBoss Fuse doesn’t treat as a special file, so it doesn’t offer any replacement functionality for it.

I will show both aspects of MVEL integration here: reading some value from the context variables and show how programmatic logic (just the sum of 2 integers here) can be used:

<Property name="jetty.port" default="@{  Integer.valueOf( profile.configurations['org.ops4j.pax.web']['org.osgi.service.http.port'] ) + 10  }"/>

We are modifying the default value for the Jetty port, taking its initial value from the “profile” context variable, which is a Fabric-aware object that has access to the rest of the configuration:
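profile.configurations['org.ops4j.pax.web']['org.osgi.service.http.port']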


we explicitly cast it from String to Integer:

Integer.valueOf( ... )

and we add a static value of 10 to the returned value:

.. + 10

Let’s save the file and stop our Fuse instance. Restart it and re-create a test Fabric:

# in Fuse CLI shell
shutdown -f

# in bash shell
rm -rf data instances


# in Fuse CLI shell
fabric:create --wait-for-provisioning

Just wait and monitor logs and… Uh-oh. An error! What’s happening?

This is the error:

2015-10-05 12:00:10,005 | ERROR | pool-7-thread-1  | Activator                        | 102 - org.ops4j.pax.web.pax-web-runtime - 3.2.5 | Unable to start pax web server: Exception while starting Jetty
java.lang.RuntimeException: Exception while starting Jetty
at org.ops4j.pax.web.service.jetty.internal.JettyServerImpl.start(JettyServerImpl.java:143)[103:org.ops4j.pax.web.pax-web-jetty:3.2.5]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[:1.7.0_76]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)[:1.7.0_76]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[:1.7.0_76]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)[:1.7.0_76]
at org.eclipse.jetty.xml.XmlConfiguration$JettyXmlConfiguration.set(XmlConfiguration.java:572)[96:org.eclipse.jetty.aggregate.jetty-all-server:8.1.17.v20150415]
at org.eclipse.jetty.xml.XmlConfiguration$JettyXmlConfiguration.configure(XmlConfiguration.java:396)[96:org.eclipse.jetty.aggregate.jetty-all-server:8.1.17.v20150415]
Caused by: java.lang.NumberFormatException: For input string: "@{profile.configurations['org.ops4j.pax.web']['org.osgi.service.http.port'] + 1}"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)[:1.7.0_76]
at java.lang.Integer.parseInt(Integer.java:492)[:1.7.0_76]
at java.lang.Integer.<init>(Integer.java:677)[:1.7.0_76]
… 29 more

If you notice, the error message says that our template snippet cannot be converted to a Number.

Why is our template snippet displayed in the first place? The templating engine should have done its part of the job and given us back a static String without any reference to templating directives!

I have shown you this error on purpose, to insist on a concept I described above but that might go unnoticed at first.

MVEL support in Fabric is implemented as a url handler.

So far we have just modified the content of a static resource file, but we haven’t given any hint to Fabric that we’d like to handle that file as a mvel template.

How to do that?

It’s just a matter of using the correct uri to refer to that same file.

So, modify the file default.profile/org.ops4j.pax.web.properties, which is the place in the default Fabric Profile where you define which static file contains the Jetty configuration:

# change it from org.ops4j.pax.web.config.url=profile:jetty.xml to
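# (assumed form: prefix the original URI with the mvel: handler)
org.ops4j.pax.web.config.url=mvel:profile:jetty.xml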

Now, stop the instance again, remove the Fabric configuration files, recreate a Fabric and notice how your Jetty instance is running correctly.

We can check it in this way:

JBossFuse:karaf@root> config:list | grep org.osgi.service.http.port
   org.osgi.service.http.port = 8181

While from your browser you can verify that Hawtio, the JBoss Fuse web console deployed on top of Jetty, is now accessible on port 8191: http://localhost:8191/hawtio

Tuesday, February 17, 2015

JBoss Fuse - Some lesser known tricks


  1. expose java static calls as Karaf shell native commands
  2. override OSGi Headers at deploy time
  3. override OSGi Headers after deploy time with OSGi Fragments

Expose java static calls as Karaf shell native commands

As part of my job as a software engineer who has to collaborate with support engineers and customers, I very often find myself needing to extract additional information from a system I don't have access to.
The usual approaches, valid for all kinds of software, are extracting logs, invoking interactive commands to obtain specific outputs or, in the most complex case, deploying some PoC unit that is supposed to verify a specific behavior.

JBoss Fuse, and Karaf, the platform it's based on, already do a great job of exposing all that data.

You have:

  • extensive logs and integration with Log4j
  • an extensive list of JMX operations (that you can also invoke over http with Jolokia)
  • a large list of shell commands

But sometimes this is not enough. If you have seen my previous post about how to use Byteman on JBoss Fuse, you can imagine all the other cases:

  1. you need to print values that are not logged or returned in the code
  2. you might need to short-circuit some logic to hit a specific execution branch of your code
  3. you want to inject a line of code that wasn't there at all

Byteman is still a very good option too, but Karaf has a facility we can use to run custom code.

Karaf allows you to write code directly in its shell, and allows you to record these bits of code as macros you can re-invoke. These macros will look like native Karaf shell commands!

Let's see a real example I had to implement:

verify if the jvm running my JBoss Fuse instance was resolving a specific DNS as expected.

The standard JDK has a method you can invoke to resolve a dns name:
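java.net.InetAddress.getAllByName("www.google.com")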

Since that call is simple enough, meaning it doesn't require complex or structured input, I thought I could turn it into an easy-to-reuse command:

# add all public static methods on a java class as commands  to the namespace "my_context": 
# bundle 0 is because system libs are served by that bundle classloader
addcommand my_context (($.context bundle 0) loadClass java.net.InetAddress) 

That funky line is explained in this way:

  • addcommand is the karaf shell functionality that accepts new commands
  • my_context is the namespace/prefix you will attach your command to. In my case, "dns" would have made a good namespace.
  • ($.context bundle 0) invokes java code. In particular we are invoking the $.context instance, a built-in instance exposed by the Karaf shell to expose the OSGi framework, whose type is org.apache.felix.framework.BundleContextImpl, and we are invoking its method called bundle, passing it the argument 0, representing the id of the OSGi classloader responsible for loading the JDK classes. That call returns an instance of org.apache.felix.framework.Felix that we can use to load the specific class definition we need, that is java.net.InetAddress.

As the inline comment says, an invocation of addcommand exposes all the public static methods on that class. So we are now allowed to invoke those methods, and in particular the one that can resolve dns entries:

JBossFuse:karaf@root> my_context:getAllByName "www.google.com"

This functionality is described on Karaf documentation page.

Override OSGi Headers at deploy time

If you work with Karaf, you are working with OSGi, love it or hate it.
A typical step in each OSGi workflow is playing (or fighting) with OSGi headers.
If you are in total control of your project, this might be more or less easy, depending on the relationship between your deployment units. See Christian Posta's post to get a glimpse of some less than obvious examples.

Within those conditions, a very typical situation is when you have to use a bundle, yours or someone else's, whose headers are not correct.
What you very often end up doing is re-packaging that bundle, so that you can alter the content of its MANIFEST, to add the OSGi headers that you need.

Karaf has a facility in this regard, called the wrap protocol.
You might already know it as a shortcut to deploy a non-bundle jar on Karaf, but it's actually more than just that.
What it really does, as the name suggests, is to wrap. But it can wrap both non-bundles and bundles!
Meaning that we can also use it to alter the metadata of an already packaged bundle we are about to install.

Let's give an example, again taken from a real life experience.
Apache HttpClient is not totally OSGi friendly. We can install it on Karaf with the wrap: protocol and export all its packages.

JBossFuse:karaf@root> install -s 'mvn:org.apache.httpcomponents/httpclient/4.2.5'
Bundle ID: 257
JBossFuse:karaf@root> exports | grep -i 257
   257 No active exported packages. This command only works on started bundles, use osgi:headers instead
JBossFuse:karaf@root> install -s 'wrap:mvn:org.apache.httpcomponents/httpclient/4.2.5$Export-Package=*; version=4.2.5'
Bundle ID: 259
JBossFuse:karaf@root> exports | grep -i 259
   259 org.apache.http.client.entity; version=4.2.5
   259 org.apache.http.conn.scheme; version=4.2.5
   259 org.apache.http.conn.params; version=4.2.5
   259 org.apache.http.cookie.params; version=4.2.5

And we can see that it works with plain bundles too:

JBossFuse:karaf@root> la -l | grep -i camel-core
[ 142] [Active     ] [            ] [       ] [   50] mvn:org.apache.camel/camel-core/2.12.0.redhat-610379
JBossFuse:karaf@root> install -s 'wrap:mvn:org.apache.camel/camel-core/2.12.0.redhat-610379$overwrite=merge&Bundle-SymbolicName=paolo-s-hack&Export-Package=*; version=1.0.1'
Bundle ID: 269

JBossFuse:karaf@root> headers 269

camel-core (269)

Bundle-Vendor = Red Hat, Inc.
Bundle-Activator = org.apache.camel.impl.osgi.Activator
Bundle-Name = camel-core
Bundle-DocURL = http://redhat.com
Bundle-Description = The Core Camel Java DSL based router

Bundle-SymbolicName = paolo-s-hack

Bundle-Version = 2.12.0.redhat-610379
Bundle-License = http://www.apache.org/licenses/LICENSE-2.0.txt
Bundle-ManifestVersion = 2


Export-Package = 


Where you can see that Bundle-SymbolicName and the version of the exported packages carry the values I set.

Again, the functionality is described on Karaf docs and you might find useful the wrap protocol reference.

Override OSGi Headers after deploy time with OSGi Fragments

That last trick is powerful, but it probably requires you to remove the original bundle if you don't want to risk having half of the classes exposed by one classloader and the remaining ones (those packages you might have added in the overridden Export) in another one.

There is actually a better way to override OSGi headers, and it comes directly from an OSGi standard functionality: OSGi Fragments.

If you are not familiar with the concept, the definition taken directly from the OSGi wiki is:

A Bundle fragment, or simply a fragment, is a bundle whose contents are made available to another bundle (the fragment host). Importantly, fragments share the classloader of their parent bundle.

That page gives also a further hint about what I will describe:

Sometimes, fragments are used to 'patch' existing bundles.

We can use this strategy to:

  • inject .jars in the classpath of our target bundle
  • alter headers of our target bundle

I have used the first case to fix a badly configured bundle that was looking for an xml configuration descriptor that it didn't include, and that I provided by deploying a light Fragment Bundle that contained just that file.

But the use case I want to show you here instead, is an improvement regarding the way to deploy Byteman on JBoss Fuse/Karaf.

If you remember my previous post, since Byteman classes needed to be available from every other deployed bundle and potentially need access to every class available, we had to add Byteman packages to the org.osgi.framework.bootdelegation property, that instructs the OSGi Framework to expose the listed packages through the virtual system bundle (id = 0).

You can verify what it is currently serving with headers 0; I won't include the output here since it's a long list of jdk extension and framework classes.

If you add your packages, org.jboss.byteman.rule,org.jboss.byteman.rule.exception in my case, even these packages will be listed in the output of that command.

The problem with this solution is that this is a boot time property. If you want to use Byteman to manipulate the bytecode of an already running instance, you have to restart it after you have edited this property.

OSGi Fragments can help here, and avoid a preconfiguration at boot time.

We can build a custom empty bundle, with no real content, that attaches to the system bundle and extends the list of packages it serves.

    system.bundle; extension:=framework

That's an excerpt of the maven-bundle-plugin configuration; see here for the full working Maven project, even though the project is really just 30 lines of pom.xml:
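A minimal sketch of what those plugin instructions could look like (the package list comes from the Byteman discussion above; groupId/artifactId are the standard maven-bundle-plugin coordinates, everything else here is an assumption):

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- attach this otherwise empty bundle to the system bundle -->
      <Fragment-Host>system.bundle; extension:=framework</Fragment-Host>
      <!-- extend the list of packages served by the framework -->
      <Export-Package>org.jboss.byteman.rule, org.jboss.byteman.rule.exception</Export-Package>
    </instructions>
  </configuration>
</plugin>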

JBossFuse:karaf@root> install -s mvn:test/byteman-fragment/1.0-SNAPSHOT

Once you have that configuration, you are ready to use Byteman, to, for example, inject a line in java.lang.String default constructor.

# find your Fuse process id
PROCESS_ID=$(ps aux | grep karaf | grep -v grep | cut -d ' ' -f2)

# navigate to the folder where you have extracted Byteman
cd /data/software/redhat/utils/byteman/byteman-download-

# export Byteman env variable:
export BYTEMAN_HOME=$(pwd)
cd bin/

# attach Byteman to Fabric8 process, no output expected unless you enable those verbose flags
sh bminstall.sh -b -Dorg.jboss.byteman.transform.all $PROCESS_ID 
# add these flags if you have any kind of problem and want to see what's going on: -Dorg.jboss.byteman.debug -Dorg.jboss.byteman.verbose

# install our Byteman custom rule, we are passing it directly inline with some bash trick
sh bmsubmit.sh /dev/stdin <<OPTS

# smoke test rule that uses also a custom output file
RULE DNS StringSmokeTest
CLASS java.lang.String
METHOD <init>()
DO traceln(" works: " );
traceOpen("PAOLO", "/tmp/byteman.txt");
traceln("PAOLO", " works in files too " );


Now, to verify that Byteman is working, we can just invoke java.lang.String constructor in Karaf shell:

JBossFuse:karaf@root> new java.lang.String

And as per our rule, you will also see the content in /tmp/byteman.txt

Inspiration for this third trick came from both the OSGi wiki and this interesting page from the Spring guys.

If you have any comment or any other interesting workflow please leave a comment.

Tuesday, October 7, 2014

Use Byteman in JBoss Fuse / Fabric8 / Karaf

Have you ever found yourself in the process of trying to understand how come something very simple is not working?

You are writing code in a well-known context and for whatever reason it's not working. And you trust your platform, so you carefully read all the logs that you have.
And still you have no clue why something is not behaving as expected.

Usually, what I do next, if I am lucky enough to be working on an Open Source project, is start reading the code.
That often works; but almost always you haven't written that code and you don't know the product that well. So, yeah, you see which variables are in the context, but you have no clue about their possible values and, what's worse, you have no idea where, or even worse when, those values were created.

At this point, what I usually do is connect with a debugger. I will never remember the JVM parameters a java process needs to allow debugging, but I know that I have them written somewhere. And modern IDEs suggest them to me, so it's not a big pain to connect remotely to a complex application server.

Okay, we are connected. We can place a breakpoint not far from the section we consider important and step through the code, eventually adding more breakpoints.
The IDE variables view allows us to see the values of the variables in context. We can even browse the whole object tree and invoke snippets of code, useful in case the plain memory state of an object doesn't really give the precise information that we need (imagine you want to format a Date or filter a collection).

We have all the instruments but... this is a slow process.
Each time I stop at a specific breakpoint I have to manually browse the variables. I know, we can improve the situation with watched variables, which stick on top of the overview window and give you a quick look at what you have already identified as important.
But I personally find that watches make sense only if you have a very small set of variables: since they all share the same namespace, you end up with many unset values that just distract the eye when you are not in a scope that sees those variables.

I have recently learnt a trick to improve these workflows that I want to share with you in case you don't know it yet:

IntelliJ and, with a smart trick, even Eclipse allow you to add print statements when you pass through a breakpoint. If you combine this with preventing the breakpoint from pausing, you have a nice way to augment the code you are debugging with log invocations.

For IntelliJ check here: http://www.jetbrains.com/idea/webhelp/enabling-disabling-and-removing-breakpoints.html

While instead for Eclipse, check this trick: http://moi.vonos.net/2013/10/adhoc-logging/ or let me know if there is a cleaner or newer way to reach the same result.

The trick above works. But its main drawback is that you are adding a local configuration to your workspace. You cannot share this easily with someone else. And you might want to re-use your workspace for some other session, and seeing all those log entries or breakpoints can distract you.

So while looking for something external to my IDE, I decided to give Byteman a try.

Byteman actually offers much more than what I needed this time, and that's probably the main reason I decided to check whether I could use it with Fabric8.

A quick recap of what Byteman does taken directly from its documentation:

Byteman is a bytecode manipulation tool which makes it simple to change the operation of Java applications either at load time or while the application is running.
It works without the need to rewrite or recompile the original program.


  • tracing execution of specific code paths and displaying application or JVM state
  • subverting normal execution by changing state, making unscheduled method calls or forcing an unexpected return or throw
  • orchestrating the timing of activities performed by independent application threads
  • monitoring and gathering statistics summarising application and JVM operation

In my specific case I am going to use the first of those listed behaviors, but you can easily guess that all the other aspects might come in handy at some point:

  • add some logic to prevent a NullPointerException
  • short-circuit some logic because you are hitting a bug that is not in your code base but you still want to see what happens if that bug wasn't there
  • anything else you can imagine...

Starting to use Byteman is normally particularly easy. You are not even forced to start your jvm with specific instructions. You can just attach to an already running process!
This works most of the time, but unluckily not on Karaf with the default configuration, due to OSGi implications. But no worries, the functionality is just a simple configuration edit away.

You have to edit the file etc/config.properties


and add these two packages to the property org.osgi.framework.bootdelegation:
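# append the Byteman packages to whatever is already listed there (illustrative):
org.osgi.framework.bootdelegation = <existing value>, org.jboss.byteman.rule, org.jboss.byteman.rule.exception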


That property is used to instruct the OSGi framework to provide the classes in those packages from the parent classloader. See http://felix.apache.org/site/apache-felix-framework-configuration-properties.html

In this way, you will avoid ClassCastException raised when your Byteman rules are triggered.

That's pretty much all the extra work we needed to use Byteman on Fuse.

Here is a practical example of my interaction with the platform:

# assume you have modified Fabric8's config.properties and started it and that you are using fabric8-karaf-1.2.0-SNAPSHOT

# find your Fabric8 process id
$ ps aux | grep karaf | grep -v grep | cut -d ' ' -f3

# navigate to the folder where you have extracted Byteman
cd /data/software/redhat/utils/byteman/byteman-download-
# export Byteman env variable:
export BYTEMAN_HOME=$(pwd)
cd bin/
# attach Byteman to Fabric8 process, no output expected unless you enable those verbose flags
sh bminstall.sh 5200 # add these flags if you have any kind of problem and want to see what's going on: -Dorg.jboss.byteman.debug -Dorg.jboss.byteman.verbose 
# install our Byteman custom rules
$ sh bmsubmit.sh ~/Desktop/RBAC_Logging.btm
install rule RBAC HanldeInvoke
install rule RBAC RequiredRoles
install rule RBAC CanBypass
install rule RBAC UserHasRole
# invoke some operation on Fabric8 to trigger our rules:
$ curl -u admin:admin 'http://localhost:8181/jolokia/exec/io.fabric8:type=Fabric/containersForVersion(java.lang.String)/1.0' 
{"timestamp":1412689553,"status":200,"request":{"operation...... very long response}

# and now check your Fabric8 shell:
 OBJECT: io.fabric8:type=Fabric
 METHOD: containersForVersion
 ARGS: [1.0]
 REQUIRED ROLES: [viewer, admin]
 CURRENT_USER_HAS_ROLE(viewer): true

Where my Byteman rules look like:

RULE RBAC HanldeInvoke
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD handleInvoke(ObjectName, String, Object[], String[])
DO traceln(" OBJECT: " + $objectName + "\n METHOD: " + $operationName + "\n ARGS: " + java.util.Arrays.toString($params) );
ENDRULE

# AT EXIT is needed in the following rules so that $! (the return value) is bound
RULE RBAC RequiredRoles
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD getRequiredRoles(ObjectName, String, Object[], String[])
AT EXIT
DO traceln(" REQUIRED ROLES: " + $! );
ENDRULE

RULE RBAC CanBypass
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD canBypassRBAC(ObjectName)
AT EXIT
DO traceln(" CANBYPASS: " + $! );
ENDRULE

RULE RBAC UserHasRole
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD currentUserHasRole(String)
AT EXIT
DO traceln(" CURRENT_USER_HAS_ROLE(" + $requestedRole + "): " + $! );
ENDRULE

Obviously this was just a short example of what Byteman can do for you. I'd invite you to read the project documentation, since you might discover nice constructs that could allow you to write easier rules or to refine them to trigger only when it's relevant for you (if in my example you see some noise in the output, you probably have a Hawtio instance open that is doing its polling, thus triggering some of our installed rules).

A special thank you goes to Andrew Dinn, who explained to me how Byteman works and the reasons for my initial failures.

The screencast is less than optimal due to my errors ;) but you can clearly see the added noise since I had a Hawt.io instance invoking protected JMX operations!

Monday, May 5, 2014

Continuous Integration with JBoss Fuse, Jenkins and Nexus

Recently I was putting together a quickstart Maven project to show a possible approach to the organization of a JBoss Fuse project.

The project is available on Github here: https://github.com/paoloantinori/fuse_ci

And it’s a slight evolution of what I have learnt working with my friend James Rawlings.

The project proposes a way to organize your codebase in a Maven Multimodule project.
The project is in continuous evolution, thanks to feedback and suggestions I receive; but its key point is to show a way to organize all the artifacts, scripts and configuration that compose your project.

In the ci folder you will find subfolders like features or karaf_scripts with files you probably end up creating in every project and with inline comments to help you with tweaking and customization according to your specific needs.
The project also makes use of Fabric8 to handle the creation of a managed set of OSGi containers and to benefit from all its features to organize workflows, configuration and versioning of your deployments.
In this blogpost I will show you how to deploy that sample project in a very typical development setup that includes JBoss Fuse, Maven, Git, Nexus and Jenkins.
The reason why I decided to cover this topic is that many of the good developers I meet tell me that, even if they are aware of the added value of a continuous integration infrastructure, they have no time to dedicate to the activity. With no extra time, they focus only on development.

I don’t want to evangelize around this topic or try to tell anyone what they should do. I like to trust them and believe they know their project priorities and that they have accepted the trade-off among available time, backlog and the overall added benefits of each activity. Likewise I like to believe that we all agree that for large and long projects, CI best practices are definitely a must-do and that no one has to argue about their value.

With all this in mind, I want to show a possible setup and workflow, to show how quick it is to invest one hour of your time for benefits that are going to last much longer.

I will not cover step by step instructions. But to prove to you that all this works, I have created a bash script that uses Docker and that demonstrates how things can be easy enough to script and, more importantly, that they really work!

If you want to jump straight to the end, the script is available here:

It uses some Docker images I have created and published as trusted builds on Docker Index:

They are a convenient and reusable way to ship executables and, since they show the steps performed, they may also be seen as a way to document the installation and configuration procedure.
As mentioned above, you don’t necessarily need them. You can manually install and configure the services yourself. They are just a verified and open way to save you some time or to show you the way I did it.

Let’s start by describing the components of our sample Continuous Integration setup:

1) JBoss Fuse 6.1
It’s the runtime we are going to deploy onto. It lives in a dedicated box. It interacts with Nexus as the source of the artifacts we produce and publish.

2) Nexus
It’s the software we use to store the binaries we produce from our code base. It is accessed by JBoss Fuse, that downloads artifacts from it but it is also accessed from Jenkins, that publishes binaries on it, as the last step of a successful build job.

3) Jenkins
It’s our build job invoker. It checks out the code with Git, builds it and, if the build is successful, publishes the resulting binaries to Nexus.

4) Git Server
It’s the remote code repository holder. It’s accessed by Jenkins to download the most recent version of the code we want to build, and it’s populated by all the developers when they share their code and when they want to build on the Continuous Integration server. In our case, the git server is just a filesystem accessed via ssh.
Interaction Diagram



The first thing to do is to set up git to act as our source code management (SCM) system.
As you may guess, we might have used any other similar software to do the job, from SVN to Mercurial, but I prefer git since it’s one of the most popular choices and also because it’s an officially supported tool to interact directly with Fabric8 configuration.
We don’t have great requirements for git. We just need a filesystem to store our shared code and a transport service that allows us to access that code.
To keep things simple I have decided to use SSH as the transport protocol.
This means that on the box that is going to store the code we just need the sshd daemon started, a valid user, and a folder they can access.
Something like:

yum install -y openssh-server git
service sshd start
adduser fuse
mkdir -p /home/fuse/fuse_scripts.git
chmod a+rwx /home/fuse/fuse_scripts.git # or a better strategy based on group ownership
The only git-specific step is to initialize the git repository with:

git init --bare /home/fuse/fuse_scripts.git



Nexus OSS is a repository manager that can be used to store Maven artifacts.
It’s implemented as a java web application. For this reason installing Nexus is particularly simple.
Thanks to the embedded instance of Jetty that powers it, it’s just a matter of extracting the distribution archive and starting a binary:

wget http://www.sonatype.org/downloads/nexus-latest-bundle.tar.gz -O /tmp/nexus-latest-bundle.tar.gz
mkdir -p /opt/nexus
tar -xzvf /tmp/nexus-latest-bundle.tar.gz -C /opt/nexus
Once started Nexus will be available by default at this endpoint:
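http://your_ip:8081/nexus (assuming the default port and context path)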
with admin as user and admin123 as password.



Jenkins is the job scheduler we are going to use to build our project. We want to configure Jenkins in such a way that it will be able to connect directly to our git repo to download the project source.
To do this we need an additional plugin, Git Plugin.
We obviously also need java and maven installed on the box.
Since the Jenkins configuration is composed of various steps involving interaction with multiple administrative pages, I will only give some hints on the important steps you are required to perform. For this reason I strongly suggest you check my fully automated script that does everything in total automation.
Just like Nexus, Jenkins is implemented as a java web application.
Since I like to use RHEL compatible distributions like Centos or Fedora, I install Jenkins in a simplified way. Instead of manually extracting the archive like we did for Nexus, I just define a new yum repo and let yum handle the installation and configuration as a service for me:

wget http://pkg.jenkins-ci.org/redhat/jenkins.repo -O /etc/yum.repos.d/jenkins.repo
rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
yum install jenkins
service jenkins start
Once Jenkins is started you will find its web interface available at http://your_ip:8080

By default it’s configured in single user mode, and that’s enough for our demo.
You may want to check http://your_ip:8080/configure to verify that the values for JDK, Maven and git look good. They are usually picked up automatically if that software was already installed before Jenkins.

Then you are required to install Git Plugin:

Once you have everything configured, and after a restart of the Jenkins instance, we will be able to see a new option in the form that allows us to create a Maven build job.
Under the section Source Code Management there is now the option git. It’s just a matter of providing the coordinates of your SSH server, for example:
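ssh://fuse@your_git_server_ip/home/fuse/fuse_scripts.git (adapt host and path to your git box)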


And in the section Build, under Goals and options, we need to explicitly tell Maven that we want to invoke the deploy phase, providing the ip address of the Nexus instance:

clean deploy -DskipTests -Dip.nexus=

The last configuration step is to specify a different maven settings file, in the advanced maven properties, that is stored together with the source code:

And that contains user and password to present to Nexus, when pushing artifacts there.
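A minimal sketch of such a settings file (the server id here is an assumption; it has to match the repository id used in the project’s distributionManagement):

<settings>
  <servers>
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>admin123</password>
    </server>
  </servers>
</settings>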

The configuration is done, but we need an additional step to have Jenkins working with Git.
Since we are using SSH as our transport protocol, when connecting to the SSH server for the first time we are going to be asked to confirm that the server we are connecting to is safe and that its fingerprint is the one we were expecting. This challenge operation would block the build job, since it is a batch job and there will not be anyone there to confirm the SSH credentials.
To avoid all this, a trick is to connect to the Jenkins box via SSH, become the user that is used to run the Jenkins process, jenkins in my case, and from there manually connect to the ssh git server, to perform the identification operation interactively, so that it will no longer be required in the future:

ssh fuse@IP_GIT_SERVER
The authenticity of host '[]:22 ([]:22)' can't be established.
DSA key fingerprint is db:43:17:6b:11:be:0d:12:76:96:5c:8f:52:f9:8b:96.
Are you sure you want to continue connecting (yes/no)? 
The alternative approach I use in my Jenkins docker image is to totally disable SSH fingerprint verification, an approach that may be too insecure for you:

mkdir -p /var/lib/jenkins/.ssh ;  
printf "Host * \nUserKnownHostsFile /dev/null \nStrictHostKeyChecking no" >> /var/lib/jenkins/.ssh/config ; 
chown -R jenkins:jenkins /var/lib/jenkins/.ssh
If everything has been configured correctly, Jenkins will be able to automatically download our project, build it and publish it to Nexus.


Before doing that we need a developer to push our code to git, otherwise there will not be any source file to build yet!
To do that, you just need to clone my repo, configure an additional remote repo (our private git server) and push:

git clone git@github.com:paoloantinori/fuse_ci.git
git remote add upstream ssh://fuse@$IP_GIT/home/fuse/fuse_scripts.git
git push upstream master
At this point you can trigger the build job on Jenkins. If it’s the first time you run it, Maven will download all the dependencies, so it may take a while.
If everything is successful you will receive the confirmation that your artifacts have been published to Nexus.

JBoss Fuse


Now that our Nexus server is populated with the maven artifacts built from our code base, we just need to tell our Fuse instance to use Nexus as a Maven remote repository.
In a karaf shell we need to change the value of a property:

fabric:profile-edit  --pid io.fabric8.agent/org.ops4j.pax.url.mvn.repositories=\"\" default
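In practice the repositories value should point at your Nexus instance, something like (the exact repository path depends on your Nexus layout):

fabric:profile-edit --pid io.fabric8.agent/org.ops4j.pax.url.mvn.repositories=\"http://your_nexus_ip:8081/nexus/content/groups/public/\" default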

And we can now verify that the integration is completed with this command:

cat  mvn:sample/karaf_scripts/1.0.0-SNAPSHOT/karaf/create_containers
If everything is fine, you are going to see an output similar to this:

# create broker profile
fabric:mq-create --profile $BROKER_PROFILE_NAME $BROKER_PROFILE_NAME
# create applicative profiles
fabric:profile-create --parents feature-camel MyProfile

# create broker
fabric:container-create-child --jvm-opts "$BROKER_01_JVM" --resolver localip --profile $BROKER_PROFILE_NAME root broker

# create worker
fabric:container-create-child --jvm-opts "$CONTAINER_01_JVM" --resolver localip root worker1
# assign profiles
fabric:container-add-profile worker1 MyProfile
This means that addressing a karaf script via its Maven coordinates worked well, and that you can now use shell:source, osgi:install or any other command that requires artifacts published on Nexus.



As mentioned multiple times, this is just a possible workflow and example of interaction between those platforms.
Your team may follow different procedures or use different tools.
Maybe you are already implementing more advanced flows based on the new Fabric8 Maven Plugin.
In any case I invite everyone interested in the topic to post a comment or a link to a different approach, and help everyone by sharing their experience.