Java EE and general Java platforms.
You'll read about Conferences, Java User Groups, Java EE, Integration, AS7, WildFly, EAP and other technologies.

Sunday, August 30, 2015

Coding in a cloud-based enterprise - designing for distributed architectures

14:28 Sunday, August 30, 2015 Posted by Markus Eisele
One of my recent writings got posted on O'Reilly's Radar. It explores the future of development projects and teams using all the new xPaaS services offered by today's clouds. While the article is more of a thought-leadership piece based on a vision Mark Little sketched earlier, it also covers aspects you can already find in today's projects.

xPaaS - All The PaaS Offerings
Gartner uses the term xPaaS to describe the whole spectrum of specialized middleware services that can be offered as PaaS. Red Hat decided to also use xPaaS to describe their offerings, because it is meant to encompass much more than what PaaS has typically come to be associated with (aPaaS, or Application PaaS, is a component of xPaaS). In many ways we've been talking about xPaaS for a couple of years, particularly about how technologies and methodologies such as SOA or integration must work within the cloud and between users of the cloud.

How Will This Change Development?
This is the real question the article answers. With the advent of DevOps and various Platform-as-a-Service (PaaS) environments, many complex business requirements need to be met within a much shorter timeframe. The Internet of Things (IoT) is also changing how established applications and infrastructures are constructed. As a result of these converging trends, the enterprise IT landscape is becoming increasingly distributed, and the industry is starting to map how all the various components — from networking and middleware platforms, to ERP systems and microservices — will come together to create a new development paradigm that exists solely in the cloud.

Read it online for free; I'm happy to hear your thoughts and comments on what you think development will look like in the future.

Tuesday, August 18, 2015

Devoxx Morocco: What's in it for you!

21:21 Tuesday, August 18, 2015 Posted by Markus Eisele
Last year was my first year at Devoxx Morocco. Oh, I forgot: back in the days it was still called JMaghreb. And I was really curious to go there for a couple of reasons. First of all, I had never visited Morocco before. Second, my dear friend Badr said that it is an amazing conference and I just had to come. It didn't work out before, and I felt guilty for not having gone. So I decided to do my best to make up for it, and had the pleasure to attend JMaghreb 2014. I received a very warm welcome and really enjoyed all the new impressions this country had waiting for me. Everybody was friendly, and I got to see a little of Casablanca, the city where the conference was held.



Content, Content, Content for a Hungry Audience
The content was amazing: a lot of well-known speakers and topics that were spot on. And I had packed one of my most relevant presentations, about Developer Security Awareness. Every session was packed. A little different from what I was used to was the strong presence of comparably young people, who were very hungry for information and the latest updates. As a speaker you usually get to answer a couple of questions and maybe find the time to have a coffee or water with another three attendees afterwards who have some more detailed questions. And believe it or not, I felt like a rockstar in Morocco: 10+ people wanted to talk about all aspects of my session afterwards, and we hung out and talked for at least an hour longer than usual. And this was the general feedback from all the speakers; the audience was very interested and ready to fire their questions.

Casablanca - A Beautiful City
The venue was a little away from the center, and cabs are hard to get hold of. At least the ones I trust (not judging, just telling you that I am German). But we managed to get back and forth to the venue, and besides that, there is some great history in Casablanca. I had to see Rick's Cafe and the Hassan II Mosque. And the local markets and .. and ... and.

A Trip To Marrakesh
We weren't done after the conference. Badr invited his speakers on an extended weekend trip over to Marrakesh, which is another wonderful piece of history. The hospitality in the country is a big part of what I remember, and I can only briefly tell you about all the historic sites in between the pulsing life that is waiting for you there. A trip I'll never forget.

JMaghreb turned Devoxx Morocco
But coming back and adjusting to the really cold November weather in Germany wasn't the only surprise waiting for me. Very soon it turned out that this amazing conference was joining the Devoxx family, and after I was asked to join the program committee, I was left with a simple: yes, I want to! And that is what I spent a reasonable amount of time on: selecting the best talks for the upcoming inaugural edition of the most southern Devoxx conference ever.
And here are some of the already selected speakers. I can only highly recommend going if you have a chance to attend. There is still plenty of time to register!

Wednesday, July 29, 2015

WebLogic Server 12.1.3 on Kubernetes

10:31 Wednesday, July 29, 2015 Posted by Markus Eisele
Most of you recall that I have a little history with Oracle WebLogic Server, going all the way back to BEA times. I don't want to comment on recent developments or features or standards support, but one thing is for sure: it is out there, and having the chance to run it containerized is something many customers will appreciate. Maybe this is the one thing that will make a real difference in our industry with the ongoing progress in the containerization field: we can actually manage heterogeneous infrastructure components on a common base, if we have the right tools at hand. And this is true for the operations side with OpenShift, and of course for all the developers out there who will appreciate what Fabric8 can do for them.

License Preface
Whatever happens in this blog post only happens on my local developer laptop. And I strongly believe that, with regards to Oracle technologies, this is absolutely covered by the so-called OTN Free Developer License Agreement and the Oracle Binary Code License Agreement for Java SE.
I'm dead sure that a production environment needs a bunch of licenses. But I'm not a specialist, so don't ask me. If you want to use RHEL 7, please learn about the Red Hat Subscriptions.

Ok, WebLogic - Where's Your Image?
Not there. I assume for licensing reasons. But Bruno did a great job in pushing relevant Dockerfiles and scripts to the official Oracle GitHub account. So, the first step in running WebLogic on Kubernetes is to actually build a docker image with it. Go,

git clone https://github.com/oracle/docker

and navigate to the OracleWebLogic folder. In fact, you can delete everything else besides this one. The first step is to download the WebLogic ZIP installer and the correct JDK to be used.
Go to the Oracle Website, accept the OTN License (if you feel like it) and download the platform independent ZIP installer (wls1213_dev_update2.zip).
Now browse to the JDK download website, do the same license thing and download the 8u51 JDK as rpm (jdk-8u51-linux-x64.rpm). Place both into the OracleWebLogic\dockerfiles\12.1.3 folder. If you're running on a unix-like OS yourself, feel free to check back with the official documentation and use the provided scripts. This didn't work for me on Windows, so you get a step-by-step walk-through. Go and rename Dockerfile.developer to Dockerfile and delete all the other ones.

mv Dockerfile.developer Dockerfile
rm Dockerfile.*

Now you open the Dockerfile and change a couple of things. Base it on RHEL 7:

FROM rhel7 

And comment out the other base that's in there. And because we want to run a decently patched and available Java version, we're going to change the environment variable accordingly:

ENV JAVA_RPM jdk-8u51-linux-x64.rpm

Time to build our image. And before you start, let's reuse the fabric8 vagrant installer that I've been using for the last two blog posts already. So, bring your vagrant instance with OpenShift up first (a quick reminder follows below). Now it's time to build the WebLogic image. Sit back and relax, because this is going to take a couple more minutes. Do you have some housekeeping to do? This might be the right time!
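In case the box isn't up anymore, here's the reminder (assuming the fabric8-installer checkout from my earlier posts):

cd fabric8-installer/vagrant/openshift/latest
vagrant up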

docker build --force-rm=true --tag="vagrant.f8:5000/oracle/weblogic:12.1.3-dev" .

Done? Check if everything is where we expect it to be: (docker images)

vagrant.f8:5000/oracle/weblogic       12.1.3-dev          68f1ea788bba        About a minute ago   2.05 GB

Because this image only contains the server binaries, we now need to build an image which has a configured WLS domain in it. Thankfully, there are some more scripts in the samples\12c-domain folder. So, go check that the Dockerfile and all scripts in container-scripts have the correct UNIX line endings. Sometimes Git can mess them up if you're on Windows. And while you're at it, make sure to change some ports according to your needs. I had to change the admin port to 8011 (do this in the Dockerfile and the add-machine.py script). Another thing we want to do is run the instance in development mode. This allows us to just copy our Java EE app into the ./autodeploy folder and have it deployed on startup. You can do that by changing the attribute in the following line from prod to dev:

setOption('ServerStartMode','dev')

Now, you're ready to go ahead with building the development domain image:

docker build --force-rm=true --tag="vagrant.f8:5000/myfear/weblogic:12.1.3-dev" 

And, after another couple of cups of coffee, we're ready to check if this image made it into our repository (docker images)

vagrant.f8:5000/myfear/weblogic      12.1.3-dev          77a3ec07d176        9 minutes ago       2.052 GB

Before going any further, make sure to give it a shot and see if the WebLogic instance comes up.

docker run -it vagrant.f8:5000/myfear/weblogic:12.1.3-dev

If that worked, you're ready to build your third image today, which will contain your application.

NetBeans And Fabric8 - Take WebLogic Into Greek Heaven
Start NetBeans and create a nice, simple and lean Java EE 6 project from a maven archetype of your choice. Add all the fabric8 and docker-maven plugin dependencies to it, like I've shown you before in the first blog post of the series. Let's tweak the properties to our needs and just name the image myfear/weblogic-test:latest. Most importantly, you have to map the container port to the Kubernetes service correctly:

<!-- Kubernetes Service Port // Mapped via the HARouter-->
<fabric8.service.port>9050</fabric8.service.port>

<!-- The exposed container port -->
<fabric8.service.containerPort>8011</fabric8.service.containerPort>

<!-- because, I'm working with the remote registry here, base it on the remote image -->
<docker.from>vagrant.f8:5000/myfear/weblogic:12.1.3-dev</docker.from>

<!-- Just cosmetics, changing the container label -->
<fabric8.label.container>weblogic</fabric8.label.container>

Don't forget to use Java EE 6 as dependency, and change both user and deployment base in the docker-maven plugin to:

<user>oracle:oracle:oracle</user>
<basedir>/u01/oracle/weblogic/user_projects/domains/base_domain/autodeploy/</basedir>

Time to build the third and last docker image:

mvn clean install docker:build

And if that finished correctly, we're going to deploy everything to OpenShift with the Fabric8 tooling:

mvn fabric8:json fabric8:apply

And don't forget to add the host-name mapping to your hosts file.

172.28.128.4 myfear-weblogic-test.vagrant.f8

A request to http://myfear-weblogic-test.vagrant.f8/sample shows the application after you've waited a couple of minutes (at least I had to; looks like my laptop wasn't quick enough).


Some Further Remarks
This isn't exactly production ready. WLS has managed servers and node managers, and there are a bunch of ports for external communication that need to be opened. This basically did nothing more than deploy a teensy application onto the AdminServer. There are a couple of whitepapers and further ideas about how to tweak the domain scripts to fit your needs. I didn't want to do that for obvious reasons. So, consider this a proof of concept.


Monday, July 27, 2015

Scaling and Load Balancing WildFly on OpenShift v3 With Fabric8

15:00 Monday, July 27, 2015 Posted by Markus Eisele
Did you enjoy the first ride with Fabric8 and OpenShift v3? There's a lot more to come. After we got the first WildFly container up and running on Kubernetes, without having to deal with all its inherent complexity, I think it is about time to start scaling and load balancing WildFly.

Prerequisites
Make sure you have the complete Vagrant, Fabric8, OpenShift v3, Kubernetes environment running. I walked you through the installation on Windows in my earlier blog post, but you can also give it a try on Google Container Engine or on OpenShift v3.

The Basics
What we did yesterday was take our local Java EE 7 application and dockerize it based on the latest jboss/wildfly:9.0.1.Final image. After that was done, we built the new myfear/wildfly-test:latest custom image and pushed it to the docker registry running on the vagrant image. The Fabric8 Maven plugin created the Kubernetes JSON and pushed it out to OpenShift for us. All this with the help of a nice and easy to use web-console. This post is going to re-use the same Java EE 7 example, which you can grab from my github account.

Scaling Your Java EE Application on OpenShift With Fabric8
One of the biggest features of Java EE application servers is scaling. Running high load on Kubernetes doesn't exactly match how this was typically done in the past. With Kubernetes, you scale Nodes and Minions, Replication Controllers and Pods according to your needs. Instead of launching new JVMs, you launch new container instances. And we have learned that Fabric8 is a very handy administration tool for Kubernetes, so we're going to scale our application a bit.
So, build the docker image of your Java EE 7 application and deploy it to Fabric8 with the following maven commands:

mvn clean install docker:build
mvn fabric8:json fabric8:apply

If that succeeded, you can access your application via http://myfear-wildfly-test.vagrant.f8/. The HelloWorld Servlet shows the hostname and the POD ID.

No matter how often you hit refresh at this point, there is never going to be another pod id in this response. Of course not, we're only running one instance so far. Let's switch to the Fabric8 console and scale up the pods. Switch to the "Apps" tab and click on the little green icon on the lower right next to your application. In the overlay, change the number of pods from one to three.
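If you prefer the command line over the console, the oc client should be able to do the same; a sketch, assuming your replication controller is named sample-web (look yours up first):

oc get rc
oc scale rc sample-web --replicas=3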


After a while, the change is reflected in your console and the pods go from downloading to green in a couple of seconds

Let's go back to our web-interface and hit refresh a couple of times. Nothing changes? What happened? What is wrong? Let me walk you through the architecture a little:

Overall Architecture
Did yesterday's blog post leave you wondering how all the parts work together? Here's a little better overview for you. Spoiler alert: this is overly simplifying the OpenShift architecture. Please dig into the details on your own. I just want to give you a very focused view on scaling and load balancing with Fabric8 and OpenShift.

Everything relies on the OpenShift routing and management of the individual pods. Ports are exposed by containers and mapped through services. And this goes end to end, from the client to the running instance. The central component which does the routing is, obviously, the HAProxy. It is a normal pod with one little exception: it has a public IP address. Let's see what this thing does on OpenShift and how it is configured.

HAProxy As Standard Router On OpenShift
The default router implementation on OpenShift is HAProxy. It uses sticky sessions based on http-keep-alive. In addition, the router plug-in provides the service name and namespace to the underlying implementation. This can be used for more advanced configuration such as implementing stick-tables that synchronize between a set of peers.
The HAProxy router exposes a web listener for the HAProxy statistics. You can view the statistics in our example by accessing http://vagrant.f8:1936/. It's a little tricky to find out the administrator password. This password and port are configured during the router installation, but they can be found by viewing the haproxy.config file on the container. All you need to do is find the pod, log in to it, find the configuration file and read the password. In my case it was "6AtZV43YUk".

oc get pods
oc exec -it -p <POD_ID> bash
less haproxy.config

Now that we found out about this, things get clearer. Once we have an open connection to one of our instances, it is not going to be released again in the standard configuration. But we can check that the routes are in place by looking at the statistics.


And if you really want to see that it actually does work, you need to trick out the stickiness with a little curl magic. If you have msysGit installed on Windows, you can run the little batch script in my repository. It curls a REST endpoint which puts out the POD ID that is serving the request:

{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-4oxjj"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-pku0c"}
{"name":"myfear","environment":"sample-web-4oxjj"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-pku0c"}

The first five requests always return the same POD ID until the new PODs come up and HAProxy starts to dispatch the requests round-robin. If you want to influence this behavior, you can: just read more about administration of the router in the OpenShift Administration documentation. And here is a complete reference for the "oc" command line interface. If you need some ideas how to use the oc client to find out about different objects and types, there is a complete set of batch scripts in the fabric8/bin folder on github.
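If you're on a unix-like shell instead of Windows, a simple curl loop does the job of the batch script (the REST path below is made up; point it at your own endpoint):

for i in $(seq 1 20); do curl -s http://myfear-wildfly-test.vagrant.f8/rest/hello; echo; done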

It's about time to dive deeper into the developer tooling of Fabric8. Stay curious for more details in the next blog posts.

Saturday, July 25, 2015

Running WildFly on Kubernetes. On Windows. Fabric8!

19:32 Saturday, July 25, 2015 Posted by Markus Eisele
Have you ever dreamed about running WildFly on OpenShift and leveraging the latest Kubernetes features, on Windows? Sounds like blasphemy: everything about those technologies is screaming Go and Linux. Windows doesn't seem to be the right fit. But I know that there are many developers out there who are stuck on Windows. Corporate laptops, easy management and whatever reasons the different employers come up with. The good news is, there is a small and brave group of people who won't let those Windows users down. And I have to admit that running a Windows operating system while working for Red Hat is a challenge.
We're a Linux company and an open source company and everything Windows simply feels wrong.
As my fellow colleague Grant stated in a blog-post a couple of weeks ago:
"That being said, I have decided to use Windows as my primary operating system in order to ensure that OpenShift has a great developer experience for Windows users. "
So, I tried to get Kubernetes and OpenShift running on Windows for a while; natively, it's not possible right now. On the other hand, I really want to get my hands on the latest developments and look into fancy stuff. But there is a solution: Vagrant and Fabric8.
And Fabric8 only because I am a Java developer. In fact, if you are a Java developer wanting to work with Kubernetes, Fabric8 really is the easiest and quickest way to get going. So, let's set up OpenShift and Fabric8 on a Windows machine.

Prerequisites
Download and install Vagrant (don't worry, it's MIT licensed). Done with that? Restart your machine (you know why: it's Windows). You will need to install an additional Vagrant plugin. Switch to a cmd line and type:

$ vagrant plugin install vagrant-hostmanager-fabric8

Vagrant-hostmanager is a Fabric8 Vagrant 1.1+ plugin that manages the /etc/hosts file on guest machines (and optionally the host). Its goal is to enable resolution of multi-machine environments deployed with a cloud provider where IP addresses are not known in advance.
The only other thing you need to have installed and ready is VirtualBox (GPL licensed!)
Go and clone the Fabric8 installer git repository and cd into the openshift/latest folder:

$ git clone https://github.com/fabric8io/fabric8-installer.git
$ cd fabric8-installer/vagrant/openshift/latest

The next steps are needed for proper routing from the host to OpenShift services which are exposed via routes. Unfortunately for Windows no automatic routing for new services is possible.
You have to add new routes manually to %WINDIR%\System32\drivers\etc\hosts.
For your convenience, a set of routes for the default Fabric8 applications will be pre-added when you start up vagrant.
For new services, look for the following lines and add your new routes (<service-name>.vagrant.f8) to this file on a new line, like this:

## vagrant-hostmanager-start id: 9a4ba3f3-f5e4-4ad4-9e80-b4045c6cf2fc
172.28.128.4  vagrant.f8 fabric8.vagrant.f8 jenkins.vagrant.f8 .....
172.28.128.4 myfear-wildfly-test.vagrant.f8
## vagrant-hostmanager-end

Now startup the Vagrant VM:

vagrant up

If you want to tweak the settings for the VM, you have to edit the Vagrantfile. The startup, including downloads, takes a couple of minutes (a good time for #coffee++). While you're waiting, jump ahead and install the OpenShift client for Windows. Download the one for your OS from the origin project on github. The Windows build is about 55 MB. Next, unpack it into a folder of your choice, and make sure to add this folder to your PATH environment variable.

set PATH=%PATH%;"D:\Program Files (x86)\openshift-origin-v1.0.3"

While you're at it, add some more environment variables:

set KUBERNETES_DOMAIN=vagrant.f8
set DOCKER_HOST=tcp://vagrant.f8:2375

Assuming, you haven't changed the default routes added to your hosts file by the vagrant start.
The first one allows your OpenShift cli to use the right Kubernetes domain, and the second one allows you to re-use the same Docker daemon which is already running inside your Fabric8 vagrant image. Please make sure NOT to define any of the other docker env vars like DOCKER_CERT_PATH or DOCKER_TLS_VERIFY!
It is probably a good idea to add this into your system environment variables or put it into a batch-script.
Note: Make sure to use the Docker 1.6 client for Windows (exe download). The latest 1.7 version doesn't work yet.
After the vagrant box is created and docker images are downloaded, the fabric8 console should appear at http://fabric8.vagrant.f8/.
Your browser will complain about an insecure connection, because the certificate is self-signed. You know how to accept this, don't you?
Enter admin and admin as username and password. Now you see all the already installed fabric8 apps. Learn more about Apps and how to build them in the documentation.

Now, let's see if we can use the docker daemon in the vagrant image:

docker ps

and see the full list of images running (just an excerpt here):

CONTAINER ID        IMAGE                                            COMMAND                CREATED              STATUS                  PORTS                                                             NAMES
d97e438222d1        docker.io/fabric8/kibana4:4.1.0                  "/run.sh"              7 seconds ago        Up Less than a second                                                                      k8s_kibana.7abf1ad4_kibana-4gvv6_default_500af2d1-32b8-11e5-8481-080027bdffff_4de5764e                                   
eaf419a177d6        fabric8/fluentd-kubernetes:1.0                   "fluentd"              About a minute ago   Up About a minute                                                                          k8s_fluentd-elasticsearch.920b947c_fluentd-elasticsearch-172.28.128.4_default_9957562ee416ea2e083f45adb9b6edd0_676633bf  
c4111cea4474        openshift/origin-docker-registry:v1.0.3          "/bin/sh -c 'REGISTR   3 minutes ago        Up 3 minutes                                                                                                              

One last thing to check: log in to OpenShift via the command line tool:

oc login https://172.28.128.4:8443

use admin and admin again as username and password. Now check which services are already running:

oc get services
Now you're ready for the next steps. Let's spin up a WildFly instance on OpenShift with Fabric8.

Dockerizing Your Java EE Application 
Ok, how does that work? OpenShift is built on top of Docker and Kubernetes. And Fabric8 gives the normal developer a reasonable abstraction on top of all those infrastructure issues. Where do we start? Let's start with a simple Java EE 7 project. It's a really simple one in this case: an html page and a HelloWorld servlet. First step is to dockerize it. There is a wonderful plugin out there, which is part of the Fabric8 ecosystem of tools, named docker-maven-plugin. Simply add this to your pom.xml and define what the image should look like. The magic is in the plugin configuration:

 <configuration>
                    <images>
                        <image>
                            <name>myfear/wildfly-test:latest</name>
                            <build>
                                <from>jboss/wildfly:9.0.1.Final</from>
                                 <maintainer>markus at jboss.org</maintainer>
                                <assembly>
                                    <inline>
                                        <dependencySets>
                                            <dependencySet>
                                                <includes>
                                                    <include>net.eisele:sample-web</include>
                                                </includes>
                                                <outputFileNameMapping>sample.war</outputFileNameMapping>
                                            </dependencySet>
                                        </dependencySets>
                                    </inline>
                                     <user>jboss:jboss:jboss</user>
                                    <basedir>/opt/jboss/wildfly/standalone/deployments</basedir> 
                                </assembly>
                            </build>
                        </image>
                    </images>
 </configuration>
Running a

mvn clean install docker:build
Builds your application and creates your docker image. Plus, this image is going to be uploaded to the docker registry running on your OpenShift instance. This is configured with two additional maven properties

 <docker.host>tcp://vagrant.f8:2375</docker.host>
 <docker.registry>vagrant.f8:5000</docker.registry>
There's one more property to look after:

<docker.assemblyDescriptorRef>artifact</docker.assemblyDescriptorRef>
It defines which parts of the build will be copied over to the Docker image.
The resulting Dockerfile looks like this:

FROM jboss/wildfly:9.0.1.Final
MAINTAINER markus at jboss.org
COPY maven /opt/jboss/wildfly/standalone/deployments/
USER root
RUN ["chown", "-R", "jboss:jboss","/opt/jboss/wildfly/standalone/deployments/"]
USER jboss
and a maven folder contains your application as a war file. From this point on, you could also use the docker image and push it to the official docker hub or another private repository. There's no special magic in it. Find all the configuration options in the extensive docker-maven-plugin manual.
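Pushing it to the public hub, for example, would be nothing more than the usual docker workflow (assuming you're logged in via docker login):

docker push myfear/wildfly-test:latest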

Fabric8 - Docker and Kubernetes Are Usable Now
Fabric8's aim is to help any developer, team and organisation that wants to work with containers. Nobody really wants to use a command line to push and start containers. Plus, there's a lot more to it: keeping them running, moving them around on hosts, monitoring, and and and. Don't even think about microservices right now, but those need even more: more fine-grained control, more teams, more CI/CD and auto-discovery features. And all this is Fabric8. It can create a complete CI/CD pipeline with approvals and code quality assurance. If you want to see a complete example, have a look at what James Rawlings wrote up a couple of days ago. So, what does that mean for my Java EE project, and how do I deploy it to OpenShift now? Read up a little about how to run an application on OpenShift in the nice overview post by Arun Gupta. It also includes a pointer to the OpenShift life-cycle. You basically need to create an OpenShift project and include a json file which describes your application, including all the links to the docker images. Doable, for sure. But Fabric8 can do better. There is another Maven plugin available which takes all this burden off you and just lets you deploy your application; exactly like I, as a Java EE developer, expected it to be. Let's add the plugin to your project and configure it a bit:

    <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>fabric8-maven-plugin</artifactId>
                <version>${fabric8.version}</version>
                <executions>
                    <execution>
                        <id>json</id>
                        <phase>generate-resources</phase>
                        <goals>
                            <goal>json</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>attach</id>
                        <phase>package</phase>
                        <goals>
                            <goal>attach</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
This does little more than bind the plugin to the different execution phases. You can skip this for this example, because we're going to execute it manually anyway. The additional configuration happens in Maven properties again:

<!-- Defining the Service Name for Fabric8 -->
<fabric8.service.name>myfear-wildfly-test</fabric8.service.name>
<!-- Defining the internal service port -->
<fabric8.service.port>9101</fabric8.service.port>
<!-- the expsed container port -->
<fabric8.service.containerPort>8080</fabric8.service.containerPort>
<!-- the component label, as shown in the console -->
<fabric8.label.component>${project.artifactId}</fabric8.label.component>
<!-- the container label -->
<fabric8.label.container>wildfly</fabric8.label.container>
<!-- the application group label -->
<fabric8.label.group>myfears</fabric8.label.group>
<!-- the domain were working in -->
<fabric8.domain>vagrant.f8</fabric8.domain>
<!-- We don't want to upload images, but want OpenShift to pull them automatically -->
 <fabric8.imagePullPolicy>IfNotPresent</fabric8.imagePullPolicy>
Ok, that's about it. Most of it is naming, labels and configuration, which is a one-time thing to figure out. All we really need from here on is the Kubernetes JSON file. So, type:

mvn fabric8:json fabric8:apply
What didn't work locally with my installation was the automatic update of my hosts file with the new routing. So, you might need to add the domain-name mapping manually:

172.28.128.4 myfear-wildfly-test.vagrant.f8
After a couple of seconds, the new pod is created and you can access your application via http://myfear-wildfly-test.vagrant.f8/. This runs your application on OpenShift.


Try docker ps again and see if you can spot your container. In my case:

c329f2e0f63b        myfear/wildfly-test:latest
If you struggle with something and your app doesn't come up as expected, there are some ways to get closer to the problem. The first is to run the image locally against your Docker daemon. There's a handy command, mvn fabric8:create-env, to figure out the env vars for you so that you can run docker images outside of kubernetes as if they were inside (in terms of service discovery and environment variables defined in the kubernetes json). If that's not an option, you can also get a bash from your running container:

docker exec -i -t c329f2e0f63b bash
Just replace the container id with the real one from the ps command. That's about it. Now you can totally start over. I'm going to walk you through the consoles a bit.

Access The OpenShift Console
First things first. You can spot your application on the OpenShift console. http://vagrant.f8:8443 brings you to the OpenShift console. Select the "default" space and see the Docker Registry, some elasticsearch instances, some others and finally your instance:


You can also browse the individual pods and services. More about this maybe in a later blogpost.

The Fabric8 Console
The one magical thing we're really interested in is the Fabric8 console. http://fabric8.vagrant.f8/ brings you there, and the "Kubernetes" tab displays all the running apps for you. This also includes your own application:

As you can see in this screenshot, I already scaled the app from one (default) to two pods. Clicking on the little pod icon on the far right (not in this screenshot) lets you adjust the number of pods running. If you click on the "diagram" view, you see a complete overview of your infrastructure:
There's a lot more to explore, and I am going to show you more in subsequent blog posts. Now that we got everything up and running, this will be even more entertaining. Let me know what you want to read about in particular.

Monday, July 20, 2015

Monitoring DevOps Style With WildFly 9 And Jolokia

14:30 Monday, July 20, 2015 Posted by Markus Eisele
DevOps is among the hottest topics these days. And the wide range of topics around it makes it hard to actually find a complete description or something that covers everything at a decent granularity. One thing is for sure: one of the most important parts is to deliver the correct metrics and information for monitoring the application.

Java EE and JMX
The standard way of monitoring Java EE servers is JMX. This is possible with tools like JConsole, VisualVM or the Oracle Mission Control suite. There are a bunch of advantages to this approach, and most operations teams actually used it a lot in the past. But it doesn't exactly work the DevOps way. It is separate tooling, and DevOps teams don't have a good way to script it without having all the tooling and operational systems (Nagios, etc.) installed. Today it feels a lot more natural, and is easier to use, to have http endpoints which expose configuration and runtime information.

Jolokia - JMX To HTTP With JSON
A very convenient way to do this for JMX is to use Jolokia. Jolokia is a JMX-HTTP bridge providing an alternative to JSR-160 connectors. It is an agent-based approach with support for many platforms. In addition to basic JMX operations, it enhances JMX remoting with unique features like bulk requests and fine-grained security policies. It comes bundled with a lot of JBoss projects lately (e.g. the WildFly-Camel subsystem) and can easily be used in your own applications.

A Simple Java EE 7 App Equipped With Jolokia
Just create a simple Java EE 7 project (maybe with Adam Bien's maven artifact) and add one dependency to it:
<dependency>
     <groupId>org.jolokia</groupId>
     <artifactId>jolokia-core</artifactId>
     <version>1.3.1</version>
 </dependency>
The next step is to configure the Jolokia AgentServlet in your web.xml and map it to a pattern which suits your needs:
  <servlet>
        <servlet-name>jolokia-agent</servlet-name>
        <servlet-class>org.jolokia.http.AgentServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>jolokia-agent</servlet-name>
        <url-pattern>/metrics/*</url-pattern>
    </servlet-mapping>
Build your application as usual and access the relevant metrics as you need them. The complete Jolokia reference explains the different operations and types.

Deploy Your Application To WildFly 9
Download and unzip WildFly 9 to a folder of your choice. Start it with bin/standalone.sh (or bin/standalone.bat on Windows) and deploy the application.

Example Metrics
While you can access every JMX MBean that is defined in the server, here is a list of metrics that might help you out of the box.

Heap memory usage:
http://localhost:8080/javaee-devops/metrics/read/java.lang:type=Memory/HeapMemoryUsage
{
    "request": {
        "mbean": "java.lang:type=Memory",
        "attribute": "HeapMemoryUsage",
        "type": "read"
    },
    "value": {
        "init": 67108864,
        "committed": 241696768,
        "max": 477626368,
        "used": 141716336
    },
    "timestamp": 1437392335,
    "status": 200
}
An overview of your server environment:
http://localhost:8080/javaee-devops/metrics/read/jboss.as:core-service=server-environment

You can not only read JMX attributes but also execute operations, like accessing the latest 10 lines of the server.log file:
http://localhost:8080/javaee-devops/metrics/exec/jboss.as.expr:subsystem=logging/readLogFile/server.log/UTF-8/10/0/true
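Everything shown here uses Jolokia's GET URL syntax. The same requests can also be sent as a JSON payload via HTTP POST, which is handier for scripting; a minimal curl sketch against the same endpoint:
curl -H "Content-Type: application/json" -d '{"type":"read","mbean":"java.lang:type=Memory","attribute":"HeapMemoryUsage"}' http://localhost:8080/javaee-devops/metrics/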


Securing The Endpoint
As you would have expected, the AgentServlet is accessible just like your application is. In order to prevent this, you will have to secure it. The good news is that this is possible with basic authentication and the application realm in WildFly. First step is to add a user to the application realm. This can be done with the bin/add-user.sh|bat script. Make sure to add the role "SuperUser". Now add the following to your web.xml:
    <security-constraint>
        <display-name>Metrics Pages</display-name>
        <web-resource-collection>
            <web-resource-name>Protected Metrics Site</web-resource-name>
            <description>Protected Metrics Site</description>
            <url-pattern>/metrics/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <description/>
            <role-name>SuperUser</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>NONE</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
     <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>ApplicationRealm</realm-name>
    </login-config>
    <security-role> 
        <role-name>SuperUser</role-name> 
    </security-role> 
One last thing to do here is to add a file called jboss-web.xml to WEB-INF/. It will just contain three lines:
<jboss-web>
    <security-domain>other</security-domain>
</jboss-web>
Whenever you try to access the metrics endpoint, the server now challenges you with a basic authentication request.
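A quick check from the command line, assuming you created a user named monitor with password s3cret-2015:
curl -u monitor:s3cret-2015 http://localhost:8080/javaee-devops/metrics/read/java.lang:type=Memory/HeapMemoryUsage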

Looking For More?
This is just a simple example for now, based on the standard JMX metrics which WildFly exposes. You can of course register your own MBeans or expand this by aggregating the individual calls into a single one. Another option is to use hawt.io as a ready-to-use, extensible UI which already provides all kinds of metrics for WildFly and many other subsystems. But this is a very straightforward way. The next major release of Jolokia might feature some more to make the DevOps ride a lot more convenient.

Friday, July 17, 2015

Using JPA And CDI Beans With Camel on WildFly

13:30 Friday, July 17, 2015 Posted by Markus Eisele
, ,
I didn't really plan for it, but with a conference-free month I had the chance to dig around a little more and show you even more of the Camel-on-WildFly magic that the WildFly-Camel subsystem provides.

The Business Background
The demo is derived from one on JBoss Demo-Central by Christina Lin. She demonstrates the use of the File and JDBC connectors in Camel and also adds the use of the Split pattern and an exception handling method. The scenario of the demo is to mimic the transaction process between bank accounts. The input is a batch XML file which contains a bunch of transactions. Those can either be cash deposits, cash withdrawals or transfer information for bank accounts. Depending on the type of transaction, they are split up, and each transaction retrieves relevant information from a database, does the transaction and calculates the transaction fee before placing it back into the database. You can find the full original source code on GitHub.

Why Did I Touch It
Some reasons: I actually didn't want to think up a new business case, and I don't just want to show you something in nitty-gritty detail on a purely technical level. So I thought it a quick win to just take the scenario from Christina. Second, she is doing everything in Fuse, based on Karaf, and using the XML DSL for the route definitions. I am just a poor Java guy, and learned to hate XML. Plus, she is using a couple of components which I wouldn't in a Java EE context.

Prerequisites - Getting The App Deployed
Before you begin playing around with the demo, please make sure to have WildFly 8.2.0.Final installed together with the WildFly-Camel subsystem patch 2.2.0. Now feel free to fork the demo repository from my github account and clone it into a directory of your choice. It is nothing more than a maven Java EE 7 project with some additional dependencies. Just do a
mvn clean install
and deploy the resulting target/javaee-bankdemo-1.0-SNAPSHOT.war to your WildFly server.
There isn't any UI in this example, so you basically have to watch the logfile and copy an xml file around. The src\main\in-data folder contains a bank.xml, which you need to copy over to your standalone\data\inbox folder. The second this is done, camel starts its magic.

The CustomerStatus
Everything begins with the standard Java EE app. The entity CustomerStatus holds account information (ID, VipStatus, Balance). It also has some NamedQueries on it. Doesn't look Camel specific at all. The in-memory H2 database, which WildFly uses as the default db, gets pre-populated with the help of three scripts which are configured as schema-generation properties in the persistence.xml. I'm working with two customers here, named A01 and A02.

Camel And Java EE
The Camel bootstrapping is quite simple in this case. The BankRouteBuilder has a @ContextName("cdi-context") annotation and is itself an application-scoped startup bean which contains all the needed routes for the little demo. Feel free to re-read and learn about other potential options to deploy / configure routes. The hawt.io console (http://localhost:8080/hawtio/) displays all of them nicely. The application has five routes.
ReadFile is the first one, which basically just reads the xml file and pushes the individual entries (split by an xPath expression) to the processTransaction route.
This one decides whether it is a "Cash" transaction or a "Transfer" transaction, ending in "direct:processCash" or "direct:doTransfer" respectively. I left all of the original xml route definitions in the BankRouteBuilder as comments. Might be helpful if you search for a particular solution.
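To give you an idea without spoiling the repository, the ReadFile route boils down to something like this in the Java DSL (file path and xPath expression simplified; see the BankRouteBuilder on GitHub for the real thing):
// pick up the batch file and hand each transaction to the next route
from("file://standalone/data/inbox")
    .split().xpath("/transactions/transaction")
    .to("direct:processTransaction");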

Differences To The Fuse Demo
Christina used the Camel JDBC component a lot. It does all the heavy lifting and even the initial database setup. This is nothing we want to do anywhere, but especially not in a Java EE environment where we have all the JPA magic ready to use. In fact, there is a Camel JPA component, but it is very limited and doesn't really support NamedQueries or alike.
A very powerful way to work around that is to use the Camel Bean component, with all its bean binding, and the cdi component, which is already integrated. All the database access is managed via the CustomerStatusService, which is basically a @Named bean that gets an EntityManager injected and knows how to load CustomerStatus entities. It gets injected into the RouteBuilder by simply referencing it in the bean endpoint:
.to("bean:customerService?method=loadCustomer")
I agree that there is a lot of magic happening behind the scenes, and the fact that the CustomerStatusService depends on Camel classes is another thing I dislike. But this could be easily worked around by just @Inject-ing the service into the route and referencing it alike (see the sketch below). I decided not to do this, because I wanted to keep the initial flow of Christina's demo alive. She works with the Exchanges a lot and relies on them, so I stayed closer to her example.
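A minimal sketch of that alternative, with a hypothetical Camel-free service method (not what the repository does):
@Inject
CustomerStatusService customerService;

@Override
public void configure() throws Exception {
    // bean binding: Camel picks the method and converts the message body to its argument
    from("direct:processCash").bean(customerService, "loadCustomer");
}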

A Word On Transactions
I am actually using an extended persistence context in this example and marked the updateCustomer method in the service as @Transactional. This is a very simple way of merging complete, updated CustomerStatus entities back into the database. The whole doTransfer route isn't transactional right now: even if the second customer isn't in the system, the amount would still be withdrawn from the first customer's account. I want to cover this at a later stage in a separate blog post.
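For reference, the idea behind that service method is roughly this (a sketch, not a verbatim copy of the repository):
@Transactional
public void updateCustomer(CustomerStatus customer) {
    // the JTA transaction wraps the merge of the updated entity
    entityManager.merge(customer);
}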

That's it for now. Enjoy your weekend playing with Camel and the WildFly-Camel subsystem. I'm happy to receive your ideas or questions via @myfear or as a comment on the blog post.

Tuesday, July 14, 2015

Sending JMS Messages From WildFly 8 To WebLogic 12 with Camel

08:43 Tuesday, July 14, 2015 Posted by Markus Eisele
System integration is a nice challenge, especially when you're looking for communication standards and reliable solutions. In today's microservices world, everybody talks about REST services and http-based protocols. As a matter of fact, this will never be enough for most enterprise projects, which typically tend to have a much more complex set of requirements. A reasonable solution is a Java Message Service based integration. And while we're not looking at centralized infrastructures and ESBs anymore, we want point-to-point integration for defined services. Let's see if we can make this work and send messages between JBoss WildFly and Oracle WebLogic Server.

Business Case - From Java EE To Microservices
But I want to step back a bit first: why would someone do this? I think one of the main motivations behind such a scenario is a slow migration path. Coming all the way down from monolithic, single-platform applications, we want to be flexible enough to shell out individual services from those giant installations and make them available as a service. Assuming that this is even possible and the legacy application has a decent design. Or we want to advance individual services, let's say from a technical perspective. In this particular example, we can't wait to get Java EE 7 features into our application, and WebLogic is still mostly stuck on EE 6. We could do this with REST services or even WebServices, but we might want more. And this is where the JMS specification comes in.

Oracle JMS Client Libraries in WildFly
In order to send messages between two different servers, you need to have the individual client libraries integrated into the sending end. For WebLogic this is the WebLogic JMS Thin Client (wljmsclient.jar). It provides Java EE and WebLogic JMS functionality using a much smaller client footprint than a WebLogic install or full client, and a somewhat smaller footprint than a Thin T3 client. As a matter of fact, it contains Java EE JMS APIs and implementations which will directly collide with the ones provided by WildFly. To use them, we'll have to package them as a module and configure a JMS bridge in HornetQ to use exactly this. First thing is to add the new module. Change folder to wildfly-8.2.0.Final\modules\system\layers\base and create a new folder structure custom\oracle\weblogic\main underneath it. Copy the wlthint3client.jar from the %MW_HOME%\server\lib folder here. Now you have to add a module descriptor file, module.xml:
<module xmlns="urn:jboss:module:2.0" name="custom.oracle.weblogic">
    <resources>
        <resource-root path="wlthint3client.jar">
            <filter>
                <exclude-set>
                    <path name="javax.ejb"/>
                    <path name="javax.ejb.spi"/>
                    <path name="javax.transaction"/>
                    <path name="javax.jms"/>
                    <path name="javax.xml"/>
                    <path name="javax.xml.stream"/>
                </exclude-set>
            </filter>
        </resource-root>
    </resources>

    <dependencies>
        <module name="javax.api"/>
        <module name="sun.jdk" export="false" services="import">
            <exports>
                <include-set>
                    <path name="sun/security/acl"/>
                    <path name="META-INF/services"/>
                </include-set>
            </exports>
        </module>
        <module name="com.sun.xml.bind" />
        <module name="org.omg.api"/>
        <module name="javax.ejb.api" export="false"   />
        <module name="javax.transaction.api"  export="false" />
        <module name="javax.jms.api"  export="false" />
        <module name="javax.xml.stream.api" export="false"  />
        <module name="org.picketbox" optional="true"/>
        <module name="javax.servlet.api" optional="true"/>
        <module name="org.jboss.logging" optional="true"/>
        <module name="org.jboss.as.web" optional="true"/>
        <module name="org.jboss.as.ejb3" optional="true"/>
        <module name="org.hornetq" />
    </dependencies>
</module>
This file defines all the required resources and dependencies together with the relevant excludes. If this is done, we finally need the message bridge.

The HornetQ JMS Message Bridge
The function of a JMS bridge is to consume messages from a source JMS destination and send them to a target JMS destination. Typically either the source or the target destination is on a different server. The bridge can also be used to bridge messages from other, non-HornetQ JMS servers, as long as they are JMS 1.1 compliant. Open the standalone-full.xml and add the following configuration to the messaging subsystem:
<jms-bridge name="wls-bridge" module="custom.oracle.weblogic">
                <source>
                    <connection-factory name="java:/ConnectionFactory"/>
                    <destination name="java:/jms/sourceQ"/>
                </source>
                <target>
                    <connection-factory name="jms/WFMessagesCF"/>
                    <destination name="jms/WFMessages"/>
                    <context>
                        <property key="java.naming.factory.initial"
                              value="weblogic.jndi.WLInitialContextFactory"/>
                        <property key="java.naming.provider.url" 
                              value="t3://127.0.0.1:7001"/>
                    </context>
                </target>
                <quality-of-service>AT_MOST_ONCE</quality-of-service>
                <failure-retry-interval>2000</failure-retry-interval>
                <max-retries>10</max-retries>
                <max-batch-size>500</max-batch-size>
                <max-batch-time>500</max-batch-time>
                <add-messageID-in-header>true</add-messageID-in-header>
            </jms-bridge>
As you can see, it references the module directly and has a source and a target definition. The source is the WildFly local message queue which is defined in the messaging subsystem:
   <jms-queue name="sourceQ">
       <entry name="java:/jms/sourceQ"/>
   </jms-queue>
And the target is the remote queue plus connection factory, which are defined in WebLogic Server. I assume that you know how to do that; if not, please refer to the documentation. That's pretty much it. Now we need to send a message to our local queue, and it is going to be sent via the bridge over to the WebLogic queue.

Testing The Bridge - With Camel
Deploy a message driven bean to WebLogic (Yes, you'll have to package it as an ejb jar into an ear and all of this). This particular sample just dumps the message text out to the logger.
@MessageDriven(mappedName = "jms/WFMessages", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})

public class LogMessageBean implements MessageListener {
    private final static Logger LOGGER = Logger.getLogger(LogMessageBean.class.getName());

    public LogMessageBean() {
    }

    @Override
    public void onMessage(Message message) {
        TextMessage text = (TextMessage) message;
        try {
            LOGGER.log(Level.INFO, text.getText());
        } catch (JMSException jmxe) {
            LOGGER.log(Level.SEVERE, jmxe.getMessage());
        }
    }
}
Now we need a producer on the WildFly server. To do this, I am actually using the WildFly-Camel JMS integration.
@Startup
@ApplicationScoped
@ContextName("jms-camel-context")
public class JMSRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Initial Context Lookup
        Context ic = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
        // Create the JMS Component
        JmsComponent component = new JmsComponent();
        component.setConnectionFactory(cf);
        getContext().addComponent("jms", component);
        // Build A JSON Greeting
        JsonObject text = Json.createObjectBuilder()
                 .add("Greeting", "From WildFly 8").build();
        // Send a Message from timer to Queue
        from("timer://sendJMSMessage?fixedRate=true&period=10000")
                .transform(constant(text.toString()))
                .to("jms:queue:sourceQ")
                .log("JMS Message sent");
    }
}
That's the whole magic. A timer sends a JSON text message to the local queue, which is bridged over to WebLogic.


Some More Hints
If you want to test the WebLogic queue without the bridge, you will have to include the WebLogic client jar in your project. As this isn't available in a Maven repository (AFAIK), you can simply install it locally:
mvn install:install-file -Dfile=%MW_HOME%/wlserver/server/lib/wlthint3client.jar -DgeneratePom=true -DgroupId=custom.com.oracle -DartifactId=wlthint3client -Dversion=12.1.3 -Dpackaging=jar
Another important thing is that you will run into classloading issues on WildFly if you try to use the custom module in any scope other than the bridge. So, pay close attention that you don't use it somewhere else.
The bridge has a comparably large failure-retry-interval and max-retries configured. This is a workaround: if the WildFly startup is too fast and the bridge tries to access the local sourceQ before the queue is actually configured, it leads to an exception.
Find the complete source-code in my GitHub account.

Friday, July 10, 2015

Using Camel Routes In Java EE Components

15:00 Friday, July 10, 2015 Posted by Markus Eisele
I've been working with Camel for a while now and I really like its simplicity. Using it on top of Java EE has always been a little bit of a challenge, and one of the recent talks I gave about how to do this and the different methods of bootstrapping Camel in Java EE actually proposes to use the WildFly-Camel subsystem. In an ongoing series, I am going to explore the different ways of doing this and provide a bunch of examples which are still missing from the talk. I'm happy to receive your feedback and requests in the comments or via @myfear on twitter.

Getting Started With Camel On WildFly 8.2 
The Wildfly-Camel Subsystem provides Apache Camel integration with the WildFly Application Server. It allows you to add Camel Routes as part of the WildFly configuration. Routes can be deployed as part of Java EE applications. Java EE components can access the Camel Core API and various Camel Component APIs. Your Enterprise Integration Solution can be architected as a combination of Java EE and Camel functionality.
Remark: Latest WildFly 9 is expected to be supported by the 3.x release of WildFly-Camel.

Getting Ready 
Download and unzip WildFly 8.2.0.Final to a folder of your choice. Download and unzip the wildfly-camel patch (2.3.0) to the wildfly folder.  Start WildFly with
bin/standalone[.bat|.sh] -c standalone-camel.xml
One of the fastest ways to get up and running is with Docker and the WildFly Camel image. This image comes bundled with WildFly 8.1 and the Camel subsystem already installed.
Defining And Using A Camel Context
The CamelContext represents a single Camel routing rulebase. You use the CamelContext in a similar way to the Spring ApplicationContext. It contains all the routes for your application. You can have as many CamelContexts as necessary, as long as they have different names. WildFly-Camel lets you define them a) in the standalone-camel.xml and domain.xml as part of the subsystem definition itself, b) deployed in a supported deployment artifact which contains a -camel-context.xml suffixed file, or c) provided together with its routes via a RouteBuilder and the CDI integration.
A defined CamelContext can be consumed in two different ways: a) @Injected via Camel-CDI or b) accessed from the JNDI tree.
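To give you an idea of variant b), a file like routes-camel-context.xml inside a deployment could look like the following minimal sketch (the file name and the route are illustrative, not part of the example below):
<camelContext id="hello-context" xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="direct:greet"/>
        <transform>
            <simple>Hello ${body}</simple>
        </transform>
    </route>
</camelContext>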

The Example Context And Route
For the following examples, I use a context with an associated route which is provided via CDI and a RouteBuilder. It is an application-scoped bean which is automatically started when the application starts. The @ContextName annotation gives a specific name to the CamelContext.
@ApplicationScoped
@Startup
@ContextName("cdi-context")
public class HelloRouteBuilder extends RouteBuilder {

    @Inject
    HelloBean helloBean;

    @Override
    public void configure() throws Exception {
        from("direct:start").transform(body().prepend(helloBean.sayHello()).append(" user."));
    }
}
The route itself isn't exactly challenging. It takes the message body from direct:start, prepends the output of the CDI bean method sayHello(), and appends the string " user.". For reference, the complete code can be found on my GitHub account. So, all we need to find out next is how to use this route in the various Java EE component specifications.

Using Camel From CDI
Camel has supported CDI since version 2.10. Outside the subsystem, it needs to be bootstrapped manually. With the subsystem this is no longer necessary, and you can just use a deployed or defined CamelContext in a @Named CDI bean by simply injecting it by name:
    @Inject
    @ContextName("cdi-context")
    private CamelContext context;

Using Camel From JSF, JAX-RS and EJBs
Knowing how to use a CamelContext in CDI, you might assume that it is easy to do the same from JSF and the like. This is not true. You actually can't inject it into ManagedBeans or even into CDI beans that are bound to a JSF component, and it doesn't work in EJBs either. I haven't looked into it in detail, but I assume it has something to do with scopes. A reasonable workaround, and in fact a better application design, is to put the complete Camel logic into a separate CDI bean and just inject that.
@Named
public class HelloCamel {

    @Inject
    @ContextName("cdi-context")
    private CamelContext context;

    private final static Logger LOGGER = Logger.getLogger(HelloCamel.class.getName());

    public String doSomeWorkFor(String name) {

        ProducerTemplate producer = context.createProducerTemplate();
        String result = producer.requestBody("direct:start", name, String.class);
        LOGGER.log(Level.INFO, result);
        return result;
    }
}
The ProducerTemplate interface allows you to send message exchanges to endpoints in a variety of different ways to make it easy to work with Camel Endpoint instances from Java code. In this particular case, it just starts the route and puts a String into the body which represents the name of the component I'm using it from.
The CDI bean, which acts as a backing bean for the component, just uses it:
    @Inject
    HelloCamel helloCamel;

    public String getName() {
        return helloCamel.doSomeWorkFor("JSF");
    }
The returned String is "Hello JSF user.", which is also written to the WildFly server log. The same approach works best for all the other Java EE components.
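For instance, a JAX-RS resource can delegate to the very same bean. This is a minimal sketch under my own naming, not code from the original project:
@Path("/hello")
@RequestScoped
public class HelloResource {

    @Inject
    HelloCamel helloCamel;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // delegate to the CDI bean which wraps the Camel logic
        return helloCamel.doSomeWorkFor("JAX-RS");
    }
}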

Using Camel From EJBs
If you're using EJBs as your main application component model, it is also very reasonable to just use the JNDI approach:
 CamelContext camelctx =
                (CamelContext) new InitialContext().lookup("java:jboss/camel/context/cdi-context");
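Put together, a minimal session bean sketch could look like this; the bean name is made up, and error handling is deliberately short:
@Stateless
public class HelloEjb {

    public String sayHelloFromEjb() {
        try {
            InitialContext initialContext = new InitialContext();
            CamelContext camelctx = (CamelContext) initialContext
                    .lookup("java:jboss/camel/context/cdi-context");
            // same pattern as in the CDI bean: trigger the route via a ProducerTemplate
            ProducerTemplate producer = camelctx.createProducerTemplate();
            return producer.requestBody("direct:start", "EJB", String.class);
        } catch (NamingException e) {
            throw new IllegalStateException("CamelContext not bound in JNDI", e);
        }
    }
}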

Hawtio - A Camel Console
Another hidden gem in the subsystem is the bundled Hawtio console. It is a modular web console for managing your Java stuff and has an Apache Camel plugin which visualizes your contexts and routes. Remember that it is automatically secured, so you need to add a management user before you're able to access it.
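A management user can be created the usual WildFly way with
bin/add-user[.bat|.sh]
After that, the console should be reachable under the hawtio context on the HTTP port, e.g. http://localhost:8080/hawtio (treat the exact URL as an assumption and check the subsystem documentation for your version).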


Further Reading & Help
Talk to the Developers on Freenode
WildFly-Camel Subsystem Documentation
WildFly Camel On GitHub
Apache Camel Website

Monday, June 8, 2015

Docker Compose on Windows with Python And Babun

14:30 Monday, June 8, 2015 Posted by Markus Eisele
Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command which does everything that needs to be done to get it running. It is the only tool in the Docker tool-chain that doesn't have a native Windows binary right now, and getting it up and running on Windows requires quite some work.

Using Babun and Python
The official Compose documentation implies that there is a Python-only way on unsupported platforms. As a matter of fact, this is not entirely true: even the Python package relies on POSIX-based commands which aren't available on Windows. If you go down this road you will get surprisingly far, but you won't finish. The only way to make it work is to use Cygwin. For those of you who don't like it (like me), there is a decent alternative called Babun. Babun is a turn-key Cygwin distribution for developers and is very easy to install and maintain.
  • Download the installer ZIP archive from the Babun homepage. (~280MB)
  • Unzip the archive to a temporary folder.
  • Change to the unzipped folder and start install.bat (this might take a while). When it's finished, you can safely delete the temp folder.
  • The Babun shell is now open; run the command: "babun update"
  • Change the default shell from zsh to bash if you prefer that by running the command: "babun shell /bin/bash".
  • Edit ~/.bashrc to activate loading of ~/.bash_aliases (scroll down a bit until you find the line "#Aliases" and un-comment the if statement).
  • Install additional Python essentials:
    pact install python-setuptools 
    pact install libxml2-devel libxslt-devel libyaml-devel
    curl -skS https://bootstrap.pypa.io/get-pip.py | python
    pip install virtualenv
    curl -skS https://raw.githubusercontent.com/mitsuhiko/pipsi/master/get-pipsi.py | python
    
This installs a bunch of Python packages and the pipsi package manager into your Babun installation. Now you're ready to actually install the Docker Compose Python package:
pip install -U docker-compose
After everything has been downloaded and installed, you can use Compose from Babun:
{ ~ }  » docker-compose --version                                                            
docker-compose 1.2.0
With the mapped directories it is easy to change to a temp folder on your Windows drive (e.g. /d/temp/) and use Compose. Make sure you have everything you need in your PATH variable (hint: the syntax is different now, e.g. just use: PATH=$PATH\:/d/path/to/docker/exe ; export PATH) and make sure to set your environment properly:
eval "$(docker-machine env)"
Now you can go ahead and just use a very simple docker-compose.yml file, like the one Arun blogged about, and you'll have a bunch of instances up and running without any further configuration or command-line hacks.
Find the complete reference to the compose file format on the official Docker Website.
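For a quick orientation, here is a minimal docker-compose.yml in the version-1 format that Compose 1.2 understands; the image and port mapping are placeholders, not Arun's exact file:
mywildfly:
  image: jboss/wildfly
  ports:
    - "8080:8080"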

Using the Docker Image Workaround
If you want to, you can try to use the (unofficial) Docker Compose image and run it as a container locally. While this seems like a solution, I couldn't get it to work on plain Windows. Any pointers and ideas are appreciated.

A Two Minute Babun Screencast
Have a look at a two-minute screencast about Babun by @tombujok.