Java EE and general Java platforms.
You'll read about Conferences, Java User Groups, Java EE, Integration, AS7, WildFly, EAP and other technologies.

Wednesday, July 29, 2015

WebLogic Server 12.1.3 on Kubernetes

10:31 Wednesday, July 29, 2015 Posted by Markus Eisele
Most of you will recall that I have a little history with Oracle WebLogic Server, going all the way back to BEA times. I don't want to comment on recent developments, features or standards support, but one thing is for sure: it is out there, and having the chance to run it containerized is something many customers will appreciate. Maybe this is the one thing that will make a real difference in our industry with the ongoing progress in the containerization field: we can actually manage heterogeneous infrastructure components on a common base, if we have the right tools at hand. And this is true for the operations side with OpenShift and of course for all the developers out there who will appreciate what Fabric8 can do for them.

License Preface
Whatever happens in this blog post only happens on my local developer laptop. And I strongly believe that with regard to Oracle technologies this is absolutely covered by the so-called OTN Free Developer License Agreement and the Oracle Binary Code License Agreement for Java SE.
I'm dead sure that a production environment needs a bunch of licenses. But I'm not a specialist. So, don't ask me. If you want to use RHEL 7, please learn about the Red Hat Subscriptions.

Ok, WebLogic - Where's Your Image?
Not there. I assume for licensing reasons. But Bruno did a great job in pushing the relevant Dockerfiles and scripts to the official Oracle GitHub account. So, the first step in running WebLogic on Kubernetes is to actually build a Docker image with it. Go:

git clone https://github.com/oracle/docker

and navigate to the OracleWebLogic folder. In fact, you can delete everything else besides this one. The first step is to download the WebLogic ZIP installer and the correct JDK to be used.
Go to the Oracle website, accept the OTN license (if you feel like it) and download the platform-independent ZIP installer (wls1213_dev_update2.zip).
Now browse to the JDK download website, do the same license thing and download the 8u51 JDK as rpm (jdk-8u51-linux-x64.rpm). Place both into the OracleWebLogic\dockerfiles\12.1.3 folder. If you're running on a Unix-like OS yourself, feel free to check back with the official documentation and use the provided scripts. This didn't work for me on Windows, so you get a step-by-step walk-through. Go and rename Dockerfile.developer to Dockerfile and delete all the other ones.

mv Dockerfile.developer Dockerfile
rm Dockerfile.*

Now you open the Dockerfile and change a couple of things. Base it on RHEL 7:

FROM rhel7 

And comment out the other base image that's in there. And because we want to run a decently patched and available Java version, we're going to change the environment variable accordingly:

ENV JAVA_RPM jdk-8u51-linux-x64.rpm

Time to build our image. Before you start, let's reuse the fabric8 Vagrant installer that I've been using for the last two blog posts already. So, bring your Vagrant instance with OpenShift up first. Now it's time to build the WebLogic image. Sit back and relax, because this is going to take a couple more minutes. Do you have some housekeeping to do? This might be the right time!

docker build --force-rm=true --tag="vagrant.f8:5000/oracle/weblogic:12.1.3-dev" .

Done? Check if everything is where we expect it to be: (docker images)

vagrant.f8:5000/oracle/weblogic       12.1.3-dev          68f1ea788bba        About a minute ago   2.05 GB

Because this image only contains the server binaries, we now need to build an image which has a configured WLS domain in it. Thankfully, there are some more scripts in the samples\12c-domain folder. So, go check if the Dockerfile and all scripts in container-scripts have the correct Unix line endings. Sometimes Git can mess them up if you're on Windows. And if you're already there, make sure to change some ports according to your needs. I had to change the admin port to 8011 (do this in the Dockerfile and in the add-machine.py script). Another thing we want to do is run the instance in development mode. This allows us to just copy our Java EE app into the autodeploy folder and have it deployed when the server starts. You can do this by changing the attribute in the following line from prod to dev:

setOption('ServerStartMode','dev')
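
As for the line-ending note above, a quick way to normalize the domain files is something like this (just a sketch, assuming dos2unix is available in your environment and the file layout of the sample):

# normalize Windows line endings in the domain build files (file list is illustrative)
dos2unix Dockerfile container-scripts/*.py container-scripts/*.sh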

Now, you're ready to go ahead with building the development domain image:

docker build --force-rm=true --tag="vagrant.f8:5000/myfear/weblogic:12.1.3-dev" 

And after another couple of cups of coffee, we're ready to check if this image made it into our repository (docker images):

vagrant.f8:5000/myfear/weblogic      12.1.3-dev          77a3ec07d176        9 minutes ago       2.052 GB

Before going any further, make sure to give it a shot and see if the WebLogic instance comes up.

docker run -it myfear/weblogic:12.1.3-dev
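
If you also want to peek at the admin console from outside the container, you can additionally map the admin port (a sketch, assuming port 8011 as configured above):

# port mapping assumed from the earlier Dockerfile change
docker run -it -p 8011:8011 myfear/weblogic:12.1.3-dev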

If that worked, you're ready to build your third image today, which will contain your application.

NetBeans And Fabric8 - Take WebLogic Into Greek Heaven
Start NetBeans and create a nice, simple and lean Java EE 6 project from a Maven archetype of your choice. Add all the fabric8 and docker-maven-plugin dependencies to it, like I've shown you before in the first blog post of the series. Let's tweak the properties to our needs and just name the image myfear/weblogic-test:latest. Most importantly, you have to map the container port to the Kubernetes service correctly:

<!-- Kubernetes Service Port // Mapped via the HARouter-->
<fabric8.service.port>9050</fabric8.service.port>

<!-- The exposed container port -->
<fabric8.service.containerPort>8011</fabric8.service.containerPort>

<!-- because, I'm working with the remote registry here, base it on the remote image -->
<docker.from>vagrant.f8:5000/myfear/weblogic:12.1.3-dev</docker.from>

<!-- Just cosmetics, changing the container label -->
<fabric8.label.container>weblogic</fabric8.label.container>

Don't forget to use Java EE 6 as a dependency, and change both user and deployment base in the docker-maven-plugin to:

<user>oracle:oracle:oracle</user>
<basedir>/u01/oracle/weblogic/user_projects/domains/base_domain/autodeploy/</basedir>

Time to build the third and last docker image:

mvn clean install docker:build

And if that finished correctly, we're going to deploy everything to OpenShift with the Fabric8 tooling:

mvn fabric8:json fabric8:apply

And don't forget to add the host-name mapping to your hosts file.

172.28.128.4 myfear-weblogic-test.vagrant.f8

A request to http://myfear-weblogic-test.vagrant.f8/sample shows the application after you have waited a couple of minutes (at least I had to; looks like my laptop wasn't quick enough).
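
If you prefer the command line over the browser, a quick check could look like this (assuming the hosts entry from above is in place):

curl -I http://myfear-weblogic-test.vagrant.f8/sample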


Some Further Remarks
This isn't exactly production ready. WLS knows managed servers and node managers, and there are a bunch of ports for external communication that need to be opened. This basically did nothing more than deploy a teensy application onto the AdminServer. There are a couple of whitepapers and further ideas about how to tweak the domain scripts to fit your needs. I didn't want to do that for obvious reasons. So, consider this a proof of concept.


Monday, July 27, 2015

Scaling and Load Balancing WildFly on OpenShift v3 With Fabric8

15:00 Monday, July 27, 2015 Posted by Markus Eisele
Did you enjoy the first ride with Fabric8 and OpenShift v3? There's a lot more to come. After we got the first WildFly container up and running on Kubernetes, without having to deal with all its inherent complexity, I think it is about time to start to scale and load balance WildFly.

Prerequisites
Make sure you have the complete Vagrant, Fabric8, OpenShift v3 and Kubernetes environment running. I walked you through the installation on Windows in my earlier blog post, but you can also give it a try on Google Container Engine or on OpenShift v3.

The Basics
What we did yesterday was take our local Java EE 7 application and dockerize it based on the latest jboss/wildfly:9.0.1.Final image. After that was done, we built the new myfear/wildfly-test:latest custom image and pushed it to the Docker registry running on the Vagrant image. The Fabric8 Maven plugin created the Kubernetes JSON for us and pushed it out to OpenShift. All this with the help of a nice and easy-to-use web console. This post is going to re-use the same Java EE 7 example, which you can grab from my GitHub account.

Scaling Your Java EE Application on OpenShift With Fabric
One of the biggest features of Java EE application servers is scaling. Running high load on Kubernetes doesn't exactly match how this was typically done in the past. With Kubernetes, you scale nodes and minions, replication controllers and pods according to your needs. Instead of launching new JVMs, you launch new container instances. And we have learned that Fabric8 is a very handy administration tool for Kubernetes, so we're going to scale our application a bit.
So, build the docker image of your Java EE 7 application and deploy it to Fabric8 with the following maven commands:

mvn clean install docker:build
mvn fabric8:json fabric8:apply

If that succeeded, you can access your application via http://myfear-wildfly-test.vagrant.f8/. The HelloWorld servlet shows the hostname and the pod ID.

No matter how often you hit refresh at this point, there is never going to be another pod ID in this response. Of course not: we're only running one instance so far. Let's switch to the Fabric8 console and scale up the pods. Switch to the "Apps" tab and click on the little green icon at the lower right of your application. In the overlay, change the number of pods from one to three.
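
If you prefer the command line, the same thing should also be possible with the oc client (a sketch; the replication controller name below is an assumption, check what fabric8 actually generated first):

# replace sample-web with the name reported by 'oc get rc'
oc get rc
oc scale rc sample-web --replicas=3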


After a while, the change is reflected in your console and the pods go from downloading to green in a couple of seconds.

Let's go back to our web-interface and hit refresh a couple of times. Nothing changes? What happened? What is wrong? Let me walk you through the architecture a little:

Overall Architecture
Did yesterday's blog post leave you wondering how all the parts work together? Here's a little better overview for you. Spoiler alert: this is overly simplifying the OpenShift architecture. Please dig into the details on your own. I just want to give you a very focused view on scaling and load balancing with Fabric8 and OpenShift.

Everything relies on the OpenShift routing and management of the individual pods. Ports are exposed by containers and mapped through services. And this goes back to back from client to the running instance. The central component which does the routing is obviously the HAProxy. It is a normal pod with one little exception: it has a public IP address. Let's see what this thing does on OpenShift and how it is configured.

HAProxy As Standard Router On OpenShift
The default router implementation on OpenShift is HAProxy. It uses sticky sessions based on http-keep-alive. In addition, the router plug-in provides the service name and namespace to the underlying implementation. This can be used for more advanced configuration such as implementing stick-tables that synchronize between a set of peers.
The HAProxy router exposes a web listener for the HAProxy statistics. You can view the statistics in our example by accessing http://vagrant.f8:1936/. It's a little tricky to find out the administrator password. This password and port are configured during the router installation, but they can be found by viewing the haproxy.config file on the container. All you need to do is find the pod, log in to it, find the configuration file and read the password. In my case it was "6AtZV43YUk".

oc get pods
oc exec -it -p <POD_ID> bash
less haproxy.config
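
Inside the router pod, something like this should point you at the credentials (a sketch; the section name in the generated config may differ):

# the "listen stats" section is an assumption about the config layout
grep -A3 "listen stats" haproxy.config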

Now that we found out about this, things got clearer. Once we have an open connection to one of our instances, it is not going to be released again in the standard configuration. But we can check that the routes are in place by looking at the statistics.


And if you really want to see that it actually does work, you need to trick out the stickiness with a little curl magic. If you have msysGit installed on Windows, you can run the little batch script in my repository. It curls a REST endpoint which prints the pod ID serving the request:
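
If you don't have the script at hand, a rough bash equivalent looks like this (a sketch; the REST path is only an example, use the resource from the sample project):

# endpoint path is illustrative
for i in $(seq 1 20); do curl -s http://myfear-wildfly-test.vagrant.f8/rest/application/resource; echo; done

Either way, the output looks something like this: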

{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-4oxjj"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-pku0c"}
{"name":"myfear","environment":"sample-web-4oxjj"}
{"name":"myfear","environment":"sample-web-jruh5"}
{"name":"myfear","environment":"sample-web-pku0c"}

The first five requests always return the same pod ID until the new pods come up and HAProxy starts to dispatch the requests round-robin. If you want to influence this behavior, you can: just read more about administration of the router in the OpenShift Administration documentation. And here is a complete reference for the "oc" command-line interface. If you need some ideas on how to use the oc client to find out about different objects and types, there is a complete set of batch scripts in the fabric8/bin folder on GitHub.

It's about time to dive deeper into the developer tooling of Fabric8. Stay curious for more details in the next blog posts.

Saturday, July 25, 2015

Running WildFly on Kubernetes. On Windows. Fabric8!

19:32 Saturday, July 25, 2015 Posted by Markus Eisele
Have you ever dreamed about running WildFly on OpenShift and leveraging the latest Kubernetes features, on Windows? Sounds like blasphemy: everything about those technologies is screaming Go and Linux. Windows doesn't seem to be the right fit. But I know that there are many developers out there who are stuck on Windows. Corporate laptops, easy management and whatever reasons the different employers come up with. The good news is, there is a small and brave group of people who won't let those Windows users down. And I have to admit that running a Windows operating system while working for Red Hat is a challenge.
We're a Linux company and an open source company and everything Windows simply feels wrong.
As my fellow colleague Grant stated in a blog-post a couple of weeks ago:
"That being said, I have decided to use Windows as my primary operating system in order to ensure that OpenShift has a great developer experience for Windows users. "
So, I tried to get Kubernetes and OpenShift running natively on Windows for a while; it's just not possible right now. On the other hand, I really want to get my hands on the latest developments and look into fancy stuff. But there is a solution: Vagrant and Fabric8.
And Fabric8 only because I am a Java developer. In fact, if you are a Java developer wanting to work with Kubernetes, Fabric8 really is the easiest and quickest way to get going. So, let's set up OpenShift and Fabric8 on a Windows machine.

Prerequisites
Download and install Vagrant (don't worry, it's MIT licensed). Done with that? Restart your machine (you know why: it's Windows). You will need to install an additional Vagrant plugin. Switch to a cmd line and type:

$vagrant plugin install vagrant-hostmanager-fabric8

Vagrant-hostmanager is a Vagrant 1.1+ plugin (used here in its Fabric8 flavor) that manages the /etc/hosts file on guest machines (and optionally the host). Its goal is to enable resolution of multi-machine environments deployed with a cloud provider where IP addresses are not known in advance.
The only other thing you need to have installed and ready is VirtualBox (GPL licensed!).
Go and clone the Fabric8 installer git repository and cd into the openshift/latest folder:

$ git clone https://github.com/fabric8io/fabric8-installer.git
$ cd fabric8-installer/vagrant/openshift/latest

The next steps are needed for proper routing from the host to OpenShift services which are exposed via routes. Unfortunately, for Windows no automatic routing for new services is possible.
You have to add new routes manually to %WINDIR%\System32\drivers\etc\hosts.
For your convenience, a set of routes for the default Fabric8 applications will be pre-added when you start up Vagrant.
For new services, look for the following line and add your new routes (<service-name>.vagrant.f8) to this file on a new line like this:

## vagrant-hostmanager-start id: 9a4ba3f3-f5e4-4ad4-9e80-b4045c6cf2fc
172.28.128.4  vagrant.f8 fabric8.vagrant.f8 jenkins.vagrant.f8 .....
172.28.128.4 myfear-wildfly-test.vagrant.f8
## vagrant-hostmanager-end

Now startup the Vagrant VM:

vagrant up

If you want to tweak the settings for the VM, you have to edit the Vagrantfile. The startup including downloads takes a couple of minutes (good time for #coffee++). While you're waiting, jump ahead and install the OpenShift client for Windows. Download the one for your OS from the origin project on GitHub. The Windows build is about 55 MB. Next, unpack it into a folder of your choice. Make sure to add this folder to your PATH environment variable.

set PATH=%PATH%;"D:\Program Files (x86)\openshift-origin-v1.0.3"

While you're at it, add some more environment variables:

set KUBERNETES_DOMAIN=vagrant.f8
set DOCKER_HOST=tcp://vagrant.f8:2375

This assumes you haven't changed the default routes added to your hosts file by the Vagrant start.
The first one allows your OpenShift CLI to use the right Kubernetes domain and the second one allows you to re-use the same Docker daemon which is already running inside your Fabric8 Vagrant image. Please make sure NOT to define any of the other Docker env vars like DOCKER_CERT_PATH or DOCKER_TLS_VERIFY!
It is probably a good idea to add this into your system environment variables or put it into a batch-script.
Note: Make sure to use the Docker 1.6 client Windows (exe download). The latest 1.7 version doesn't work yet.
After the vagrant box is created and docker images are downloaded, the fabric8 console should appear at http://fabric8.vagrant.f8/.
Your browser will complain about an insecure connection, because the certificate is self-signed. You know how to accept this, don't you?
Enter admin and admin as username and password. Now you see all the already installed Fabric8 apps. Learn more about apps and how to build them in the documentation.

Now, let's see if we can use the Docker daemon in the Vagrant image:

docker ps

and see the full list of running containers (just an excerpt here):

CONTAINER ID        IMAGE                                            COMMAND                CREATED              STATUS                  PORTS                                                             NAMES
d97e438222d1        docker.io/fabric8/kibana4:4.1.0                  "/run.sh"              7 seconds ago        Up Less than a second                                                                      k8s_kibana.7abf1ad4_kibana-4gvv6_default_500af2d1-32b8-11e5-8481-080027bdffff_4de5764e                                   
eaf419a177d6        fabric8/fluentd-kubernetes:1.0                   "fluentd"              About a minute ago   Up About a minute                                                                          k8s_fluentd-elasticsearch.920b947c_fluentd-elasticsearch-172.28.128.4_default_9957562ee416ea2e083f45adb9b6edd0_676633bf  
c4111cea4474        openshift/origin-docker-registry:v1.0.3          "/bin/sh -c 'REGISTR   3 minutes ago        Up 3 minutes                                                                                                              

One last thing to check: log in to OpenShift via the command-line tool:

oc login https://172.28.128.4:8443

Use admin and admin again as username and password. Now check which services are already running:

oc get services
Now you're ready for the next steps. Let's spin up a WildFly instance on OpenShift with Fabric8.

Dockerizing Your Java EE Application 
Ok, how does that work? OpenShift is built on top of Docker and Kubernetes. And Fabric8 gives the normal developer a reasonable abstraction on top of all those infrastructure issues. Where do we start? Let's start with a simple Java EE 7 project. It's a really simple one in this case: an HTML page and a HelloWorld servlet. The first step is to dockerize it. There is a wonderful plugin out there, which is part of the Fabric8 ecosystem of tools, named docker-maven-plugin. Simply add this to your pom.xml and define what the image should look like. The magic is in the plugin configuration:

<configuration>
    <images>
        <image>
            <name>myfear/wildfly-test:latest</name>
            <build>
                <from>jboss/wildfly:9.0.1.Final</from>
                <maintainer>markus at jboss.org</maintainer>
                <assembly>
                    <inline>
                        <dependencySets>
                            <dependencySet>
                                <includes>
                                    <include>net.eisele:sample-web</include>
                                </includes>
                                <outputFileNameMapping>sample.war</outputFileNameMapping>
                            </dependencySet>
                        </dependencySets>
                    </inline>
                    <user>jboss:jboss:jboss</user>
                    <basedir>/opt/jboss/wildfly/standalone/deployments</basedir>
                </assembly>
            </build>
        </image>
    </images>
</configuration>
Running a

mvn clean install docker:build
builds your application and creates your Docker image. Plus, this image is going to be uploaded to the Docker registry running on your OpenShift instance. This is configured with two additional Maven properties:

 <docker.host>tcp://vagrant.f8:2375</docker.host>
 <docker.registry>vagrant.f8:5000</docker.registry>
There's one more property to look after:

<docker.assemblyDescriptorRef>artifact</docker.assemblyDescriptorRef>
It defines which parts of the build will be copied over to the Docker image.
The resulting Dockerfile looks like this:

FROM jboss/wildfly:9.0.1.Final
MAINTAINER markus at jboss.org
COPY maven /opt/jboss/wildfly/standalone/deployments/
USER root
RUN ["chown", "-R", "jboss:jboss","/opt/jboss/wildfly/standalone/deployments/"]
USER jboss
and a maven folder contains your application as a war file. From this point on, you could also take the Docker image and push it to the official Docker Hub or another private repository. There's no special magic in it. Find all the configuration options in the extensive docker-maven-plugin manual.

Fabric8 - Docker and Kubernetes Are Usable Now
Fabric8’s aim is to help any developer, team and organisation that wants to work with containers. Nobody really wants to use a command line to push and start containers. Plus, there's a lot more to it: keeping them running, moving them around on hosts, monitoring, and so on. Don't even think about microservices right now, but those need even more: more fine-grained control, more teams, more CI/CD and auto-discovery features. And all this is Fabric8. It can create a complete CI/CD pipeline with approvals and code quality assurance. If you want to see a complete example, have a look at what James Rawlings wrote up a couple of days ago. So, what does that mean for my Java EE project and how do I deploy it to OpenShift now? Read up a little about how to run an application on OpenShift in the nice overview post by Arun Gupta. It also includes a pointer to the OpenShift life-cycle. You basically need to create an OpenShift project and include a JSON file which describes your application, including all the links to the Docker images. Doable. For sure. But Fabric8 can do better. There is another Maven plugin available which takes all this burden off you and just lets you deploy your application. Exactly like I, as a Java EE developer, expected it to be. Let's add the plugin to your project and configure it a bit:

<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric8-maven-plugin</artifactId>
    <version>${fabric8.version}</version>
    <executions>
        <execution>
            <id>json</id>
            <phase>generate-resources</phase>
            <goals>
                <goal>json</goal>
            </goals>
        </execution>
        <execution>
            <id>attach</id>
            <phase>package</phase>
            <goals>
                <goal>attach</goal>
            </goals>
        </execution>
    </executions>
</plugin>
This does little more than bind the plugin to the different execution phases. You can skip this for this example, because we're going to execute it manually anyway. The additional configuration happens in Maven properties again:

<!-- Defining the Service Name for Fabric8 -->
<fabric8.service.name>myfear-wildfly-test</fabric8.service.name>
<!-- Defining the internal service port -->
<fabric8.service.port>9101</fabric8.service.port>
<!-- the exposed container port -->
<fabric8.service.containerPort>8080</fabric8.service.containerPort>
<!-- the component label, as shown in the console -->
<fabric8.label.component>${project.artifactId}</fabric8.label.component>
<!-- the container label -->
<fabric8.label.container>wildfly</fabric8.label.container>
<!-- the application group label -->
<fabric8.label.group>myfears</fabric8.label.group>
<!-- the domain we're working in -->
<fabric8.domain>vagrant.f8</fabric8.domain>
<!-- We don't want to upload images, but want OpenShift to pull them automatically -->
 <fabric8.imagePullPolicy>IfNotPresent</fabric8.imagePullPolicy>
Ok, that's about it. Most of it is naming, labels and configuration which are a one-time thing to figure out. All we really need from here on is the Kubernetes JSON file. So, type:

mvn fabric8:json fabric8:apply
What didn't work locally with my installation was the automatic update of my hosts file with the new routing. So, you might need to add the domain-name mapping manually:

172.28.128.4 myfear-wildfly-test.vagrant.f8
After a couple of seconds, the new pod is created and you can access your application via http://myfear-wildfly-test.vagrant.f8/. This runs your application on OpenShift.


Try docker ps again and see if you can spot your container. In my case:

c329f2e0f63b        myfear/wildfly-test:latest
If you struggle with something and your app doesn't come up as expected, there are some ways to get closer to the problem. The first is to run the image locally against your Docker daemon. There's a handy command, mvn fabric8:create-env, to figure out the env vars for you so that you can run Docker images outside of Kubernetes as if they were inside (in terms of service discovery and environment variables defined in the Kubernetes JSON). If that's not an option, you can also get a bash from your running container:

docker exec -i -t c329f2e0f63b bash
Just replace the container ID with the real one from the ps command. That's about it. Now you can totally start over. I'm going to walk you through the consoles a bit.

Access The OpenShift Console
First things first. You can spot your application on the OpenShift console. http://vagrant.f8:8443 brings you to the OpenShift console. Select the "default" space and see the Docker registry, some Elasticsearch instances, some others and finally your instance:


You can also browse the individual pods and services. More about this maybe in a later blog post.

The Fabric8 Console
The one magical thing we're really interested in is the Fabric8 console. http://fabric8.vagrant.f8/ brings you there, and the "Kubernetes" tab displays all the running apps for you. This also includes your own application:

As you can see in this screenshot, I already scaled the app from one (default) to two pods. Clicking on the little pod icon on the far right (not in this screenshot) lets you adjust the number of pods running. If you click on the "diagram" view, you see a complete overview of your infrastructure:
There's a lot more to explore, and I am going to show you more in subsequent blog posts. Now that we have everything up and running, this will be even more entertaining. Let me know what you want to read about in particular.

Monday, July 20, 2015

Monitoring DevOps Style With WildFly 9 And Jolokia

14:30 Monday, July 20, 2015 Posted by Markus Eisele
DevOps is among the hottest topics these days. And the wide range of topics around it makes it hard to actually find a complete description or something that covers everything at a decent granularity. One thing is for sure: one of the most important parts is to deliver the correct metrics and information for monitoring the application.

Java EE and JMX
The standard way of monitoring Java EE servers is JMX. This is possible with tools like JConsole, VisualVM or the Oracle Mission Control suite. There are a bunch of advantages to this approach, and most operations teams actually used it a lot in the past. But it doesn't exactly work the DevOps way. It is separate tooling, and DevOps teams don't have a good way to actually script this without having all the tooling and operational systems (Nagios, etc.) installed. Today it feels a lot more natural, and is easier to use, to have HTTP endpoints which expose configuration and runtime information.

Jolokia - JMX To HTTP With JSON
A very convenient way to do this for JMX is to use Jolokia. Jolokia is a JMX-HTTP bridge providing an alternative to JSR-160 connectors. It is an agent-based approach with support for many platforms. In addition to basic JMX operations it enhances JMX remoting with unique features like bulk requests and fine-grained security policies. It comes bundled with a lot of JBoss projects lately (e.g. the WildFly-Camel subsystem) and can easily be used in your own applications.

A Simple Java EE 7 App Equipped With Jolokia
Just create a simple Java EE 7 project (maybe with Adam Bien's Maven archetype) and add one dependency to it:
<dependency>
     <groupId>org.jolokia</groupId>
     <artifactId>jolokia-core</artifactId>
     <version>1.3.1</version>
 </dependency>
The next step is to configure the Jolokia AgentServlet in your web.xml and map it to a pattern which suits your needs:
  <servlet>
        <servlet-name>jolokia-agent</servlet-name>
        <servlet-class>org.jolokia.http.AgentServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>jolokia-agent</servlet-name>
        <url-pattern>/metrics/*</url-pattern>
    </servlet-mapping>
Build your application as usual and access the relevant metrics as you need them. The complete Jolokia reference explains the different operations and types.

Deploy Your Application To WildFly 9
Download and unzip WildFly 9 to a folder of your choice, start it up with bin/standalone.sh (or .bat on Windows) and deploy your application.
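
A minimal sketch of those steps (assuming a 9.0.1.Final zip and a war named javaee-devops.war, which matches the context root used in the URLs below):

# archive and war names are assumptions
unzip wildfly-9.0.1.Final.zip
cd wildfly-9.0.1.Final
bin/standalone.sh &
cp /path/to/javaee-devops.war standalone/deployments/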

Example Metrics
While you can access every JMX MBean that is defined in the server, here is a list of metrics that might help you out of the box.

Heap memory usage:
http://localhost:8080/javaee-devops/metrics/read/java.lang:type=Memory/HeapMemoryUsage
{
    "request": {
        "mbean": "java.lang:type=Memory",
        "attribute": "HeapMemoryUsage",
        "type": "read"
    },
    "value": {
        "init": 67108864,
        "committed": 241696768,
        "max": 477626368,
        "used": 141716336
    },
    "timestamp": 1437392335,
    "status": 200
}
Overview over your server environment:
http://localhost:8080/javaee-devops/metrics/read/jboss.as:core-service=server-environment

You can not only read JMX attributes but also execute operations, like accessing the latest 10 lines of the server.log file:
http://localhost:8080/javaee-devops/metrics/exec/jboss.as.expr:subsystem=logging/readLogFile/server.log/UTF-8/10/0/true


Securing The Endpoint
As you would have expected, the AgentServlet is accessible just like your application is. In order to prevent this, you will have to secure it. The good news is that this is possible with basic authentication and the application realm in WildFly. The first step is to add a user to the application realm. This can be done with the bin/add-user.sh|bat script. Make sure to add the role "SuperUser". Now add the following to your web.xml:
    <security-constraint>
        <display-name>Metrics Pages</display-name>
        <web-resource-collection>
            <web-resource-name>Protected Metrics Site</web-resource-name>
            <description>Protected Metrics Site</description>
            <url-pattern>/metrics/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <description/>
            <role-name>SuperUser</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>NONE</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
     <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>ApplicationRealm</realm-name>
    </login-config>
    <security-role> 
        <role-name>SuperUser</role-name> 
    </security-role> 
One last thing to do here is to add a file to WEB-INF/ called jboss-web.xml. This will just contain three lines:
<jboss-web>
    <security-domain>other</security-domain>
</jboss-web>
Whenever you try to access the metrics endpoint the server now challenges you with a basic authentication request.
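
A quick way to verify this from the command line (a sketch, assuming you added an application-realm user called monitor with password secret and the SuperUser role):

# user name and password are assumptions, use whatever you created with add-user
curl -u monitor:secret http://localhost:8080/javaee-devops/metrics/read/java.lang:type=Memory/HeapMemoryUsage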

Looking For More?
This is just a simple example for now, based on the standard JMX metrics which WildFly exposes. You can of course register your own MBeans or expand this by aggregating the individual calls into a single one. Another option is to use hawt.io as a ready-to-use, extensible UI which already provides all kinds of metrics for WildFly and many other subsystems. But this is a very straightforward way. The next major release of Jolokia might feature some more things to make the DevOps ride a lot more convenient.

Friday, July 17, 2015

Using JPA And CDI Beans With Camel on WildFly

13:30 Friday, July 17, 2015 Posted by Markus Eisele
I didn't really plan for it, but with a conference-free month I had the chance to dig around a little more and show you even more of the Camel on WildFly magic that the WildFly-Camel subsystem provides.

The Business Background
The demo is derived from one on JBoss Demo-Central by Christina Lin. She demonstrates the use of the File and JDBC connectors in Camel and also added the use of the Split pattern and an exception handling method. The scenario of the demo is to mimic the transaction process between bank accounts. The input is a batch XML file which contains a bunch of transactions. Those can either be cash deposits, cash withdrawals or transfer information for bank accounts. Depending on the type of transaction, they are split up, and each transaction retrieves relevant information from a database, does the transaction and calculates the transaction fee before placing the result back into the database. You can find the full original source code on GitHub.

Why Did I Touch It
Some reasons: I actually don't want to think about new business cases, and I don't just want to show you something in nitty-gritty detail on a technical level. So, I thought it a quick win to just take the scenario from Christina. Second of all, she is doing everything in Fuse, based on Karaf, and using the XML DSL for the route definitions. I am just a poor Java guy, and I learned to hate XML. Plus, she is using a couple of components which I wouldn't use in a Java EE context.

Prerequisites - Getting The App Deployed
Before you begin playing around with the demo, please make sure to have WildFly 8.2.0.Final installed together with the WildFly-Camel subsystem patch 2.2.0. Now feel free to fork the demo repository on my GitHub account into a directory of your choice. It is nothing more than a Maven Java EE 7 project with some additional dependencies. Just do a
mvn clean install
and deploy the resulting target/javaee-bankdemo-1.0-SNAPSHOT.war to your WildFly server.
There isn't any UI in this example, so you basically have to watch the logfile and copy an XML file around. The src\main\in-data folder contains a bank.xml, which you need to copy over to your standalone\data\inbox folder. The second this is done, Camel starts its magic.
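
On a Unix-like shell that boils down to something like this (a sketch; adjust the paths to your checkout and WildFly installation):

# $WILDFLY_HOME is a placeholder for your WildFly 8.2.0.Final installation
cp src/main/in-data/bank.xml $WILDFLY_HOME/standalone/data/inbox/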

The CustomerStatus
Everything begins with the standard Java EE app. The entity CustomerStatus holds account information (ID, VipStatus, Balance). It also has some NamedQueries on it. Doesn't look Camel-specific at all. The in-memory H2 database, which WildFly uses as the default db, gets pre-populated with the help of three scripts which are configured as schema-generation properties in the persistence.xml. I'm working with two customers here, named A01 and A02.

Camel And Java EE
The Camel bootstrapping is quite simple in this case. The BankRouteBuilder has a @ContextName("cdi-context") annotation and is itself an application scoped startup-bean which contains all the needed routes for the little demo. Feel free to re-read and learn about other potential options to deploy / configure routes. The hawt.io console (http://localhost:8080/hawtio/) displays all of them nicely. The application has five routes.
ReadFile is the first one, which basically only reads the XML file and pushes the individual entries (split by an XPath expression) to the processTransaction route.
This one decides whether it is a "Cash" transaction or a "Transfer" transaction, ending in "direct:processCash" or "direct:doTransfer" respectively. I left all of the original XML route definitions in the BankRouteBuilder as comments. They might be helpful if you search for a particular solution.

Differences To The Fuse Demo
Christina used the Camel JDBC component a lot. It does all the heavy lifting and even the initial database setup. This is nothing we want to do anywhere, but especially not in a Java EE environment where we have all the JPA magic ready to use. In fact, there is a Camel JPA component, but it is very limited and doesn't really support NamedQueries or the like.
A very powerful way to work around that is to use the Camel Bean component with all the bean binding, and the CDI component, which is already integrated. All the database access is managed via the CustomerStatusService, which is basically a @Named bean that gets an EntityManager injected and knows how to load CustomerStatus entities. It gets injected into the RouteBuilder by simply referencing it in the bean endpoint:
.to("bean:customerService?method=loadCustomer")
I agree that there is a lot of magic happening behind the scenes, and the fact that the CustomerStatusService depends on Camel classes is another thing that I dislike. But this could easily be worked around by just @Inject-ing the service into the route and referencing it there. I decided not to do this, because I wanted to keep the initial flow of Christina's demo alive. She is working with the Exchanges a lot and relies on them. So, I stayed closer to her example.

A Word On Transactions
I am actually using an extended persistence context in this example and marked the updateCustomer method in the service as @Transactional. This is a very simple way of merging complete and updated CustomerStatus entities back into the database. The whole doTransfer route isn't transactional right now. Even if the second customer isn't in the system, the amount would still be withdrawn from the first customer's account. I want to cover this at a later stage in a separate blog post.

That's it for now. Enjoy your weekend and playing with Camel and the WildFly Camel subsystem. Happy to receive your ideas or questions via @myfear or as a comment on the blog-post.

Tuesday, July 14, 2015

Sending JMS Messages From WildFly 8 To WebLogic 12 with Camel

08:43 Tuesday, July 14, 2015 Posted by Markus Eisele
System integration is a nice challenge, especially when you're looking for communication standards and reliable solutions. In today's microservices world, everybody talks about REST services and HTTP-based protocols. As a matter of fact, this will never be enough for most enterprise projects, which typically tend to have a much more complex set of requirements. A reasonable solution is a Java Message Service based integration. And while we're not looking at centralized infrastructures and ESBs anymore, we want point-to-point integration for defined services. Let's see if we can make this work and send messages between JBoss WildFly and Oracle WebLogic Server.

Business Case - From Java EE To Microservices
But I want to step back a bit first: why would someone do this? I think one of the main motivations behind such a scenario is a slow migration path. Coming all the way down from monolithic, single-platform applications, we want to be flexible enough to carve individual services out of those giant installations and make them available as a service. Assuming that this is even possible and the legacy application has a decent design. Or we want to advance individual services, let's say from a technical perspective. In this particular example, we can't wait to get Java EE 7 features into our application, and WebLogic is still mostly stuck on EE 6. We could do this with REST services or even web services, but we might want more. And this is where the JMS specification comes in.

Oracle JMS Client Libraries in WildFly
In order to send messages between two different servers, you need to have the individual client libraries integrated on the sending end. For WebLogic this is the WebLogic JMS Thin Client (wljmsclient.jar), which provides Java EE and WebLogic JMS functionality using a much smaller client footprint than a WebLogic install or full client, and a somewhat smaller client footprint than a Thin T3 client. As a matter of fact, it contains Java EE JMS APIs and implementations which will directly collide with the ones provided by WildFly. To use them, we'll have to package them as a module and configure a JMS bridge in HornetQ to use exactly this. The first thing is to add the new module. Change folder to wildfly-8.2.0.Final\modules\system\layers\base and create a new folder structure custom\oracle\weblogic\main underneath it. Copy the wlthint3client.jar from the %MW_HOME%\server\lib folder here. Now you have to add a module descriptor file, module.xml:
<module xmlns="urn:jboss:module:2.0" name="custom.oracle.weblogic">
    <resources>
        <resource-root path="wlthint3client.jar">
            <filter>
                <exclude-set>
                    <path name="javax.ejb"/>
                    <path name="javax.ejb.spi"/>
                    <path name="javax.transaction"/>
                    <path name="javax.jms"/>
                    <path name="javax.xml"/>
                    <path name="javax.xml.stream"/>
                </exclude-set>
            </filter>
        </resource-root>
    </resources>

    <dependencies>
        <module name="javax.api"/>
        <module name="sun.jdk" export="false" services="import">
            <exports>
                <include-set>
                    <path name="sun/security/acl"/>
                    <path name="META-INF/services"/>
                </include-set>
            </exports>
        </module>
        <module name="com.sun.xml.bind" />
        <module name="org.omg.api"/>
        <module name="javax.ejb.api" export="false"   />
        <module name="javax.transaction.api"  export="false" />
        <module name="javax.jms.api"  export="false" />
        <module name="javax.xml.stream.api" export="false"  />
        <module name="org.picketbox" optional="true"/>
        <module name="javax.servlet.api" optional="true"/>
        <module name="org.jboss.logging" optional="true"/>
        <module name="org.jboss.as.web" optional="true"/>
        <module name="org.jboss.as.ejb3" optional="true"/>
        <module name="org.hornetq" />
    </dependencies>
</module>
This file defines all the required resources and dependencies together with the relevant excludes. Once this is done, we finally need the message bridge.

The HornetQ JMS Message Bridge
The function of a JMS bridge is to consume messages from a source JMS destination and send them to a target JMS destination. Typically either the source or the target destination is on a different server. The bridge can also be used to bridge messages from other, non-HornetQ JMS servers, as long as they are JMS 1.1 compliant. Open the standalone-full.xml and add the following configuration to the messaging subsystem:
<jms-bridge name="wls-bridge" module="custom.oracle.weblogic">
    <source>
        <connection-factory name="java:/ConnectionFactory"/>
        <destination name="java:/jms/sourceQ"/>
    </source>
    <target>
        <connection-factory name="jms/WFMessagesCF"/>
        <destination name="jms/WFMessages"/>
        <context>
            <property key="java.naming.factory.initial"
                      value="weblogic.jndi.WLInitialContextFactory"/>
            <property key="java.naming.provider.url"
                      value="t3://127.0.0.1:7001"/>
        </context>
    </target>
    <quality-of-service>AT_MOST_ONCE</quality-of-service>
    <failure-retry-interval>2000</failure-retry-interval>
    <max-retries>10</max-retries>
    <max-batch-size>500</max-batch-size>
    <max-batch-time>500</max-batch-time>
    <add-messageID-in-header>true</add-messageID-in-header>
</jms-bridge>
As you can see, it references the module directly and has a source and a target definition. The source is the WildFly local message queue which is defined in the messaging subsystem:
   <jms-queue name="sourceQ">
       <entry name="java:/jms/sourceQ"/>
   </jms-queue>
And the target is the remote queue plus connection factory, which are defined in WebLogic Server. I assume that you know how to do that; if not, please refer to this documentation. That's pretty much it. Now we need to send a message to our local queue, and it is going to be sent via the bridge over to the WebLogic queue.

Testing The Bridge - With Camel
Deploy a message-driven bean to WebLogic (yes, you'll have to package it as an EJB JAR into an EAR and all of this). This particular sample just dumps the message text out to the logger.
@MessageDriven(mappedName = "jms/WFMessages", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})

public class LogMessageBean implements MessageListener {
    private final static Logger LOGGER = Logger.getLogger(LogMessageBean.class.getName());

    public LogMessageBean() {
    }

    @Override
    public void onMessage(Message message) {
        TextMessage text = (TextMessage) message;
        try {
            LOGGER.log(Level.INFO, text.getText());
        } catch (JMSException jmxe) {
            LOGGER.log(Level.SEVERE, jmxe.getMessage());
        }
    }
}
Now we need a producer on the WildFly server. To do this, I am actually using the WildFly-Camel JMS integration.
@Startup
@ApplicationScoped
@ContextName("jms-camel-context")
public class JMSRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Initial Context Lookup
        Context ic = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
        // Create the JMS Component
        JmsComponent component = new JmsComponent();
        component.setConnectionFactory(cf);
        getContext().addComponent("jms", component);
        // Build A JSON Greeting
        JsonObject text = Json.createObjectBuilder()
                 .add("Greeting", "From WildFly 8").build();
        // Send a Message from timer to Queue
        from("timer://sendJMSMessage?fixedRate=true&period=10000")
                .transform(constant(text.toString()))
                .to("jms:queue:sourceQ")
                .log("JMS Message sent");
    }
}
That's the whole magic. A timer sends a JSON Text message to the local queue which is bridged over to WebLogic.


Some More Hints
If you want to test the WebLogic Queue without the bridge, you will have to include the wljmsclient into your project. As this isn't available in a Maven repository (AFAIK), you can simply install it locally:
mvn install:install-file -Dfile=%MW_HOME%/wlserver/server/lib/wlthint3client.jar -DgeneratePom=true -DgroupId=custom.com.oracle -DartifactId=wlthint3client -Dversion=12.1.3 -Dpackaging=jar
Another important thing is that you will run into classloading issues on WildFly if you try to use the custom module in any other scope than the bridge. So, pay close attention that you don't use it somewhere else.
The bridge has a comparatively large failure-retry-interval and max-retries configured. This is a workaround: if the WildFly startup is too fast and the bridge tries to access the local sourceQ before the queue is actually configured, it will lead to an exception.
Find the complete source-code in my GitHub account.

Friday, July 10, 2015

Using Camel Routes In Java EE Components

15:00 Friday, July 10, 2015 Posted by Markus Eisele
I've been working with Camel for a while now and I really like its simplicity. Using it on top of Java EE has always been a little bit of a challenge, and one of the recent talks I gave about how to do this and the different methods of bootstrapping Camel in Java EE actually proposes to use the WildFly-Camel subsystem. In an ongoing series I am going to explore the different ways of doing this and provide a bunch of examples which are still missing from the talk. I'm happy to receive your feedback and requests in the comments or via @myfear on Twitter.

Getting Started With Camel On WildFly 8.2 
The Wildfly-Camel Subsystem provides Apache Camel integration with the WildFly Application Server. It allows you to add Camel Routes as part of the WildFly configuration. Routes can be deployed as part of Java EE applications. Java EE components can access the Camel Core API and various Camel Component APIs. Your Enterprise Integration Solution can be architected as a combination of Java EE and Camel functionality.
Remark: Latest WildFly 9 is expected to be supported by the 3.x release of WildFly-Camel.

Getting Ready 
Download and unzip WildFly 8.2.0.Final to a folder of your choice. Download and unzip the wildfly-camel patch (2.3.0) to the wildfly folder.  Start WildFly with
bin/standalone[.bat|.sh] -c standalone-camel.xml
One of the fastest ways to get up and running is with Docker and the WildFly Camel image. This image comes bundled with WildFly 8.1 and the Camel subsystem already installed.
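Running that image could look roughly like this (a sketch; the image name and tag are assumptions, double-check them in the WildFly-Camel documentation):

# image name assumed here; verify against the WildFly-Camel docs
docker run -it -p 8080:8080 wildflyext/wildfly-camel
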
Defining And Using A Camel Context
The CamelContext represents a single Camel routing rulebase. You use the CamelContext in a similar way to the Spring ApplicationContext. It contains all the routes for your application. You can have as many CamelContexts as necessary, as long as they have different names. WildFly-Camel lets you define them a) in the standalone-camel.xml and domain.xml as part of the subsystem definition itself, b) deployed in a supported deployment artifact which contains a file with the -camel-context.xml suffix, or c) provided together with its routes via a RouteBuilder and the CDI integration.
A defined CamelContext can be consumed in two different ways: a) @Injected via Camel-CDI or b) accessed from the JNDI tree.

The Example Context And Route
For the following examples I use a context with an associated route which is provided via CDI and a RouteBuilder. It is an application-scoped bean which is automatically started with the application. The @ContextName annotation gives a specific name to the CamelContext.
@ApplicationScoped
@Startup
@ContextName("cdi-context")
public class HelloRouteBuilder extends RouteBuilder {

    @Inject
    HelloBean helloBean;

    @Override
    public void configure() throws Exception {
        from("direct:start").transform(body().prepend(helloBean.sayHello()).append(" user."));
    }
}
The route itself isn't exactly challenging. It takes a message body from direct:start, prepends the output of the CDI bean method "sayHello" and appends the string " user." to it. For reference, the complete code can be found on my GitHub account. So, all we need to find out next is how to use this route in the various Java EE component specifications.

Using Camel From CDI
Camel has supported CDI since version 2.10. Before, and outside the subsystem, it needed to be bootstrapped. This is no longer necessary, and you can just use a deployed or defined CamelContext in a @Named CDI bean by simply @Injecting it by name:
@Inject
@ContextName("cdi-context")
private CamelContext context;

Using Camel From JSF, JAX-RS and EJBs
With the knowledge about how to use a CamelContext in CDI, you would assume that it is easy to just do the same from JSF and the like. This is not true. You actually can't inject it into either ManagedBeans or even CDI beans which are bound to a JSF component. Plus, it's not working in EJBs. I haven't looked into it in detail, but I assume it has something to do with scopes. A reasonable workaround, and in fact a better application design, is to put the complete Camel logic into a separate CDI bean and just inject this.
@Named
public class HelloCamel {

    @Inject
    @ContextName("cdi-context")
    private CamelContext context;

    private final static Logger LOGGER = Logger.getLogger(HelloCamel.class.getName());

    public String doSomeWorkFor(String name) {

        ProducerTemplate producer = context.createProducerTemplate();
        String result = producer.requestBody("direct:start", name, String.class);
        LOGGER.log(Level.INFO, result);
        return result;
    }
}
The ProducerTemplate interface allows you to send message exchanges to endpoints in a variety of different ways to make it easy to work with Camel Endpoint instances from Java code. In this particular case, it just starts the route and puts a String into the body which represents the name of the component I'm using it from.
The CDI bean, which acts as a backing bean for the component, just uses it:
@Inject
HelloCamel helloCamel;

public String getName() {
    return helloCamel.doSomeWorkFor("JSF");
}
The returned String is "Hello JSF user.", which is also written to the WildFly server log. The same approach works best for all the other Java EE components.

Using Camel From EJBs
If you're using EJBs as your main application component model, it is also very reasonable to just use the JNDI approach:
CamelContext camelctx =
        (CamelContext) inicxt.lookup("java:jboss/camel/context/cdi-context");

Hawtio - A Camel Console
Another hidden gem in the subsystem is the bundling of the Hawtio console. It is a modular web console for managing your Java stuff and has an Apache Camel plugin which visualizes your contexts and routes. Remember that it is automatically configured for security, and you need to add a management user before you're able to access it.


Further Reading & Help
Talk to the Developers on Freenode
WildFly-Camel Subystem Documentation
WildFly Camel On GitHub
Apache Camel Website

Monday, June 8, 2015

Docker Compose on Windows with Python And Babun

14:30 Monday, June 8, 2015 Posted by Markus Eisele
Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command which does everything that needs to be done to get it running. It is the only tool in the Docker tool-chain which doesn't have a native binary for Windows in place right now, and getting it up and running on Windows requires quite some work.

Using Babun and Python
The official Compose documentation implies that there is a Python-only way on unsupported platforms. As a matter of fact, this is not totally true. Even the Python package relies on POSIX-based commands which aren't available on Windows. If you try to go down this road you will get surprisingly far, but will not finish. The only way to make it work is to use Cygwin. For those of you who don't like it (like I don't), there is a decent alternative called Babun. Babun is a turn-key Cygwin distribution for developers and is very easy to install and maintain.
  • Download the installer ZIP archive from the Babun homepage. (~280MB)
  • Unzip the archive to a temporary folder.
  • Change to the unzipped folder and start install.bat (this might take a while). When it's finished, you can safely delete the temp folder.
  • The Babun shell is now open; run the command: "babun update"
  • Change the default shell from zsh to bash if you prefer that by running the command: "babun shell /bin/bash".
  • Edit ~/.bashrc to activate loading of ~/.bash_aliases (scroll down a bit until you find the line "#Aliases" and un-comment the if statement).
  • Install additional Python essentials:
    pact install python-setuptools 
    pact install libxml2-devel libxslt-devel libyaml-devel
    curl -skS https://bootstrap.pypa.io/get-pip.py | python
    pip install virtualenv
    curl -skS https://raw.githubusercontent.com/mitsuhiko/pipsi/master/get-pipsi.py | python
    
This installs a bunch of Python packages and the pipsi package manager into your Babun installation. Now you're ready to actually install the Docker Compose Python package:
pip install -U docker-compose
After everything got downloaded and installed, you can now use compose from Babun:
{ ~ }  » docker-compose --version                                                            
docker-compose 1.2.0
With the mapped directories it is easy to change to a temp folder on your Windows drive (e.g. /d/temp/) and use Compose. Make sure you have everything you need in your PATH variable (hint: that is different now, e.g. just use:  PATH=$PATH\:/d/path/to/docker/exe ; export PATH ) and make sure to set your environment properly:
eval "$(docker-machine env)"
Now you can go ahead and use a very simple docker-compose.yml file, like the one Arun blogged about, and you'll have a bunch of instances up and running without any further configuration or command-line hacks.
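Just to give you an idea, a minimal docker-compose.yml in the version 1 format of that time could look like the sketch below; the images and the port mapping are only examples and not taken from Arun's post:

mywildfly:
  image: jboss/wildfly
  ports:
    - "8080:8080"

mydb:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=supersecret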
Find the complete reference to the compose file format on the official Docker Website.

Using the Docker Image Workaround
If you want to, you can try to use the (unofficial) Docker Compose image and run it as a container locally. While this seems to be a solution, I couldn't get it to work on plain Windows. Any pointers and ideas are appreciated.

A Two Minute Babun Screencast
Have a look at a two-minute screencast about Babun by @tombujok.

Wednesday, June 3, 2015

NoSQL with Hibernate OGM - Part three: Building a REST application on WildFly

13:00 Wednesday, June 3, 2015 Posted by Markus Eisele
, , ,
Welcome back to our tutorial series "NoSQL with Hibernate OGM"! Thanks to Gunnar Morling (@gunnarmorling) for creating this tutorial. In this part you will learn how to use Hibernate OGM from within a Java EE application running on the WildFly server. Using the entity model you already know from the previous parts of this tutorial, we will build a small REST-based application for managing hikes. In case you haven't read the first two installments of this series, you can find them here:


In the following you will learn how to prepare WildFly for using it with Hibernate OGM, configure a JPA persistence unit, create repository classes for accessing your data, and provide REST resources on top of these. In this post we will primarily focus on the aspects related to persistence, so some basic experience with REST/JAX-RS may help. The complete source code of this tutorial is hosted on GitHub.

Preparing WildFly
The WildFly server runtime is based on the JBoss Modules system. This provides a modular class-loading environment where each library (such as Hibernate OGM) is its own module, declaring the list of other modules it depends on and only "seeing" classes from those other dependencies. This isolation provides an escape from the dreaded "classpath hell".
ZIP files containing all the required modules for Hibernate OGM are provided on SourceForge. Hibernate OGM 4.2 - which we released yesterday - supports WildFly 9, so download hibernate-ogm-modules-wildfly9-4.2.0.Final.zip for that. If you are on WildFly 8, use Hibernate OGM 4.1 and get hibernate-ogm-modules-wildfly8-4.1.3.Final.zip instead.
Unzip the archive corresponding to your WildFly version into the modules directory of the application server. If you prefer that the original WildFly directories remain unchanged, you also can unzip the Hibernate OGM modules archive to any other folder and configure this as the "module path" to be used by the server. To do so, export the following two environment variables, matching your specific environment:
export JBOSS_HOME=/path/to/wildfly
export JBOSS_MODULEPATH=$JBOSS_HOME/modules:/path/to/ogm/modules
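Assuming an unmodified standalone.sh from the WildFly distribution, the launch script picks up JBOSS_MODULEPATH automatically, so starting the server afterwards is business as usual:

$JBOSS_HOME/bin/standalone.sh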
In case you are working with the Maven WildFly plug-in, e.g. to launch WildFly during development, you'd achieve the same with the following plug-in configuration in your POM file:
...
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.1.0.Alpha1</version>
    <configuration>
        <jboss-home>/path/to/wildfly</jboss-home>
        <modules-path>/path/to/ogm/modules</modules-path>
    </configuration>
</plugin>
...

Setting up the project
Start by creating a new Maven project using the "war" packaging type. Add the following to your pom.xml:
...
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <type>pom</type>
            <version>4.2.0.Final</version>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
...
This makes sure you get matching versions of Hibernate OGM's modules and any (optional) dependencies. Then add the dependency to the Java EE 7 API and one of the Hibernate OGM backend modules, e.g. Infinispan, JBoss' high-performance, distributed key/value data grid (any other such as hibernate-ogm-mongodb or the brand-new hibernate-ogm-cassandra module would work as well):
...
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-infinispan</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
...
The provided scope makes these dependencies available for compilation but prevents them from being added to the resulting WAR file. That is because the Java EE API is part of WildFly already, and Hibernate OGM will be contributed through the modules you unzipped before.
Just adding these modules to the server doesn't cut it, though. They also need to be registered as a module dependency with the application. To do so, add the file src/main/webapp/WEB-INF/jboss-deployment-structure.xml with the following contents:
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure
    xmlns="urn:jboss:deployment-structure:1.2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <deployment>
        <dependencies>
            <module name="org.hibernate" slot="ogm" services="import" />
            <module name="org.hibernate.ogm.infinispan" services="import" />
            <module name="org.hibernate.search.orm" services="import" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>
This will make Hibernate OGM core and the Infinispan backend as well as Hibernate Search available to your application. The latter will be used to run JP-QL queries in a bit.

Adding entity classes and repositories
With the basic project infrastructure in place, it's time to add the entity classes and repository classes for accessing them. The entity types are basically the same as seen in part 1, only now they are annotated with @Indexed in order to allow them to be queried via Hibernate Search and Lucene:
@Entity
@Indexed
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(
        mappedBy = "organizer",
        cascade = { CascadeType.PERSIST, CascadeType.MERGE },
        fetch = FetchType.EAGER
    )
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}
@Entity
@Indexed
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection(fetch = FetchType.EAGER)
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}
@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}
In order to use these entities, a JPA persistence unit must be defined. To do so, create the file src/main/resources/META-INF/persistence.xml:
<?xml version="1.0" encoding="utf-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0">

    <persistence-unit name="hike-PU" transaction-type="JTA">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

            <class>org.hibernate.ogm.demos.ogm101.part3.model.Person</class>
            <class>org.hibernate.ogm.demos.ogm101.part3.model.Hike</class>

            <properties>
                <property name="hibernate.ogm.datastore.provider" value="INFINISPAN" />
                <property name="hibernate.ogm.datastore.database" value="hike_db" />
                <property name="hibernate.ogm.datastore.create_database" value="true" />
            </properties>
    </persistence-unit>
</persistence>
Here we define a persistence unit named "hike-PU". Infinispan is a fully transactional datastore, and using JTA as transaction type allows the persistence unit to participate in container-managed transactions. Specifying HibernateOgmPersistence as the provider class enables Hibernate OGM (instead of Hibernate ORM), which is configured with some properties for setting the backend (INFINISPAN in this case), the database name etc.
Note that it actually should not be required to specify the entity types in persistence.xml when running in a Java EE container such as WildFly. Instead they should be picked up automatically. When using Hibernate OGM this unfortunately is needed at the moment. This is a known limitation (see OGM-828) which we hope to fix soon.
The next step is to implement repository classes for accessing hike and organizer data. As an example, the following shows the PersonRepository class:
@ApplicationScoped
public class PersonRepository {

    @PersistenceContext
    private EntityManager entityManager;

    public Person create(Person person) {
        entityManager.persist( person );
        return person;
    }

    public Person get(String id) {
        return entityManager.find( Person.class, id );
    }

    public List<Person> getAll() {
        return entityManager.createQuery( "FROM Person p", Person.class ).getResultList();
    }

    public Person save(Person person) {
        return entityManager.merge( person );
    }

    public void remove(Person person) {
        entityManager.remove( person );
        for ( Hike hike : person.getOrganizedHikes() ) {
            hike.setOrganizer( null );
        }
    }
}
The implementation is straight-forward; by means of the @ApplicationScoped annotation, the class is marked as an application-scoped CDI bean (i.e. one single instance of this bean exists throughout the lifecycle of the application). It obtains a JPA entity manager through dependency injection and uses it to implement some simple CRUD methods (Create, Read, Update, Delete).
Note how the getAll() method uses a JP-QL query to return all person objects. Upon execution this query will be transformed into an equivalent Lucene index query which will be run through Hibernate Search.
The hike repository looks very similar, so it's omitted here for the sake of brevity. You can find its source code on GitHub.
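Just for orientation, a stripped-down sketch of what such a hike repository could look like, assuming it simply mirrors PersonRepository (the actual class on GitHub may differ in detail):

@ApplicationScoped
public class HikeRepository {

    @PersistenceContext
    private EntityManager entityManager;

    public Hike create(Hike hike) {
        entityManager.persist( hike );
        return hike;
    }

    public Hike get(String id) {
        return entityManager.find( Hike.class, id );
    }

    public List<Hike> getAll() {
        return entityManager.createQuery( "FROM Hike h", Hike.class ).getResultList();
    }
}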

Exposing REST services
JAX-RS makes building REST-ful web services a breeze. It defines a declarative programming model where you annotate plain old Java classes to provide implementations for the GET, POST, PUT etc. operations of an HTTP endpoint.
Describing JAX-RS in depth is beyond the scope of this tutorial, e.g. refer to the Java EE 7 tutorial if you would like to learn more. Let's just have a look at some of the methods of a resource class for managing persons as an example:
@Path("/persons")
@Produces("application/json")
@Consumes("application/json")
@Stateless
public class Persons {

    @Inject
    private PersonRepository personRepository;

    @Inject
    private ResourceMapper mapper;

    @Inject
    private UriMapper uris;

    @POST
    @Path("/")
    public Response createPerson(PersonDocument request) {
        Person person = personRepository.create( mapper.toPerson( request ) );
        return Response.created( uris.toUri( person ) ).build();
    }

    @GET
    @Path("/{id}")
    public Response getPerson(@PathParam("id") String id) {
        Person person = personRepository.get( id );
        if ( person == null ) {
            return Response.status( Status.NOT_FOUND ).build();
        }
        else {
            return Response.ok( mapper.toPersonDocument( person ) ).build();
        }
    }

    @GET
    @Path("/")
    public Response listPersons() { … }

    @PUT
    @Path("/{id}")
    public Response updatePerson(PersonDocument request, @PathParam("id") String id) { … }

    @DELETE
    @Path("/{id}")
    public Response deletePerson(@PathParam("id") String id) { … }
}
The @Path, @Produces and @Consumes annotations are defined by JAX-RS. They bind the resource methods to specific URLs, expecting and creating JSON based messages. @GET, @POST, @PUT and @DELETE configure for which HTTP verb each method is responsible.
The @Stateless annotation defines this POJO as a stateless session bean. Dependencies such as the PersonRepository can be obtained via @Inject-based dependency injection. Implementing a session bean gives you the comfort of transparent transaction management by the container. Invocations of the methods of Persons will automatically be wrapped in a transaction, and all the interactions of Hibernate OGM with the datastore will participate in the same. This means that any changes you do to managed entities - e.g. by persisting a new person via PersonRepository#create() or by modifying a Person object retrieved from the entity manager - will be committed to the datastore after the method call returns.

Mapping models
Note that the methods of our REST service do not return and accept the managed entity types themselves, but rather specific transport structures such as PersonDocument:
public class PersonDocument {

    private String firstName;
    private String lastName;
    private Set<URI> organizedHikes;

    // constructors, getters and setters...
}
The reasoning for that is to represent the elements of associations (Person#organizedHikes, Hike#organizer) in the form of URIs, which enables a client to fetch these linked resources as required. E.g. a GET call to http://myserver/ogm-demo-part3/hike-manager/persons/123 may return a JSON structure like the following:
{
    "firstName": "Saundra",
    "lastName": "Johnson",
    "organizedHikes": [
        "http://myserver/ogm-demo-part3/hike-manager/hikes/456",
        "http://myserver/ogm-demo-part3/hike-manager/hikes/789"
    ]
}
The mapping between the internal model (e.g. entity Person) and the external one (e.g. PersonDocument) can quickly become a tedious and boring task, so some tool-based support for this is desirable. Several tools exist for this job, most of which use reflection or runtime byte code generation for propagating state between different models.
Another approach for this is pursued by MapStruct, which is a spare time project of mine and generates bean mapper implementations at compile time (e.g. with Maven or in your IDE) via a Java annotation processor. The code it generates is type-safe, fast (it's using plain method calls, no reflection) and dependency-free. You just need to declare Java interfaces with mapping methods for the source and target types you need and MapStruct will generate an implementation as part of the compilation process:
@Mapper(
    // allows to obtain the mapper via @Inject
    componentModel = "cdi",

    // a hand-written mapper class for converting entities to URIs; invoked by the generated
    // toPersonDocument() implementation for mapping the organizedHikes property
    uses = UriMapper.class
)
public interface ResourceMapper {

    PersonDocument toPersonDocument(Person person);

    List<PersonDocument> toPersonDocuments(Iterable<Person> persons);

    @Mapping(target = "date", dateFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
    HikeDocument toHikeDocument(Hike hike);

    // other mapping methods ...
}
The generated implementation can then be used in the Persons REST resource to map from the internal to the external model and vice versa. If you would like to learn more about this approach for model mappings, check out the complete mapper interface on GitHub or the MapStruct reference documentation.
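To give you an idea of the hand-written UriMapper referenced above, such a class could look roughly like the following sketch; the path templates and method signatures are assumptions of mine, the real implementation is part of the example project on GitHub:

@ApplicationScoped
public class UriMapper {

    public URI toUri(Person person) {
        return UriBuilder.fromPath( "/persons/{id}" ).build( person.getId() );
    }

    public URI toUri(Hike hike) {
        return UriBuilder.fromPath( "/hikes/{id}" ).build( hike.getId() );
    }
}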

Wrap-up
In this part of our tutorial series you learned how to add Hibernate OGM to the WildFly application server and use it to access Infinispan as the data storage for a small REST application.
WildFly is a great runtime environment for applications using Hibernate OGM, as it provides most of the required building blocks out of the box (e.g. JPA/Hibernate ORM, JTA, transaction management etc.), tightly integrated and ready to use. Our module ZIP allows you to put the Hibernate OGM modules into the mix very easily, without the need for re-deploying them each time with your application. With WildFly Swarm there is also support for the micro-services architectural style, but we'll leave it for another time to show how to use Hibernate OGM with WildFly Swarm (currently JPA support is still lacking from WildFly Swarm).
You can find the sources of the project on GitHub. To build the project run mvn clean install (which executes an integration test for the REST services using Arquillian, an exciting topic on its own). Alternatively, the Maven WildFly plug-in can be used to fire up a WildFly instance and deploy the application via mvn wildfly:run, which is great for manual testing e.g. by sending HTTP requests through curl or wget.
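As a quick smoke test, creating a person with curl could look like this; host and port are assumptions, the context path is the one used in the JSON example above:

curl -X POST -H "Content-Type: application/json" \
     -d '{"firstName":"Saundra","lastName":"Johnson"}' \
     http://localhost:8080/ogm-demo-part3/hike-manager/persons/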
If you have any questions, let us know in the comments below or send us a Tweet to @Hibernate. Also your wishes for future parts of this tutorial are welcome. Stay tuned!