Java EE and general Java platforms.
You'll read about Conferences, Java User Groups, Java EE, Integration, AS7, WildFly, EAP and other technologies.

Monday, October 12, 2015

Scaling Java EE Microservices on OpenShift

14:44 Monday, October 12, 2015 Posted by Markus Eisele
The first two parts of this little series showed you how to build a tiny JAX-RS service with WildFly Swarm and package it into a Docker image. You learned how to deploy this example to OpenShift, and now it is time to scale it up a bit.

Why Scaling Is Important
One of the key aspects of microservices-based architectures is the decomposition into highly performant individual services which scale on demand and with little technical effort. Applications are now being built to scale, and the infrastructure transparently assists where necessary. Java EE developers have done this a lot in the past, either with standard horizontal scaling by putting more physical boxes next to each other, or with limited vertical scaling by spinning up more instances on the same host. Microservices allow for different scaling approaches. A much more complete definition of the different variations for scaling can be found in the book The Art of Scalability. I'm going to dig into the different approaches in future blog posts. To make the entry into scaling a little bit easier, we're going to scale our tiny little app horizontally today by spinning up more pods for it.

What Is A Pod
A pod (as in a pod of whales or pea pod) is a Kubernetes object which corresponds to a colocated group of applications running with a shared context. In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. In a pre-container world, they would have executed on the same physical or virtual host. So, that's what we want to scale in this example: the pod that is already running.
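To make this more tangible, here is a minimal sketch of what a pod definition for our example could look like (hand-written for illustration; in our case OpenShift creates the real pods for us from the deployment configuration):

apiVersion: v1
kind: Pod
metadata:
  name: swarm-sample-1-xxxxx    # generated name; pods get a random suffix
  labels:
    app: swarm-sample
spec:
  containers:
    - name: swarm-sample
      image: myfear/swarm-sample:latest
      ports:
        - containerPort: 8080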

What Did We Do So Far?
When you first deployed the JAX-RS example, OpenShift created a bunch of resources. Namely:
  • Imagestream 
    An image stream is similar to a Docker image repository in that it contains one or more Docker images identified by tags. OpenShift stores complete metadata about each image (e.g., command, entrypoint, environment variables, etc.). Images in OpenShift are immutable. OpenShift components such as builds and deployments can watch an image stream and receive notifications when new images are added, reacting by performing a build or a deployment, for example.
  • Service
    A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them.
  • DeploymentConfig
    Building on replication controllers, OpenShift adds expanded support for the software development and deployment lifecycle with the concept of deployments. OpenShift deployments also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller.
So, a service proxies our requests to the pods, and a DeploymentConfig is built on top of the Kubernetes replication controller, which controls the number of pods. We're getting closer!

Scale My Microservice Now, Please!
Just a second longer: while services provide routing and load balancing for pods, which may blink in and out of existence, ReplicationControllers (RC) are used to specify and enforce the number of pods (replicas) that should be in existence. RCs can be thought of as living at the same level as services, but they provide different functionality above pods. RCs are a Kubernetes object. OpenShift provides a "wrapper" object on top of the RC called a Deployment Configuration (DC). DCs not only include the RC but also allow you to define how transitions between images occur, as well as post-deploy hooks and other deployment actions.
We finally know where to look. Let's see what the DeploymentConfig looks like that we created when we started our swarm-sample image.
oc get dc swarm-sample
NAME           TRIGGERS                    LATEST VERSION
swarm-sample   ConfigChange, ImageChange   1
Even though RCs control the scaling of the pods, they are wrapped in a higher construct, the DeploymentConfig, which also manages when, where, and how these pods/RCs will be deployed. We can still see the underlying RC (note: output truncated):
oc get rc swarm-sample-1
CONTROLLER       CONTAINER(S)   IMAGE(S)                                         REPLICAS 
swarm-sample-1   swarm-sample[...]    1 
And now we need to know if whatever scaling we're going to do is actually working. I pushed a little curl script, which outputs the result from the JAX-RS endpoint and sleeps for two seconds before requesting the output again. Start it up and watch the result returning the same hostname environment variable over and over until you execute the following command:
oc scale dc swarm-sample --replicas=3
Now everything changes, and after a while you see three different hostnames being returned. It might take a while (depending on your machine and how quickly OpenShift can spin up the new pods). You can also see the change in the admin console, where three pods are now displayed.
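If you don't want to grab the script, the idea boils down to a loop like this (a minimal sketch, not the exact script from the repository):

while true; do
    curl http://wildfly-swarm.vagrant.f8/rs/customer
    echo ""
    sleep 2
done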

We can revert the behavior by setting the replicas count back to 1.
oc scale dc swarm-sample --replicas=1
That was easy. And not exactly considered best practice. Because all of the pods share the same context, they should never run on the same physical machine. Instead, it would be better to run a complete microservice (frontend, backend, database) on three pods within the same RC. But this is a topic for more blog posts to come. Now you've learned how to scale pods on OpenShift, and we can continue to evolve our example application further and do more scaling examples later.

Friday, October 9, 2015

Deploying Java EE Microservices on OpenShift

11:38 Friday, October 9, 2015 Posted by Markus Eisele
I blogged about the simple JAX-RS microservice with WildFly Swarm yesterday. You learned how to build a so-called "fat-jar" with Maven and also used the Maven Docker plugin to dockerize our microservice and run it locally on Docker Machine. This was a nice way to test things locally. What was missing so far is putting this into production. Let's look at which steps are necessary to run yesterday's example on OpenShift Origin.

Why Should An Enterprise Java Developer Care?
But first of all, let's briefly look into why an Enterprise Java developer should even care about all of this. There is something about the recent hypes and buzzes that makes me wonder a bit. For sure, they make an interesting playing field, and you can spend hours downloading container images and running them on your laptop. But bringing them into production has been a challenge so far. Nigel has a really nice blog post up about a deprecated feature in Docker. And it has another gem in it: a paragraph called "Enterprise Impact". The main quote is:

"I’m sure doing this kind of stuff is done all the time in cool hipster companies [...] But it’s absolutely not done in rusty old enterprises [...]".
(Nigel Poulton) 

And I can absolutely second that. Enterprise developers, architects and project managers are taking a much slower and more conservative approach to adopting all those technologies. And they are looking for ways to successfully manage infrastructures and projects. All those technologies will find their way into our daily work life, but they will come in a more manageable way. So, we're just doing our homework by educating ourselves about all of this and evaluating solutions that will help us with that. But enough of general thoughts; let's start to deploy and scale a Java EE application.

Install and run OpenShift Origin and follow the steps to build a WildFly Swarm JAX-RS microservice in a Docker container, because this is the example I'm going to deploy and scale further on.
(NOTE: I am using both the all-in-one VM from the OpenShift project and the Vagrant image delivered by the Fabric8 project interchangeably. They work pretty much the same and both rely on OpenShift Origin. If you see URLs ending in .f8, e.g. https://vagrant.f8:8443, in one of the code examples, you can use localhost or other host mappings interchangeably.)

What Is OpenShift Origin?
OpenShift Origin is the upstream open source version of Red Hat's distributed application system, OpenShift. We launched this project to provide a platform in which development teams could build and manage cloud-native applications on top of Docker and Kubernetes. You can find the source code on GitHub, and we know you've got great ideas for improving OpenShift Origin. So roll up your sleeves and come join us in the community.
There is a lot to know to master all the integrated technologies. But the community is working hard to make this as understandable and manageable as possible for us, the enterprise Java developers. To give you a brief overview of OpenShift, this is a simple diagram of how everything works:

You see a lot of common parts here if you've been following the latest buzz around Docker and Kubernetes. A request comes in via a client and ends up in the routing layer. It gets dispatched to a service and hits a pod which is running one of our Docker images in a container. The pods are controlled by replication controllers. There is a lot more to it, of course, but this should be all you need to understand for now to get a first idea about the whole thing.
Another, more detailed overview gives you a more precise idea about the parts that we are going to work with today.

Of special interest for now are the integrated Docker registry, the image streams, the deployment configuration, and the routing to our services.

The Basics - Administration UI and Client Tools
After you have set up your vagrant box and fired it up, you can access the web-based administration by browsing to https://localhost:8443. The all-in-one VM comes without configured security, which means that the "Allow All" identity provider is used. You can log in with any non-empty user name and password. The "admin" user is the administration user with all rights, so logging in with "admin/admin" gives you full power on Origin. The web-based administration is good for looking at logfiles and the overall picture. It is not (yet) fully featured and doesn't allow you to tweak or change things. First and foremost, you need to use the command line tool "oc". And similar to the web administration, you also have to log in to OpenShift:
oc login https://localhost:8443
You are also prompted for a username and password (admin/admin) and presented with a list of projects:
Authentication required for https://vagrant.f8:8443 (openshift)
Username: admin
Login successful.

Using project "default".
You have access to the following projects and can switch between them with 'oc project <projectname>':
  * default (current)
  * openshift
  * openshift-infra
Now you're ready for some administration in OpenShift.

Exposing the Internal Docker Registry
If we want to run a dockerized application in OpenShift which isn't available on Docker Hub, we need to push it to the OpenShift Docker registry. By default it isn't externally exposed, so the first thing to do is to expose the built-in OpenShift Docker registry via a route.
oc create -f registry-route.json
The JSON file contains the definition for the route and is checked into my GitHub repository. Make sure to adjust the host name in line 8 to your needs. For this example to work, I added the following mapping to my hosts file on Windows (pointing at the vagrant VM's IP): registry.vagrant.f8
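If you don't want to fetch the file, a minimal sketch of such a route definition could look like this (the exact file in the repository may differ; adjust name and host to your setup):

{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
    "name": "docker-registry-route",
    "namespace": "default"
  },
  "spec": {
    "host": "registry.vagrant.f8",
    "to": {
      "kind": "Service",
      "name": "docker-registry"
    }
  }
}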
When the route is successfully created, all you have to do is set your environment accordingly (you will have done this already if you followed my intro blog posts, so this is just a reminder):
set DOCKER_HOST=tcp://vagrant.f8:2375
Creating A Project And A User
Let's create a new project for our example. For namespace reasons, we will name the project exactly after the user and image name: in this example, "myfear".
oc new-project myfear --description="WildFly Swarm Docker Image on OpenShift v3" --display-name="WildFly Swarm Project"
The description and display name are optional, but make things better looking in the web UI.

Let's create a user "myfear" by simply logging in as:
oc login https://vagrant.f8:8443 -u myfear

Tweaking The Example
We need to change some parts of the pom.xml from yesterday's example. First of all, we need to tell the Docker Maven Plugin that it should use a private registry running at registry.vagrant.f8:80. Wondering why this isn't port 5000? Because we exposed the service via OpenShift, and the HAProxy router did it via port 80. Uncomment the two lines in the pom.xml:
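The two lines aren't reproduced here; presumably they are properties along these lines (treat the exact names and values as assumptions based on the plugin's conventions of that time):

<docker.host>tcp://vagrant.f8:2375</docker.host>
<docker.registry>registry.vagrant.f8:80</docker.registry>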
And get the login token for the user myfear via the oc client tools:
oc whoami -t
which will output the token. Now update the token in the <authConfig> element of the pom. That's basically it.
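Such an <authConfig> section could look like this (a sketch; the password is the token from oc whoami -t):

<authConfig>
    <username>myfear</username>
    <password>INSERT_TOKEN_FROM_oc_whoami</password>
</authConfig>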

Build And Push The Image
The image has been built in my earlier blog post already, but let's just do it again here:
mvn clean install docker:build
Now push the image to our OpenShift Docker Registry:
mvn docker:push
This will output the progress of pushing the image to registry.vagrant.f8:80/myfear/swarm-sample.

Run A Docker Image On OpenShift
Now we just use the regular way to spin up a new Docker image on OpenShift:
oc new-app --docker-image=myfear/swarm-sample:latest
And watch what is happening: OpenShift actually created several resources behind the scenes in order to handle deploying and running this Docker image. First, it made a service, which identifies a set of pods that it will proxy and load balance. Services assign an IP address and port pair that, when accessed, redirects to the appropriate back end. The reason you care about services is that they basically act as a proxy/load balancer between your pods and anything that needs to use the pods running inside the OpenShift environment. Get a complete description of what OpenShift created from our image by using the describe command:
oc describe service swarm-sample
Which outputs:
Name:                   swarm-sample
Namespace:              myfear
Labels:                 app=swarm-sample
Selector:               app=swarm-sample,deploymentconfig=swarm-sample
Type:                   ClusterIP
Port:                   8080-tcp        8080/TCP
Session Affinity:       None
No events.
The one thing we're missing so far is the external mapping via a route. You recall what we did for the Docker registry? This is the next and last step.

oc expose service swarm-sample --hostname=wildfly-swarm.vagrant.f8
And as you might have guessed, we also need to map the hostname to the VM's IP in the hosts file: wildfly-swarm.vagrant.f8
And we're done. Why didn't I use a JSON file to create the route? Because I wanted to show you that it can be easier: as long as the image uses the correct EXPOSE definitions for the ports, the oc expose command does the job without having to mess around with JSON. The result is the same.

Browse to: http://wildfly-swarm.vagrant.f8/rs/customer and see the output:
{"text":"WildFly Swarm Docker Application on OpenShift at http://wildfly-swarm.vagrant.f8/rs/ - Hostname: swarm-sample-1-7mmd7"}
The hostname is the pod our container is running on.

Next up in this series is scaling and load-balancing our little example. Stay tuned for more! Have questions or ideas about more Java EE, Docker and OpenShift? Please let me know and follow me on Twitter @myfear.

Thursday, October 8, 2015

A WildFly Swarm JAX-RS Microservice in a Docker Container

17:19 Thursday, October 8, 2015 Posted by Markus Eisele
Everybody is talking about microservices these days. And there are plenty of opinions and ideas, and very few examples of how to apply those principles on an enterprise level. One thing is for sure: even at conferences just a couple of days ago, I rarely found anyone who was running a Docker container in production. At least a reasonable amount of hands went up when I asked about first experiences and whether someone had played around with it. And looking at all the operational-level knowledge (OS, networking, etc.) that is required to run a containerized infrastructure, I can understand all this. A lot has to be done to make this easier for Enterprise Java developers. There are indeed some ways we can work with day-to-day tools and combine them with the latest technologies to educate ourselves. One of them is WildFly Swarm, as a lightweight and easy way to build fully contained Java EE applications. And this blog post is going to show you how to run this locally on Docker.

What is WildFly Swarm?
WildFly is a lightweight, flexible, feature-rich, Java EE 7 compliant application server. WildFly 9 even introduced a 27 MB Servlet-only distribution. Both are solid foundations for your Java Enterprise projects. The most recent version, WildFly 10 CR2, will be the foundation for Red Hat's next supported Java EE server offering, the Enterprise Application Platform 7.
WildFly Swarm moves away from the static bundling of various profiles and allows you to build your own, custom-featured Java EE runtime. But WildFly Swarm isn't just about a customized application server; it is about bundling your application including the relevant application server components together in a single executable. This is also called a "fat-jar", which can simply be run using java -jar. And while we're talking about it: microservices usually bring the complete application plus their stack with them, so you can think of every WildFly Swarm application as an independent and fully contained microservice.

Turning A Java EE Application into A Fat-Jar
A Java EE application can be packaged as a WildFly Swarm fat-jar by adding a Maven dependency and a plugin. The complete source code for this simple JAX-RS sample is available on GitHub. The application itself exposes an endpoint /rs/customer which just outputs some text. The real magic is put into the pom.xml file. Let's walk through it.
First of all, the dependency for the Java EE 7 API and, after that, the WildFly Swarm JAX-RS module:
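The original snippet isn't preserved here; a sketch of what it could look like (artifact IDs and versions are assumptions based on the WildFly Swarm releases of that time):

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>wildfly-swarm-jaxrs</artifactId>
    <version>${version.wildfly-swarm}</version>
</dependency>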
A WildFly Swarm plugin takes care of the packaging of the application:
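Again as a sketch (the plugin coordinates follow the WildFly Swarm releases of that time; treat the details as assumptions):

<plugin>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>wildfly-swarm-plugin</artifactId>
    <version>${version.wildfly-swarm}</version>
    <executions>
        <execution>
            <goals>
                <goal>package</goal>
            </goals>
        </execution>
    </executions>
</plugin>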
That's about all the magic. You can build the application with "mvn package". You will find the war file itself and an additional attachment "swarm-sample-1.0-SNAPSHOT-swarm.jar" in the target folder. If you open that, you will find an m2repo folder with all the dependent libraries, and your app itself bundled in the _bootstrap\ folder. You can directly run it from the command line in your maven project (Windows users might run into this issue):
java -jar target/swarm-sample-1.0-SNAPSHOT-swarm.jar
Redirecting the browser to http://localhost:8080/rs/customer will show you some JSON:
{"text":"WildFly Swarm Docker Application on OpenShift at - Hostname: 093ca3279a43"}

Dockerizing WildFly Swarm
The WildFly Swarm project has some Docker examples up on GitHub: mostly bash scripts and some wrappers to dockerize your project. But there is something even better: the Docker Maven Plugin by Roland Huss. I have used it a couple of times before, and it is also used in this example. All you have to do is add the plugin to your pom.xml.
The configuration is a bit more tricky (thanks to Roland for all the email support he gave me over the last couple of days!). First of all, the basics are easy: add an image to the plugin configuration and name it accordingly. I inherit from jboss/jdk-base:8, and the image gets the name and tag myfear/swarm-sample:latest (lines 77ff). The build configuration exposes port 8080 and defines the relevant entry point (the command to start Java with the -jar parameter). The assembly of the image needs to include the project attachments and include the attachment as a dependency. Make sure that the output service mapping and the basedir match the entry point argument.
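I won't reproduce the whole plugin section, but a condensed sketch could look like the following (version, jar path and assembly details are assumptions, not the verbatim configuration from the project):

<plugin>
    <groupId>org.jolokia</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.13.4</version>
    <configuration>
        <images>
            <image>
                <!-- name and tag of the image to build -->
                <name>myfear/swarm-sample:latest</name>
                <build>
                    <!-- base image to inherit from -->
                    <from>jboss/jdk-base:8</from>
                    <ports>
                        <port>8080</port>
                    </ports>
                    <!-- start the fat-jar; the path must match the assembly basedir below -->
                    <entryPoint>
                        <exec>
                            <arg>java</arg>
                            <arg>-jar</arg>
                            <arg>/opt/app/swarm-sample-1.0-SNAPSHOT-swarm.jar</arg>
                        </exec>
                    </entryPoint>
                    <!-- the real configuration uses an assembly that picks up the -swarm.jar attachment -->
                    <assembly>
                        <basedir>/opt/app</basedir>
                        <descriptor>assembly.xml</descriptor>
                    </assembly>
                </build>
            </image>
        </images>
    </configuration>
</plugin>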

Let's give it a Test-Run
Make sure you have docker-machine set up on your host. Create a dev machine and configure your environment variables accordingly.
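If you haven't created one yet, it could look like this (assuming the VirtualBox driver; docker-machine env prints the matching environment commands for your shell):

docker-machine create --driver virtualbox dev
docker-machine env dev

With that in place, you can run: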
mvn clean install docker:build docker:start -Ddocker.follow
(NOTE: A bug in the 0.13.5 Docker Maven Plugin means it can't pull the base image right now. You need to manually execute a 'docker pull jboss/jdk-base:8' before doing the maven run.)
The project is built and a container is started from the image.

Congratulations, now you have a running Java EE microservice in your local Docker instance. The next blog post will look into how to take this image, run it on OpenShift Origin, and scale it to your needs.

Monday, October 5, 2015

Quick Tip: Running WildFly Docker Image on OpenShift Origin

11:31 Monday, October 5, 2015 Posted by Markus Eisele
On to a new week. There's been plenty of travel for me recently, and it won't stop soon. But I had some time to try out OpenShift Origin and run it on my Windows environment. There is an entry-level blog post from a couple of days ago about how to set everything up. Now it was about time to just run a vanilla Docker image on it.

Get your Origin installation up and running, and make sure to also install the OpenShift binaries locally. The OpenShift team released the all-in-one VM on a separate, developer-friendly and good-looking website a couple of days after my post. So, all you need to remember is this address:

Get your OpenShift Environment Up
This is a single vagrant up command. If that succeeded, you should be able to access the local admin console via your browser at https://localhost:8443/ and also log in with the client tools from the command line:
oc login https://localhost:8443
Use admin/admin as username/password.

Create A Project And Run WildFly
The first thing to do is create a new OpenShift project. We want to separate this a bit from the default. In the end, think of it as a namespace in which we can just play around a bit:
oc new-project wildfly-tests --description="WildFly and Docker on OpenShift v3" --display-name="WildFly Test Project"
OpenShift doesn't directly expose a Docker daemon, so you need to use the oc command line tool to run an image. There are some (unsupported) JBoss community images available and listed online. I am interested in running the latest WildFly 9 for this test.
oc new-app --docker-image=jboss/wildfly
If you watch the web console, you will see that a deployment is running and the Docker image gets pulled.

Depending on your connection, this might take some time. But when it's finished, you will see a green bar that states "Running" and also shows an IP address. Let's see if everything went well and the WildFly instance is up and running. For that, we need the logs of our pod. Let's list the pods:
oc get pods
NAME              READY     STATUS    RESTARTS   AGE
wildfly-1-jzvsj   1/1       Running   0          11m
and see the logs:
oc logs wildfly-1-jzvsj
Note that the pod name will most likely be different in your environment. The command should output the WildFly logs as you are used to them. For now, we have a pod running. Next, we need to expose this pod's port via a service to the external world. But first of all, we need to decide via which domain name we want to expose it. Add/change your hosts file with the following entry (pointing at your VM's IP): wildfly.openshiftdev.local
And execute the following command to add an external route to the service:
oc expose service wildfly --hostname=wildfly.openshiftdev.local
Browse to the services tab in the console and see that the route was created for the service.

The only thing left to do now is to change the port-forwarding rules in the VirtualBox console: add a mapping from port 80 on the host to the guest.
Now you can access the WildFly instance via http://wildfly.openshiftdev.local/. Congratulations!

Troubleshooting
If you're running anything other than the all-in-one VM, for example the fabric8 vagrant image, you will need to change the security settings in OpenShift. SSH into the instance, log in via the oc command line and edit the security settings:
oc edit scc restricted
Change the runAsUser type strategy to RunAsAny. This allows images to run as the root UID if no USER is specified in the Dockerfile.
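In the editor, the relevant section of the SCC then looks like this:

runAsUser:
  type: RunAsAny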

Friday, September 25, 2015

WildFly 10 CR 2 Released - Java EE 7, Java 8, Hibernate 5, JavaScript Support with Hot Reloading

08:20 Friday, September 25, 2015 Posted by Markus Eisele
Yesterday the WildFly team released the latest version of WildFly 10. CR2 will most likely be the last release candidate before the final release, which is expected in October. Many new features made it into this release, even if the main supported Java EE specification is still 7, as with WildFly 8 and WildFly 9. That makes three server versions which implement the Java EE 7 Full and Web Profile standards. Ultimately, WildFly 10 will lead to Red Hat JBoss Enterprise Application Platform (EAP) 7, the supported Java EE offering of Red Hat.
Learn more about JBoss EAP 7 in the Summit presentation (PDF) by Bilge Ozpeynirci (Sr. Product Manager) and Dimitris Andreadis (Sr. Engineering Manager).

New Features At A Glance
  • Java 7 support has been discontinued, allowing for deeper integration with the Java 8 runtime. While Java 9 is still in development, this release already runs on its current development snapshots.
  •  WildFly 10 CR2 includes the ActiveMQ Artemis project as its JMS broker, and due to the protocol compatibility, it fully replaces the HornetQ project.
  • In addition to the offline CLI support (WildFly 9) for standalone mode, you can now launch a host-controller locally within the CLI. 
  • WildFly 10 includes the Undertow JS project, which allows you to write server-side scripts that can pull in CDI beans and JPA entities. Learn more in this blog post by Stuart Douglas.
  • WildFly 10 adds the ability to deploy a given application as a "singleton deployment" with automatic failover to another node in case of failure.
  •  HA Singleton MDBs and MDB Delivery Groups.
  • WildFly now pools stateless session beans by default, using a pool size that is computed relative to the size of the IO worker pool, which is itself auto-tuned to match system resources. 
  • Migration Operations for old subsystems such as jbossweb (AS 7.1), jacorb (WildFly 8), and hornetq (WildFly 9)
  • Hibernate 5 included
Getting Started
Download WildFly 10 CR2 from the download site and unzip the distribution into a folder of your choice. Change to the bin directory and type:
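The command itself didn't survive the formatting here; starting the standalone server is presumably what's meant:

standalone.bat

(or on Linux/macOS.)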
Which will start WildFly lightning fast:
08:09:58,353 INFO  [] (Controller Boot Thread) Full 10.0.0.CR2 (WildFly Core 2.0.0.CR5) started in 3686ms
Access the main page with your browser at http://localhost:8080 and see the new admin console at http://localhost:9990

Please give it a try with all your latest projects and let the team know what you need or what's missing. Reach out to them via:

Wednesday, September 23, 2015

50% Off for Top WildFly Books

12:27 Wednesday, September 23, 2015 Posted by Markus Eisele
I do some reviews from time to time on this blog, and as a reward for my readers, I was offered a 50% discount code from Packt Publishing for any of the following books. Please keep in mind that the code was originally only valid until October 7th, 2015, but has been extended until October 30, 2015!

WildFly Configuration, Deployment, and Administration - Second Edition
The book starts with an explanation of the installation of WildFly and application server configuration. Then, it moves on to the configuration of enterprise services and also explores the new web container Undertow. It then covers domain configuration, application deployment, and application server management. By the end of the book, you will have a firm grasp of all the important aspects of clustering, load balancing, and WildFly security. This guide is invaluable for anyone who works with or is planning to switch to WildFly.
Find the complete book review on my blog.
Get 50% off with the code MULTIFOUR50 when ordering.

WildFly Performance Tuning
This practical book explores how to tune one of the leading open source application servers in its latest reincarnation. In this book, you will learn what performance tuning is and how it can be performed on WildFly and the JVM using solely free and open source tools and utilities.
Find the complete book review on my blog.
Get 50% off with the code MULTIFOUR50 when ordering.

WildFly Cookbook
With practical and accessible material, you will begin by learning to set up your WildFly runtime environment, and progress to selecting appropriate operational models, managing subsystems, and conquering the CLI. You will then walk through the different balancing and clustering techniques, simultaneously learning about role-based access control and then developing applications targeting WildFly and Docker.
Get 50% off with the code MULTIFOUR50 when ordering.

Tuesday, September 22, 2015

Running OpenShift Origin on Windows

09:56 Tuesday, September 22, 2015 Posted by Markus Eisele
OpenShift is the most interesting PaaS offering these days for me. Not only because it is part of the Red Hat family of products, but because it holds everything I expect from a modern PaaS. It supports image-based deployments (with Docker images), abstracts operational complexity (e.g. networking, storage and health checks) and greatly supports DevOps with an integrated tooling stack. One tiny drawback for now is that the latest v3 isn't available as a free online service. If you want to play around with it, you can set it up on AWS yourself or run it locally. As usual, most of the documentation available only covers Linux-based systems. So, I am going to walk you through the first steps of getting OpenShift v3 Origin up on your local machine.

Install the latest versions of Vagrant and VirtualBox. You'll need both, and they will make your life easier. Also, please install the OpenShift client for Windows. Download the one for your OS from the Origin project on GitHub. The Windows build is about 16 MB. Next, unpack it into a folder of your choice. Make sure to add this folder to your PATH environment variable.
set PATH=%PATH%;"D:\Program Files (x86)\openshift-origin-v1.0.3"

Method One: Fabric8 Vagrant All-In-One
The Fabric8 team has a complete Vagrant-based all-in-one box ready for you to run. It also contains Fabric8, but you get a fully operational OpenShift Origin, too. All you have to do is clone the fabric8 installer git repository:
$ git clone
$ cd fabric8-installer/vagrant/openshift
You need to install an additional vagrant plugin:
vagrant plugin install vagrant-hostmanager-fabric8
Unfortunately, for Windows no automatic routing for new services is possible. You have to add new routes manually to %WINDIR%\System32\drivers\etc\hosts. For your convenience, a set of routes for default Fabric8 applications has been pre-added. If you expose new routes, you will have to add them manually to your hosts file. Now you're ready to start vagrant:
$ vagrant up
If you do that for the first time, a bunch of Docker images will get pulled. So prepare for a little coffee+++ break. When that is done, point your browser to https://vagrant.f8:8443 and use any user/password combination to access the OpenShift console.
Log in with the oc command line tool and see if that works, too:
$ oc login https://vagrant.f8:8443

Method Two: Use the pre-built Vagrant Box 
Using the pre-built vagrant box from the v3 developer training is probably the most convenient way to get everything up and running. The following is part of the complete v3 Hands-On-Lab, and there will be a more polished version available soon, hopefully.
Go to the download location and change to the BinariesAndVagrantFile folder. Download the box file (attention: 4.5 GB!) and the Vagrantfile.
Rename the .box file using your file manager, edit the Vagrantfile with Notepad, change all references from openshift3-bootstrap to openshift, and save the changes. Now you need to add the box:
$vagrant box add openshift
And you're ready to bring up the vagrant box:
$ vagrant up
When that is done, point your browser to https://localhost:8443 and use any user/password combination to access the OpenShift console.
Log in with the oc command line tool and see if that works, too:
$ oc login https://localhost:8443

Methods Three and Four: Build from Source and Docker Container
The OpenShift documentation mentions two other methods of getting OpenShift Origin to run locally: either as a Docker container or by building it locally in a vagrant box. I couldn't make either of them work on my Windows 7 machine.

This was just a little exercise to lay some groundwork for the upcoming blog posts. I am going to show you more about how to build your Java EE projects with OpenShift's source-to-image technology and how to run and scale Docker containers.

Sunday, August 30, 2015

Coding in a cloud-based enterprise - designing for distributed architectures

14:28 Sunday, August 30, 2015 Posted by Markus Eisele
One of my recent writings got posted on O'Reilly's Radar. It explores the future of development projects and teams using all the new xPaaS services offered by today's clouds. While the article is more of a thought-leadership piece based on a vision Mark Little sketched earlier, it also has some aspects in it which you can already find in projects today.

xPaaS - All The PaaS Offerings
Gartner uses the term xPaaS to describe the whole spectrum of specialized middleware services that can be offered as PaaS. Red Hat decided to also use xPaaS as a description for its offerings, because it is meant to encompass much more than what PaaS has typically come to be associated with (aPaaS, or Application PaaS, is a component of xPaaS). In many ways, we've been talking about xPaaS for a couple of years, particularly about how technologies and methodologies such as SOA or integration must play within the cloud and between users of the cloud.

How Will This Change Development?
This is the real question the article answers. With the advent of DevOps and various Platform-as-a-Service (PaaS) environments, many complex business requirements need to be met within a much shorter timeframe. The Internet of Things (IoT) is also changing how established applications and infrastructures are constructed. As a result of these converging trends, the enterprise IT landscape is becoming increasingly distributed, and the industry is starting to map how all the various components — from networking and middleware platforms, to ERP systems and microservices — will come together to create a new development paradigm that exists solely in the cloud.

Read it online for free; I am happy to hear your thoughts and comments about what you think development will look like in the future.

Tuesday, August 18, 2015

Devoxx Morocco: What's in it for you!

21:21 Tuesday, August 18, 2015 Posted by Markus Eisele
Last year was my first year at Devoxx Morocco. Oh, I forgot: back in the days, it was still called JMaghreb. And I was really curious to go there for a couple of reasons. First of all, I had never visited Morocco before. But second of all, my dear friend Badr said that it is an amazing conference and I just had to come. It didn't work out before, and I felt guilty for not having gone. So, I decided to do my best to make up for it, and first of all had the pleasure to attend JMaghreb 2014. It was a very warm welcome for me, and I really enjoyed all the new impressions this country had waiting for me. Everybody was friendly, and I got to see a little of Casablanca, which was the city where the conference was held.

Content, Content, Content for a Hungry Audience
The content was amazing: a lot of well-known speakers and topics which were spot on. And I had packed one of my most relevant presentations, about Developer Security Awareness. Every session was packed. A little different from what I was used to was the strong presence of comparatively young people, who were very hungry for information and the latest updates. As a speaker, you usually get to answer a couple of questions and maybe find the time to have a coffee or water afterwards with another three attendees who have some more detailed questions. And believe it or not, I felt like a rockstar in Morocco: 10+ people wanted to talk about all aspects of my session afterwards, and we ended up hanging out and talking for at least an hour longer than usual. And this was the general feedback from all the speakers; the audience was very interested and ready to fire their questions.

Casablanca - A Beautiful City
The venue was a little away from the center, and cabs are a rare thing to get your hands on. At least the ones I trust (not judging, just telling you that I am German). But we managed to get back and forth to the venue, and besides that, there is some great history in Casablanca. I had to see Rick's Cafe and the Hassan II Mosque. And the local markets and .. and ... and.

A Trip To Marrakesh
We weren't done after the conference. Badr invited his speakers on an extended weekend trip over to Marrakesh, which is another wonderful piece of history. The hospitality in the country is a big part of what I'll remember, and I can only briefly tell you about all the historic sites in between the pulsing life that is waiting for you there. A trip I'll never forget.

JMaghreb turned Devoxx Morocco
But coming back and adapting to the really cold weather in Germany in November wasn't the only surprise waiting for me. Very soon it turned out that this amazing conference was joining the Devoxx family, and after I was asked to join the program committee, I was just left with a simple: yes, I want to! And that is what I spent a reasonable amount of time on: selecting the best talks for the upcoming inaugural edition of the most southern Devoxx conference ever.
And here are some of the already selected speakers. I can only highly recommend going if you have a chance to attend. There is still plenty of time to register!

Wednesday, July 29, 2015

WebLogic Server 12.1.3 on Kubernetes

10:31 Wednesday, July 29, 2015 Posted by Markus Eisele
Most of you recall that I have some history with Oracle WebLogic Server, coming all the way back from BEA times. I don't want to comment on recent developments or features or standards support, but one thing is for sure: it is out there, and having the chance to run it containerized is something many customers will appreciate. Maybe this is the one thing that will make a real difference in our industry with the ongoing progress in the containerization field: we actually can manage heterogeneous infrastructure components on a common base, if we have the right tools at hand. And this is true for the operations side with OpenShift and, of course, for all the developers out there who will appreciate what Fabric8 can do for them.

License Preface
Whatever happens in this blog post only happens on my local developer laptop. And I strongly believe that, with regard to Oracle technologies, this is absolutely covered by the so-called OTN Free Developer License Agreement and the Oracle Binary Code License Agreement for Java SE.
I'm dead sure that a production environment needs a bunch of licenses. But I'm not a specialist, so don't ask me. If you want to use RHEL 7, please learn about the Red Hat subscriptions.

Ok, WebLogic - Where's Your Image?
Not there; I assume for licensing reasons. But Bruno did a great job in pushing relevant Dockerfiles and scripts to the official Oracle GitHub account. So, the first step in running WebLogic on Kubernetes is to actually build a Docker image with it. Go,

git clone

and navigate to the OracleWebLogic folder. In fact, you can delete everything else besides this one. The first step is to download the WebLogic ZIP installer and the correct JDK to be used.
Go to the Oracle website, accept the OTN license (if you feel like it) and download the platform-independent ZIP installer.
Now browse to the JDK download website, do the same license thing and download the 8u51 JDK as an RPM (jdk-8u51-linux-x64.rpm). Place both into the OracleWebLogic\dockerfiles\ folder. If you're running on a Unix-like OS yourself, feel free to check back with the official documentation and use the provided scripts. This didn't work for me on Windows, so you get a step-by-step walk-through. Go and rename Dockerfile.developer to Dockerfile and delete all the other ones.

mv Dockerfile.developer Dockerfile
rm Dockerfile.*

Now you open the Dockerfile and change a couple of things. Base it on RHEL 7:

FROM rhel7 

And comment out the other base that's in there ... And because we want to run a decently patched and available Java version, we're going to change the environment variable accordingly:

ENV JAVA_RPM jdk-8u51-linux-x64.rpm

Time to build our image. And before you start, let's reuse the fabric8 vagrant installer that I've been using for the last two blog posts already. So, bring your vagrant instance with OpenShift up first. Now it's time to build the WebLogic image. Sit back and relax, because this is going to take a couple more minutes. Have some housekeeping to do? This might be the right time!

docker build --force-rm=true --tag="vagrant.f8:5000/oracle/weblogic:12.1.3-dev" .

Done? Check if everything is where we expect it to be (docker images):

vagrant.f8:5000/oracle/weblogic       12.1.3-dev          68f1ea788bba        About a minute ago   2.05 GB

Because this image only contains the server binaries, we now need to build an image which has a configured WLS domain in it. Thankfully, there are some more scripts in the samples\12c-domain folder. So, go check if the Dockerfile and all scripts in container-scripts have the correct Unix line endings. Sometimes Git can mess them up if you're on Windows. And if you're already there, make sure to change some ports according to your needs. I had to change the admin port to 8011 (do this in the Dockerfile and the corresponding script). Another thing we want to do is run the instance in development mode. This allows us to just copy our Java EE app into the ./autodeployment folder and have it deployed when started. You can do that by changing the attribute in the following line from prod to dev:


Now, you're ready to go ahead with building the development domain image:

docker build --force-rm=true --tag="vagrant.f8:5000/myfear/weblogic:12.1.3-dev" 

And, after another couple of cups of coffee, we're ready to check if this image made it into our repository (docker images)

vagrant.f8:5000/myfear/weblogic      12.1.3-dev          77a3ec07d176        9 minutes ago       2.052 GB

Before going any further, make sure to give it a shot and see if the WebLogic instance comes up.

docker run -it myfear/weblogic:12.1.3-dev

If that worked, you're ready to build your third image today, which will contain your application.

NetBeans And Fabric8 - Take WebLogic Into Greek Heaven
Start NetBeans and create a nice, simple and lean Java EE 6 project from a maven archetype of your choice. Add all the fabric8 and docker-maven-plugin dependencies to it, like I've shown you before in the first blog post of the series. Let's tweak the properties to our needs and just name the image myfear/weblogic-test:latest. Most importantly, you have to map the container port to the Kubernetes service correctly:

<!-- Kubernetes Service Port // Mapped via the HARouter-->

<!-- The exposed container port -->

<!-- because, I'm working with the remote registry here, base it on the remote image -->

<!-- Just cosmetics, changing the container label -->

Don't forget to use Java EE 6 as a dependency, and change both the user and the deployment base in the docker-maven-plugin to:
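Something along these lines (a sketch; the exact domain path depends on how the domain image was built, so treat it as an assumption):

<user>oracle:oracle</user>
<basedir>/u01/oracle/user_projects/domains/base_domain/autodeploy</basedir>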


Time to build the third and last docker image:

mvn clean install docker:build

And if that finished correctly, we're going to deploy everything to OpenShift with the Fabric8 tooling:

mvn fabric8:json fabric8:apply

And don't forget to add the hostname mapping to your hosts file: myfear-weblogic-test.vagrant.f8

A request to http://myfear-weblogic-test.vagrant.f8/sample shows the application after you've waited a couple of minutes (at least, I had to; looks like my laptop wasn't quick enough).

Some Further Remarks
This isn't exactly production-ready. WLS knows managed servers and node managers, and there are a bunch of ports for external communication that need to be opened. This basically did nothing more than deploy a teensy application onto the AdminServer. There are a couple of whitepapers and further ideas about how to tweak the domain scripts to fit your needs. I didn't want to do that for obvious reasons. So, consider this a proof of concept.