Enterprise grade Java.
You'll read about Conferences, Java User Groups, Java, Integration, Reactive, Microservices and other technologies.

Wednesday, January 20, 2016

Running Any Docker Image On OpenShift Origin

I've been using OpenShift for a while now, for several reasons. First of all, I don't want to build my own Docker and Kubernetes environment on Windows, and second of all, I like the simple installation. After the Christmas holidays I decided to upgrade my machine to Windows 10. While I like the look and feel, it broke quite a bit of my networking and container installations, including the Docker and OpenShift environments. Now that I have everything up and running again, it is time to follow the microservices path a little further. The first thing is to actually get OpenShift up and running and to set up a development environment in which we can simply push Docker images to it without having to use any of the Source-to-Image (S2I) or OpenShift build mechanisms.

Installing the all-in-one-VM
Download the all-in-one-VM image and import it into Vagrant. This image is based on OpenShift Origin and is a fully functioning OpenShift instance with an integrated Docker registry. The intent of the project is to allow web developers and other interested parties to run OpenShift V3 on their own computer. Given the way it is configured, the VM will appear to your local machine as if it were running somewhere off the machine. Which is exactly what I need to show you around in OpenShift and introduce some more features. If you need a little more assistance, follow method 2 in this earlier blog post.
I also assume that you have docker-machine installed. You can get it via the Docker Toolbox.

First steps in OpenShift
Fire up the machine via vagrant up and point your browser to https://localhost:8443/. Accept the certificate warning and enter admin/admin as the login. You're now browsing the admin console. Let's create a new project to play around with:
oc login https://localhost:8443
# enter admin/admin as the credentials

oc new-project myfear --description="Testing random Docker Images on OpenShift" --display-name="Myfears test project"

# Add admin role to user myfear
oc policy add-role-to-user myfear admin -n myfear
The first thing to do is to get a MySQL database up and running. I want to use it in subsequent blog posts, and it's a good test to see if everything is working. Get the two JSON files from my GitHub repository and execute the following commands:
oc create -f mysql-pod.json
oc create -f mysql-service.json
Go back to your browser, select the myfear project, and you should see the mysql service up and running with one pod.

Using the OpenShift Registry
You just witnessed how OpenShift pulled the mysql image and started a pod with a container based on it. Obviously this image came from the built-in registry. But how can one actually upload a Docker image to the internal OpenShift registry? SSH into the vagrant machine and look around a bit:
vagrant ssh
docker ps
You see a lot of running containers, and one of them is running openshift/openshift-registry-proxy. This little gem forwards port 5000 from the internal Docker registry to the vagrant VM. Open VirtualBox and look at the port forwarding rules there: another rule forwards port 5000 from the guest to the host. This means the internal OpenShift Docker registry is already exposed to the host by default. But how do we push something there? The Docker client requires a Docker host to work, and the OpenShift Docker daemon isn't exposed externally, so you can't just point your Docker client at it.
This means you need another Docker host on your machine which is configured to use the OpenShift Docker registry as an external, insecure registry. I'm using docker-machine here because it is extremely easy to create new Docker hosts with it.
docker-machine create -d virtualbox dev
After a couple of seconds your "dev" VM is created and started. Now we need to find out what the host system's IP address is from the dev box's point of view. SSH into the machine and get the IP of the default gateway:
docker-machine ssh dev
$ ip route | grep default

> 10.0.2.2
Now we need to stop the machine and add the IP address we found to the insecure registry part of the configuration:
docker-machine stop dev
edit ~/.docker/machine/machines/dev/config.json
# Add the found IP address plus the registry port to the HostOptions => EngineOptions => InsecureRegistry array
Afterwards it should look like this:
 "InsecureRegistry": [
                "10.0.2.2:5000"
   ]
Time to restart the dev VM and get the Docker client configured for it:
docker-machine start dev
FOR /f "tokens=*" %i IN ('docker-machine env dev --shell cmd') DO %i
That's it for now. One important thing: the internal registry is secured, and we need to log in to it. Get the login token for the user "myfear" with the following commands:
oc login -u myfear
oc whoami -t
This will return something cryptic like dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY. Now log in to the registry:
docker login -u myfear -p dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY -e markus@someemail.org 10.0.2.2:5000
Make sure to use the correct username and token. You get a success message, and your login credentials are saved in the central config.json.

Build and push the custom image
Time to finally build the custom image and push it. I am using Roland's docker-maven-plugin again.
If you want to learn more about it, there is an older blog post about it. You can also find the code in this GitHub repository. Compare the pom.xml and make sure to update the docker.host and docker.registry properties
  <docker.host>tcp://192.168.99.101:2376</docker.host>
  <docker.registry>10.0.2.2:5000</docker.registry>
and the <authConfig> section with the correct credentials. Build the image with:
mvn clean install docker:build docker:push
If you run into an issue with the Maven plugin not being able to build the image, you may need to pull the jboss/base-jdk:8 base image manually first:
docker pull jboss/base-jdk:8
Let's check whether the image was successfully uploaded by using the console and navigating to the overview => image streams page.
And in fact, the image is listed. Now we need to start a container with it and expose the service to the world:
oc new-app swarm-sample-discovery:latest --name=swarm
oc expose service swarm --hostname=swarm-sample-discovery.local
Please make sure to add the hostname mapping (to 127.0.0.1) to your hosts file or DNS configuration. As you can see, I am no longer using the Docker image tag but the image stream name; OpenShift converted this internally.
Time to access the example in a browser at http://swarm-sample-discovery.local:1080/rs/customer.
If you're wondering about the port, go back to the VirtualBox configuration and check the NAT section. The all-in-one VM assumes that you already have something running on port 80 and maps the VM's web port to host port 1080.
The application does very little for now, but I will use it in subsequent blog posts to dig a little into service discovery options.
The console overview shows the two services with one pod each.


That's it for today. Thanks again to Roland for his help with this. Let me know if you run into issues and if you want to know something else about OpenShift and custom images.

Tuesday, January 12, 2016

A Refresher - Top 5 Java EE 7 Frontend

The series continues. After the initial overview and Arjan's post about the most important backend features, I am now very happy to have Ed Burns (@edburns) finish the series with his favorite Java EE 7 frontend features.

Thanks to Markus Eisele for giving me the opportunity to guest post on his very popular blog. Markus and I go way back to 2010 or so, but I've not yet had the pleasure of guest posting. Markus asked me to cover the Java EE 7 web tier. Since EE 7 is a mature release of a very mature platform, much has already been published about it. Rather than rehash what has come before, I'm going to give my opinion about what I think are the important bits and a very high-level overview of each.

If you're interested in learning more about this first-hand, please consider attending my full day training at JavaLand 2016.  I'm giving the training with modern finance and HTML5 expert Oliver Szymanski.  For details, please visit the javaland website.

First, a bit of historical perspective. Markus asked me to write about the Java EE 7 web tier. Let's take a look at that term, "web tier", or "presentation tier" as it is also called. If one is to believe the hype surrounding newer ideas such as microservices, the term itself is starting to sound a bit dated because it implies a three-tier architecture, with the other two tiers being "business logic" and "persistence". Surely three tiers is not micro enough, right? Well, the lines between these tiers are becoming more and more blurred over time as enterprises tinker with the allocation of responsibilities in pursuit of delivering the most business value with their software. In any case, Java EE has always been a well integrated collection of enterprise technologies for the Java platform, evolved using a consensus based open development practice (the Java Community Process, or JCP) with material participation from leading stakeholders. The "web tier" of this platform is really just the set of technologies that one might find useful when developing the "web tier" of your overall solution. This is a pretty big list:

WebSocket 1.0 JSR-356
JavaServer Faces 2.2 JSR-344
Servlet 3.1 JSR-340
JSON Processing 1.0 JSR-353
REST (JAX-RS) 2.0 JSR-339
Bean Validation 1.1 JSR-349
Contexts and Dependency Injection 1.1 JSR-346
Dependency Injection for Java 1.0 JSR-330
Concurrency Utilities for Java EE 1.0 JSR-236
Expression Language 3.0 JSR-341

For the purposes of this blog entry, let's take a look at the first five: WebSocket, JSF, Servlet, JSON, and JAX-RS. While the second five are surely essential for a professional web tier, it is beyond the scope of this blog entry to look at them.

WebSocket
JSF and WebSocket are the only two Java EE 7 specs that have a direct connection to the W3C HTML5 specification. In the case of WebSocket, there are actually three different standards bodies in play. WebSocket the network protocol is specified by RFC 6455 from the IETF. WebSocket the JavaScript API is specified as a sub-spec of HTML5 from the W3C. WebSocket the Java API is specified by the JCP under JSR-356. In all aspects of WebSocket, the whole point is to provide a message based, reliable, full-duplex client-server connection.

JSR-356 lets you use WebSocket in both client and server capacities from your Java SE and EE applications.

On the server side, it allows you to expose a WebSocket endpoint such that browsers can connect to it using their existing support for the WebSocket JavaScript API and network protocol. You declare your endpoints to the system either by annotating some POJOs, or by imperatively calling bootstrapping APIs from Java code, say from a ServletContextListener. Once the connection is established, the server can send and receive messages from/to any number of clients that happen to be connected at the same time. The runtime automatically handles connection setup and teardown.
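To make that concrete, here is a minimal sketch of an annotated server endpoint; the /echo path and the EchoEndpoint class name are purely illustrative:

import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// A minimal echo endpoint; the container discovers it via the annotation,
// no further registration code is required.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // Called once per client when the handshake completes.
        System.out.println("Client connected: " + session.getId());
    }

    @OnMessage
    public String onMessage(String message) {
        // The return value is sent back to the client that sent the message.
        return "echo: " + message;
    }
}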

The WebSocket Java client API allows Java SE applications to talk to WebSocket endpoints (Java or otherwise) by providing a Java analog to the W3C JavaScript WebSocket API.
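And a corresponding client sketch, assuming the echo endpoint above is deployed at ws://localhost:8080/echo:

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

// Annotated client endpoint; onMessage is invoked for every message
// the server pushes over the connection.
@ClientEndpoint
public class EchoClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // connectToServer performs the handshake and returns an open session.
        Session session = container.connectToServer(EchoClient.class,
                URI.create("ws://localhost:8080/echo"));
        session.getBasicRemote().sendText("hello");
        Thread.sleep(1000); // give the echo a moment to arrive (sketch only)
    }
}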

JavaServer Faces (JSF)
In JSF 2.2 we added many new features but I will only cover three of them here.

HTML5 Friendly Markup enables writing your JSF pages in almost pure HTML (it must be well formed), without the need for the XML namespaces that some see as clumsy and difficult to understand. This is possible because the underlying HTML Basic JSF RenderKit (from JSF 1.0) provides all the necessary primitives to adopt mapping conventions from an arbitrary piece of HTML markup to a corresponding JSF UIComponent. For example, this is a valid JSF form:

        <form jsf:id="form">
           <input jsf:id="name" type="tel" jsf:value="#{complex.name}" />
           <progress jsf:id="progress" max="3" value="#{complex.progress}" />
        </form>

The only catch is the need to flag each element as a JSF component by use of a namespaced attribute. This means you must declare at least one namespace in the <html> tag:

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:jsf="http://xmlns.jcp.org/jsf">

Faces Flows is a standardization of the page flow concept from ADF Task Flows and Spring Web Flow. Flows give you the ability to group pages together that have some kind of logical connection and need to share state. A flow defines a logical scope that becomes active when the flow is entered and is made available for garbage collection when the flow is exited. There is a rich syntax for describing flows: how they are entered and exited, how they relate to each other, how they pass parameters to each other, and more. There are many conveniences provided thanks to the Flows feature being implemented on top of Contexts and Dependency Injection (CDI). Flows can be packaged as jar files and included in your web application, enabling modularization of sub-sections of your web app.
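Flows can be described declaratively in XML or programmatically in Java. As a hedged sketch of the programmatic flavor, here is a flow definition using the FlowBuilder API; the "checkout" flow id and the view paths are my own illustrative assumptions:

import java.io.Serializable;
import javax.enterprise.inject.Produces;
import javax.faces.flow.Flow;
import javax.faces.flow.builder.FlowBuilder;
import javax.faces.flow.builder.FlowBuilderParameter;
import javax.faces.flow.builder.FlowDefinition;

public class CheckoutFlow implements Serializable {

    @Produces
    @FlowDefinition
    public Flow defineFlow(@FlowBuilderParameter FlowBuilder builder) {
        String flowId = "checkout";
        builder.id("", flowId);
        // Navigating to the start node enters the flow and activates its scope.
        builder.viewNode(flowId, "/checkout/checkout.xhtml").markAsStartNode();
        builder.viewNode("confirm", "/checkout/confirm.xhtml");
        // The "home" outcome exits the flow and returns to the main application.
        builder.returnNode("home").fromOutcome("/index");
        return builder.getFlow();
    }
}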

Just as Flows enable modularizing behavior, Resource Library Contracts (RLC) enable modularizing appearance. RLC provides a very flexible skinning system that builds on Facelets and lets you package skins in jar files.

Servlet
The most important new feature in Servlet 3.1 is the added support for non-blocking IO. This builds on top of the major feature of Servlet 3.0 (from Java EE 6): asynchronous processing. The rapid rise of reactive programming indicates that Java apps can no longer afford to block for IO, ever. The four traits of reactive systems (responsive, elastic, resilient, and message driven) are founded on this premise. Prior to non-blocking IO in Servlet 3.1, it was very difficult to avoid blocking in Servlet apps.

The basic idea is to allow the Servlet runtime to call your application back when IO can be done safely without blocking.  This is accomplished by virtue of new listener interfaces, ReadListener and WriteListener, instances of which can be registered with methods on ServletInputStream and ServletOutputStream.
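Here is a minimal sketch of the read side, assuming a servlet mapped to a hypothetical /upload path with async support enabled:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Non-blocking reads only work on a request that has been put in async mode.
@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class NonBlockingReadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        final AsyncContext context = request.startAsync();
        final ServletInputStream input = request.getInputStream();

        input.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] buffer = new byte[4096];
                // Only read while isReady() returns true; when it returns false,
                // the container calls onDataAvailable() again once more data
                // can be read without blocking.
                while (input.isReady() && input.read(buffer) != -1) {
                    // process the chunk here
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                context.complete();
            }

            @Override
            public void onError(Throwable t) {
                context.complete();
            }
        });
    }
}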

When you add this feature to the asynchronous processing added in Servlet 3.0, it is possible to write Servlet based apps that can proudly sport the "We Are Reactive" banner.

JSON
From the outside perspective, the ability to parse and generate JSON in Java is certainly nothing new. Even before Java EE 7, there were many solutions to this basic need. Hewing close to the principle that standards are not for innovation, but to confer special status upon existing ideas, the JSON support in Java EE 7 provides the capability to parse and generate JSON with a simple Java API. Reading can be done in a streaming fashion with JsonParser, or in a bulk fashion using JsonReader. Writing can be done in a streaming fashion with JsonGenerator, or in a bulk style with JsonBuilderFactory and JsonWriter.
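A short sketch of bulk reading and streaming writing, using an inline JSON string purely for illustration:

import java.io.StringReader;
import java.io.StringWriter;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.stream.JsonGenerator;

public class JsonDemo {

    public static void main(String[] args) {
        // Bulk reading with JsonReader: the whole document becomes an object tree.
        JsonObject person = Json.createReader(
                new StringReader("{\"name\":\"Duke\",\"age\":20}")).readObject();
        System.out.println(person.getString("name")); // Duke

        // Streaming writing with JsonGenerator: events are written as you go.
        StringWriter out = new StringWriter();
        JsonGenerator generator = Json.createGenerator(out);
        generator.writeStartObject()
                 .write("name", "Duke")
                 .write("age", 20)
                 .writeEnd()
                 .close();
        System.out.println(out); // {"name":"Duke","age":20}
    }
}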

JAX-RS
It is hard to overstate the importance of REST to the practice of modern enterprise software development for non-end-user facing software. I'd go so far as to say that gone are the days when people go to the Javadoc (or JSDoc, or appledoc, etc.) to learn how to use an API. Nowadays, if your enterprise API is not exposed as a RESTful web service, it probably will not even be considered. JAX-RS is how REST is done in Java. JAX-RS has been a part of Java EE since Java EE 6, but it received the 2.0 treatment in Java EE 7. The big ticket features in 2.0 include:

  •  Client support
    In my opinion, the most useful application of this feature is in using JUnit to do automated testing of RESTful services without having to resort to curl from continuous integration. Of course, you could use it for service-to-service interaction as well (see the sketch after this list).
  •  Seamless integration with JSON
    In most cases a simple @Produces("application/json") annotation on your HTTP method endpoint is sufficient to output JSON. Data arriving at your service in JSON format is also automatically made available to you in an easy to consume format from Java.
  •  Asynchronous support (Reactive again)
    This feature gives you the ability to hand off the processing required to generate a response to another thread, allowing the original thread to return immediately so no blocking happens. The async thread is free to respond when it is ready.
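A hedged sketch touching all three features follows; the /customers path, the hard-coded JSON payload, and the localhost URL are my own illustrative assumptions, not from the spec:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.MediaType;

// Server side: an asynchronous resource method that produces JSON.
@Path("/customers")
public class CustomerResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public void customers(@Suspended final AsyncResponse response) {
        // Hand the work to another thread; the request thread returns immediately.
        new Thread(new Runnable() {
            public void run() {
                response.resume("[{\"name\":\"Duke\"}]");
            }
        }).start();
    }
}

// Client side, e.g. from a JUnit test instead of curl:
class CustomerClientDemo {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        String json = client.target("http://localhost:8080/rs/customers")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        System.out.println(json);
        client.close();
    }
}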

Naturally, this only scratches the surface of the Java EE 7 web tier. For more details, a great place to start is the official Java EE 7 launch webinars.

I hope to see you at JavaLand!

Thank you Ed for taking the time to write this post. If you haven't already, now is the time to play around with Java EE 7. Here are some resources to get you started with JBoss EAP 7 and WildFly:

Thursday, January 7, 2016

Get Up to Speed with Microservices in 8 hours

Everybody is talking about microservices these days, and Red Hat is doing some very cool developer events around the world. The latest one happened at the beginning of November last year. The amazing speaker lineup starts with special guest speaker Tim Hockin from the Google Cloud Management team, technical lead and co-founder of Kubernetes, along with Red Hat's James Strachan and Claus Ibsen. James created the Groovy programming language and is also a member of the Apache Software Foundation and a co-founder of a number of other open source projects such as Apache ActiveMQ, Apache Camel, and Apache ServiceMix. Claus Ibsen works on open source integration projects such as Apache Camel, fabric8 and hawtio, and is the author of the Camel in Action books. Tim, James, Claus and many more talk on areas such as Kubernetes for Java developers, microservices with Apache Camel, and mobile-centric architecture.

The complete 8-hour playlist is available for free on YouTube, and I just want to pick out some of my personal favorites.

Microservices in the Real World by Christian Posta
Beyond the many technology challenges of introducing microservices, organizations also need to adapt their existing development and operations processes and workflows to reap the bigger benefits of a microservices architecture, including continuous delivery style application delivery. This session reviews challenges a number of large enterprises have faced in looking to adopt microservices, and looks at how they've adapted on their ongoing journey. It also covers some of the end architectures these companies used as they incorporated these new architectural approaches and technologies with their existing people, skills, and processes.


WildFly Swarm : Microservices Runtime by Lance Ball
With lightweight microservices dominating the dev chatter these days, traditional Java EE developers have spent a lot of time looking in the mirror and asking themselves, "Does my application look fat in this container?" or "How can I leverage my existing Java EE bits in a lightweight microservice?" or "What if I had Just Enough App Server™ to leverage the power and standards of Java EE, but did it all with a slimmed down, self-contained runnable that is easy to deploy and manage?". Well, maybe not that last one.

Enter WildFly Swarm. WildFly Swarm deconstructs the WildFly Java Application Server into fine-grained parts, allowing the selective reconstitution of those parts, together with your application into a self-contained executable - an "uberjar". The resulting artifact contains just enough Java EE to support whatever subset of the traditional APIs your application requires.

This talk introduces WildFly Swarm and shows you how it helps you bring your existing Java EE investments into the world of lightweight, easily deployed microservices. As a bonus, it shows you how WildFly Swarm helps you easily take advantage of other state-of-the-art components such as Keycloak, Ribbon, and Hystrix, integrating them seamlessly into your application.


Logging and Management for Microservices by Jimmi Dyson
Logging is a key part of making microservices work for you. This session helps you look at logs in a different way in order to understand what your systems are doing and how they're interacting, so you can fix problems quickly and improve performance. You will understand the problems of collecting logs from your distributed microservices and discuss how to centralize them to get real value out of this goldmine of information.


Microservices Workflow: CI/CD by James Rawlings
We all know that in the real world there is more to developing than writing lines of code. This session explores how fabric8 has evolved to provide a platform that supports not only the development of microservices but also working with them, taking an idea from inception right through to running in a live environment.

With popular trends such as DevOps, we know that it is more about the culture of an organization that will give you greater agility and chance of success. Being able to communicate effectively with your cross functional teams increases productivity, reduces social conflicts, and establishes the all important feedback loops.

We look at how fabric8 provides out-of-the-box integration for hosted git services in Gogs, as well as agile project management with Taiga and social tools such as Lets Chat and Slack, to enable intelligent, extendable automation using Hubot, while providing a platform that is designed for the new age microservices team.

It also covers the integration of popular logging and metric tools that are prerequisites to continuous improvement. We need to understand not only how the platform is operating but also gain greater visibility into how it's being used. Being able to visualize how teams communicate in and outside of their unit can be seen as a first step to measuring the culture within an organization. This can be extremely useful in identifying early signs of internal silos developing, as well as learning from high performing teams.



Look at the complete playlist on YouTube, and find out more about the event and the other sessions on the redhat.com microservices developer day website.

Monday, January 4, 2016

How DevOps And Microservices Accelerate Application Delivery

Happy new year everybody! While I'm officially still on vacation, I'd quickly like to point you to a recent DevOps and Microservices article in the German DOAG Business News. You can download your copy directly via this link (PDF, ~300 KB).

The Business News is a DOAG trade journal and is published four times a year. It covers basic business-relevant topics from a semi-technical perspective. Please be aware that the linked PDF is in German. If you want to learn more, I recommend the following articles: