Enterprise grade Java.
You'll read about Conferences, Java User Groups, Java, Integration, Reactive, Microservices and other technologies.

Monday, October 26, 2015

My Book: Modern Java EE Design Patterns

15:08 Monday, October 26, 2015 Posted by Unknown No comments:
Today is a very special day for me. I am pleased to announce that my thoughts around Enterprise Java development have made it into a report published by O'Reilly. The "Modern Java EE Design Patterns" mini-book is available for download as of today at developers.redhat.com. For free! I cover a lot of ground in it, beginning with the overall enterprise challenges and changes of the last couple of years, all the way down to microservices patterns. With plenty of further reading about relevant technologies and team considerations, you will find resources, ideas, and best practices. And I am very proud to include a foreword by Mark Little (VP of Red Hat Engineering).

Below I’ve included the full abstract for the mini-book but in a few quick highlights, you will learn:
  • Challenges of starting a greenfield development vs tearing apart an existing brownfield application into services
  • How to examine your business domain to see if microservices would be a good fit
  • Best practices for automation, high availability, data separation, and performance
  • How to align your development teams around business capabilities and responsibilities
A big thank you goes out to Daniel Bryant, Arun Gupta and Mark Little. They were the technical reviewers for the report and helped me shape the content into a cohesive story.

Please drop by the Red Hat booth at JavaOne or Devoxx to grab a complimentary copy. I will be spending a lot of time in and around the booth, answering any questions you may have. Enjoy the report.

As promised here’s the full abstract:
Modern Java EE Design Patterns
Building Scalable Architecture for Sustainable Enterprise Development

With the ascent of DevOps, microservices, containers, and cloud-based development platforms, the gap between state-of-the-art solutions and the technology that enterprises typically support has greatly increased. But as Markus Eisele explains in this O’Reilly report, some enterprises are now looking to bridge that gap by building microservice-based architectures on top of Java EE.

Can it be done? Is it even a good idea? Eisele thoroughly explores the possibility and provides savvy advice for enterprises that want to move ahead. The issue is complex: Java EE wasn’t built with the distributed application approach in mind, but rather as one monolithic server runtime or cluster hosting many different applications. If you’re part of an enterprise development team investigating the use of microservices with Java EE, this book will help you:
  • Understand the challenges of starting a greenfield development vs tearing apart an existing brownfield application into services
  • Examine your business domain to see if microservices would be a good fit
  • Explore best practices for automation, high availability, data separation, and performance
  • Align your development teams around business capabilities and responsibilities
  • Inspect design patterns such as aggregator, proxy, pipeline, or shared resources to model service interactions

Friday, October 23, 2015

Docker for Java EE Developers - A Sneak Peek Into Our JavaOne HOL

12:47 Friday, October 23, 2015
Instead of writing a blog post, I should be cleaning and packing. JavaOne is just around the corner, and besides my own two sessions about Apache Camel and the future of integration, I'm really looking forward to the Docker for Java EE Developers hands-on lab. A lot of preparation went into this one, and it has been given a few times before, but this is actually the first time that Rafael and I get a chance to run through the completely revamped version of it.
And to get you excited for it, we did a tiny little recording yesterday with some first Docker basics and demos from the lab. Sit back, relax and get a #coffee+++!



Docker for Java EE Developers [HOL7249]
Wednesday, Oct 28, 3:00 p.m. | Hilton—Franciscan Room B/C/D
Containers are enabling developers to package their applications in new ways that are portable and work consistently everywhere: on your machine, in production, in your data center, and in the cloud. And Docker has become the de facto standard for those portable containers in the cloud. This lab offers developers an intro-level hands-on session with Docker, from installation to exploring Docker Hub, to crafting their own images, to adding Java apps and running custom containers. This is a BYOL (bring your own laptop) session, so bring your Windows, OS X, or Linux laptop and be ready to dig into a tool that promises to be at the forefront of our industry for some time to come.

Rafael Benevides (@rafabene), Senior Software Engineer, Red Hat
Rafael is a Senior Software Engineer at Red Hat, working on JBoss open source projects with an emphasis on improving developer productivity. In his current role, he is the JBoss Developer Materials lead, providing quickstarts and tools to improve the developer experience. He has worked in several fields, including application architecture and design. Besides that, he is also a member of the Apache DeltaSpike PMC - a Duke's Choice Award-winning project.

Markus Eisele (@myfear), Developer Advocate, Red Hat GmbH
Markus is a Developer Advocate at Red Hat and focuses on JBoss Middleware. He has been working with Java EE servers from different vendors for more than 14 years and talks about his favorite Java EE topics at conferences all over the world. He has been a principal consultant and worked with different customers on all kinds of Java EE related applications and solutions. Besides that, he has always been a prolific blogger, writer and tech editor for different Java EE related books. He is an active member of the German DOAG e.V. and its representative on the iJUG e.V. As a Java Champion and former ACE Director, he is well known in the community.

Thursday, October 22, 2015

Learn more. Red Hat Mini-Theater Sessions at JavaOne.

07:00 Thursday, October 22, 2015
And here is the schedule of all the mini-theater sessions we're going to run at our booth at JavaOne. No signup required. Just drop by and sit in. Make sure to be on time, because space is limited.
And another plus for showing up early is that you can get one of the inspirational and motivational 20 years of Java t-shirts that we're giving out!
Want more? There may be stickers. And people to talk to. And cool demos and technologies! Come to our booth number 5101 in the JavaOne exhibition hall (Hilton San Francisco, Grand Ballroom) next to the Java Hub!
Get the latest updates and news on the official events website at developers.redhat.com. And if you can't make it to JavaOne, you can watch the sessions being live streamed and recorded. I'm going to update this post as soon as I have the recorded videos uploaded to YouTube.

Did I mention that there is going to be a party?
Let's get things started right with friends old and new; our kickoff drink-up at Mikkeller bar (Sunday, October 25,  6:00 PM - 9:00 PM) gets us all together to welcome one another (back) to San Francisco and geek out. This is your first crack at catching some of our coders to see what we've been up to even before sessions start, and we'll supply the food and drink. Space is limited, so be sure to RSVP here and we'll prioritize you a place at the table!

Monday, 26th
Time Slot | Speaker | Title
10:15 - 11:00 | Jason Porter (@lightguardjp) | Standardized Extension-Building in Java EE with CDI and JCA
11:45 - 12:30 | Aslak Knutsen (@aslakknutsen), Alex Soto (@alexsotob) & Bartosz Majsak (@majson) | Taming Microservices Testing with Arquillian Cube
1:45 - 2:30 | Rafael Benevides (@rafabene) & Markus Eisele (@myfear) | Docker for Java EE Developers
3:30 - 4:15 | Christine H. Flood | Shenandoah: An Ultralow-Pause-Time Garbage Collector for OpenJDK

Tuesday, 27th
Time Slot | Speaker | Title
10:15 - 11:00 | Rafael Benevides (@rafabene) & Antoine Sabot-Durand (@antoine_sd) | Apache DeltaSpike, the CDI Toolbox
11:45 - 12:30 | Christian Posta (@christianposta) | Microservices in the Real World
1:45 - 2:30 | Sebastien Blanc (@sebi2706) & Bruno Oliveira (@abstractj) | Securing Web Applications: A Practical Guide
3:30 - 4:15 | Mark Little (@nmcl) | WildFly Swarm

Wednesday, 28th
Time Slot | Speaker | Title
10:15 - 11:00 | Sanne Grinovero (@SanneGrinovero) | Apache Lucene for Java EE Developers
11:45 - 12:30 | Ken Finnigan (@kenfinnigan) | Java EE 7 Applications as a Microservice with WildFly Swarm
1:45 - 2:30 | Max Rydahl Andersen (@maxandersen) | Docker & OpenShift Tooling for Eclipse
3:30 - 4:15 | Bruno Oliveira (@abstractj) | Java Cryptography Deep Dive: Taming the Beast

Learn more about Red Hat at JavaOne and make sure to sign up at developers.redhat.com.

Tuesday, October 20, 2015

OpenShift Quick Tip: Port Forwarding with v3 and the All-In-One-VM

13:31 Tuesday, October 20, 2015
Just a short tip today: I was playing around with the all-in-one VM from the OpenShift team and wanted to use the port-forwarding feature for a quick check of a running database. You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally and have data forwarded to and from given ports in the pod.

But whenever I tried to execute:
oc port-forward mysql-2-zjx6u 3306:3306
It looked like it worked until the very first time I tried to use the tunnel:
I1020 11:38:54.754799 8356 portforward.go:227] Forwarding from 127.0.0.1:3306 -> 3306
I1020 11:38:54.757299 8356 portforward.go:227] Forwarding from [::1]:3306 -> 3306
I1020 11:39:10.824839 8356 portforward.go:253] Handling connection for 3306
E1020 11:39:10.833340 8356 portforward.go:312] An error occurred forwarding 3306 -> 3306: Error forwarding port 3306 to pod mysql-2-zjx6u_myfear, uid : Unable to do port forwarding: socat not found.
Turns out the required socat package isn't installed on the all-in-one VM. In order to fix that, you have to ssh into the instance:
vagrant ssh
And install socat:
sudo /bin/yum install socat
After that, you're able to use the tunnel and forward ports to your OpenShift pod.
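If you want to sanity-check the tunnel from code instead of a database client, a plain TCP connect is enough. This is my own little sketch, not part of the tip above; the host and port are the ones used in the `oc port-forward` example:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Checks whether a TCP port is accepting connections, e.g. the local
 * end of an `oc port-forward` tunnel before handing it to a client.
 */
public class PortCheck {
    static boolean isOpen(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            // connect() throws if nothing is listening on host:port
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 3306 is the forwarded MySQL port from the example above
        System.out.println(isOpen("127.0.0.1", 3306, 1000)
                ? "tunnel is up" : "tunnel is down");
    }
}
```

Run it once before and once after starting the port-forward to see the difference.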


Sunday, October 18, 2015

All About Red Hat At JavaOne 2015

13:05 Sunday, October 18, 2015
A little under a week before I jump on a plane and make my way to San Francisco again. As you might have guessed, it is not only because of the beautiful city but because: it's JavaOne time! And we're pretty excited to be there again as Red Hat. We have planned a lot of exciting activities for the attendees and hope to make a real difference with content and our presence in the exhibition hall. This post should be your guide to all the Red Hat content and activities for the JavaOne week, and I can't wait to see you getting excited about it. As usual for an overview, I might be missing things. Please feel free to reach out and I'll get the information added. If you haven't registered so far, there is still plenty of time to consider a visit. Find more information on the official JavaOne website and on the JavaOne blog.

Come To Our Booth
First and foremost: come to our booth! Located in the exhibition hall (Hilton San Francisco, Grand Ballroom) next to the Java Hub. Number 5101! We're dressed in all red and white, and there's swag and a mini-theater with some cool sessions from our engineers attending JavaOne. There will be plenty of people from the community around to talk to and answer all your questions. If you just want to come by and say hello, you're more than welcome. You can register to become a Red Hat Developer, and I'm not unveiling a lot if I tell you that we will have a very cool t-shirt to give away. It is a special 20-years-of-Java edition! You can't afford to let this go. And there will be another great thing to grab. You probably have heard of me writing a little book about Java EE and modern design patterns. It will be distributed as an e-book from October 25, and I will share the download link. But if you happen to be there, please stop by and get your personal hard copy of it!

Register For Talks, Hands-On-Labs, BOFs and Tutorials By Red Hat Engineers
As usual, there will be a lot of our engineers at JavaOne. Please feel free to reach out to them, give feedback, ask questions and network, as this is a perfect opportunity to make yourself heard. I wouldn't be too surprised to see many if not all of them dropping by our booth regularly. That's your go-to place!
To make it easier for you, here is the complete list of sessions and speakers (add them via the schedule builder!):

Session (CON, BOF, TUT, HOL) | Speaker | Co-Speaker
Apache DeltaSpike, the CDI Toolbox [CON2380] | Rafael Benevides, Senior Software Engineer, Red Hat | Antoine Sabot-Durand, Senior Software Engineer, Red Hat
You’ve Got Microservices: Now Secure Them [CON7320] | Steven Pousty, Developer Advocate, Red Hat | Stian Thorgersen, Principal Software Engineer, Red Hat
Apache Lucene for Java EE Developers [CON3538] | Sanne Grinovero, Principal Software Engineer, Red Hat |
Java EE 7 Applications as a Microservice with WildFly Swarm [CON7090] | Kenneth Finnigan, Principal Software Engineer, Red Hat | Mark Little, Vice President, Red Hat, Inc.
CDI 2.0: What’s in the Works? [CON2391] | José Paumard, CTO, JPEFI | Antoine Sabot-Durand, Senior Software Engineer, Red Hat
Shenandoah: An Ultralow-Pause-Time Garbage Collector for OpenJDK [CON1868] | Christine Flood, Software Engineer, Red Hat, Inc. |
What Would ESBs Look Like If They Were Done Today? [CON1716] | Markus Eisele, Developer Advocate, Red Hat GmbH |
Building Applications with JRuby 9000 [CON7272] | Thomas Enebo, Senior Principal Software Engineer, Red Hat | Charles Nutter, Principal Software Engineer, Red Hat
Developing Java EE Applications with Security in Mind [CON1971] | Mauricio Leal, LATAM Cloud/Mobile Subject Matter Expert (SME), Red Hat |
Java EE to Microservices Automagically [CON7641] | Alexandre Porcelli, Principal Software Engineer, Red Hat |
Taming Microservices Testing with Docker and Arquillian Cube [CON7101] | Aslak Knutsen, Senior Software Engineer, Red Hat Inc | Bartosz Majsak, Software Engineer, Cambridge Technology Partners
Riding a Camel Through the JEEhara [CON1715] | Markus Eisele, Developer Advocate, Red Hat GmbH |
Standardized Extension-Building in Java EE with CDI and JCA [CON2385] | Jason Porter, Senior Software Engineer, Red Hat Inc |
What's the Best IDE for Java EE? [CON6699] | Max Andersen, Consulting Engineer, Red Hat |
Script Bowl 2015: The Emerging Languages Take Over [CON6946] | Charles Nutter, Principal Software Engineer, Red Hat |
Securing Web Applications: A Practical Guide [TUT5977] | Sébastien Blanc, Senior Software Engineer, Red Hat | Bruno Oliveira, Security Software Engineer, Red Hat, Inc.
Advanced CDI in Live Coding [TUT2376] | Antoine Sabot-Durand, Senior Software Engineer, Red Hat | Antonin Stefanutti, Senior Software Engineer, Red Hat
Java Cryptography Deep Dive: Taming the Beast [TUT4468] | Bruno Oliveira, Security Software Engineer, Red Hat, Inc. |
Docker for Java EE Developers [HOL7249] | Rafael Benevides, Senior Software Engineer, Red Hat | Markus Eisele, Developer Advocate, Red Hat GmbH
OpenJDK Adoption Group BOF [BOF3377] | Dalibor Topic, Principal Product Manager, Oracle | Mario Torre, Principal Software Engineer, Red Hat

And Even More
Not much longer and you will find an events page up on developers.redhat.com with even more information about the talks and speakers. I heard there will be something like a party :-)
And while you're there make sure to join the Red Hat Developers Network. Share more. Learn more. Code more. Red Hat Developers delivers the resources and ecosystem of experts to help professional programmers to be more productive and get ahead of the curve as they build great applications.

And maybe even more. Follow @myfear and @jbossdeveloper on Twitter for the latest news and happenings.

Saturday, October 17, 2015

For Java Developers - Akka HTTP In The Cloud With Maven and Docker

13:41 Saturday, October 17, 2015
From time to time it is worth the little effort it takes to think outside the box. This is a good habit for every developer, and even if you spend just 10% of your time with new and noteworthy technology, you will gain experience and broaden your knowledge. I had wanted to look into Scala and Akka for a while. Both are well-known old acquaintances on many conference agendas. But honestly, I never felt the need to take a second look. This changed quite a bit when I started to look deeper into microservices and the relevant concepts around them. Let's get started and see what's in there.

What Is Akka? And Why Scala?
But first a few sentences about what Akka is. The name Akka is a palindrome of the letters A and K, as in Actor Kernel.

"Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM."

It was built with the idea in mind to make writing concurrent, fault-tolerant and scalable applications easier. With the help of the so-called actor model, the abstraction level for those applications has been redefined, and by adopting the "let it crash" concept you can build applications that self-heal and systems that withstand very high workloads. Akka is open source, available under the Apache 2 License, and can be downloaded from http://akka.io/downloads. Learn more about it in the official Akka documentation. Akka comes in two flavors: with a Java and a Scala API. So you're basically free to choose which version you want to use in your projects. I went for Scala in this blog post because I couldn't find enough Akka Java examples out there.
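Akka's API is much richer than this, but the core actor idea — each actor owns a mailbox that is processed strictly one message at a time, so its state never needs locking — can be sketched with plain JDK classes. This is my own toy illustration of the concept, not Akka's API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

/**
 * A toy "actor": messages go into a mailbox and are processed
 * sequentially by a single worker thread, so the behavior can
 * mutate its own state without any synchronization.
 */
public class ToyActor<M> {
    private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker;

    public ToyActor(Consumer<M> behavior) {
        worker = new Thread(() -> {
            try {
                while (true) {
                    behavior.accept(mailbox.take()); // one message at a time
                }
            } catch (InterruptedException e) {
                // actor stopped
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    /** Fire-and-forget message send, like Akka's tell (!). */
    public void tell(M message) {
        mailbox.add(message);
    }
}
```

Usage is fire-and-forget: `new ToyActor<String>(System.out::println).tell("hi")`. What Akka adds on top of this picture is supervision ("let it crash"), location transparency, and distribution.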

Why Should A Java Developer Care?
I don't know a lot about you, but I was just curious about it and started to browse the documentation a bit. The "Obligatory Hello World" didn't lead me anywhere. Maybe because I was (and am) still thinking too much in Maven and Java, but we're here to open our minds a bit, so it was about time to change that. Resilient, message-driven systems seem to be the most promising way of designing microservices-based applications. And even if there are things like Vert.x, which are a lot more accessible for Java developers, it never hurts to look into something new. Because I didn't get anywhere close to where I wanted to be with the documentation, I gave Konrad (@ktosopl) Malawski a ping and asked for help. He came up with a nice little Akka HTTP example for me to take apart and learn from. Thanks for your help!

Akka, Scala and now Akka-HTTP?
Another new name. The Akka HTTP modules implement a full server- and client-side HTTP stack on top of akka-actor and akka-stream. It's not a web framework but rather a more general toolkit for providing and consuming HTTP-based services. And this is what I wanted to take a look at. Sick of reading? Let's get started:

Clone And Compile - A Smoke-Test
Git clone Konrad's example to a folder of your choice, then compile and run it:
git clone https://github.com/ktoso/example-akka-http.git
mvn exec:java
After downloading the internet, point your browser to http://127.0.0.1:8080/ and try the "ping" link. You get a nice "PONG!" answer.
Congratulations. Now let's look at the code.

The Example Code
Looking at the pom.xml and the exec-maven-plugin configuration points us to the com.example.ExampleServer.scala class. It extends ExampleRoutes.scala and obviously has some routes defined. Not surprisingly, those map to the links you can use from the index page. It kind of makes sense, even if you don't understand Scala. For the Java developers among us, Konrad was kind enough to add a Java Akka example (JavaExampleServer.java). Comparing the two, I still like the Java example a lot better, but it is probably also a little longer. Just choose what you like best.
There's one very cool thing in the example that you might want to check out. One line emits a Reactive Streams source of data which is pushed exactly as fast as the client can consume it, and which is only generated "on demand". Compare http://www.reactive-streams.org/ for more details.
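The same "on demand" idea exists in the JDK itself since Java 9: java.util.concurrent.Flow mirrors the Reactive Streams interfaces, and a subscriber only receives what it has explicitly requested. A minimal sketch of that demand-driven handshake (my own illustration, not the code from Konrad's example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class DemandDemo {
    /** Publishes 1..n and returns what a one-element-at-a-time subscriber pulled. */
    static List<Integer> runDemo(int n) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);            // signal demand for exactly one element
                }
                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // only now ask for the next one
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            for (int i = 1; i <= n; i++) {
                publisher.submit(i);         // blocks when the buffer is saturated
            }
        }                                    // close() signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(5));      // prints [1, 2, 3, 4, 5]
    }
}
```

The elements still arrive in order, but the producer never outruns the consumer — that is the backpressure the Akka Streams source in the example gives you for free.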
The main advantage of the example is obviously that it provides a complete Maven-based build for both languages and can easily be used to learn a lot more about Akka. A good jumping-off point. And because there is not a lot more in this example from a feature perspective, let's see if we can get it to run in the cloud.

Deploying Akka - In A Container
According to the documentation there are three different ways of deploying Akka applications:
  • As a library: used as a regular JAR on the classpath and/or in a web app, to be put into WEB-INF/lib
  • Package with sbt-native-packager which is able to build *.deb, *.rpm or docker images which are prepared to run your app.
  • Package and deploy using Typesafe ConductR.
I don't know anything about sbt and ConductR, so I thought I'd just go with what I've been playing around with lately anyway: a container. If it runs from a Maven build, I can easily package it as an image. Let's go. The first step is to add the Maven Shade Plugin to the pom.xml:
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-shade-plugin</artifactId>
   <version>2.4.1</version>
   <executions>
      <execution>
         <phase>package</phase>
         <goals>
            <goal>shade</goal>
         </goals>
         <configuration>
            <shadedArtifactAttached>true</shadedArtifactAttached>
            <shadedClassifierName>allinone</shadedClassifierName>
            <artifactSet>
               <includes>
                  <include>*:*</include>
               </includes>
            </artifactSet>
            <transformers>
               <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                  <resource>reference.conf</resource>
               </transformer>
               <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.ExampleServer</mainClass>
               </transformer>
            </transformers>
         </configuration>
      </execution>
   </executions>
</plugin>
The shade plugin creates an uber jar, which is exactly what I want in this case. There are three little special cases in the configuration. We need to:
  1. attach the shaded artifact so we can access it from the Docker Maven Plugin
  2. use the AppendingTransformer, because the reference.conf configuration file must be concatenated, not "pick one", during the build process
  3. define the main class that we want to run.
When this is done, it is about time to configure our Docker Maven Plugin accordingly:
<plugin>
   <groupId>org.jolokia</groupId>
   <artifactId>docker-maven-plugin</artifactId>
   <version>0.13.5</version>
   <configuration>
      <images>
         <image>
            <name>myfear/akka-sample:latest</name>
            <build>
               <from>jboss/base-jdk:8</from>
               <maintainer>markus@jboss.org</maintainer>
               <ports>
                  <port>8080</port>
               </ports>
               <entryPoint>
                  <exec>
                     <arg>java</arg>
                     <arg>-jar</arg>
                     <arg>/opt/akka-http/akka-http-service.jar</arg>
                  </exec>
               </entryPoint>
               <assembly>
                  <inline>
                     <dependencySets>
                        <dependencySet>
                           <useProjectAttachments>true</useProjectAttachments>
                           <includes>
                              <include>com.example:akka-http-example:jar:allinone</include>
                           </includes>
                           <outputFileNameMapping>akka-http-service.jar</outputFileNameMapping>
                        </dependencySet>
                     </dependencySets>
                  </inline>
                  <user>jboss:jboss:jboss</user>
                  <basedir>/opt/akka-http</basedir>
               </assembly>
            </build>
            <run>
               <ports>
                  <port>${swarm.port}:8080</port>
               </ports>
               <wait>
                  <http>
                     <url>http://${docker.host.address}:${swarm.port}</url>
                     <status>200</status>
                  </http>
                  <time>30000</time>
               </wait>
               <log>
                  <color>yellow</color>
                  <prefix>AKKA</prefix>
               </log>
            </run>
         </image>
      </images>
   </configuration>
</plugin>
A couple of things to notice:
  • the output file name mapping, which needs to match the entrypoint argument
  • the project attachment include "allinone", which we defined in the Maven Shade Plugin
  • the user in the image assembly (needs to be one that has been defined in the base image; in this case jboss/base-jdk, which only knows the user jboss)
And while we're at it, we need to tweak the example a bit. Binding the Akka HTTP server to localhost is not really helpful in a containerized environment. So we use the java.net package to find out the actual IP of the container. And while we're at it: comment out the post-startup procedure. The new ExampleServer looks like this:
package com.example

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer
import java.net._

object ExampleServer extends ExampleRoutes {
  implicit val system = ActorSystem("ExampleServer")
  import system.dispatcher
  implicit val materializer = ActorMaterializer()
  // settings about bind host/port
  // could be read from application.conf (via system.settings):
  val localhost = InetAddress.getLocalHost
  val interface = localhost.getHostAddress
  val port = 8080

  def main(args: Array[String]): Unit = {
    // Start the Akka HTTP server!
   // Using the mixed-in testRoutes (we could mix in more routes here)
    val bindingFuture = Http().bindAndHandle(testRoutes, interface, port)
    println(s"Server online at http://$interface:$port/\nON Docker...")
  }
} 
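One caveat with this approach: depending on the container's /etc/hosts, InetAddress.getLocalHost can still resolve to a loopback address. A more defensive variant (my own sketch, not part of Konrad's example — shown in Java, since the same java.net classes are what the Scala code calls) walks the network interfaces instead:

```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

public class BindAddress {
    /** Returns the first non-loopback IPv4 address, or 0.0.0.0 as a fallback. */
    static String detect() throws SocketException {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (!nic.isUp() || nic.isLoopback()) continue;
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                if (addr instanceof Inet4Address && !addr.isLoopbackAddress()) {
                    return addr.getHostAddress();
                }
            }
        }
        return "0.0.0.0"; // bind to all interfaces if nothing better is found
    }

    public static void main(String[] args) throws SocketException {
        System.out.println("binding to " + detect());
    }
}
```

Binding to 0.0.0.0 as the fallback is the usual choice inside a container, since the port mapping is handled by Docker anyway.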
Let's build the Akka application and the Docker image by executing:
mvn package docker:build
and give it a test-run:
docker run myfear/akka-sample
Pointing your browser to http://192.168.99.100:32773/ (note: the IP and port will be different in your environment; make sure to list the container port mapping with docker ps and get the IP of your boot2docker or docker-machine instance) will show you the working example again.

Some Final Thoughts
The next step in taking this little Akka example to the cloud would be to deploy it on a PaaS. Take OpenShift, for example (compare this post on how to do it). As a fat-jar application, it can easily be packaged into an immutable container. I'm not going to compare Akka with anything else in the Java space, but I wanted to give you a starting point for your own first steps and encourage you to always stay curious and educate yourself about some of the technologies out there.


Monday, October 12, 2015

Scaling Java EE Microservices on OpenShift

14:44 Monday, October 12, 2015
The first two parts of this little series showed you how to build a tiny JAX-RS service with WildFly Swarm and package it into a Docker image. You learned how to deploy this example to OpenShift, and now it is time to scale it up a bit.

Why Scaling Is Important
One of the key aspects of microservices-based architectures is decomposition into highly performant individual services which scale on demand and with little technical effort. Applications are now being built to scale, with the infrastructure transparently assisting where necessary. Java EE developers have done this a lot in the past, either with standard horizontal scaling by putting more physical boxes next to each other, or with limited vertical scaling by spinning up more instances on the same host. Microservices allow for different scaling approaches. A much more complete definition of the different variations of scaling can be found in the book The Art of Scalability. I'm going to dig into the different approaches in future blog posts. To make the entry into scaling a little bit easier, we're going to scale our tiny little app vertically today by spinning up more pods for it.

What Is A Pod
A pod (as in a pod of whales or pea pod) is a Kubernetes object which corresponds to a colocated group of applications running with a shared context. In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. In a pre-container world, they would have executed on the same physical or virtual host. So, that's what we want to scale in this example. The pod, that is already running.

What Did We Do So Far?
When you first deployed the JAX-RS example, OpenShift created a bunch of resources. Namely:
  • Imagestream 
    An image stream is similar to a Docker image repository in that it contains one or more Docker images identified by tags. OpenShift stores complete metadata about each image (e.g., command, entrypoint, environment variables, etc.). Images in OpenShift are immutable. OpenShift components such as builds and deployments can watch an image stream and receive notifications when new images are added, reacting by performing a build or a deployment, for example.
  • Service
    A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them.
  • DeploymentConfig
    Building on replication controllers, OpenShift adds expanded support for the software development and deployment lifecycle with the concept of deployments. OpenShift deployments also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller.
So, a service proxies our requests to the pods, and a DeploymentConfig is built on top of the Kubernetes replication controller, which controls the number of pods. We're getting closer!

Scale My Microservice Now, Please!
Just a second longer: while services provide routing and load balancing for pods, which may blink in and out of existence, ReplicationControllers (RCs) are used to specify and enforce the number of pods (replicas) that should be in existence. RCs can be thought of as living at the same level as services, but they provide different functionality above pods. RCs are a Kubernetes object. OpenShift provides a "wrapper" object on top of the RC called a DeploymentConfiguration (DC). DCs not only include the RC but also allow you to define how transitions between images occur, as well as post-deploy hooks and other deployment actions.
We finally know where to look. Let's see what the DeploymentConfig that we created when we started our swarm-sample image looks like:
oc get dc swarm-sample
NAME           TRIGGERS                    LATEST VERSION
swarm-sample   ConfigChange, ImageChange   1
Even though RCs control the scaling of the pods, they are wrapped in a higher construct, the DeploymentConfig, which also manages when, where, and how these pods/RCs will be deployed. We can still see the underlying RC (note: output truncated):
oc get rc swarm-sample-1
CONTROLLER       CONTAINER(S)   IMAGE(S)                                         REPLICAS 
swarm-sample-1   swarm-sample   172.30.101.151:5000/myfear/swarm-sample@sha256:[...]    1 
And now we need to know whether whatever scaling we're going to do is actually working. I pushed a little curl script which outputs the result from the JAX-RS endpoint and sleeps for 2 seconds before requesting the output again. Start it up and watch it return the same hostname environment variable over and over until you execute the following command:
oc scale dc swarm-sample --replicas=3
Now everything changes, and after a while you see three different hostnames being returned. It might take a moment, depending on your machine and how quickly OpenShift can spin up the new pods. You can also see the change in the admin console, where three pods are now displayed.


We can revert the behavior by setting the replicas count back to 1.
oc scale dc swarm-sample --replicas=1
That was easy. And not exactly best practice: because all of the pods share the same context, they should never run on the same physical machine. Instead, it would be better to run a complete microservice (frontend, backend, database) on three pods within the same RC. But that is a topic for blog posts to come. For now, you have learned how to scale pods on OpenShift, and we can continue to evolve our example application and do more scaling examples later.

Friday, October 9, 2015

Deploying Java EE Microservices on OpenShift

11:38 Friday, October 9, 2015 Posted by Unknown
I blogged about the simple JAX-RS microservice with WildFly Swarm yesterday. You learned how to build a so-called "fat jar" with Maven and also used the Docker Maven plugin to dockerize our microservice and run it locally on Docker Machine. This was a nice way to test things locally. What has been missing so far is putting this into production. Let's look at what steps are necessary to run yesterday's example on OpenShift Origin.

Why Should An Enterprise Java Developer Care?
But first of all, let's briefly look at why an enterprise Java developer should even care about any of this. There is something about the recent hype and buzz that makes me wonder a bit. For sure, these technologies make an interesting playing field, and you can spend hours downloading container images and running them on your laptop. But bringing them into production has been a challenge so far. Nigel Poulton has a really nice blog post up about a deprecated feature in Docker, and it contains another gem: a paragraph called "Enterprise Impact". The main quote is:

"I’m sure doing this kind of stuff is done all the time in cool hipster companies [...] But it’s absolutely not done in rusty old enterprises [...]".
(Nigel Poulton) 

And I can absolutely second that. Enterprise developers, architects, and project managers are taking a much slower and more conservative approach to adopting all those technologies, and they are looking for ways to successfully manage infrastructures and projects. All those technologies will find their way into our daily work life, but they will arrive in a more manageable form. So we're just doing our homework by educating ourselves about all of this and evaluating solutions that will help us with it. But enough general thoughts; let's start to deploy and scale a Java EE application.

Prerequisites
Install and run OpenShift Origin and follow the steps to build a WildFly Swarm JAX-RS microservice in a Docker container, because that is the example I'm going to deploy and scale further.
(NOTE: I am using both the all-in-one VM from the OpenShift project and the Vagrant image delivered by the Fabric8 project interchangeably. They work pretty much the same, and both rely on OpenShift Origin. If you see URLs ending in .f8, e.g. https://vagrant.f8:8443, in one of the commands or examples, you can use localhost or other host mappings interchangeably.)

What Is OpenShift Origin?
OpenShift Origin is the upstream open source version of Red Hat's distributed application system, OpenShift. The project was launched to provide a platform on which development teams could build and manage cloud-native applications on top of Docker and Kubernetes. You can find the source code on GitHub, and if you've got great ideas for improving OpenShift Origin, roll up your sleeves and join the community.
There is a lot to know to master all the integrated technologies. But the community is working hard to make this as understandable and manageable as possible for us, the enterprise Java developers. To give you a brief overview of OpenShift, this is a simple diagram of how everything works:


You see a lot of common parts here if you've been following the latest buzz around Docker and Kubernetes. A request comes in via a client and ends up in the routing layer. It gets dispatched to a service and hits a pod, which is running one of our Docker images in a container. The pods are controlled by replication controllers. There is a lot more to it, of course, but this should be all you need to understand for now to get a first idea of the whole thing.
Another, more detailed overview gives you a more precise idea of the parts we are going to work with today.

Of particular interest for now are the integrated Docker registry, the image stream, the deployment configuration, and the routing to our services.

The Basics - Administration UI and Client Tools
After you have set up your Vagrant box and fired it up, you can access the web-based administration console by browsing to https://localhost:8443. The all-in-one VM comes without configured security, which means the "Allow All" identity provider is used. You can log in with any non-empty username and password. The "admin" user is the administration user with all rights; logging in with admin/admin gives you full power on Origin. The web-based administration console is good for looking at log files and the overall picture. It is not (yet) fully featured and doesn't allow you to tweak or change things. First and foremost, you need to use the command-line tool "oc". And similar to the web administration, you also have to log in to OpenShift:
oc login https://localhost:8443
You are also prompted for a username and password (admin/admin) and presented with a list of projects:
Authentication required for https://vagrant.f8:8443 (openshift)
Username: admin
Password:
Login successful.

Using project "default".
You have access to the following projects and can switch between them with 'oc project <projectname>':
  * default (current)
  * openshift
  * openshift-infra
Now you're ready for some administration in OpenShift.

Exposing the Internal Docker Registry
If we want to run a dockerized application in OpenShift which isn't available on Docker Hub, we need to push it to the OpenShift Docker registry. By default it isn't externally exposed, so the first thing to do is to expose the built-in OpenShift Docker registry via a route.
oc create -f registry-route.json
The JSON file contains the definition for the route and is checked into my GitHub repository. Make sure to adjust the host name in line 8 to your needs. For this example to work, I added the following mapping to my hosts file on Windows:
172.28.128.4 registry.vagrant.f8
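For orientation, a minimal route definition of this kind might look like the following sketch. The route name and the docker-registry service name are assumptions (docker-registry is the default name of the registry service in Origin), not a copy of the file from the repository:

```json
{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
    "name": "registry-route"
  },
  "spec": {
    "host": "registry.vagrant.f8",
    "to": {
      "kind": "Service",
      "name": "docker-registry"
    }
  }
}
```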
When the route has been successfully created, all you have to do is set your environment accordingly (you will have done this already if you followed my intro blog posts, so this is just a reminder):
set DOCKER_HOST=tcp://vagrant.f8:2375
Creating A Project And A User
Let's create a new project for our example. For namespace reasons, we will name the project exactly after the user and image name: in this example, "myfear".
oc new-project myfear --description="WildFly Swarm Docker Image on OpenShift v3" --display-name="WildFly Swarm Project"
The description and display name are optional, but they make things look better in the web UI.


Let's create a user "myfear" by simply logging in as:
oc login https://vagrant.f8:8443 -u myfear

Tweaking The Example
We need to change some parts of the pom.xml from yesterday's example. First of all, we need to tell the Docker Maven plugin that it should use the private registry running at registry.vagrant.f8:80. Wondering why this isn't port 5000? Because we exposed the registry service via OpenShift, and the HAProxy did it via port 80. Uncomment the two lines in the pom.xml:
<docker.host>tcp://vagrant.f8:2375</docker.host>
<docker.registry>registry.vagrant.f8:80</docker.registry>
And get the login token for the user myfear via the oc client tools:
oc whoami -t
which will output something like this:
ykHRzUGGu-FAo_ZS5RJFcndjYw0ry3bskMgFjqK1SZk
Now update the token in the <authConfig> element of the pom. That's basically it.
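For orientation, the element might look roughly like this. The structure is an assumption based on the plugin's usual registry authentication and is not copied from the actual pom.xml; the token obtained above goes in as the password, while the username matters little for the OpenShift registry:

```xml
<authConfig>
  <!-- any non-empty username works; the token is what authenticates -->
  <username>myfear</username>
  <password>ykHRzUGGu-FAo_ZS5RJFcndjYw0ry3bskMgFjqK1SZk</password>
</authConfig>
```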

Build And Push The Image
The image was already built in my earlier blog post, but let's just do it again here:
mvn clean install docker:build
Now push the image to our OpenShift Docker Registry:
mvn docker:push
This will output the progress of pushing the image to registry.vagrant.f8:80/myfear/swarm-sample.

Run A Docker Image On OpenShift
Now we just use the regular way to spin up a new Docker image on OpenShift:
oc new-app --docker-image=myfear/swarm-sample:latest
And watch what happens: OpenShift actually created several resources behind the scenes in order to handle deploying and running this Docker image. First, it made a service, which identifies a set of pods that it will proxy and load balance. Services are assigned an IP address and port pair that, when accessed, redirect to the appropriate back end. The reason you care about services is that they basically act as a proxy/load balancer between your pods and anything that needs to use the pods running inside the OpenShift environment. Get a complete description of what OpenShift created from our image by using the describe command:
oc describe service swarm-sample
Which outputs:
Name:                   swarm-sample
Namespace:              myfear
Labels:                 app=swarm-sample
Selector:               app=swarm-sample,deploymentconfig=swarm-sample
Type:                   ClusterIP
IP:                     172.30.25.44
Port:                   8080-tcp        8080/TCP
Endpoints:              172.17.0.5:8080
Session Affinity:       None
No events.
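The same information rendered as the underlying object: a hedged YAML sketch of the Service, reconstructed from the describe output above (field names may differ slightly between OpenShift/Kubernetes versions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: swarm-sample
  namespace: myfear
  labels:
    app: swarm-sample
spec:
  type: ClusterIP
  clusterIP: 172.30.25.44
  selector:                        # which pods the service proxies to
    app: swarm-sample
    deploymentconfig: swarm-sample
  ports:
  - name: 8080-tcp
    port: 8080                     # service port
    targetPort: 8080               # container port (EXPOSEd by the image)
```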
The one thing we're missing so far is the external mapping via a route. Remember what we did for the Docker registry? This is the next and last step.

oc expose service swarm-sample --hostname=wildfly-swarm.vagrant.f8
And as you might have guessed, we also need to map the hostname in the hosts file:
172.28.128.4    wildfly-swarm.vagrant.f8
And we're done. Why didn't I use a JSON file to create the route? Because I wanted to show you that it can be easier: as long as the image uses the correct EXPOSE definitions for its ports, the oc expose command does the job without having to mess around with JSON. The result is the same.

Browse to: http://wildfly-swarm.vagrant.f8/rs/customer and see the output:
{"text":"WildFly Swarm Docker Application on OpenShift at http://wildfly-swarm.vagrant.f8/rs/ - Hostname: swarm-sample-1-7mmd7"}
The hostname is the pod on which our container is running.

Next up in this series: scaling and load balancing our little example. Stay tuned for more! Have questions or ideas about more Java EE, Docker, and OpenShift? Please let me know, and follow me on Twitter @myfear.

Thursday, October 8, 2015

A WildFly Swarm JAX-RS Microservice in a Docker Container

17:19 Thursday, October 8, 2015 Posted by Unknown
Everybody is talking about microservices these days, and there are plenty of opinions and ideas, but very few examples of how to apply those principles at an enterprise level. One thing is for sure: even at conferences just a couple of days ago, I rarely found anyone who was running a Docker container in production. At least a reasonable number of hands went up when I asked about first experiences and whether someone had played around with it. And looking at all the operations-level knowledge (OS, networking, etc.) that is required to run a containerized infrastructure, I can understand all this. A lot has to be done to make this easier for enterprise Java developers. There are indeed some ways we can work with day-to-day tools and combine them with the latest technologies to educate ourselves. One of them is WildFly Swarm, a lightweight and easy way to build fully contained Java EE applications. And this blog post is going to show you how to run one locally on Docker.

What is WildFly Swarm?
WildFly is a lightweight, flexible, feature-rich, Java EE 7 compliant application server. WildFly 9 even introduced a 27 MB Servlet-only distribution. Both are solid foundations for your Java enterprise projects. The most recent version, WildFly 10.CR2, will be the foundation for Red Hat's next supported Java EE server offering, Enterprise Application Platform 7.
WildFly Swarm moves away from the static bundling of various profiles and allows you to build your own custom-feature Java EE runtime. But WildFly Swarm isn't just about a customized application server; it is about bundling your application, including the relevant application server components, into a single executable. This is also called a "fat jar", which can simply be run using java -jar. And while we're talking about it: microservices usually bring the complete application plus its stack with them, so you can think of every WildFly Swarm application as an independent and fully contained microservice.

Turning A Java EE Application into A Fat-Jar
A Java EE application can be packaged as a WildFly Swarm fat jar by adding a Maven dependency and a plugin. The complete source code for this simple JAX-RS sample is available at https://github.com/myfear/WildFlySwarmDockerSample/. The application itself exposes an endpoint, /rs/customer, which just outputs some text. The real magic is in the pom.xml file, which we'll walk through now.
First comes the dependency for the Java EE 7 API, and after that the WildFly Swarm JAX-RS dependency:

<dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>wildfly-swarm-jaxrs</artifactId>
    <version>${version.wildfly-swarm}</version>
</dependency>

The WildFly Swarm plugin takes care of packaging the application:

<plugin>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>wildfly-swarm-plugin</artifactId>
    <version>${version.wildfly-swarm}</version>
    <executions>
        <execution>
            <goals>
                <goal>package</goal>
            </goals>
        </execution>
    </executions>
</plugin>

That's about all the magic. You can build the application with "mvn package". You will find the war file itself and an additional attachment, "swarm-sample-1.0-SNAPSHOT-swarm.jar", in the target folder. If you open it, you'll find an m2repo folder with all the dependent libraries, and your app itself bundled in the _bootstrap\ folder. You can run it directly from the command line in your Maven project (Windows users might run into this issue):
java -jar target/swarm-sample-1.0-SNAPSHOT-swarm.jar
Pointing the browser to http://localhost:8080/rs/customer will show you some JSON:
{"text":"WildFly Swarm Docker Application on OpenShift at http://192.168.99.100:32773/rs/ - Hostname: 093ca3279a43"}

Dockerizing WildFly Swarm
The WildFly Swarm project has some Docker examples up on GitHub, mostly bash scripts and some wrappers to dockerize your project. But there is something even better: the Docker Maven plugin by Roland Huss. I have used it a couple of times before, and it is also used in this example. All you have to do is add the plugin to your pom.xml:
<plugin>
    <groupId>org.jolokia</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>${docker.maven.plugin.version}</version>
</plugin>
The configuration is a bit more tricky (thanks to Roland for all the email support he gave me over the last couple of days!). First of all, the basics are easy: add an image to the plugin configuration and name it accordingly. I inherit from jboss/jdk-base:8, and the image gets the name and tag myfear/swarm-sample:latest (lines 77ff). The build configuration exposes port 8080 and defines the relevant entry point (the command to start Java with the -jar parameter). The assembly of the image needs to include the project attachments and declare the attachment as a dependency. Make sure that the output service mapping and the basedir match the entry-point argument.
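Since the post references the configuration only by line numbers, here is a hedged sketch of what such an image configuration for the Jolokia Docker Maven plugin might look like. Element names follow that plugin's conventions of the time, and the /maven path and descriptor reference are assumptions, not a copy of the actual pom.xml:

```xml
<configuration>
  <images>
    <image>
      <!-- image name and tag as described in the text -->
      <name>myfear/swarm-sample:latest</name>
      <build>
        <!-- base image to inherit from -->
        <from>jboss/jdk-base:8</from>
        <ports>
          <port>8080</port>
        </ports>
        <!-- start the fat jar; the path must match the assembly basedir -->
        <entryPoint>
          <exec>
            <arg>java</arg>
            <arg>-jar</arg>
            <arg>/maven/swarm-sample-1.0-SNAPSHOT-swarm.jar</arg>
          </exec>
        </entryPoint>
        <assembly>
          <basedir>/maven</basedir>
          <descriptorRef>artifact</descriptorRef>
        </assembly>
      </build>
    </image>
  </images>
</configuration>
```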

Let's give it a Test-Run
Make sure you have docker-machine set up on your host. Create a dev machine and configure your environment variables accordingly. Now you can run:
mvn clean install docker:build docker:start -Ddocker.follow
(NOTE: A bug in the 10.3.5 Docker Maven plugin currently prevents it from pulling the base image. You need to manually execute docker pull jboss/jdk-base:8 before doing the Maven run.)
The project is built, and a container is started from the image.


Congratulations, you now have a running Java EE microservice in your local Docker instance. The next blog post will look at how to take this image, run it on OpenShift Origin, and scale it to your needs.

Monday, October 5, 2015

Quick Tip: Running WildFly Docker Image on OpenShift Origin

11:31 Monday, October 5, 2015 Posted by Unknown
On to a new week. There has been plenty of travel for me recently, and it won't stop soon. But I had some time to try out OpenShift Origin and run it on my Windows environment. There is an entry-level blog post from a couple of days ago on how to set everything up. Now it was about time to just run a vanilla Docker image on it.

Prerequisites
Get your Origin installation up and running, and make sure to also install the OpenShift binaries locally. The OpenShift team released the all-in-one VM on a separate, developer-friendly, and good-looking website a couple of days after my post. So all you need to remember is this address: http://www.openshift.org/vm/

Get your OpenShift Environment Up
This is a single vagrant up command. If it succeeded, you should be able to access the local admin console via your browser at https://localhost:8443/ and also log in with the client tools from the command line:
oc login https://localhost:8443
Use admin/admin as username/password.

Create A Project And Run WildFly
The first thing to do is create a new OpenShift project. We want to separate this a bit from the default. In the end, think of it as a namespace in which we can just play around a bit:
oc new-project wildfly-tests --description="WildFly and Docker on OpenShift v3" --display-name="WildFly Test Project"
OpenShift doesn't directly expose a Docker daemon, so you need to use the oc command-line tool to run an image. There are some (unsupported) JBoss community images available and listed on http://www.jboss.org/docker/. I am interested in running the latest WildFly 9 for this test.
oc new-app --docker-image=jboss/wildfly
If you watch the web console, you will see that a deployment is running and the Docker image gets pulled.


Depending on your connection, this might take some time. But when it's finished, you will see a green bar that states "Running" and also shows an IP address. Let's see if everything went well and the WildFly instance is up and running. We need to see the logs for our pod. Let's list the pods:
oc get pods
NAME              READY     STATUS    RESTARTS   AGE
wildfly-1-jzvsj   1/1       Running   0          11m
and see the logs:
oc logs wildfly-1-jzvsj
Note that the pod name will most likely be different in your environment. The command should output the WildFly logs as you are used to them. For now, we have a pod running. Next, we need to expose this pod's port via a service to the external world. But first of all, we need to decide which domain name we want to expose it under. Add the following entry to your hosts file:
127.0.0.1 wildfly.openshiftdev.local
And execute the following command to add an external route to the service:
oc expose service wildfly --hostname=wildfly.openshiftdev.local
Browse to the services tab in the console and see that the route was created for the service.

The only thing left to do now is change the port-forwarding rules in the VirtualBox console: add port 80 from the host to the guest.
Now you can access the WildFly instance via http://wildfly.openshiftdev.local/. Congratulations!

Trouble Shooting
If you're running anything other than the all-in-one VM, for example the Fabric8 Vagrant image, you will need to change the security settings in OpenShift. SSH into the instance, log in via the oc command line, and edit the security settings:
oc edit scc restricted
Change the runAsUser.Type strategy to RunAsAny. This allows images to run as the root UID if no USER is specified in the Dockerfile.
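For orientation, the relevant fragment inside the editor would look roughly like this (a hedged sketch; the many surrounding fields of the SCC are omitted, and MustRunAsRange is the usual default for the restricted SCC):

```yaml
# change the strategy type from its default (MustRunAsRange) to RunAsAny
runAsUser:
  type: RunAsAny
```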