Enterprise grade Java.
You'll read about Conferences, Java User Groups, Java, Integration, Reactive, Microservices and other technologies.

Tuesday, April 12, 2016

Integration Architecture with Java EE and Spring

14:27 Tuesday, April 12, 2016 Posted by Markus Eisele
The O'Reilly Software Architecture Conference in New York happens this week. And I had the pleasure to give a tutorial together with Josh Long about how to integrate Java EE and Spring. We've been joking about this one for a while. The super-stupid, biased view of both technologies that some people have in mind has bugged both of us for a while. Another important reason for this talk was that we both care about the modernisation of old applications. There is so much legacy software out there that is easily 10+ years old. And you find those legacy applications in both technologies. This is why we wanted to help people understand how to modernise them and survive the transition phase.

A little history about Spring and Java EE
The first part of the talk caught up on a little historical background of both technologies: where they came from, how they evolved, and how they arrived at the state they are in today. Both evolved significantly since their inception, and the question of what to choose today can easily be answered with a single sentence: "Choose the right tool for the right job". But you can even mix and match, for many reasons.

Spring on Java EE
There is a broad space of problems where you might think about using Spring on top of Java EE. While EE has been around and evolved a lot, we had to learn that you can't really innovate in a standards body. This led to more than just a handful of features that are left to be desired if you build a reasonably modern application. Some of those gaps include the security space (social logins), NoSQL integration, and enterprise integration in general. And while you are free to pick from open or closed source offerings on top of Java EE to close them, Spring most often has an answer in the family, which makes it easy to use the same programming model and have an integrated offering. Plus, the Spring framework has a very long tail: Spring Framework 4 runs on Servlet 2.5+ (2006!), Java EE 6 (2009) and Java 6+. That makes it very easy to use modern features even on the most outdated base platform. Find the demo code in my github repository and enjoy how easy it is to deploy a Spring war to a Java EE server and just use the APIs.
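To make this more tangible, here is a minimal sketch of what bootstrapping Spring inside the Servlet container of a Java EE server can look like. This is not the demo code from the repository, just an illustration: the class names are mine, and it assumes Spring Framework 4 on a Servlet 3.0+ capable server.

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class SpringWarInitializer implements WebApplicationInitializer {

    // Illustrative, empty Spring configuration; a real app registers its beans here.
    @Configuration
    public static class AppConfig {
    }

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.register(AppConfig.class);

        // Register Spring's DispatcherServlet programmatically -- no web.xml needed.
        ServletRegistration.Dynamic dispatcher =
                servletContext.addServlet("dispatcher", new DispatcherServlet(context));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

Because the initializer is discovered through the Servlet 3.0 SPI, the same war deploys unchanged to WildFly, GlassFish or any other Java EE server.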

Java EE on Spring
But you can also turn this around and use Java EE APIs with Spring. The reasons you might want to do this are plenty: it can be a first migration step towards Spring while simply re-using some of your old code. Plus, you want to use standards where standards make sense and where everything else would be too invasive. Examples include JTA, JPA, JSR 303, JSR 330, JCA, JDBC, JMS, Servlets, etc.
And there is also an example app which you can run as a Spring Boot based fat-jar while using (mostly) Java EE APIs in it.
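To give you an idea of that mix, here is a minimal sketch of a Spring Boot application using the standard JSR 330 annotations instead of Spring's own; it assumes javax.inject is on the classpath, and the class names are invented for illustration. Spring resolves @Named and @Inject just like @Component and @Autowired.

import javax.inject.Inject;
import javax.inject.Named;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// A standard JSR 330 bean -- no Spring annotations anywhere.
@Named
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@SpringBootApplication
public class Application implements CommandLineRunner {

    // Standard JSR 330 injection, handled by the Spring container.
    @Inject
    private GreetingService greetingService;

    @Override
    public void run(String... args) {
        System.out.println(greetingService.greet("Java EE on Spring"));
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}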

Technical Integration and Microservices
The last part of the presentation touched on technical integration between two systems and the technologies supported in both worlds. We also talked about microservices designs and answered a bunch of questions over the course of the three hours.
I really enjoyed it and have to admit that Josh is an amazing presenter; I learned a hell of a lot over the last couple of days working with him! It's a pleasure to know you, Josh! Make sure to follow him on Twitter @starbuxman.

Thursday, April 7, 2016

Your first Lagom service - getting started with Java Microservices

09:07 Thursday, April 7, 2016 Posted by Markus Eisele
I've been heads-down writing my next O'Reilly report and haven't had enough time to blog in a while. Time to catch up here and give you a real quick start with the new microservices framework named Lagom. It is different from what you might know from Java EE or other application frameworks. And this is both a challenge and an opportunity for you to learn something new. If you can wait a couple more days, register to be notified when my new report is available and learn everything about the story behind Lagom and how to get started. I will walk you through an example application and introduce the main concepts in more detail than I could in a blog post. This post is for the impatient who want to get started today and figure everything out themselves.

Some background
Microservices are everywhere these days, and more and more is being unveiled about what it takes to build a complex distributed system with the existing middleware stacks. And there are far better alternatives and concepts for implementing an application as a microservices-based architecture. The core concepts of reactive microservices were introduced by Jonas Bonér in his report Reactive Microservices Architecture, which is available for free after registration. Lagom is the implementation of the described concepts. It uses technologies that you might have heard about but probably rarely used before as a Java EE developer: mainly Akka and Play. But for now, let's just forget about them, because Lagom provides you with a great abstraction on top and gives you everything you need to get started.

Prerequisites
Have Activator and Java 8 installed. Activator is something that you probably also haven't heard about. It is built on top of sbt and helps you get started with your projects and much more. A Lagom system is typically made up of a set of sbt builds, each build providing multiple services. The easiest way to get started with a new Lagom system is to create a new project using the lagom Activator template. No need for anything else right now. You probably want to have an IDE installed; IntelliJ or Eclipse should be good for now.

Setting up your first project
Time to get to see some code. Let's generate a simple example from the lagom-java template:
$ activator new first-lagom lagom-java

Change into the newly generated folder "first-lagom" and issue the sbt command to create an Eclipse project.
$ activator eclipse

A bunch of dependencies are downloaded, and after the successful execution you can open Eclipse and use the Import Wizard to import Existing Projects into your Workspace. Note that if you are using the Immutables library with Eclipse, you need to set this up, too.

Lagom includes a development environment that lets you start all your services by simply typing runAll in the activator console. Open the terminal and cd to your Lagom project:
$ activator runAll
The output looks similar to this:
[info] Loading project definition from /Users/myfear/projects/first-lagom/project
[info] Set current project to first-lagom (in build file:/Users/myfear/projects/first-lagom/)
[info] Starting embedded Cassandra server
........
[info] Cassandra server running at 127.0.0.1:4000
[info] Service locator is running at http://localhost:8000
[info] Service gateway is running at http://localhost:9000
[info] Compiling 2 Java sources to /Users/myfear/projects/first-lagom/helloworld-api/target/scala-2.11/classes...
[info] Compiling 1 Java source to /Users/myfear/projects/first-lagom/hellostream-api/target/scala-2.11/classes...
[info] Compiling 2 Java sources to /Users/myfear/projects/first-lagom/hellostream-impl/target/scala-2.11/classes...
[info] Compiling 6 Java sources to /Users/myfear/projects/first-lagom/helloworld-impl/target/scala-2.11/classes...
[info] application - Signalled start to ConductR
[info] application - Signalled start to ConductR
[info] Service hellostream-impl listening for HTTP on 0:0:0:0:0:0:0:0:26230
[info] Service helloworld-impl listening for HTTP on 0:0:0:0:0:0:0:0:24266
[info] (Services started, use Ctrl+D to stop and go back to the console...)
Now go and try out your first service by visiting http://localhost:9000/api/hello/World. You're all set for the next blog posts, where I am going to walk you through the example in more detail. If you can't wait, go ahead and read the Lagom Getting Started guide.
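To give you a taste of what you just ran: the helloworld-api project contains a service interface roughly like the following sketch. Treat it as illustrative rather than a verbatim copy of the template, because the exact signatures depend on the Lagom version you generated.

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.restCall;

import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.api.transport.Method;

// The interface lives in the *-api project; the *-impl project provides the behaviour.
public interface HelloService extends Service {

    ServiceCall<NotUsed, String> hello(String id);

    @Override
    default Descriptor descriptor() {
        // Maps GET /api/hello/:id onto the hello() method; the service gateway
        // on port 9000 routes the request to the service instance.
        return named("helloservice").withCalls(
                restCall(Method.GET, "/api/hello/:id", this::hello)
        ).withAutoAcl(true);
    }
}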

Thursday, March 24, 2016

Free Mini Book about Reactive Microservices

15:06 Thursday, March 24, 2016 Posted by Markus Eisele
Working with and talking about microservices has been my focus for a while already. And since I started working at Lightbend I have already had the pleasure to work with some amazing people. One of them is Jonas Bonér, and it has been a real pleasure helping with the creation of this little mini book about reactive microservices. Many of the concepts described in this book are the foundation for our newly open sourced microservices framework Lagom. The book eases getting started in the world of JVM-based microservices and reactive systems, and introduces all the important aspects that you should have a basic understanding of.

And the best part is that it is available for free at lightbend.com.

Written for architects and developers that must quickly gain a fundamental understanding of microservice-based architectures, this free O’Reilly report explores the journey from SOA to microservices, discusses approaches to dismantling your monolith, and reviews the key tenets of a Reactive microservice:

  • Isolate all the Things
  • Act Autonomously
  • Do One Thing, and Do It Well
  • Own Your State, Exclusively
  • Embrace Asynchronous Message-Passing
  • Stay Mobile, but Addressable
  • Collaborate as Systems to Solve Problems

And here is the full abstract:
Still chugging along with a monolithic enterprise system that’s difficult to scale and maintain, and even harder to understand? In this concise report, Lightbend CTO Jonas Bonér explains why microservice-based architecture that consists of small, independent services is far more flexible than the traditional all-in-one systems that continue to dominate today’s enterprise landscape.

You’ll explore a microservice architecture, based on Reactive principles, for building an isolated service that’s scalable, resilient to failure, and combines with other services to form a cohesive whole. Specifically, you’ll learn how a Reactive microservice isolates everything (including failure), acts autonomously, does one thing well, owns state exclusively, embraces asynchronous message passing, and maintains mobility.

Bonér also demonstrates how Reactive microservices communicate and collaborate with other services to solve problems. Get a copy of this exclusive report and find out how to bring your enterprise system into the 21st century.

Jonas Bonér is Founder and CTO of Lightbend, inventor of the Akka project, co-author of the Reactive Manifesto and a Java Champion. Learn more at: http://jonasboner.com.

Friday, March 11, 2016

Review: "Learning Akka" by Jason Goodwin

08:43 Friday, March 11, 2016 Posted by Markus Eisele
I haven't done a review in a while. It's time to dive a little deeper into the technical portfolio of Lightbend. Today it is Akka. A book with this title is the ideal way to start with a new technology in general. And for all my Java readers: rest assured that all examples in this book are in Java 8 (as well as in Scala).
A big "Thank you!" to Packt Publishing who provided the book to me for review.

Abstract
Software today has to work with more data, more users, more cores, and more servers than ever. Akka is a distributed computing toolkit that enables developers to build correct concurrent and distributed applications using Java and Scala with ease, applications that scale across servers and respond to failure by self-healing. As well as simplifying development, Akka enables multiple concurrency development patterns with particular support and architecture derived from Erlang’s concept of actors (lightweight concurrent entities). Akka is written in Scala, which has become the programming language of choice for development on the Akka platform.

Learning Akka aims to be a comprehensive walkthrough of Akka. This book will take you on a journey through all the concepts of Akka that you need in order to get started with concurrent and distributed applications and even build your own.

Beginning with the concept of Actors, the book will take you through concurrency in Akka. Moving on to networked applications, this book will explain the common pitfalls in these difficult problem areas while teaching you how to use Akka to overcome these problems with ease.

The book is an easy to follow example-based guide that will strengthen your basic knowledge of Akka and aid you in applying the same to real-world scenarios.
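To give you a first impression of what that looks like in Java, here is a minimal actor sketch, assuming the Akka 2.3/2.4-era Java API (UntypedActor) that the book targets; the names are mine, not taken from the book.

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

// A minimal actor: reacts to String messages, ignores everything else.
public class HelloActor extends UntypedActor {

    @Override
    public void onReceive(Object message) {
        if (message instanceof String) {
            // Reply asynchronously to whoever sent the message.
            getSender().tell("Hello, " + message, getSelf());
        } else {
            unhandled(message);
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef hello = system.actorOf(Props.create(HelloActor.class), "hello");
        // tell() never blocks; the message is processed on another thread.
        hello.tell("Akka", ActorRef.noSender());
    }
}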

Book: "Learning Akka"
Language: English
Paperback: 274 pages
Release Date: December 30, 2015
ISBN-10: 1784393002
ISBN-13: 978-1784393007

The Author
Jason Goodwin (GitHub: jasongoodwin) is a developer who is primarily self-taught. His entrepreneurial spirit led him to study business at school, but he started programming when he was 15 and always had a high level of interest in technology. This interest led his career to take a few major turns away from the business side and back into software development. His journey has led him to working on high-scale distributed systems. He likes to create electronic music in his free time.

He was first introduced to an Akka project at a Scala/Akka shop—mDialog—that built video ad insertion software for major publishers. The company was eventually acquired by Google. He has also been an influential technologist in introducing Akka to a major Canadian telco to help them serve their customers with more resilient and responsive software. He has experience teaching Akka and functional and concurrent programming concepts to small teams there. He is currently working via Adecco at Google.

The Content
Take the preface, index and praise pages away and you end up with 216 pages of pure content, divided into nine chapters.
Chapter 1: Starting Life as an Actor gives an introduction to the Akka Toolkit and Actor Model in general. It covers everything you need to know to get started including the setup of the development environment.
Chapter 2: Actors and Concurrency introduces you to the reactive design approach. The anatomy of, creation of, and communication with an actor together with the tools and knowledge necessary to deal with asynchronous responses and how to work with Futures—place-holders of results.
Chapter 3: Getting the Message Across helps you to understand the details of message delivery mechanisms in Akka. That includes different messaging patterns.
Chapter 4: Actor Lifecycle – Handling State and Failure introduces you to the actor's life cycle and explains what happens when an actor encounters an exceptional state and how you can change its state to modify its behaviour.
Chapter 5: Scaling Up guides you through how Akka can help us scale up more easily to make better use of our hardware, with very little effort.
Chapter 6: Successfully Scaling Out – Clustering comes in handy when you reach the physical limits of a single machine. Learn what happens when you hit the limit of a physical host and need to process the work across multiple machines.
Chapter 7: Handling Mailbox Problems digs deeper into what happens when you start to hit the limits of your actor system and how to describe how your system should behave in those situations.
Chapter 8: Testing and Design examines some general approaches to design and testing in greater detail.
Chapter 9: A Journey's End highlights a few outstanding features and modules that you may want to be aware of, and some next steps.

Writing and Style
The author thoughtfully explored all the content in every chapter and created a great resource for everybody who wants to start with the Akka toolkit. Sentences run a little long from time to time, and it is a technical book, but it is absolutely readable, also for non-native speakers.
Every chapter includes links to further resources and a little homework for you to do. Testing and test design are covered in a separate chapter but are also present in code samples throughout the complete book.

Conclusion and recommendation
This book attempts to give both the introductory reader and the intermediate or advanced reader an understanding of basic distributed computing concepts, and demonstrates how to use Akka to build fault-tolerant, horizontally scalable, distributed applications that communicate over a network. With all the examples present in both languages (Java 8 and Scala), it is the ideal entry point for a Java developer to dive into Akka and get a first idea of the concepts. It does not simply copy the documentation, and it covers many of the important topics and approaches you should understand to successfully build applications with Akka. But be aware that this book only gets you up to speed quickly. To fully understand the toolkit you should follow the further reading advice at the end of each chapter. Don't forget to use the above codes to get 50% off the eBook or 25% off the printed edition. Because the recommendation is to buy it!

Thursday, March 10, 2016

Microservices trouble? Lagom is here to help. Try it!

09:55 Thursday, March 10, 2016 Posted by Markus Eisele
The cake is baked. We're proud to announce that the new Apache-licensed microservice framework Lagom is available on GitHub! While other frameworks focus on packaging and instance startup, Lagom redefines the way Java developers build microservice-based applications. Services are asynchronous. Inter-service communication is managed for you. Streaming is available out of the box. Your microservices are resilient by nature. And you can program in the language you love the most: Java.

What is Lagom? And what does Lagom mean?
Lagom (pronounced [ˈlɑ̀ːɡɔm]) is a Swedish word meaning just right, sufficient. And as such it will help you build microservice-based applications in an easier way. Instead of having to find your own answers to how to effectively develop, debug and run tens of different services on your machine, you can finally focus on what really is important: the implemented business logic. Lagom takes care of the rest for you and eventually helps you stage and run your application in production. The design is based on three main principles. Lagom:
  1. Is asynchronous by default (see the sketch below).
  2. Favours distributed persistence patterns, in contrast to the traditional centralised database.
  3. Places a high emphasis on developer productivity.
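To illustrate the first principle: every service call returns a CompletionStage instead of a plain value, so nothing ever blocks the calling thread. A minimal sketch, assuming a HelloService interface along the lines of the one shown in the getting-started post above; again illustrative, as the exact signatures depend on the Lagom version.

import java.util.concurrent.CompletableFuture;

import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.ServiceCall;

public class HelloServiceImpl implements HelloService {

    // ServiceCall is a functional interface: Request -> CompletionStage<Response>.
    @Override
    public ServiceCall<NotUsed, String> hello(String id) {
        // completedFuture is fine for a constant; a real service would compose
        // asynchronous calls here instead of blocking.
        return request -> CompletableFuture.completedFuture("Hello, " + id + "!");
    }
}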

How do I get started?
Read through the quick setup documentation and watch the 11-minute getting started video by Mirco Dotta, who shows you that development is already familiar: use your favorite IDE and your favorite dependency injection tools. You leverage the old to build something new.

How can you give feedback?
That is easy. We're open source and have a couple of channels you can use to get in touch with the project. Start with subscribing to the mailing-list and reach out to us on the Gitter Lagom chat. We're also monitoring questions on StackOverflow which are tagged with Lagom.
And don't forget to follow @Lagom on Twitter for the latest information.


Wednesday, February 24, 2016

Taking off the red fedora. Hello Lightbend!

07:00 Wednesday, February 24, 2016 Posted by Markus Eisele
Almost two years at Red Hat JBoss Middleware have been a tremendous journey for me. Getting to know and work with so many amazing and gifted people on all kinds of Java EE and integration products and projects really made me realize there is far more talent in open source communities than anywhere else.
You’ve known and seen me at different conferences and Java User Group meetups, read my blogs, or followed me on Twitter. While I've been talking about middleware for many years, you’ll continue to hear me talk about enterprise grade Java going forward, focused on education about the latest trends in building enterprise systems in a reactive way with Java.

And as such I'm very happy to announce that I am joining Lightbend as of March 1st as their new Developer Advocate. Follow +Lightbend and @Lightbend. The importance of Java and the need to build today's enterprise grade systems differently will both be a big part of my future topics. If you have particular wishes and questions already, I'm happy to answer them via my Twitter handle @myfear, a comment on my blog or maybe in a complete blogpost.

My journey into containers and microservices architectures will also continue. Going forward I will continue to educate about how microservices architectures can integrate with and complement existing platforms, and I will also talk about how to successfully build resilient applications with Java.

I have looked in the mirror every morning and asked myself: "If today were the last day of my life, would I want to do what I am about to do today?" And whenever the answer has been "No" for too many days in a row, I know I need to change something.
Steve Jobs 

Make sure to subscribe to JBoss Weekly and @jbossdeveloper for the latest updates about JBoss, or to @rhdevelopers to learn all about the Red Hat Developers Program.

Wednesday, January 20, 2016

Running Any Docker Image On OpenShift Origin

16:07 Wednesday, January 20, 2016 Posted by Markus Eisele
I've been using OpenShift for a while now, for many reasons. First of all, I don't want to build my own Docker and Kubernetes environment on Windows, and second of all, I like the simple installation. After the Christmas holidays I decided to upgrade my machine to Windows 10. While I like the look and feel, it broke quite a bit of my networking and container installations, including the Docker and OpenShift environments. Now that I have everything up and running again, it is time to follow the microservices path a little further. The first thing is to actually get OpenShift up and running and set up a development environment to which we can simply push Docker images, without having to use any of the Source-2-Image or OpenShift build mechanisms.

Installing the all-in-one-VM
Download the all-in-one-vm image and import it into Vagrant. This image is based on OpenShift Origin and is a fully functioning OpenShift instance with an integrated Docker registry. The intent of this project is to allow web developers and other interested parties to run OpenShift V3 on their own computer. Given the way it is configured, the VM will appear to your local machine as if it was running somewhere off the machine. Which is exactly what I need to show you around OpenShift and introduce some more features. If you need a little more assistance, follow method 2 in this earlier blog post.
I also assume that you have docker-machine running. You can install it via the Docker Toolbox.

First steps in OpenShift
Fire up the machine via vagrant up and point your browser to https://localhost:8443/. Accept the certificate warning and enter admin/admin as the login. You're now browsing through the admin console. Let's create a new project to play around with:
oc login https://localhost:8443
# enter admin/admin as the credentials

oc new-project myfear --description="Testing random Docker Images on OpenShift" --display-name="Myfears test project"

# Add admin role to user myfear
oc policy add-role-to-user myfear admin -n myfear
The first thing to do is to actually get a MySQL database up and running. I want to use it in subsequent blog posts, and it's a good test to see if everything is working. Get the two json files from my github repository and execute the following commands:
oc create -f mysql-pod.json
oc create -f mysql-service.json
Go back to your browser, select the myfear project, and see the mysql service up and running with one pod.

Using the OpenShift Registry
You just witnessed how OpenShift pulled the mysql image and started a pod with a container in it. Obviously this image came from the built-in registry. But how can one actually upload a docker image to the internal OpenShift registry? Ssh into the vagrant machine and look around a bit:
vagrant ssh
docker ps
You'll see a lot of running containers, and one of them is running the openshift/openshift-registry-proxy. This little gem forwards port 5000 from the internal docker registry to the vagrant vm. Open VirtualBox and look at the port forwarding rules there: another rule forwards port 5000 from the guest to the host. This means the internal OpenShift Docker registry is already exposed by default. But how do we push something there? The docker client requires a docker host to work. The OpenShift Docker daemon isn't exposed externally, and you can't just point your docker client to it.
This means you need another docker host on your machine which is configured to access the OpenShift docker registry as an external registry. I'm using docker-machine here, because it is extremely easy to create new docker hosts with it.
docker-machine create -d virtualbox dev
After a couple of seconds your "dev" vm is created and started. Now we need to find out what the host system's IP address is from the dev box. Ssh into the machine and get the ip of the default gateway:
docker-machine ssh dev
$ ip route | grep default

> 10.0.2.2
Now we need to stop the machine and add the ip address we found to the insecure registry part of the configuration:
docker-machine stop dev
edit ~/.docker/machine/machines/dev/config.json
# Add the found ip address plus the registry port to the HostOptions => EngineOptions => InsecureRegistry array
Afterwards it should look like this:
 "InsecureRegistry": [
                "10.0.2.2:5000"
   ]
Time to restart the dev vm and get the docker client configured for it:
docker-machine start dev
FOR /f "tokens=*" %i IN ('docker-machine env dev --shell cmd') DO %i
That's it for now. One important thing: the internal registry is secured, and we need to log in to it. Get the login token for the user "myfear" with the following commands:
oc login -u myfear
oc whoami -t
This will return something cryptic like: dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY. Now login to the registry:
docker login -u myfear -p dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY -e markus@someemail.org 10.0.2.2:5000
Make sure to use the correct username and token. You get a success message, and your login credentials are saved in the central config.json.

Build and push the custom image
Time to finally build the custom image and push it. I am using Roland's docker maven plugin again.
If you want to learn more about it, there is an older blog post about it. Also find the code in this github repository. Compare the pom.xml and make sure to update the docker.host and docker.registry properties
  <docker.host>tcp://192.168.99.101:2376</docker.host>
  <docker.registry>10.0.2.2:5000</docker.registry>
and the <authConfig> section with the correct credentials. Build the image with:
mvn clean install docker:build docker:push
If you run into an issue with the maven plugin not being able to build the image, you may need to pull the jboss/base-jdk:8 image manually first:
docker pull jboss/base-jdk:8
Let's check if the image was successfully uploaded by using the console and navigating to the overview => image streams page.
And in fact, the image is listed. Now we need to start a container with it and expose the service to the world:
oc new-app swarm-sample-discovery:latest --name=swarm
oc expose service swarm --hostname=swarm-sample-discovery.local
Please make sure to add the hostname mapping to your hosts or dns configuration (pointing to 127.0.0.1). As you can see, I am no longer using the docker image tag but the image stream name; OpenShift converted this internally.
Time to access the example via the browser http://swarm-sample-discovery.local:1080/rs/customer.
If you're wondering about the port, go back to the VirtualBox configuration and check the NAT section. The all-in-one vm actually assumes that you already have something running on port 80 and maps the vm's port 80 to host port 1080.
The application does very little for now, but I will use this in subsequent blog-posts to dig a little into service discovery options.
The console overview shows the two services with one pod each.


That's it for today. Thanks again to Roland for his help with this. Let me know if you run into issues and if you want to know anything else about OpenShift and custom images.

Tuesday, January 12, 2016

A Refresher - Top 5 Java EE 7 Frontend

19:57 Tuesday, January 12, 2016 Posted by Markus Eisele
The series continues. After the initial overview and Arjan's post about the most important backend features, I am now very happy to have Ed Burns (@edburns) finish the series with his favorite Java EE 7 frontend features.

Thanks to Markus Eisele for giving me the opportunity to guest post on his very popular blog. Markus and I go way back to 2010 or so, but I've not yet had the pleasure of guest posting. Markus asked me to cover the Java EE 7 Web Tier. Since EE 7 is a mature release of a very mature platform, much has already been published about it. Rather than rehash what has come before, I'm going to give my opinion about what I think are the important bits and a very high level overview of each.

If you're interested in learning more about this first-hand, please consider attending my full day training at JavaLand 2016.  I'm giving the training with modern finance and HTML5 expert Oliver Szymanski.  For details, please visit the javaland website.

First, a bit of historical perspective. Markus asked me to write about the Java EE 7 Web Tier. Let's take a look at that term, "web tier", or "presentation tier" as it is also called. If one is to believe the hype surrounding newer ideas such as microservices, the term itself is starting to sound a bit dated, because it implies a three tier architecture, with the other two tiers being "business logic" and "persistence". Surely three tiers is not micro enough, right? Well, the lines between these tiers are becoming more and more blurred over time as enterprises tinker with the allocation of responsibilities in pursuit of delivering the most business value with their software. In any case, Java EE has always been a well integrated collection of enterprise technologies for the Java platform, evolved using a consensus based open development practice (the Java Community Process or JCP) with material participation from leading stake holders. The "web tier" of this platform is really just the set of technologies that one might find useful when developing the "web tier" of your overall solution. This is a pretty big list:

WebSocket 1.0 JSR-356
JavaServer Faces 2.2 JSR-344
Servlet 3.1 JSR-340
JSON Processing 1.0 JSR-353
REST (JAX-RS) 2.0 JSR-339
Bean Validation 1.1 JSR-349
Contexts and Dependency Injection 1.1 JSR-346
Dependency Injection for Java 1.0 JSR-330
Concurrency Utilities for Java EE 1.0 JSR-236
Expression Language 3.0 JSR-341

For the purposes of this blog entry, let's take a look at the first five: WebSocket, JSF, Servlet, JSON, and JAX-RS. While the second five are surely essential for a professional web tier, it is beyond the scope of this blog entry to look at them.

WebSocket
JSF and WebSocket are the only two Java EE 7 specs that have a direct connection to the W3C HTML5 specification. In the case of WebSocket, there are actually three different standards bodies in play. WebSocket, the network protocol, is specified by RFC 6455 from the IETF. WebSocket, the JavaScript API, is specified as a sub-spec of HTML5 from the W3C. WebSocket, the Java API, is specified by the JCP under JSR-356. In all aspects of WebSocket, the whole point is to provide a message based, reliable, full-duplex client-server connection.

JSR-356 lets you use WebSocket in both client and server capacities from your Java SE and EE applications.

On the server side, it allows you to expose a WebSocket endpoint such that browsers can connect to it using their existing support for the WebSocket JavaScript API and network protocol. You declare your endpoints to the system either by annotating some POJOs, or by imperatively calling bootstrapping APIs from Java code, say from a ServletContextListener. Once the connection is established, the server can send and receive messages from/to any number of clients that happen to be connected at the same time. The runtime automatically handles connection setup and tear down.
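A minimal sketch of such an annotated endpoint (the path and class name are mine):

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// A POJO becomes a WebSocket endpoint via annotations; the container
// handles the HTTP upgrade handshake and the connection lifecycle.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message) {
        // The return value is sent back over the socket to the sending client.
        return "Echo: " + message;
    }
}

A browser can then connect with the standard JavaScript API, for example via new WebSocket("ws://localhost:8080/myapp/echo").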

The WebSocket Java client API allows Java SE applications to talk to WebSocket endpoints (Java or otherwise) by providing a Java analog to the W3C JavaScript WebSocket API.

Java Server Faces (JSF)
In JSF 2.2 we added many new features, but I will only cover three of them here.

HTML5 Friendly Markup enables writing your JSF pages in almost pure HTML (it must be well formed), without the need for the XML namespaces that some see as clumsy and difficult to understand. This is possible because the underlying HTML Basic JSF RenderKit (from JSF 1.0) provides all the necessary primitives to adopt mapping conventions from an arbitrary piece of HTML markup to a corresponding JSF UIComponent. For example, this is a valid JSF form:

        <form jsf:id="form">
           <input jsf:id="name" type="tel" jsf:value="#{complex.name}" />
           <progress jsf:id="progress" max="3" value="#{complex.progress}" />
        </form>

The only catch is the need to flag the element as a JSF component by use of a namespaced attribute.  This means you must declare at least one namespace in the <html> tag:

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:jsf="http://xmlns.jcp.org/jsf">

Faces Flows is a standardization of the page flow concept from ADF Task Flows and Spring Web Flow. Flows give you the ability to group pages together that have some kind of logical connection and need to share state. A flow defines a logical scope that becomes active when the flow is entered and is made available for garbage collection when the flow is exited. There is a rich syntax for describing flows, how they are entered, exited, relate to each other, pass parameters to each other, and more. There are many conveniences provided thanks to the Flows feature being implemented on top of Contexts and Dependency Injection (CDI). Flows can be packaged as jar files and included in your web application, enabling modularization of sub-sections of your web app.
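As a taste of the Java-based flow definition syntax, here is a sketch of a CDI producer method defining a two-page flow; the flow id and view names are invented for illustration:

import javax.enterprise.inject.Produces;
import javax.faces.flow.Flow;
import javax.faces.flow.builder.FlowBuilder;
import javax.faces.flow.builder.FlowBuilderParameter;
import javax.faces.flow.builder.FlowDefinition;

public class CheckoutFlowDefinition {

    @Produces
    @FlowDefinition
    public Flow defineFlow(@FlowBuilderParameter FlowBuilder builder) {
        builder.id("", "checkout"); // defining document id (empty) and flow id
        // Pages that belong to the flow and share its scope.
        builder.viewNode("start", "/checkout/start.xhtml").markAsStartNode();
        builder.viewNode("confirm", "/checkout/confirm.xhtml");
        // Leaving the flow: navigate back to a view outside of it.
        builder.returnNode("done").fromOutcome("/index");
        return builder.getFlow();
    }
}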

Just as Flows enable modularizing behavior, Resource Library Contracts (RLC) enable modularizing appearance. RLC provides a very flexible skinning system that builds on Facelets and lets you package skins in jar files.

Servlet
The most important new feature in Servlet 3.1 is the additional support for non-blocking IO. This builds on top of the major feature of Servlet 3.0 (from Java EE 6): async IO. The rapid rise of reactive programming indicates that Java apps can no longer afford to block for IO, ever. The four concerns of reactive programming (responsiveness, elasticity, resiliency, and being event based) are founded on this premise. Prior to non-blocking IO in Servlet 3.1, it was very difficult to avoid blocking in Servlet apps.

The basic idea is to allow the Servlet runtime to call your application back when IO can be done safely without blocking.  This is accomplished by virtue of new listener interfaces, ReadListener and WriteListener, instances of which can be registered with methods on ServletInputStream and ServletOutputStream.
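Here is a sketch of a non-blocking read, assuming a Servlet 3.1 container; the servlet name and the processing logic are placeholders:

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class NonBlockingUploadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        final AsyncContext context = request.startAsync();
        final ServletInputStream input = request.getInputStream();

        input.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] buffer = new byte[4096];
                // Only read while the container guarantees it will not block.
                while (input.isReady() && input.read(buffer) != -1) {
                    // process the chunk here
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                context.getResponse().getWriter().write("done");
                context.complete();
            }

            @Override
            public void onError(Throwable t) {
                context.complete();
            }
        });
    }
}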

When you add this feature to the async-io feature added in Servlet 3.0, it is possible to write Servlet based apps that can proudly sport the "We Are Reactive" banner.

JSON
From the outside perspective, the ability to parse and generate JSON in Java is certainly nothing new.  Even before Java EE 7, there were many solutions to this basic need.  Hewing close to the principle that standards are not for innovation, but to confer special status upon existing ideas, the JSON support in Java EE 7 provides the capability to parse and generate JSON with a simple Java API.  Reading can be done in a streaming fashion, with JsonParser, or in a bulk fashion using JsonReader.  Writing can be done in a streaming fashion with JsonGenerator.  Writing can be done in a bulk style with JsonBuilderFactory and JsonWriter.
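A small sketch of the bulk-style API (the values are made up); for large documents you would use the streaming JsonParser instead:

import java.io.StringReader;
import java.io.StringWriter;

import javax.json.Json;
import javax.json.JsonObject;

public class JsonDemo {

    public static void main(String[] args) {
        // Generate JSON in bulk style with the builder API.
        JsonObject person = Json.createObjectBuilder()
                .add("name", "Duke")
                .add("age", 20)
                .build();

        StringWriter out = new StringWriter();
        Json.createWriter(out).writeObject(person);
        System.out.println(out); // {"name":"Duke","age":20}

        // Read it back in bulk style.
        JsonObject parsed = Json.createReader(new StringReader(out.toString())).readObject();
        System.out.println(parsed.getString("name")); // Duke
    }
}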

JAX-RS
It is hard to overstate the importance of REST to the practice of modern enterprise software development for non-end-user facing software. I'd go so far as to say that gone are the days when people go to the javadoc (or JSDoc or appledoc etc.) to learn how to use an API. Nowadays, if your enterprise API is not exposed as a RESTful web service, you probably will not even be considered. JAX-RS is how REST is done in Java. JAX-RS has been a part of Java EE since Java EE 6, but it received the 2.0 treatment in Java EE 7. The big ticket features in 2.0 include:

  •  Client support
    In my opinion, the most useful application of this feature is using JUnit to do automated testing of RESTful services without having to resort to curl from continuous integration. Of course, you could use it for service-to-service interaction as well (see the sketch after this list).
  •  Seamless integration with JSON
    In most cases a simple @Produces("application/json") annotation on your HTTP method endpoint is sufficient to output JSON. Data arriving at your service in JSON format is also automatically made available to you in an easy to consume format from Java.
  •  Asynchronous support (Reactive again)
    This feature gives you the ability to hand off the processing required to generate a response to another thread, allowing the original thread to return immediately so no blocking happens. The async thread is free to respond when it is ready.
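Here is a sketch of both sides, the async server resource and the new client API; the URL and payload are invented, and a real application would use a managed executor instead of a raw thread:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("hello")
public class HelloResource {

    // Async server side: the request thread returns immediately,
    // and a worker thread resumes the response when it is ready.
    @GET
    @Produces("application/json")
    public void hello(@Suspended final AsyncResponse response) {
        new Thread(() -> response.resume("{\"greeting\":\"hello\"}")).start();
    }

    // Client side, e.g. from a JUnit test instead of curl.
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        String json = client.target("http://localhost:8080/api/hello")
                .request("application/json")
                .get(String.class);
        System.out.println(json);
        client.close();
    }
}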

Naturally, this only scratches the surface of the Java EE 7 web tier. For more details, a great place to start is the official Java EE 7 launch webinars.

I hope to see you at JavaLand!

Thank you, Ed, for taking the time to write this post. If you haven't yet, now is the time to play around with Java EE 7. Here are some resources to get you started with JBoss EAP 7 and WildFly:

Thursday, January 7, 2016

Get Up to Speed with Microservices in 8 hours

10:44 Thursday, January 7, 2016 Posted by Markus Eisele
Everybody is talking microservices these days, and Red Hat is doing some very cool developer events around the world. The latest one happened at the beginning of November last year. The amazing speaker lineup starts with special guest speaker Tim Hockin from the Google Cloud Management team, technical lead and co-founder of Kubernetes, along with Red Hat's James Strachan and Claus Ibsen. James created the Groovy programming language and is also a member of the Apache Software Foundation and a co-founder of a number of other open source projects such as Apache ActiveMQ, Apache Camel, and Apache ServiceMix. Claus Ibsen works on open source integration projects such as Apache Camel, fabric8 and hawtio, and is the author of the Camel in Action books. Tim, James, Claus and many more talk on areas such as Kubernetes for Java developers, microservices with Apache Camel, and mobile-centric architecture.

The complete 8-hour playlist is available for free on YouTube, and I just want to pick out some of my personal favorites.

Microservices in the Real World by Christian Posta
Beyond the many technology challenges of introducing microservices, organizations also need to adapt their existing development and operations processes and workflows to reap the bigger benefits of a microservices architecture, including continuous delivery style application delivery. This session reviews challenges a number of large enterprises have faced in looking to adopt microservices, and looks at how they've adapted on their ongoing journey. It also covers some of the end architectures these companies used as they incorporated these new architectural approaches and technologies with their existing people, skills and processes.


WildFly Swarm : Microservices Runtime by Lance Ball
With lightweight microservices dominating the dev chatter these days, traditional Java EE developers have spent a lot of time looking in the mirror and asking themselves, "Does my application look fat in this container?" or "How can I leverage my existing Java EE bits in a lightweight microservice?" or "What if I had Just Enough App Server™ to leverage the power and standards of Java EE, but did it all with a slimmed down, self-contained runnable that is easy to deploy and manage?". Well, maybe not that last one.

Enter WildFly Swarm. WildFly Swarm deconstructs the WildFly Java Application Server into fine-grained parts, allowing the selective reconstitution of those parts, together with your application into a self-contained executable - an "uberjar". The resulting artifact contains just enough Java EE to support whatever subset of the traditional APIs your application requires.

This talk introduces WildFly Swarm, and show you how it helps you bring your existing Java EE investments into the world of lightweight, easily deployed microservices. As a bonus, it shows you how WildFly Swarm helps you easily take advantage of other state-of-the-art components such as Keycloak, Ribbon, and Hystrix, integrating them seamlessly in your application.


Logging and Management for Microservices by Jimmi Dyson
Logging is a key part of making microservices work for you. This session helps you look at logs in a different way, in order to understand what your systems are doing and how they're interacting, so you can fix problems quickly and improve performance. You will understand the problems of collecting logs from your distributed microservices and discuss how to centralize them to get real value out of this goldmine of information.


Microservices Workflow: CI/CD by James Rawlings
We all know that in the real world there is more to developing than writing lines of code. This session explores how fabric8 has evolved to provide a platform that supports not only the development of microservices but also working with them, taking an idea from inception right through to running in a live environment.

With popular trends such as DevOps, we know that it is the culture of an organization that gives you greater agility and a better chance of success. Being able to communicate effectively with your cross functional teams increases productivity, reduces social conflicts, and establishes the all important feedback loops.

We look at how fabric8 provides out-of-the-box integration for hosted git services in Gogs, as well as agile project management with Taiga and social tools such as Lets Chat and Slack, to enable intelligent, extendable automation using Hubot, while providing a platform that is designed for the new age microservices team.

It also covers the integration of popular logging and metric tools that are prerequisites for continuous improvement. We need to understand not only how the platform is operating but also gain greater visibility into how it's being used. Being able to visualize how teams communicate in and outside of their unit can be seen as a first step towards measuring the culture within an organization. This can be extremely useful in identifying early signs of internal silos developing, as well as learning from high performing teams.



Look at the complete playlist on YouTube and find out more about the event and the other sessions on the redhat.com microservices developer day website.

Monday, January 4, 2016

How DevOps And Microservices Accelerate Application Delivery

17:38 Monday, January 4, 2016 Posted by Markus Eisele
Happy new year, everybody! While I'm officially still on vacation, I'd quickly like to point you to a recent DevOps and microservices article in the German DOAG Business News. You can download your copy directly via this link (PDF, ~300 KB).

The Business News is a DOAG trade journal and is published four times a year. It covers business relevant topics from a semi-technical perspective. Please be aware that the linked PDF is in German. If you want to learn more, I recommend the following articles: