Java EE and general Java platforms.
You'll read about Conferences, Java User Groups, Java EE, Integration, AS7, WildFly, EAP and other technologies.

Wednesday, January 20, 2016

Running Any Docker Image On OpenShift Origin

16:07 Wednesday, January 20, 2016 Posted by Markus Eisele
I've been using OpenShift for a while now, for two main reasons: I don't want to build my own Docker and Kubernetes environment on Windows, and I like the simple installation. After the Christmas holidays I decided to upgrade my machine to Windows 10. While I like the look and feel, it broke quite a bit of my networking and container installations, including the Docker and OpenShift environments. Now that I have everything up and running again, it is time to follow the microservices path a little further. The first step is to get OpenShift up and running and set up a development environment to which we can simply push Docker images without having to use any of the Source-2-Image or OpenShift build mechanisms.

Installing the all-in-one-VM
Download the all-in-one-VM image and import it into Vagrant. This image is based on OpenShift Origin and is a fully functioning OpenShift instance with an integrated Docker registry. The intent of this project is to allow web developers and other interested parties to run OpenShift V3 on their own computer. Given the way it is configured, the VM will appear to your local machine as if it were running somewhere off the machine. Which is exactly what I need to show you around OpenShift and introduce some more features. If you need a little more assistance, follow method 2 in this earlier blog post.
I also assume that you have docker-machine running. You can install it via the Docker Toolbox.

First steps in OpenShift
Fire up the machine via vagrant up and point your browser to https://localhost:8443/. Accept the certificate warning and enter admin/admin as the login. You're now browsing through the admin console. Let's create a new project to play around with:
oc login https://localhost:8443
# enter admin/admin as the credentials

oc new-project myfear --description="Testing random Docker Images on OpenShift" --display-name="Myfears test project"

# Add admin role to user myfear
oc policy add-role-to-user myfear admin -n myfear
The first thing to do is to get a MySQL database up and running. I want to use it in subsequent blog posts, and it's a good test to see if everything is working. Get the two JSON files from my github repository and execute the following commands:
oc create -f mysql-pod.json
oc create -f mysql-service.json
Go back to your browser, select the myfear project, and see the mysql service up and running with one pod.

Using the OpenShift Registry
You just witnessed how OpenShift pulled the mysql image and started a pod with a container from it. Obviously this image came from the built-in registry. But how can one actually upload a Docker image to the internal OpenShift registry? SSH into the vagrant machine and look around a bit:
vagrant ssh
docker ps
You'll see a lot of running containers, and one of them is running the openshift/openshift-registry-proxy. This little gem forwards port 5000 from the internal Docker registry to the vagrant VM. Open VirtualBox and look at the port forwarding rules there: another rule forwards port 5000 from the guest to the host. This means the internal OpenShift Docker registry is already exposed by default. But how do we push something there? The Docker client requires a Docker host to work, and the OpenShift Docker daemon isn't exposed externally; you can't just point your Docker client at it.
This means you need another Docker host on your machine which is configured to use the OpenShift Docker registry as an external registry. I'm using docker-machine here, because it is extremely easy to create new Docker hosts with it.
docker-machine create -d virtualbox dev
After a couple of seconds your "dev" VM is created and started. Now we need to find out what the host system's IP address is from the dev box. SSH into the machine and get the IP of the default gateway:
docker-machine ssh dev
$ ip route | grep default

> 10.0.2.2
Now we need to stop the machine and add the IP address we found to the insecure-registry part of the configuration:
docker-machine stop dev
edit ~/.docker/machine/machines/dev/config.json
# Add the found IP address plus the registry port to the HostOptions => EngineOptions => InsecureRegistry array
Afterwards it should look like this:
 "InsecureRegistry": [
                "10.0.2.2:5000"
   ]
Time to restart the dev VM and get the Docker client configured for it:
docker-machine start dev
FOR /f "tokens=*" %i IN ('docker-machine env dev --shell cmd') DO %i
That's it for now. One important thing: the internal registry is secured, and we need to log in to it. Get the login token for the user "myfear" with the following commands:
oc login -u myfear
oc whoami -t
This will return something cryptic like: dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY. Now log in to the registry:
docker login -u myfear -p dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY -e markus@someemail.org 10.0.2.2:5000
Make sure to use the correct username and token. You get a success message, and your login credentials are saved in the central config.json.

Build and push the custom image
Time to finally build the custom image and push it. I am using Roland's docker-maven-plugin again. If you want to learn more about it, there is an older blog post about it. You can also find the code in this github repository. Compare the pom.xml and make sure to update the docker.host and docker.registry properties:
  <docker.host>tcp://192.168.99.101:2376</docker.host>
  <docker.registry>10.0.2.2:5000</docker.registry>
and the <authConfig> section with the correct credentials. Build and push the image with:
mvn clean install docker:build docker:push
If you run into an issue with the maven plugin not being able to build the image, you may need to pull the jboss/base-jdk:8 base image manually first:
docker pull jboss/base-jdk:8
Let's check if the image was successfully uploaded by using the console and navigating to the overview => image streams page.
And in fact, the image is listed. Now we need to start a container with it and expose the service to the world:
oc new-app swarm-sample-discovery:latest --name=swarm
oc expose service swarm --hostname=swarm-sample-discovery.local
Please make sure to add a hostname mapping for swarm-sample-discovery.local to 127.0.0.1 in your hosts or DNS configuration. As you can see, I am no longer using the Docker image tag but the image stream name; OpenShift converts this internally.
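On Windows, for example, the required entry in C:\Windows\System32\drivers\etc\hosts looks like this:
127.0.0.1 swarm-sample-discovery.local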
Time to access the example via the browser at http://swarm-sample-discovery.local:1080/rs/customer.
If you're wondering about the port, go back to the VirtualBox configuration and check the NAT section. The all-in-one VM assumes that you already have something running on port 80 and therefore maps the VM's port 80 to host port 1080.
The application does very little for now, but I will use it in subsequent blog posts to dig a little into service discovery options.
The console overview shows the two services with one pod each.


That's it for today. Thanks again to Roland for his help with this. Let me know if you run into issues and if you want to know anything else about OpenShift and custom images.

Tuesday, January 12, 2016

A Refresher - Top 5 Java EE 7 Frontend Features

19:57 Tuesday, January 12, 2016 Posted by Markus Eisele
The series continues. After the initial overview and Arjan's post about the most important backend features, I am now very happy to have Ed Burns (@edburns) finish the series with his favorite Java EE 7 frontend features.

Thanks to Markus Eisele for giving me the opportunity to guest post on his very popular blog. Markus and I go way back to 2010 or so, but I've not yet had the pleasure of guest posting. Markus asked me to cover the Java EE 7 Web Tier. Since EE 7 is a mature release of a very mature platform, much has already been published about it. Rather than rehash what has come before, I'm going to give my opinion about what I think are the important bits and a very high level overview of each.

If you're interested in learning more about this first-hand, please consider attending my full day training at JavaLand 2016.  I'm giving the training with modern finance and HTML5 expert Oliver Szymanski.  For details, please visit the javaland website.

First, a bit of historical perspective. Markus asked me to write about the Java EE 7 Web Tier. Let's take a look at that term, "web tier" or "presentation tier" as it is also called. If one is to believe the hype surrounding newer ideas such as microservices, the term itself is starting to sound a bit dated because it implies a three tier architecture, with the other two tiers being "business logic" and "persistence". Surely three tiers is not micro enough, right? Well, the lines between these tiers are becoming more and more blurred over time as enterprises tinker with the allocation of responsibilities in pursuit of delivering the most business value with their software. In any case, Java EE has always been a well integrated collection of enterprise technologies for the Java platform, evolved using a consensus based open development practice (the Java Community Process or JCP) with material participation from leading stakeholders. The "web tier" of this platform is really just the set of technologies that one might find useful when developing the "web tier" of your overall solution. This is a pretty big list:

WebSocket 1.0 JSR-356
JavaServer Faces 2.2 JSR-344
Servlet 3.1 JSR-340
JSON Processing 1.0 JSR-353
REST (JAX-RS) 2.0 JSR-339
Bean Validation 1.1 JSR-349
Contexts and Dependency Injection 1.1 JSR-346
Dependency Injection for Java 1.0 JSR-330
Concurrency Utilities for Java EE 1.0 JSR-236
Expression Language 3.0 JSR-341

For the purposes of this blog entry, let's take a look at the first five: WebSocket, JSF, Servlet, JSON, and JAX-RS. While the second five are surely essential for a professional web tier, it is beyond the scope of this blog entry to look at them.

WebSocket
JSF and WebSocket are the only two Java EE 7 specs that have a direct connection to the W3C HTML5 specification. In the case of WebSocket, there are actually three different standards bodies in play. WebSocket, the network protocol, is specified by RFC-6455 from the IETF. WebSocket, the JavaScript API, is specified as a sub-spec of HTML5 from the W3C. WebSocket, the Java API, is specified by the JCP under JSR-356. In all aspects of WebSocket, the whole point is to provide a message based, reliable, full-duplex client-server connection.

JSR-356 lets you use WebSocket in both client and server capacities from your Java SE and EE applications.

On the server side, it allows you to expose a WebSocket endpoint such that browsers can connect to it using their existing support for the WebSocket JavaScript API and network protocol. You declare your endpoints to the system either by annotating some POJOs, or by imperatively calling bootstrapping APIs from Java code, say from a ServletContextListener. Once the connection is established, the server can send and receive messages to and from any number of clients that happen to be connected at the same time. The runtime automatically handles connection setup and tear down.
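As a minimal sketch (the class name and /echo path are invented for illustration), an annotated POJO endpoint can look like this:

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message) {
        // The returned string is sent back to the client that sent the message.
        return "echo: " + message;
    }
}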

The WebSocket Java client API allows Java SE applications to talk to WebSocket endpoints (Java or otherwise) by providing a Java analog to the W3C JavaScript WebSocket API.
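A sketch of the client side, assuming EchoClientEndpoint is a @ClientEndpoint annotated class of your own and the URL matches the endpoint above:

import java.net.URI;
import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

public class EchoClient {

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // Connects to the server endpoint and sends a single text message.
        try (Session session = container.connectToServer(EchoClientEndpoint.class,
                URI.create("ws://localhost:8080/echo"))) {
            session.getBasicRemote().sendText("Hello WebSocket");
        }
    }
}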

JavaServer Faces (JSF)
In JSF 2.2 we added many new features but I will only cover three of them here.

HTML5 Friendly Markup enables writing your JSF pages in almost pure HTML (it must be well formed), without the need for the XML namespaces that some see as clumsy and difficult to understand. This is possible because the underlying HTML Basic JSF RenderKit (from JSF 1.0) provides all the necessary primitives to map an arbitrary piece of HTML markup to a corresponding JSF UIComponent. For example, this is a valid JSF form:

        <form jsf:id="form">
           <input jsf:id="name" type="tel" jsf:value="#{complex.name}" />
           <progress jsf:id="progress" max="3" value="#{complex.progress}" />
        </form>

The only catch is the need to flag the element as a JSF component by use of a namespaced attribute.  This means you must declare at least one namespace in the <html> tag:

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:jsf="http://xmlns.jcp.org/jsf">

Faces Flows is a standardization of the page flow concept from ADF Task Flows and Spring Web Flow. Flows give you the ability to group pages together that have some kind of logical connection and need to share state. A flow defines a logical scope that becomes active when the flow is entered and made available for garbage collection when the flow is exited. There is a rich syntax for describing flows, how they are entered, exited, relate to each other, pass parameters to each other, and more. There are many conveniences provided thanks to the Flows feature being implemented on top of Contexts and Dependency Injection (CDI). Flows can be packaged as jar files and included in your web application, enabling modularization of sub-sections of your web app.

Just as Flows enable modularizing behavior, Resource Library Contracts (RLC) enable modularizing appearance. RLC provides a very flexible skinning system that builds on Facelets and lets you package skins in jar files.

Servlet
The most important new feature in Servlet 3.1 is the additional support for non-blocking IO. This builds on top of the major feature of Servlet 3.0 (from Java EE 6): asynchronous processing. The rapid rise of reactive programming indicates that Java apps can no longer afford to block for IO, ever. The four concerns of reactive programming (responsiveness, elasticity, resiliency, and an event basis) are founded on this premise. Prior to non-blocking IO in Servlet 3.1, it was very difficult to avoid blocking in Servlet apps.

The basic idea is to allow the Servlet runtime to call your application back when IO can be done safely without blocking.  This is accomplished by virtue of new listener interfaces, ReadListener and WriteListener, instances of which can be registered with methods on ServletInputStream and ServletOutputStream.
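A minimal sketch of the read side, assuming an upload servlet (the names and URL pattern are invented):

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class NonBlockingUploadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        final AsyncContext context = request.startAsync();
        final ServletInputStream input = request.getInputStream();
        input.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] buffer = new byte[4096];
                // Only read while the container guarantees the read won't block.
                while (input.isReady() && input.read(buffer) != -1) {
                    // process the chunk
                }
            }

            @Override
            public void onAllDataRead() {
                context.complete();
            }

            @Override
            public void onError(Throwable t) {
                context.complete();
            }
        });
    }
}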

When you add this feature to the asynchronous processing support added in Servlet 3.0, it is possible to write Servlet based apps that can proudly sport the "We Are Reactive" banner.

JSON
From the outside perspective, the ability to parse and generate JSON in Java is certainly nothing new.  Even before Java EE 7, there were many solutions to this basic need.  Hewing close to the principle that standards are not for innovation, but to confer special status upon existing ideas, the JSON support in Java EE 7 provides the capability to parse and generate JSON with a simple Java API.  Reading can be done in a streaming fashion, with JsonParser, or in a bulk fashion using JsonReader.  Writing can be done in a streaming fashion with JsonGenerator.  Writing can be done in a bulk style with JsonBuilderFactory and JsonWriter.
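A small sketch showing both styles (the values are invented for illustration):

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.stream.JsonParser;

public class JsonDemo {

    public static void main(String[] args) {
        // Bulk style: build a JsonObject in memory.
        JsonObject duke = Json.createObjectBuilder()
                .add("name", "Duke")
                .add("age", 42)
                .build();

        // Streaming style: pull parser events one by one.
        try (JsonParser parser = Json.createParser(new StringReader(duke.toString()))) {
            while (parser.hasNext()) {
                System.out.println(parser.next()); // e.g. START_OBJECT, KEY_NAME, VALUE_STRING
            }
        }
    }
}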

JAX-RS
It is hard to overstate the importance of REST to the practice of modern enterprise software development for non-end-user facing software. I'd go so far as to say that gone are the days when people go to the javadoc (or JSDoc or appledoc etc.) to learn how to use an API. Nowadays, if your enterprise API is not exposed as a RESTful web service, you probably will not even be considered. JAX-RS is how REST is done in Java. JAX-RS has been a part of Java EE since Java EE 6, but it received the 2.0 treatment in Java EE 7. The big ticket features in 2.0 include:

  •  Client support
    In my opinion, the most useful application of this feature is in using JUnit to do automated testing of RESTful services without having to resort to curl from continuous integration; see the sketch after this list. Of course, you could use it for service-to-service interaction as well.
  •  Seamless integration with JSON
    In most cases a simple @Produces("application/json") annotation on your HTTP method endpoint is sufficient to output JSON. Data arriving at your service in JSON format is also automatically made available to you in an easy to consume format from Java.
  •  Asynchronous support (Reactive again)
    This feature gives you the ability to hand off the processing required to generate a response to another thread, allowing the original thread to return immediately so no blocking happens. The async thread is free to respond when it is ready.
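To illustrate the client support mentioned above, here is a sketch of a JUnit test against an assumed /rs/customer endpoint:

import static org.junit.Assert.assertNotNull;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

import org.junit.Test;

public class CustomerResourceTest {

    @Test
    public void customersAreReturnedAsJson() {
        Client client = ClientBuilder.newClient();
        try {
            // Fluent API: target a URI, build a request, execute it synchronously.
            String json = client.target("http://localhost:8080/rs/customer")
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
            assertNotNull(json);
        } finally {
            client.close();
        }
    }
}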

Naturally, this only scratches the surface of the Java EE 7 web tier. For more details, a great place to start is the official Java EE 7 launch webinars.

I hope to see you at JavaLand!

Thank you Ed for taking the time to write this post. If you haven't yet, now is the time to play around with Java EE 7. Here are some resources to get you started with JBoss EAP 7 and WildFly:

Thursday, January 7, 2016

Get Up to Speed with Microservices in 8 hours

10:44 Thursday, January 7, 2016 Posted by Markus Eisele
Everybody is talking about microservices these days, and Red Hat is doing some very cool developer events around the world. The latest one happened at the beginning of November last year. The amazing speaker lineup starts with special guest speaker Tim Hockin from the Google Cloud Management team, technical lead and co-founder of Kubernetes, along with Red Hat's James Strachan and Claus Ibsen. James created the Groovy programming language and is also a member of the Apache Software Foundation and a co-founder of a number of other open source projects such as Apache ActiveMQ, Apache Camel, and Apache ServiceMix. Claus Ibsen works on open source integration projects such as Apache Camel, fabric8 and hawtio, and is the author of the Camel in Action books. Tim, James, Claus and many more talk on areas such as Kubernetes for Java developers, microservices with Apache Camel, and mobile-centric architecture.

The complete 8 hour playlist is available for free on Youtube and I just want to pick out some of my personal favorites.

Microservices in the Real World by Christian Posta
Beyond the many technology challenges of introducing microservices, organizations need to adapt their existing development and operations processes and workflows to reap the bigger benefits of a microservices architecture, including continuous delivery style application delivery. This session reviews challenges a number of large enterprises have faced in looking to adopt microservices and looks at how they've adapted on their ongoing journey. It also covers some of the end architectures these companies used as they incorporated these new architectural approaches and technologies with their existing people skills and processes.


WildFly Swarm : Microservices Runtime by Lance Ball
With lightweight microservices dominating the dev chatter these days, traditional Java EE developers have spent a lot of time looking in the mirror and asking themselves, "Does my application look fat in this container?" or "How can I leverage my existing Java EE bits in a lightweight microservice?" or "What if I had Just Enough App Server™ to leverage the power and standards of Java EE, but did it all with a slimmed down, self-contained runnable that is easy to deploy and manage?". Well, maybe not that last one.

Enter WildFly Swarm. WildFly Swarm deconstructs the WildFly Java Application Server into fine-grained parts, allowing the selective reconstitution of those parts, together with your application into a self-contained executable - an "uberjar". The resulting artifact contains just enough Java EE to support whatever subset of the traditional APIs your application requires.

This talk introduces WildFly Swarm and shows you how it helps you bring your existing Java EE investments into the world of lightweight, easily deployed microservices. As a bonus, it shows you how WildFly Swarm helps you easily take advantage of other state-of-the-art components such as Keycloak, Ribbon, and Hystrix, integrating them seamlessly into your application.


Logging and Management for Microservices by Jimmi Dyson
Logging is a key part of making microservices work for you. This session helps you look at logs in a different way in order to understand what your systems are doing and how they're interacting, so you can fix problems quickly and improve performance. You will understand the problems in collecting logs from your distributed microservices and discuss how to centralize them to get real value out of this goldmine of information.


Microservices Workflow: CI/CD by James Rawlings
We all know that in the real world there is more to developing than writing lines of code. This session explores how fabric8 has evolved to provide a platform that supports not only the development of microservices but also working with them, taking an idea from inception right through to running in a live environment.

With popular trends such as DevOps, we know that it is the culture of an organization that will give you greater agility and a better chance of success. Being able to communicate effectively with your cross functional teams increases productivity, reduces social conflicts, and establishes the all important feedback loops.

We look at how fabric8 provides out-of-the-box integration for hosted git services in Gogs, as well as agile project management with Taiga and social tools such as Lets Chat and Slack, to enable intelligent, extendable automation using Hubot, while providing a platform that is designed for the new age microservices team.

It also covers the integration of popular logging and metric tools that are prerequisites for continuous improvement. We need to understand not only how the platform is operating but also gain greater visibility into how it's being used. Being able to visualize how teams communicate in and outside of their unit can be seen as a first step to measuring the culture within an organization. This can be extremely useful in identifying early signs of internal silos developing, as well as learning from high performing teams.



Look at the complete playlist on Youtube and find out more about the event and the other sessions on the redhat.com microservices developer day website.

Monday, January 4, 2016

How DevOps And Microservices Accelerate Application Delivery

17:38 Monday, January 4, 2016 Posted by Markus Eisele
DevOps and microservices accelerate application delivery
Happy new year, everybody! While I'm officially still on vacation, I'd like to quickly point you to a recent DevOps and microservices article in the German DOAG Business News. You can download your copy directly via this link (PDF, ~300 KB).

The Business News is a DOAG trade journal publication and is published four times a year. It covers basic business relevant topics from a semi-technical perspective. Please be aware that the linked PDF is in German. If you want to learn more, I recommend the following articles:


Thursday, December 31, 2015

Goodbye 2015, Hello 2016! - Happy New Year.

00:00 Thursday, December 31, 2015 Posted by Markus Eisele
It has been an amazing year for me. With 31 trips to 13 countries and a total of 110k miles traveled, I have seen a lot of places and spoken at many conferences and user groups. I had the pleasure of talking to many, many people and learned a lot about technology and community. The main topics in my past year have been the Red Hat JBoss middleware integration technologies. Mixing those with my Java EE knowledge and also looking into new topics like microservices made this a very fast paced year.

84 Blog Posts
Here are the top 5 posts on my blog this year. It's always interesting to see what helps my readers the most. Thank you for all the visits and comments on the overall 84 blog posts (including this one) this year.

1) NoSQL with Hibernate OGM - Part one: Persisting your first Entities
2) JDBC Realm and Form Based Authentication with WildFly 8.2.0.Final, Primefaces 5.1 and MySQL 5
3) SSL with WildFly 8 and Undertow
4) My Book: Modern Java EE Design Patterns
5) Java EE, Docker, WildFly and Microservices on Docker

3381 Tweets
Social media is a significant part of my daily life, mostly because of the many inspirations, and I think it is a very quick and convenient way to catch up with the latest happenings and developments. Besides tweeting from @myfear, I also look after the @jbossdeveloper handle and help run the @vjbug. Feel free to catch up with any of them and stay updated with the latest happenings and rumblings in the Java/Java EE ecosystem.

One (mini) Book
I did it. I found some time to write a little more than just a blog post. And the feedback I got was pretty amazing. The Modern Java EE Design Patterns mini book can be downloaded for free from the developers.redhat.com website. Just register, benefit from all the free content available there, and get a great overview of how DevOps, microservices, containers, and PaaS influence the way we design and develop the applications of tomorrow. If you don't have time for the complete book, register and re-watch the accompanying O'Reilly webcast.

And The Best Wishes For The New Year
Besides the already mentioned events and happenings, a lot more was going on over the turn of the year. And I can't wait to start over in 2016. Let's all take some precious offline hours with friends, kids and loved ones and prepare for the new challenges. I am really looking forward to meeting a lot of you next year. The first conferences have already been announced and you can follow my schedule on my blog page.

“We spend January 1st walking through our lives, room by room, drawing up a list of work to be done, cracks to be patched. Maybe this year, to balance the list, we ought to walk through the rooms of our lives...not looking for flaws, but for potential.”
― Ellen Goodman

Wednesday, December 23, 2015

Getting Started With The JBoss EAP 7 Quickstarts

16:48 Wednesday, December 23, 2015 Posted by Markus Eisele
Now that the beta of the latest Red Hat JBoss Enterprise Application Platform 7 is out, it is about time to explore the available Java EE 7 quickstarts and deploy your first application with JBoss Developer Studio (JBDS).
The quickstarts demonstrate JBoss EAP, Java EE 7 and a few additional technologies. They provide small, specific, working examples that can be used as a reference for your own project.

To make it a quick start for you, I recorded a little 6 minute screencast about how to use JBDS to deploy the HelloWorld quickstart.

The quickstarts cover a lot more, so make sure to check all of them and read the documentation before you start working with them.

Developers can download JBoss EAP 7 with a jboss.org account. They also need to accept the terms and conditions of the JBoss Developer Program which provides $0 subscriptions for development use only. Find out more about the JBoss Developer Program.

And while you're downloading, here are some more resources to get you started:

Tuesday, December 22, 2015

A Refresher - Top 10 Java EE 7 Backend Features

07:53 Tuesday, December 22, 2015 Posted by Markus Eisele
This is the second part in my little Java EE 7 refresher series. After a first introduction with a brief overview, I decided to ask Arjan Tijms to write about his favorite new backend features in Java EE 7. You will know Arjan if you're following the Java EE space. He is a long time Java EE developer, a member of the JSF and Security EGs, and he created OmniFaces together with Bauke Scholtz (aka BalusC) and helps build zeef.com.

1. App Provided Administrative Objects

Java EE has long had the concept of an “administrative object”. This is a kind of resource that is defined on the application server instead of by the application. For some classes of applications using these is a best practice, for others it’s not such a good practice.
Java EE 6 started a small revolution with the introduction of @DataSourceDefinition, which lets an application define its own data source. Java EE 7 expands on this with @MailSessionDefinition (JavaMail 1.5), @ConnectionFactoryDefinition & @AdministeredObjectDefinition (JCA 1.7) and @JMSConnectionFactoryDefinition & @JMSDestinationDefinition (JMS 2.0).
In practice, many applications already used the programmatic API of JavaMail to create mail sessions, and JCA usage is relatively rare. JMS is much more widely used though and lacked an (EE compatible) programmatic API to create destinations (queues and topics).
The importance of this seemingly small feature is that for the first time in the history of JMS it can be used in a fully standard way, without requiring  vendor specific xml files in the application archive or vendor specific configuration in the application server.
Note that none of these application provided resource definitions strongly tie the rest of the application code to these. That application code still only sees a JNDI name, and doesn’t depend on whether the resource is put in JNDI by a standard annotation, standard XML file, proprietary XML file or with proprietary config on the application server.
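A sketch of how such definitions can look on a single bean; the JNDI names are made up, and the H2 driver is an assumption (any JDBC driver on the classpath works):

import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.jms.JMSDestinationDefinition;

@DataSourceDefinition(
        name = "java:app/jdbc/myDS",
        className = "org.h2.jdbcx.JdbcDataSource", // assumes the H2 driver is on the classpath
        url = "jdbc:h2:mem:test")
@JMSDestinationDefinition(
        name = "java:app/jms/orderQueue",
        interfaceName = "javax.jms.Queue",
        destinationName = "orderQueue")
@Singleton
@Startup
public class ResourceDefinitions {
    // Intentionally empty; the annotations register the resources in JNDI at deployment time.
}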

Further reading:
Automated provisioning of JMS resources in Java EE 7

2. Default resources

Closely related to app provided administrative objects, Java EE also introduced the notion of several default resources.
In the case of a default resource, the Java EE platform provides a ready to use resource of a specific type. Java EE 7 introduced defaults for a data source, the platform’s default JMS connection factory, and the default thread pool.
What characterizes these defaults is that they can't be further configured in any standardized way. You have to make do with whatever is provided by your server.
In the case of a data source, this means you get "something" to which you can send SQL, but there are no further guarantees with respect to performance or even durability (the database the data source accesses could be fully memory based, although in practice it's almost always a file in a server specific directory).

For the JMS connection factory, you get a connection to the default JMS provider of the server. Since JMS, unlike a SQL database, is a mandated part of Java EE, you usually have a very good idea of what you get here. E.g. if the server in question is a production ready server, the default JMS provider is practically always production ready as well.

Finally, several actual resources such as the ManagedExecutorService give you access to what is essentially the system's default thread pool. Such a thread pool can be used in much the same way as you would use the @Asynchronous annotation from Java EE 6. You don't exactly know how many threads are in the pool or whether the ManagedExecutorService is backed by the same pool as @Asynchronous, but for simple ad-hoc multi-threaded work the defaults are typically good enough.

A particularly nice aspect of the default resources is that in several situations you don't even have to say that you want the default. The data source that a JPA persistence unit uses if you don't specify any is, for instance, the default data source.
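A sketch of how the defaults can be obtained in a managed bean (the JNDI names below are the standardized default names):

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.sql.DataSource;

public class DefaultResourcesBean {

    @Resource(lookup = "java:comp/DefaultDataSource")
    private DataSource dataSource;

    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    private ManagedExecutorService executor;

    @Inject
    private JMSContext jmsContext; // uses java:comp/DefaultJMSConnectionFactory unless told otherwise
}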

Further reading:
Default DataSource in Java EE 7: Lesser XML and More Defaults 
Defaults in Java EE 7

3. App provided and portable authentication mechanisms

Next to the administrative objects mentioned above, another thing that traditionally had to be defined and configured on the application server side is authentication mechanisms and identity stores (both known by many alternative names).

The Servlet spec does define 4 standardised authentication mechanisms that an application can choose from via its web.xml deployment descriptor (FORM, BASIC, DIGEST, CLIENT-CERT), but did not standardise the actual classes or interfaces for these and subsequently didn’t standardise any API/SPI for custom authentication mechanisms. Furthermore, there’s nothing in the spec about the actual location where caller names/credentials/groups are stored.

Just like with @DataSourceDefinition, Java EE 6 started a small revolution by standardising an API/SPI for authentication mechanisms as well as a programmatic API to register these from within the application: JASPIC 1.0.

Unfortunately, the Java EE 6 version of JASPIC had a few critical omissions that made it hard to actually use those portable authentication mechanisms. The most important of those were addressed in Java EE 7.

Just as with the app provided administrative objects, an app provided authentication mechanism does not tie the rest of the application code to these and they can be transparently swapped out for container provided ones.
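To give an impression of the SPI, here is a heavily simplified ServerAuthModule sketch that unconditionally authenticates every request as a fixed caller; a real module would of course validate credentials first, and the names used are invented:

import java.io.IOException;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.MessagePolicy;
import javax.security.auth.message.callback.CallerPrincipalCallback;
import javax.security.auth.message.callback.GroupPrincipalCallback;
import javax.security.auth.message.module.ServerAuthModule;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SimpleServerAuthModule implements ServerAuthModule {

    private CallbackHandler handler;

    @Override
    public void initialize(MessagePolicy requestPolicy, MessagePolicy responsePolicy,
            CallbackHandler handler, Map options) throws AuthException {
        this.handler = handler;
    }

    @Override
    public Class[] getSupportedMessageTypes() {
        return new Class[] { HttpServletRequest.class, HttpServletResponse.class };
    }

    @Override
    public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
            Subject serviceSubject) throws AuthException {
        try {
            // Establish the caller principal and its groups for this request.
            handler.handle(new Callback[] {
                    new CallerPrincipalCallback(clientSubject, "myfear"),
                    new GroupPrincipalCallback(clientSubject, new String[] { "architect" })
            });
        } catch (IOException | UnsupportedCallbackException e) {
            throw (AuthException) new AuthException().initCause(e);
        }
        return AuthStatus.SUCCESS;
    }

    @Override
    public AuthStatus secureResponse(MessageInfo messageInfo, Subject serviceSubject) {
        return AuthStatus.SEND_SUCCESS;
    }

    @Override
    public void cleanSubject(MessageInfo messageInfo, Subject subject) {
    }
}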

Further reading:
What's new in Java EE 7's authentication support?

4. CDI based @Transactional

Before Java EE 7, high level declarative transactions were the domain of EJB. In this model, EJB was intended as a universal facade for a lot of the functionality that the platform offers. While EJB evolved from an arcane heavyweight spec in J2EE 1.4 to something that's actually quite lightweight in Java EE 6, the model of one spec functioning as a facade was no longer seen as ideal.

While Java EE 6 brought the biggest change with the actual introduction of CDI, Java EE 7 started another small revolution where other specs started to depend on CDI instead. With this, the model of one bean type being a facade started to change to the competing model of one bean type functioning as a base and other specs providing extensions on top of that.

Specifically setting this in motion was JTA 1.2 with the introduction of @Transactional and @TransactionScoped. These are based on an interceptor from the Interceptors spec and a scope from the CDI spec. Both are mainly applicable to CDI beans. The way that this turns the model around is that with EJB, JTA was invisibly used under the hood, while with CDI, JTA (somewhat less invisibly) uses CDI under the hood.
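A minimal sketch (Order is an assumed JPA entity):

import javax.enterprise.context.RequestScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

@RequestScoped
public class OrderService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void placeOrder(Order order) {
        // The interceptor starts (and commits or rolls back) a JTA transaction around this method.
        entityManager.persist(order);
    }
}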

Further reading:
JTA 1.2 - It's not your Grandfather's Transactions anymore!
JTA 1.2 on Arjan's ZEEF page

5. Method validation

Perhaps one of the most versatile and cross-layer specs in Java EE is the bean validation spec. Bean validation makes it possible to put validation constraints on various beans, such as CDI beans and JPA entities.

But those validation constraints only worked at the field level, and by extension of that at the class level (which effectively validates multiple fields).

In Java EE 7 the applicability of bean validation took a huge leap with the ability to place such constraints on methods as well, aptly called method validation. More precisely, constraints can now be put on the input parameters of a method as well as on its return value, and the input constraints can be on individual parameters as well as on multiple parameters.

Whereas field level constraints are validated at a specific moment, e.g. when the JPA entity manager persists an entity or after a postback in JSF, method validation takes place every time a method is called by arbitrary code. In Java EE this works when the method is in a (proxied) CDI managed bean, and the method is indeed accessed via the proxy.
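A sketch of what this looks like on a CDI bean (Booking is an assumed class):

import javax.enterprise.context.RequestScoped;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

@RequestScoped
public class BookingService {

    @NotNull // constrains the return value
    public Booking book(@NotNull String customer, @Min(1) @Max(10) int seats) {
        // Parameters are validated before the method runs, the return value afterwards.
        return new Booking(customer, seats);
    }
}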

Further reading:
Bean Validation 1.1 Feature Spotlight - Method validation
Bean Validation 1.1 on Arjan's ZEEF page

6. Expression language can be used everywhere

Expression language is a mini script language that's used within Java EE. It has a long history, from being specific to JSTL, to being natively incorporated in JSP, natively incorporated in JSF, and later on unified between JSP and JSF.

In Java EE 7 this expression language took its biggest leap ever and became a totally independent spec that's fully usable outside JSP and JSF, and even outside Java EE.

This means that expression language can be used in things like annotations, email templates, configuration files and much more. Just as with the introduction of CDI in Java EE 6, the introduction of a separately usable expression language has the potential to be used by many other specs in the future.
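A sketch using the standalone ELProcessor from EL 3.0 (User is an assumed POJO with a name property):

import javax.el.ELProcessor;

public class ElDemo {

    public static void main(String[] args) {
        ELProcessor el = new ELProcessor();
        el.defineBean("user", new User("Duke"));
        // += is EL 3.0's string concatenation operator.
        String greeting = (String) el.eval("'Hello, ' += user.name");
        System.out.println(greeting); // Hello, Duke
    }
}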

Further reading:
Standard Deviation: An Illustration of Expression Language 3.0 in Servlet Environment
EL 3.0 on Arjan's ZEEF page

7. Greatly simplified JMS API

One of the older specs in Java EE is JMS, which is about (asynchronous) messaging. JMS is also one of the specs that hadn't been updated for a very long time (not since 2002!), and although still surprisingly usable, its age did begin to show a little.

With JMS 2.0, Java EE 7 brought one of the biggest changes to JMS ever: a thoroughly and greatly simplified API. Part of these simplifications piggybacks on the default resources mentioned above, but it also takes advantage of Java SE 7's AutoCloseable feature and many smart defaults to minimize the number of objects a user has to manage and juggle for simple things like sending a message.
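A sketch of the simplified API, reusing the queue defined in the first item above; the container manages and closes the injected JMSContext:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class OrderSender {

    @Inject
    private JMSContext context;

    @Resource(lookup = "java:app/jms/orderQueue")
    private Queue queue;

    public void send(String message) {
        // One fluent line replaces the Connection/Session/MessageProducer ceremony of JMS 1.1.
        context.createProducer().send(queue, message);
    }
}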

Further reading:
What's New in JMS 2.0, Part One: Ease of Use
JMS 2.0 on Arjan's ZEEF page

8. Entity graphs in JPA

Arguably one of the most important specs next to CDI in Java EE is JPA. Whether a Java EE application is a JSF based MVC one or a JAX-RS based web service, it pretty much always has some persistence requirements.

One of the difficulties in persistence is determining what is just the right amount of data to be fetched. This should obviously not be too little, but also not be too much since that typically comes with great performance implications.

An important tuning parameter of JPA has always been the eager and lazy loading of, specifically, relations. This choice is primarily structural and hard coded on the entities themselves. The problem with this is that in different situations the same entity may be required with more or less data. E.g. in an overview of all users you may only want to display the user name, while a detail view also shows the address and other contact details.

Before Java EE 7 this could be done without fetching too little or too much data for each case by writing separate queries. While this solves the problem, it's not optimal, especially when it concerns large queries where the only difference is how much associated data is fetched for some entity.

With JPA 2.1, Java EE 7 introduced the concept of entity graphs for this. Via a (named) graph, it's now possible to determine exactly what data needs to be fetched in a graph style notation. These graphs are defined separately and can be associated at runtime with many different queries.
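A sketch with an assumed User entity that has a lazy address relation:

import java.util.List;
import javax.persistence.EntityGraph;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class UserRepository {

    @PersistenceContext
    private EntityManager em;

    // Assumes: @NamedEntityGraph(name = "User.withAddress",
    //          attributeNodes = @NamedAttributeNode("address")) on the User entity.
    public List<User> findAllWithAddress() {
        EntityGraph<?> graph = em.getEntityGraph("User.withAddress");
        return em.createQuery("SELECT u FROM User u", User.class)
                .setHint("javax.persistence.fetchgraph", graph)
                .getResultList();
    }
}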

Further reading
JPA 2.1 Entity Graph – Part 1: Named entity graphs
JPA 2.1 on Arjan's ZEEF page

9. Access to managed thread pools

Briefly mentioned above when the default resources were discussed: in Java EE 7, access is provided to the default thread pool.

The support actually goes a bit further than just that, and Java EE 7 introduced an entire specification behind this: the Concurrency Utilities for Java EE spec. With this spec you can not just get that default thread pool, but also obtain and work with separate thread pools. This is important for QoS use cases, and specifically to prevent a number of deadlocking cases where work that depends on other work is scheduled onto the same pool.

Unfortunately, the practical usability of these additional pools is somewhat limited by the fact that it's not possible to actually define those pools in a standard way. This somewhat contradicts the "App provided administrative objects" item at the beginning of this overview.

Despite that issue, for somewhat lower level asynchronous and parallel programming this spec opens up a world of options.
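A sketch of submitting work to the default pool (the names are invented for illustration):

import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

@Stateless
public class ReportService {

    @Resource // injects the default ManagedExecutorService
    private ManagedExecutorService executor;

    public Future<String> generateAsync() {
        // The task runs on a container managed thread, with EE context propagated to it.
        return executor.submit(() -> "report done");
    }
}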

10. Etc; CDI tuning, Interceptors spec, Batching

In Java EE 7 the Interceptors spec was split off from CDI paving the road for CDI to focus more on being the core bean model of Java EE, while simultaneously making interceptors more generally reusable throughout the platform.

CDI itself didn’t get a major overhaul or a really major new feature, but instead got an assortment of smaller but very welcome features such as a (much) easier way to programmatically obtain bean instances, and events that fire when scopes are activated and de-activated.

Automatic enablement of CDI (CDI activated without needing a beans.xml) should have been a major feature, but appeared to be of rather limited use in practice. Without a beans.xml file only beans with so-called “bean defining annotations” are scanned, and especially for beginning users this is not always clear.

Java EE traditionally deals mostly with requests and responses that are generally rather short running. There’s a timer service available for background jobs, but that’s a relatively basic facility. There’s hardly any notion of job management, check-pointing or restarting.

In Java EE 7 a brand new spec was introduced that specifically addresses these concerns: the Batch Applications for the Java Platform 1.0 spec. It revolves around XML files in which jobs are specified, which themselves contain so-called steps that perform the actual sequential application logic.

Further reading
CDI 1.1
Interceptors 1.2
Batch 1.0

Thank you Arjan for taking the time to compile all of this. The next post is going to cover the top features of the frontend technologies and will also feature a prominent guest blogger. Until then, there is plenty of time to play around with Java EE 7. Here are some resources to get you started with JBoss EAP 7 and WildFly:

Monday, December 21, 2015

Red Hat JBoss Enterprise Application Platform 7 Beta Released!

08:30 Monday, December 21, 2015 Posted by Markus Eisele
Red Hat JBoss Enterprise Application Platform 7 Logo
A little over a month after we made the alpha version available, the Red Hat JBoss Enterprise Application Platform 7 Beta - Early Access was released last Friday.

For a complete list of new features, refer to my older blog post and the official release notes.
JBoss EAP 7 is a significant release in every respect; the last major EAP release (EAP 6.0) was back in June 2012, and after 4 minor feature releases and numerous patch releases, EAP 6 is now just 8 months away from entering its long-term maintenance phase. While EAP 6 will be fully supported for many years to come, all new development and new features will target EAP 7 and beyond.

Developers can download JBoss EAP 7 with a jboss.org account. They also need to accept the terms and conditions of the JBoss Developer Program which provides $0 subscriptions for development use only. Find out more about the JBoss Developer Program.

There are some known issues in this release; if you find others, please make sure to file an issue in the JBoss EAP Jira tracker.

And while you're downloading, here are some more resources to get you started:


Friday, December 18, 2015

Java EE Microservices Architecture - O'Reilly Webcast

08:09 Friday, December 18, 2015 Posted by Markus Eisele
Just a few days ago I had the chance to give a webcast with O'Reilly. This webcast is now available on demand and you can watch it for free after registration.

Java EE Architectures - Evolving the Monolith
Evolving monolithic Java EE applications
With the ascent of DevOps, microservices, containers, and cloud-based development platforms, the gap between state-of-the-art solutions and the technology that enterprises typically support has greatly increased. But some enterprises are now looking to bridge that gap by building microservices-based architectures on top of Java EE.

In this webcast, Red Hat Developer Advocate Markus Eisele explores the possibilities for enterprises that want to move ahead with this architecture. However, the issue is complex: Java EE wasn't built with the distributed application approach in mind, but rather as one monolithic server runtime or cluster hosting many different applications. If you're part of an enterprise development team investigating the use of microservices with Java EE, this webcast will guide you to answers for getting started.


Don't forget to download my free eBook Modern Java EE Design Patterns - Building Scalable Architecture for Sustainable Enterprise Development.

Thursday, December 3, 2015

A Java EE 7 Application on OpenShift 3 With JBoss Tools

14:22 Thursday, December 3, 2015 Posted by Markus Eisele
You can create and manage OpenShift applications in Eclipse with the latest version of the OpenShift plugin for JBoss Tools. It is either pre-bundled with the latest JBoss Developer Studio (9.0.0.GA), or you can install it into an existing Eclipse Mars. This post walks you through deploying the Java EE 7 Hands-On-Lab on OpenShift with JBoss Developer Studio.

OpenShift 3 Tooling Overview
The OpenShift 3 tooling is included as a Tech Preview. It allows you to connect to OpenShift 3 servers using OAuth or Basic authentication, manage your OpenShift 3 projects, and deploy new applications in the cloud using pre-defined (or your own) templates or even Docker images. You can import existing applications into your workspace, monitor them via remote log streaming directly into your local console, or enable port-forwarding and access their data as if it were local.

Get Started
Install the OpenShift 3 all-in-one VM and start your local instance with vagrant. Log in via the oc command-line tool with admin/admin and get your OAuth token:
oc login https://localhost:8443
oc whoami -t
And while we're at the command line, let's create a new OpenShift project for this example.
oc new-project javaeehol --display-name="Java EE 7 HOL w/ WildFly MySql"

Install and fire up your JBoss Developer Studio. If you want to get started with JBoss Tools in an existing Eclipse distribution, use this package from the Eclipse marketplace.

Create a new OpenShift project. Select OpenShift 3 as the server type, change the server to https://localhost:8443, and enter the token you gathered from the CLI into the token field. When you click next, the credentials are verified, and you need to accept a warning about an unsigned certificate when using the all-in-one VM.

Select the project from the first drop down list in the next dialogue. The dialogue also lists all available templates on your server. A complete list can be found on github. We want to use our own template in this case: the Java EE 7 Hands-On-Lab has been converted into a Kubernetes template by Ben Pares, so we're going to use that. Download it from Ben's Github repository and save it locally.
Open it with a text editor and change the "apiVersion" value from "v1beta3" to "v1". In line 47 there is a host entry which says "www.example.com"; change that to "jee-sample.openshiftdev.local". And while you are in a text editor, make sure to add an entry to your hosts file which maps the loopback interface to the changed domain name:
127.0.0.1 jee-sample.openshiftdev.local
Now back to JBDS.
Select "Use a template from my local file system" and browse to the place you saved it.
After clicking next you see another dialogue which allows you to change the template parameter values for the various passwords. Leave everything as it is and click "next" again.
The following dialogue will let you add additional labels. Just click "Finish" now.
The final dialogue gives you an overview about the executed actions and generated passwords. Make sure to write them down in case you need them later.

You can also access the github web-hook secrets and URLs. After clicking "OK", a last wizard clones the application from github into a local folder of your choice. It gets opened in JBDS, and you can browse through the various resources and explore the example a bit. While you're doing that, OpenShift has actually triggered a build of the sample application. When you point your browser to the web console at https://localhost:8443/, log in with admin/admin, and select the javaeehol project, you can see the mysql service running and a build job:


After a couple of minutes, this one finishes and you see the running frontend-service. Let's briefly look into the build logs:
oc get builds # Shows the one successful build
oc build-logs jee-sample-build-1 # Shows the log output for the build
Everything looks good. You can see that the maven dependencies are downloaded. Looking at the various image streams with:
oc get is
you can see that there are two:
NAME         DOCKER REPO                                TAGS      UPDATED
jee-sample   172.30.236.154:5000/javaeehol/jee-sample
wildfly      openshift/wildfly-90-centos7               latest    57 seconds ago
OpenShift actually built a new Docker image with the javaee-hol in it and deployed the result as a new pod. Time to see everything in action. Point your browser to http://jee-sample.openshiftdev.local:1080/movieplex7-1.0-SNAPSHOT/ and see the Movieplex application in action.


Are you wondering about the weird port? 1080 is actually a specialty of the OpenShift all-in-one VM: because we assume that you already have a service running on port 80, the NAT mapping in VirtualBox assigns host port 1080 to port 80 on the OpenShift master. Unfortunately this makes some things in the OpenShift Eclipse tooling a little unhandy, but it's a local installation and this is its one drawback. Let's explore the tooling features a little bit more.

OpenShift Explorer View - The embedded Web Console.
The OpenShift Explorer View lets you connect to OpenShift 3 instances, create new applications, manage domains or projects, and execute actions such as Port-Forwarding and Tail Files (Log Streaming). Specific actions are available depending on the version of the OpenShift instance you're connected to. For OpenShift 2 connections you can configure cartridges; for OpenShift 3 you can access Pods, Routes, and Services, and deploy Docker images. Just expand the tree and right click on the resource you're interested in; for example, like in the following screenshot, to access the frontend logs.



You can find even more details about the Docker Tooling and other features in the detailed feature description.

Learn Even More
Learn more about OpenShift Origin and how to get started with the all-in-one VM. Take the Java EE 7 Hands-On-Lab and follow the individual steps to get a refresher in Java EE 7. Follow @OpenShift on Twitter to stay up to date with the latest news. Feel free to reach out to me in the comments or via Twitter @myfear.