Java EE and general Java platforms.
You'll read about Conferences, Java User Groups, Java EE, Integration, AS7, WildFly, EAP and other technologies.

Monday, June 8, 2015

Docker Compose on Windows with Python and Babun

14:30 Monday, June 8, 2015 Posted by Markus Eisele
Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command which does everything that needs to be done to get it running. It is the only tool in the Docker tool-chain that doesn't have a native binary for Windows right now, and getting it up and running on Windows requires quite some work.

Using Babun and Python
The official Compose documentation implies that there is a Python-only way on unsupported platforms. As a matter of fact, this is not totally true. Even the Python package relies on POSIX-based commands which aren't available on Windows. If you try to go down this road you will get surprisingly far, but you will not finish. The only way to make it work is to use Cygwin. For those of you who don't like it (like I don't), there is a decent alternative called Babun. Babun is a turn-key Cygwin distribution for developers and is very easy to install and maintain.
  • Download the installer ZIP archive from the Babun homepage. (~280MB)
  • Unzip the archive to a temporary folder.
  • Change to the unzipped folder and start the install.bat (this might take a while). When you're finished, you can safely delete the temp folder.
  • The babun shell is now open, run the command: "babun update"
  • Change the default shell from zsh to bash if you prefer that by running the command: "babun shell /bin/bash".
  • Edit ~/.bashrc to activate loading of ~/.bash_aliases (scroll down a bit until you find the line "#Aliases" and un-comment the if statement).
  • Install additional Python essentials:
    pact install python-setuptools 
    pact install libxml2-devel libxslt-devel libyaml-devel
    curl -skS | python
    pip install virtualenv
    curl -skS | python
This installs a bunch of Python packages and the pipsi package manager into your Babun installation. Now you're ready to actually install the Docker Compose Python package:
pip install -U docker-compose
After everything got downloaded and installed, you can now use compose from Babun:
{ ~ }  » docker-compose --version                                                            
docker-compose 1.2.0
With the mapped directories it is easy to change to a temp folder on your Windows drive (e.g. /d/temp/) and use Compose. Make sure you have everything you need in your PATH variable (hint: that is different now, e.g. just use: PATH=$PATH\:/d/path/to/docker/exe ; export PATH) and make sure to set your environment properly:
eval "$(docker-machine env)"
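For the curious, here is roughly what that eval does: docker-machine env prints export statements like the following, which eval applies to the current shell so the Docker client knows which daemon to talk to. The IP address and paths below are illustrative, not values from any real setup:

```shell
# Illustrative output of `docker-machine env`; eval applies these exports
# to the current shell so the docker client can find the Docker daemon.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/default"
echo "$DOCKER_HOST"
```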
Now you can go ahead and use a very simple docker-compose.yml file, like the one Arun blogged about, and you have a bunch of instances up and running without any further configuration or command-line hacks.
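Arun's exact file isn't reproduced here; as a rough sketch in the v1 compose format of that era, a two-container setup could look like this (service names, images, ports and passwords below are made up for illustration):

```yaml
# Hypothetical setup: a WildFly instance linked to a MySQL database.
mywildfly:
  image: jboss/wildfly
  ports:
    - "8080:8080"
  links:
    - mydb
mydb:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=supersecret
    - MYSQL_DATABASE=sample
```

Running docker-compose up -d in the folder containing this file starts both containers in the background.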
Find the complete reference to the compose file format on the official Docker Website.

Using the Docker Image Workaround
If you want to, you can try to use the (unofficial) Docker Compose image and run it as a container locally. While this seems to be a solution, I couldn't get it to work on plain Windows. Any pointers and ideas are appreciated.

A Two Minute Babun Screencast
Have a look at a two-minute screencast about Babun by @tombujok.

Wednesday, June 3, 2015

NoSQL with Hibernate OGM - Part three: Building a REST application on WildFly

13:00 Wednesday, June 3, 2015 Posted by Markus Eisele
Welcome back to our tutorial series "NoSQL with Hibernate OGM"! Thanks to Gunnar Morling (@gunnarmorling) for creating this tutorial. In this part you will learn how to use Hibernate OGM from within a Java EE application running on the WildFly server. Using the entity model you already know from the previous parts of this tutorial, we will build a small REST-based application for managing hikes. In case you haven't read the first two installments of this series, you can find them here:

In the following you will learn how to prepare WildFly for using it with Hibernate OGM, configure a JPA persistence unit, create repository classes for accessing your data and providing REST resources on top of these. In this post we will primarily focus on the aspects related to persistence, so some basic experience with REST/JAX-RS may help. The complete source code of this tutorial is hosted on GitHub.

Preparing WildFly
The WildFly server runtime is based on the JBoss Modules system. This provides a modular class-loading environment where each library (such as Hibernate OGM) is its own module, declaring the list of other modules it depends on and only "seeing" classes from those other dependencies. This isolation provides an escape from the dreaded "classpath hell".
ZIP files containing all the required modules for Hibernate OGM are provided on SourceForge. Hibernate OGM 4.2 - which we released yesterday - supports WildFly 9, so download the WildFly 9 archive for that. If you are on WildFly 8, use Hibernate OGM 4.1 and get the WildFly 8 archive instead.
Unzip the archive corresponding to your WildFly version into the modules directory of the application server. If you prefer that the original WildFly directories remain unchanged, you also can unzip the Hibernate OGM modules archive to any other folder and configure this as the "module path" to be used by the server. To do so, export the following two environment variables, matching your specific environment:
export JBOSS_HOME=/path/to/wildfly
export JBOSS_MODULEPATH=$JBOSS_HOME/modules:/path/to/ogm/modules
In case you are working with the Maven WildFly plug-in, e.g. to launch WildFly during development, you'd achieve the same with the following plug-in configuration in your POM file:
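A configuration along these lines should do it; the plug-in version and paths below are placeholders, and the option names should be verified against the WildFly Maven plug-in documentation for your version:

```xml
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.0.2.Final</version>
    <configuration>
        <jboss-home>/path/to/wildfly</jboss-home>
        <modules-path>/path/to/ogm/modules</modules-path>
    </configuration>
</plugin>
```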

Setting up the project
Start by creating a new Maven project using the "war" packaging type. Add the following to your pom.xml:
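The addition in question is the Hibernate OGM "bill of materials" POM, imported in dependencyManagement; a sketch assuming the standard org.hibernate.ogm coordinates and the 4.2.0.Final release:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <version>4.2.0.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```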
This makes sure you get matching versions of Hibernate OGM's modules and any (optional) dependencies. Then add the dependency to the Java EE 7 API and one of the Hibernate OGM backend modules, e.g. Infinispan, JBoss' high-performance, distributed key/value data grid (any other such as hibernate-ogm-mongodb or the brand-new hibernate-ogm-cassandra module would work as well):
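A sketch of those two dependencies, assuming the standard Java EE 7 API artifact and the Infinispan backend module (with versions managed centrally, no explicit version is needed on the OGM artifact):

```xml
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-infinispan</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
```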
The provided scope makes these dependencies available for compilation but prevents them from being added to the resulting WAR file. That it because the Java EE API is part of WildFly already, and Hibernate OGM will be contributed through the modules you unzipped before.
Just adding these modules to the server doesn't cut it, though. They also need to be registered as a module dependency with the application. To do so, add the file src/main/webapp/WEB-INF/jboss-deployment-structure.xml with the following contents:
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <module name="org.hibernate" slot="ogm" services="import" />
            <module name="org.hibernate.ogm.infinispan" services="import" />
            <module name="org.hibernate.search.orm" services="import" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>
This will make Hibernate OGM core and the Infinispan backend as well as Hibernate Search available to your application. The latter will be used to run JP-QL queries in a bit.

Adding entity classes and repositories
With the basic project infrastructure in place, it's time to add the entity classes and repository classes for accessing them. The entity types are basically the same as seen in part 1, only now they are annotated with @Indexed in order to allow them to be queried via Hibernate Search and Lucene:
@Entity
@Indexed
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(
        mappedBy = "organizer",
        cascade = { CascadeType.PERSIST, CascadeType.MERGE },
        fetch = FetchType.EAGER
    )
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}

@Entity
@Indexed
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection(fetch = FetchType.EAGER)
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}

@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}
In order to use these entities, a JPA persistence unit must be defined. To do so, create the file src/main/resources/META-INF/persistence.xml:
<?xml version="1.0" encoding="utf-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">

    <persistence-unit name="hike-PU" transaction-type="JTA">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

        <!-- the entity classes need to be listed explicitly for now, see below -->

        <properties>
            <property name="hibernate.ogm.datastore.provider" value="INFINISPAN" />
            <property name="hibernate.ogm.datastore.database" value="hike_db" />
            <property name="hibernate.ogm.datastore.create_database" value="true" />
        </properties>
    </persistence-unit>
</persistence>
Here we define a persistence unit named "hike-PU". Infinispan is a fully transactional datastore, and using JTA as transaction type allows the persistence unit to participate in container-managed transactions. Specifying HibernateOgmPersistence as the provider class enables Hibernate OGM (instead of Hibernate ORM), which is configured with some properties setting the backend (INFINISPAN in this case), the database name etc.
Note that it actually should not be required to specify the entity types in persistence.xml when running in a Java EE container such as WildFly. Instead they should be picked up automatically. When using Hibernate OGM this unfortunately is needed at the moment. This is a known limitation (see OGM-828) which we hope to fix soon.
The next step is to implement repository classes for accessing hike and organizer data. As an example, the following shows the PersonRepository class:
@ApplicationScoped
public class PersonRepository {

    @PersistenceContext
    private EntityManager entityManager;

    public Person create(Person person) {
        entityManager.persist( person );
        return person;
    }

    public Person get(String id) {
        return entityManager.find( Person.class, id );
    }

    public List<Person> getAll() {
        return entityManager.createQuery( "FROM Person p", Person.class ).getResultList();
    }

    public Person save(Person person) {
        return entityManager.merge( person );
    }

    public void remove(Person person) {
        entityManager.remove( person );
        for ( Hike hike : person.getOrganizedHikes() ) {
            hike.setOrganizer( null );
        }
    }
}
The implementation is straightforward: by means of the @ApplicationScoped annotation, the class is marked as an application-scoped CDI bean (i.e. one single instance of this bean exists throughout the lifecycle of the application). It obtains a JPA entity manager through dependency injection and uses it to implement some simple CRUD methods (Create, Read, Update, Delete).
Note how the getAll() method uses a JP-QL query to return all person objects. Upon execution this query will be transformed into an equivalent Lucene index query which will be run through Hibernate Search.
The hike repository looks very similar, so it's omitted here for the sake of brevity. You can find its source code on GitHub.

Exposing REST services
JAX-RS makes building REST-ful web services a breeze. It defines a declarative programming model where you annotate plain old Java classes to provide implementations for the GET, POST, PUT etc. operations of an HTTP endpoint.
Describing JAX-RS in depth is beyond the scope of this tutorial; refer to the Java EE 7 tutorial if you would like to learn more. Let's just have a look at some methods of a resource class for managing persons as an example:
@Stateless
@Path("/persons")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class Persons {

    @Inject
    private PersonRepository personRepository;

    @Inject
    private ResourceMapper mapper;

    @Inject
    private UriMapper uris;

    @POST
    public Response createPerson(PersonDocument request) {
        Person person = personRepository.create( mapper.toPerson( request ) );
        return Response.created( uris.toUri( person ) ).build();
    }

    @GET
    @Path("/{id}")
    public Response getPerson(@PathParam("id") String id) {
        Person person = personRepository.get( id );
        if ( person == null ) {
            return Response.status( Status.NOT_FOUND ).build();
        }
        else {
            return Response.ok( mapper.toPersonDocument( person ) ).build();
        }
    }

    @GET
    public Response listPersons() { … }

    @PUT
    @Path("/{id}")
    public Response updatePerson(PersonDocument request, @PathParam("id") String id) { … }

    @DELETE
    @Path("/{id}")
    public Response deletePerson(@PathParam("id") String id) { … }
}
The @Path, @Produces and @Consumes annotations are defined by JAX-RS. They bind the resource methods to specific URLs, expecting and creating JSON based messages. @GET, @POST, @PUT and @DELETE configure for which HTTP verb each method is responsible.
The @Stateless annotation defines this POJO as a stateless session bean. Dependencies such as the PersonRepository can be obtained via @Inject-based dependency injection. Implementing a session bean gives you the comfort of transparent transaction management by the container. Invocations of the methods of Persons will automatically be wrapped in a transaction, and all the interactions of Hibernate OGM with the datastore will participate in the same. This means that any changes you do to managed entities - e.g. by persisting a new person via PersonRepository#create() or by modifying a Person object retrieved from the entity manager - will be committed to the datastore after the method call returns.

Mapping models
Note that the methods of our REST service do not return and accept the managed entity types themselves, but rather specific transport structures such as PersonDocument:
public class PersonDocument {

    private String firstName;
    private String lastName;
    private Set<URI> organizedHikes;

    // constructors, getters and setters...
}
The reasoning for that is to represent the elements of associations (Person#organizedHikes, Hike#organizer) in the form of URIs, which enables a client to fetch these linked resources as required. E.g. a GET call to http://myserver/ogm-demo-part3/hike-manager/persons/123 may return a JSON structure like the following:
{
    "firstName": "Saundra",
    "lastName": "Johnson",
    "organizedHikes": [
        …
    ]
}
The mapping between the internal model (e.g. entity Person) and the external one (e.g. PersonDocument) can quickly become a tedious and boring task, so some tool-based support for this is desirable. Several tools exist for this job, most of which use reflection or runtime byte code generation for propagating state between different models.
Another approach for this is pursued by MapStruct, which is a spare time project of mine and generates bean mapper implementations at compile time (e.g. with Maven or in your IDE) via a Java annotation processor. The code it generates is type-safe, fast (it's using plain method calls, no reflection) and dependency-free. You just need to declare Java interfaces with mapping methods for the source and target types you need and MapStruct will generate an implementation as part of the compilation process:
@Mapper(
    // allows to obtain the mapper via @Inject
    componentModel = "cdi",

    // a hand-written mapper class for converting entities to URIs; invoked by the generated
    // toPersonDocument() implementation for mapping the organizedHikes property
    uses = UriMapper.class
)
public interface ResourceMapper {

    PersonDocument toPersonDocument(Person person);

    List<PersonDocument> toPersonDocuments(Iterable<Person> persons);

    @Mapping(target = "date", dateFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
    HikeDocument toHikeDocument(Hike hike);

    // other mapping methods ...
}
The generated implementation can then be used in the Persons REST resource to map from the internal to the external model and vice versa. If you would like to learn more about this approach for model mappings, check out the complete mapper interface on GitHub or the MapStruct reference documentation.

In this part of our tutorial series you learned how to add Hibernate OGM to the WildFly application server and use it to access Infinispan as the data storage for a small REST application.
WildFly is a great runtime environment for applications using Hibernate OGM, as it provides most of the required building blocks out of the box (e.g. JPA/Hibernate ORM, JTA, transaction management etc.), tightly integrated and ready to use. Our module ZIP makes it very easy to put the Hibernate OGM modules into the mix, without the need to re-deploy them each time with your application. With WildFly Swarm there is also support for the micro-services architectural style, but we'll leave showing how to use Hibernate OGM with WildFly Swarm for another time (currently JPA support is still lacking from WildFly Swarm).
You can find the sources of the project on GitHub. To build the project run mvn clean install (which executes an integration test for the REST services using Arquillian, an exciting topic on its own). Alternatively, the Maven WildFly plug-in can be used to fire up a WildFly instance and deploy the application via mvn wildfly:run, which is great for manual testing e.g. by sending HTTP requests through curl or wget.
If you have any questions, let us know in the comments below or send us a Tweet to @Hibernate. Also your wishes for future parts of this tutorial are welcome. Stay tuned!

Friday, May 29, 2015

Java EE Deployment Scenarios for Docker Containers

13:20 Friday, May 29, 2015 Posted by Markus Eisele
I've been posting some content around Docker for a while now, and I like to play around with containers in general. I've already shown how to run Docker Machine on Windows and how to use the Docker 1.6 client. One of the first blog posts was a compilation of all kinds of resources around Java EE, Docker and Maven for Java EE developers. Working with containers in more detail and more often brings up the question of how Java EE applications should be distributed and how developers should use containers. This post tries to clarify that a little and give you a good overview of the different options.

Base Image vs. Custom Image and Some Basics
Most likely, your application server of choice is available on the public registry, known as Docker Hub. This is true for WildFly. The first decision you have to make is whether you want to use one of the base images or whether you are going to bake your own image. Running with the base image is as simple as:

docker run -p 8080 -it jboss/wildfly
and your instance is up and running in a second, after the base image has been downloaded. But what does that mean? And how does it work? At the heart of every container is Linux. A teensy one. In a normal Linux, the kernel first mounts the root file system as read-only, checks its integrity, and then switches the whole rootfs volume to read-write mode. The teensy Linux in a Docker container does that differently: instead of switching the file system to read-write mode, it takes advantage of a union mount to add a read-write file system on top of the read-only file system. In fact there may be multiple read-only file systems stacked on top of each other. If you look at the jboss/wildfly image, this is what you get at first sight:
You see four different levels in this picture. Let's not call them layers, because they aren't yet. This is the hierarchy of images which are the base for our jboss/wildfly image. Each of those images is built from a Dockerfile. This is a simple text file with a bunch of instructions in it. You can think of it as a sort of pom-file which needs to be processed by a tool called the "Builder". It can contain a variety of commands and options to add users, volumes, software, downloads and many more. If you look at the jboss/wildfly Dockerfile you see the commands that compose the image:

# Use latest jboss/base-jdk:7 image as the base
FROM jboss/base-jdk:7

# Set the WILDFLY_VERSION env variable

# Add the WildFly distribution to /opt, and make wildfly the owner of the extracted tar content
# Make sure the distribution is available from a well-known place
RUN cd $HOME && curl http://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.tar.gz | tar zx && mv $HOME/wildfly-$WILDFLY_VERSION $HOME/wildfly

# Set the JBOSS_HOME env variable
ENV JBOSS_HOME /opt/jboss/wildfly

# Expose the ports we're interested in
EXPOSE 8080
# Set the default command to run on boot
# This will boot WildFly in the standalone mode and bind to all interfaces
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
The first line defines the base from which the image is derived. Looking at the jboss/base-jdk:7 Dockerfile reveals the root which is jboss/base.
Now imagine that every single one of those lines does something to the file system. The most obvious example is a download: it adds something to the file system. But instead of writing to an already mounted partition, the change gets stacked up as a new layer. Looking at all the layers of jboss/wildfly, this sums up to 19 unique layers with a total size of 951 MB.

Using the base image, you can expect to have a default configuration at hand. And this is a great place to start. We at JBoss try to make our projects (and products too!) usable out of the box for as many use cases as we can, but there is no way that one configuration could satisfy everyone's needs. For example, we ship 4 flavors of the standalone.xml configuration file with WildFly since there are so many different deployment scenarios. But this is still not enough. We need to be able to tweak it at any point. The jboss/wildfly image is not an exception here.
Creating a custom image with

# Use latest jboss/wildfly as a base
FROM jboss/wildfly

is your first step into the world of a customized image. If you want to know how to do that, there is an amazing blog post which covers almost all the details.
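As a minimal sketch, such a customization could look like the following; the added configuration file name is made up for illustration:

```dockerfile
# Hypothetical custom image: start from the official base image
FROM jboss/wildfly

# Replace the default configuration with a tweaked copy from the build context
ADD customized-standalone.xml /opt/jboss/wildfly/standalone/configuration/standalone.xml
```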

Java EE Applications - On Docker
One of the main principles behind Docker is "Recreate — Do Not Modify". With a container being a read-only, immutable piece of infrastructure with very limited capabilities to be changed at runtime, you might be interested in the different options you have to deploy your application.

Dockerize It
This is mostly referred to as a "custom image": besides the needed configuration changes, you also add the binary of your application as a new layer to your image.

RUN curl -L -o /opt/jboss/wildfly/standalone/deployments/myapp.war
Done. Build, push, and run your custom image. If one instance dies, don't worry and fire up a new one.

Cons:
- No re-deployments. Every change means building a new image.
- No changes at runtime.
- New version of the application means a new image version.
- Not the typical operations model for now.
- Might need additional tooling (plugins for Maven/Jenkins).

Pros:
- The Docker way.
- Easy to integrate into your project build.
- Easy to roll out and configure.

Containers as Infrastructure
There's no real term for it. It basically means that you don't care how your infrastructure is run. This might be called the old-fashioned operations model. You will need to have some kind of access to the infrastructure: either a shared filesystem with the host to deploy applications via the deployment scanner, or the management ports need to be open in order to use the CLI or other tooling to deploy your applications into the container.

Cons:
- More complex layering to keep state in containers.
- Not the Docker way.
- Not fancy.
- Centralized ops and administration.

Pros:
- Hardly any change to what you're used to as a developer in enterprises today.
- Doesn't need to be integrated into your build at all. It's just an instance running somewhere.
- No additional tooling.

This is hard. I'd suggest that you look into what fits best for your situation. Most enterprise companies might tend to stick with the Containers as Infrastructure solution. This has a lot of drawbacks for now, looking at it from a developer's perspective. A decent intermediate solution might be OpenShift v3, which will optimize operations for containers and bring a lot of usability and comfort to the Java EE PaaS world.
If you are free to make a choice, you can totally go with the "Dockerize" way. Keep in mind that this is a vendor lock-in as of today, with some more promising and open solutions already on the horizon.

Wednesday, May 20, 2015

My Toddler Knows More About Computers Than I Do. Help?

20:17 Wednesday, May 20, 2015 Posted by Markus Eisele
This is a feature post written by Tonya Rae Moore (@tonyaraemoore, website). After long years working for Oracle as the caretaker for, she went over to help kick-start ZEEF marketing. Working with communities and individuals has always been her passion. A strong believer in equality in tech, she fights anything which doesn't respect minorities. A marketer at heart, she knows everything about beer, and is a loving mother married to a sports talk-show host. She is a member of the CJUG, helping promote Java developers and technologies in Chicago.
We both have toddlers of about the same age and have talked quite a bit about their excitement for our daily work tools, how much is too much, and other parenting things. Here are her answers.

Until fairly recently, I thought of myself as a computer-savvy American. I learned BASIC in grammar school, became proficient in MS Office as it grew and changed, and made chat room mistakes early. I became friends with programmers/engineers/system architects, asked questions, and learned from the answers. I was the person who first built, and later picked out, the computers for my parents and less-savvy friends. Then I started working for a giant tech company, running a website that caters to users of a specific programming language. I didn’t know anything about it, but I asked even more questions, and eventually learned how to translate what the answers to those questions meant into Normal Person Language. I hung in there, and I learned even more. I didn’t learn how to be an engineer, because the engineers are better at engineering than I ever could be. But, I got pretty good at translating Computer People Language into Normal People Language. I was pretty sure I was a Computer Person, or at least adjacent to them.

Then I had a child. A wonderful, bright little boy who is currently three-years-old, and is proficient on my smartphone. So, here is your first lesson: If you think you know something about computers, just stop right now. If you are correct, and you have a programming background, this is not your article. But if, like me, you suddenly feel like the generational gap is widening around your laptop, there is help available. There are two kinds of folks from here on out: Them and Us. They know what they’re talking about, and we, most assuredly, aren’t even sure where to start. That can be fixed.

The second lesson is: There is a very real probability that your child(ren) will become/already is one of Them instead of one of Us. And that’s GOOD.

Being computer-savvy in the modern world doesn’t mean our kids will grow up to be nerds (though, here’s hoping, because nerds make more money and have more influence on shaping our society than just about anyone). They might become doctors who use imaging to treat patients, writers who build their own websites, or anybody who goes home from their 9-5 job to write an app for extra cash or fun. Being a “computer person” no longer means you work in tech, it has become a component of life.

In an effort to not be left behind by our own children (Who had to teach their parents how to text? Show your hands. Everyone? Good.), we’ve put together a guide to help you get and stay up to pre-kindergarten speed.

Question #1: What in the heck is Minecraft, anyway?
If you are a Clueless Parent, you’re probably here because Minecraft. Here is a pretty good overview. Go read it.
Once you’ve read that, email me with any questions, and bookmark MineMum for later. For now, you should know that Minecraft is the game all the kids are playing, and it is fast becoming a first gateway to nerdom. You should try playing it sometime, too. It’s really pretty fun.

Question #2: I want my child to be exposed to computers at an early age/I’m scared of my child being exposed to computers at too early of an age. How young is too young? And how much is too much?
Let’s tackle the big stuff first, right? The answers are: I don’t know and It depends.

To address the second question first, I’m sure you’ve read all the research headlines about screen time on developing brains. But, I hope we can agree on a difference between learning new skills via technology and letting Bob the Builder raise your offspring. Does your kid show an aptitude for visual learning? Maybe that kid can benefit from early computer games. Is s/he listlessly poking around on Google looking up words you don’t understand but frighten you? Pull the plug and get on your bicycles. But, if your child is showing a true interest in something, I trust you will encourage that as much as you feel comfortable, and then go get on your bicycles. Because, seriously, bicycles are awesome.

Now, how young is too young? That’s sticky. My three-year-old is obsessed with my laptop. It has become impossible for me to get any work done while he’s around, because he’s constantly launching sneak attacks to hit the spacebar or calculator quick key. He is in love with that calculator. If I leave the computer to get some water, when I come back it looks like I won at Solitaire. Does this mean I’m enrolling him in a toddler CompSci class? No. What does he know about what he likes? He’s three. He also likes it when I hit him with his beanbag until he falls down.

There are lots of apps and games to help develop your toddler’s squishy brain in a STEM direction, but let’s not get bogged down in big dreams of Nobel Prizes just yet, k? Learn the alphabet, play outside, figure out how not to poop during the night. These should be the toddler’s focus.

Question #3: That’s great. But where do I go?
If you want to encourage (or start) your child’s computer development, I can personally and highly recommend downloading the Magic Desktop for your little one. I’ve made my son a user account on my computer. His icon is his picture, and his password is his name. The only thing available under his account is the Magic Desktop. A lot of it is still beyond him, but he’s familiarizing himself with a keyboard, learning how to launch apps, and has a bunch of educational games he can complete on his own. There are also solid parental controls and customizable options for many ages. I’ve found it to be a great tool for both of us.

Are you more of a study/research type? Try Kids, Computers, and Learning: An Activity Guide For Parents. It will help you get hands-on at an early age, and keep control of what your kids are learning on computers. It’s a handy resource to keep in your library, especially for non-tech ‘rents who are struggling with first steps, and scared about second ones.

Do you have a little builder? Maybe more like four, five, or six, who likes to get his/her hands on the Legos? I LOVE the Kano for the younger set. Build your own computer! It is so simple for adults, even clueless ones, and is a great first project for young kids. Even if it turns out to not be their specific interest, they’ll love the satisfaction of building their own computer. Bragging rights are important.

If the Kano bored them, or they zipped right through it, try one of the Snap Circuits kits. The Jr. is their first-step system, where you and your child (if they’ll let you) can build over 100 experiments like flying saucers and alarms. Who doesn’t want to build a flying saucer? COME ON.

Did I go too far for you? Let’s take a step back and look at educational games on your computer. We utilize the websites from Nick Jr., PBS Kids, and especially our old, familiar friend, Sesame Street, which has a “Parent Tip” attached to each game, giving you an offline activity to do with your child. My son loves Dinosaur Train, possibly more than he loves me. Thanks to PBS Kids, we can watch the cartoon, play the game, and learn from the app on my phone.

Question #4: All in the same day?
No, and that’s where the real stuff comes in. More than being one specific thing, we want to encourage our kids to be healthy and well-rounded. Computers are awesome, but so are bicycles (I really love bikes). Being present while your child is learning is important, but so is letting them figure things out on their own. Using your brain is healthy, but so is using your hands and feet.

Show them some of the above programs and games, see if they like them. I can also recommend some non-electronic games that will stimulate the same sort of brain functions, like the Perplexus and Qwirkle for the first school years, and building sets for kids who have outgrown MegaBloks but aren’t quite ready for Kano. Our lucky kids have innumerable options available for learning these days. And we lucky parents can present them with a variety of ways to grow and express themselves.

Two last links of advice. If you’re looking for some guidelines as to what is actually educational and what is pap with good marketing, bookmark and use Common Sense Media and the Center on Media and Child Health. These are two useful resources that research and nurture child development in a media-rich culture.

So, be skeptical, but not scared. Be proactive, not reactive. The world has changed since we were kids, and it can be overwhelming. But, we all want to encourage our children to be the best people they can be, now and later. Our computers can be incredibly useful tools toward that goal, and it’s up to us to learn how to utilize them correctly.

Now go outside and play. 

Wednesday, May 6, 2015

Docker Machine on Windows - How To Set Up Your Hosts

12:00 Wednesday, May 6, 2015 Posted by Markus Eisele
, ,
I've been playing around with Docker a lot lately. There are many reasons for that; one for sure is that I love to play with the latest technology and even help build a demo or two, or a lab. The main difference between me and most of my coworkers is that I run my setup on Windows, like most of the middleware developers out there. So, if you followed Arun's blog about "Docker Machine to Setup Docker Host", you might have already tried to make this work on Windows. Here is the ultimate short how-to guide on using Docker Machine to administer and spin up your Docker hosts.

Docker Machine
Machine lets you create Docker hosts on your computer, on cloud providers, and inside your own data center. It creates servers, installs Docker on them, and then configures the Docker client to talk to them. You basically don't have to have anything installed on your machine beforehand, which is a whole lot easier than having to manually install boot2docker first. So, let's try this out.

You want to have at least one thing in place before starting with anything Docker or Machine related. Go and get Git for Windows (aka msysgit). It has all kinds of helpful Unix tools under the hood, which you need anyway.

Prerequisites - The One For All Solution
The first option is to install the Windows boot2docker distribution, which I showed in an earlier blog post. It contains the following bits configured and ready for you to use:
- VirtualBox
- Docker Windows Client

Prerequisites - The Bits And Pieces
I dislike the boot2docker installer for a variety of reasons, mostly because I want to know exactly what is going on on my machine. So I played around a bit, and here is the bits-and-pieces installation if you decide against the one-for-all solution. Start with the virtualization solution. We need something like that on Windows, because Windows just can't run Linux natively, and that is what Docker is based on. At least for now. So, get VirtualBox and ensure that version 4.3.18 is correctly installed on your system (VirtualBox-4.3.18-96516-Win.exe, 105 MB). WARNING: There is a strange issue when you run Windows itself in VirtualBox; you might run into problems starting the host.
And while you're at it, go and get the Docker Windows Client. Grab the final release from the test servers as a direct download (docker-1.6.0.exe, x86_64, 7.5MB). Rename it to "docker.exe" and put it into a folder of your choice (I assume it will be c:\docker\). Now you also need to download Docker Machine, which is another single executable (docker-machine_windows-amd64.exe, 11.5MB). Rename it to "docker-machine.exe" and put it into the same folder. Now add this folder to your PATH:
set PATH=%PATH%;C:\docker
If you change your standard PATH environment variable permanently, this will save you a lot of typing. That's it. Now you're ready to create your first Machine-managed Docker host.
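If you do most of your work from a bash shell (like the Babun setup mentioned earlier) rather than cmd.exe, a tiny guard function keeps you from appending the same directory to PATH over and over. This is only a sketch, and the name add_to_path is my own; it is not part of Docker or Machine:

```shell
# Hypothetical bash helper: append a directory to PATH only if it is
# not already present
add_to_path() {
  case ":$PATH:" in
    *":$1:"*) ;;                 # already on PATH, nothing to do
    *) PATH="$PATH:$1" ;;
  esac
}

# Safe to call repeatedly, e.g. from ~/.bashrc
add_to_path "/c/docker"
add_to_path "/c/docker"
```

Calling it twice, as above, still leaves PATH with a single /c/docker entry.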

Create Your Docker Host With Machine
All you need is a simple command:
docker-machine create --driver virtualbox dev
And the output should state:
INFO[0000] Creating SSH key...
INFO[0001] Creating VirtualBox VM...
INFO[0016] Starting VirtualBox VM...
INFO[0022] Waiting for VM to start...
INFO[0076] "dev" has been created and is now the active machine.
INFO[0076] To point your Docker client at it, run this in your shell: eval "$(docker-machine.exe env dev)"
This means you just created a Docker host using the VirtualBox provider, with the name "dev". Now you need to find out which IP address the host is running on:
docker-machine ip
If you want an easier way to configure the environment variables needed by the client, just use the following command:
docker-machine env dev
export DOCKER_CERT_PATH="C:\\Users\\markus\\.docker\\machine\\machines\\dev"
export DOCKER_HOST=tcp://
This outputs the Linux version of the environment variable definitions. All you have to do is change the "export" keyword to "set", remove the quotes, and collapse the double backslashes to single ones, and you are ready to go:
C:\Users\markus\Downloads>set DOCKER_TLS_VERIFY=1
C:\Users\markus\Downloads>set DOCKER_CERT_PATH=C:\Users\markus\.docker\machine\machines\dev
C:\Users\markus\Downloads>set DOCKER_HOST=tcp://
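That hand conversion can also be scripted. The sed filter below swaps the keyword, strips the quotes, and collapses the double backslashes; docker_env_to_set is a made-up helper name, not part of the Docker tooling:

```shell
# Hypothetical filter: rewrite the Linux-style "export" lines emitted by
# `docker-machine env dev` as Windows "set" statements
docker_env_to_set() {
  sed -e 's/^export /set /' \
      -e 's/"//g' \
      -e 's/\\\\/\\/g'
}

echo 'export DOCKER_CERT_PATH="C:\\Users\\markus\\.docker"' | docker_env_to_set
# prints: set DOCKER_CERT_PATH=C:\Users\markus\.docker
```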

Time to test our Docker Client
And here we go. Now run WildFly on your freshly created host:

docker run -it -p 8080:8080 jboss/wildfly
Watch the container image being downloaded and check that WildFly is running by pointing your browser to http://<docker-machine ip>:8080.
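WildFly can take a few seconds to boot inside the container, so instead of hitting reload in the browser you can poll from the shell. wait_for_http is a throwaway name of mine, and the IP in the example is just typical docker-machine output, not a fixed address:

```shell
# Hypothetical helper: poll a URL until it answers, or give up after N tries
wait_for_http() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -sf -o /dev/null "$url"; do
    i=$((i+1))
    if [ "$i" -ge "$tries" ]; then
      return 1                  # server never came up
    fi
    sleep 1
  done
}

# Example: wait_for_http "http://192.168.99.100:8080" 60 && echo "WildFly is up"
```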

Congratulations, you have set up your very first Docker host with Machine on Windows.

Tuesday, May 5, 2015

Red Hat is now a Strategic Eclipse Member

14:56 Tuesday, May 5, 2015 Posted by Markus Eisele
The Eclipse Foundation announced today that Red Hat has become a strategic member of the Eclipse Foundation. Red Hat has been a long-time solution member of the Eclipse Foundation and actively involved in the Eclipse open source community. As a new strategic member, Red Hat will take a seat on the Board of Directors of the Eclipse Foundation, strengthening its support of the Foundation.

What Does That Really Mean?
The Eclipse Foundation offers several different types of membership with different levels of commitment. We've always been active contributors and have officially been a solution member for some time. While the solution member commitment was very limited, the new Strategic Developer membership allows us to be seen as a major contributor of technology to Eclipse. One main point here is that we will have at least eight developers assigned full time to developing Eclipse technology, and we will be represented on the Eclipse Foundation Board of Directors, giving us direct influence over the strategic direction of Eclipse. Strategic members also have a seat on the Eclipse Requirements Council, providing input and influence over the themes and priorities of Eclipse technology.

Eclipse Is An IDE. Why Are We Interested In That?
It is true that the flagship product of the Eclipse Foundation is the Eclipse IDE, but the Foundation actually hosts a lot of other projects that are not related to the IDE at all. Red Hat already contributes to a lot of them, and we just recently launched a website with all the details about the membership and the projects we lead or co-lead.
By becoming a Strategic Developer member, we also plan to get more involved in how the Eclipse IP and development processes work and evolve. That has become more important in making Eclipse a better home for fast-moving projects.
On top of that, Eclipse has a lot of other areas going on that Red Hat is keeping an eye on, especially web IDEs and the Internet of Things.

What Is Next?
We've been contributing, and continue to help make Eclipse Mars a great release together with the rest of the community. We are especially working on fixing GTK/SWT on Linux, adding Docker support, and improving the JavaScript Development Tools.

If you are interested in hearing more about this or have a suggestion, please feel free to contact Max Rydahl Andersen (@maxandersen). And in general, reach out to the projects we are involved with and learn how to contribute to the Eclipse ecosystem.

Further Reading!
The official blog on JBoss Tools.
The Eclipse Foundation Press Release
Red Hat's Commitment To Eclipse

Thursday, April 30, 2015

Continuous Delivery with Docker Containers and Java EE

12:00 Thursday, April 30, 2015 Posted by Markus Eisele
Organizations need a way to make application delivery fast, predictable, and secure, and the agility provided by containers, such as Docker, helps developers realize this goal. For Java EE applications, this enables packaging of the application, the application server, and other dependencies in a container that can be replicated in build, test, and production environments. This takes you one step closer to achieving continuous delivery. At least this was the abstract of the webinar Thomas and I gave a couple of days ago. This is the supporting blog post with a few more details about the setup, including all the links to the source code and the demo. Find a more detailed technical walkthrough in the developer interview, also embedded below. A big thank you to my co-author Thomas, who helped me put this blog post together.

What Did We Cover?
First we're going to talk a bit about why everybody is keen on optimizing application delivery these days. Increasingly complicated applications are putting even more pressure on infrastructures, teams, and processes. Containers promise to bring a solution by keeping applications and their runtime components together.
But let's not stop there and look beyond what seems to be a perfect topic for operations. It leaks more and more into the developer space. As a developer, it is easy to ignore the latest hype and just concentrate on what we do best: delivering working applications. But honestly, there is more to it. Java EE especially requires more than just code. So containers promise to make our lives easier.
Just talking about containers isn't the whole story. They have to be usable and out there in production for developers to finally use them. This is where we briefly sneak a peek at what is coming with OpenShift v3 and how it fits into the bigger picture.
After this brief introduction, Thomas is going to walk you through the details, starting with Docker Containers and how they allow for a complete Continuous delivery Chain which fully supports DevOps.

But why do we need containers? And why now?
Most importantly, new architectural approaches like microservices drive us away from large VMs and physical servers running monolithic applications. Individually bootstrapped services are a natural fit for container-based deployment, because everything needed to run them is completely encapsulated. Plus, the urge for optimized operations is driving more and more infrastructures towards the cloud model. We will see containers-as-a-service offerings, which will be faster to deploy, cheaper to run, and easier to manage than VMs. Enterprises will run PaaS products that focus on enterprise-class operations using containers as a target. Distributing software in containerized packages instead of virtual machines is far more complete and far more standardized, and easier to adapt to different suppliers and vendors, no matter what language or runtime the product is built for. Enterprises don't necessarily have to focus on a single platform anymore to achieve optimized operations and costs. The container infrastructure allows a more heterogeneous technology base while upholding standardized operational models, with the potential for future optimizations and add-ons, for example around security. Containers and their management systems are the glue between developers and operators, and a technological layer to support the DevOps movement. To make it short: containers are ready.

What do I as a Java EE developer gain from all of that?
Containers are about what's inside of them, not outside of them. It's easy to compare this with PaaS offerings: developers don't want to care about configuration or hosting; they just want a reliable runtime for their applications. Beyond containers, there's not much you need. Standard formats, standard images, and even the option to use a company-wide hub for them will make development teams a lot more efficient. This also relates to how we set up local environments and roll them out to our teams. Differently configured instances can be spun up and torn down in seconds. No need to maintain different versions of middleware or databases, or to mess around with paths or configurations. Preconfigured containers will reduce team setup times significantly and allow testing with different configurations more easily. Images can be centrally developed, configured, and maintained, according to corporate standards and including specific frameworks or integration libraries. Responsibility and education are the key parts in terms of motivation. Today's full-stack developers want to be responsible for their work of art, end to end. Programming stopped being a tedious job using the same lame APIs day in and day out. As a matter of fact, containers allow for a complete round trip from building to packaging to shipping your applications through the different environments into production. And because everything can be versioned, centrally maintained, and relies on the same operating system and configuration in every environment, the complete software delivery chain becomes a lot more predictable with containers.
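To make this concrete, packaging a Java EE application into an image can be a two-line affair. This is only a sketch: the jboss/wildfly base image is the real community image on Docker Hub, but the target/my-app.war artifact name is a placeholder for whatever your build produces.

```dockerfile
# Start from the official WildFly community image
FROM jboss/wildfly

# Copy the application archive into WildFly's auto-deployment directory;
# "target/my-app.war" is a placeholder for your own build artifact
ADD target/my-app.war /opt/jboss/wildfly/standalone/deployments/
```

Built with something like `docker build -t my-app .`, the resulting image carries the server, its configuration, and the application together, which is exactly what makes it reproducible across build, test, and production.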

How Does OpenShift Fit Into All of That?
The perfect example how the market is shifting towards containers is OpenShift. It comes in different editions:
  • OpenShift Origin is the Open Source Project for Red Hat’s cloud offering
  • OpenShift Online is Red Hat's public cloud application development and hosting platform that automates the provisioning, management, and scaling of applications so that you can focus on writing the code for your business, startup, or next big idea. Try it out yourself by signing up.
  • OpenShift Enterprise is an on-premise, private Platform-as-a-Service (PaaS) solution that allows you to deliver apps faster and meet your enterprise's growing application demands.

Depending on your needs, you're free to pick the solution that fits best: from building your own PaaS with Origin to running a fully supported on-premise PaaS yourself.
And we're going big with the next version of OpenShift! With each milestone of Origin comes a new version of OpenShift, and the Origin source code repository for OpenShift 3 is now available. It is progressing towards a whole new architecture, entirely re-engineered from the ground up. This new architecture integrates Docker and the Kubernetes container orchestration and management system, available on an Atomic host optimized for running containerized applications. On top of all that, OpenShift will incorporate effective and efficient DevOps workflows that play a critical role in platform-as-a-service to accelerate application delivery.

What will OpenShift v3 Look Like?
OpenShift adds developer and operational centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams and applications.
Starting at the bottom of everything, Red Hat has been working with the Docker community to evolve our existing containers technology and drive a new standard for containerization through the libcontainer project. This work led to announcing Docker support in RHEL 7 and the launch of Project Atomic to develop a new container-optimized Linux host. This new container architecture is at the core of OpenShift v3.
The OpenShift v3 Cartridge format will adopt the Docker packaging model and enable users to leverage any application component packaged as a Docker image. This will enable developers to tap into the Docker Hub community to both access and share container images to use in OpenShift.
In OpenShift v3, we will be integrating Kubernetes in the OpenShift Broker to drive container orchestration.
OpenShift v3 will bring new capabilities for provisioning, patching and managing application containers, routing and networking enhancements, and provisioning and managing the OpenShift platform itself.  The goal is to deliver a best of breed user experience for OpenShift developers.
Be excited for the upcoming release!

The Complete Demo
Once you're done with the webcast replay, it's time to get your hands on the source code, grab a #coffee+++, sit back, and enjoy the demo in 30 minutes instead of just 10. Thomas covers all the details, and I was nice enough to ask some nasty questions in between.
Don't forget, you can always re-watch the original webinar.

And here is an architectural overview as a prezi presentation, which Thomas showed in the webcast.

Links and Further Readings
Some food for thought and homework: the link collection from the webinar, plus some more resources for you to dig through.

Tuesday, April 28, 2015

Welcome Google Summer of Code Students 2015 to JBoss

13:30 Tuesday, April 28, 2015 Posted by Markus Eisele
Google Summer of Code is a global program that offers student developers stipends to write code for various open source software projects. It works with many open source, free software, and technology-related groups to identify and fund projects over a three month period. Since its inception in 2005, the program has brought together over 8,500 successful student participants from 101 countries and over 8,300 mentors from over 109 countries worldwide to produce over 50 million lines of code. Through Google Summer of Code, accepted student applicants are paired with a mentor or mentors from the participating projects, thus gaining exposure to real-world software development scenarios and the opportunity for employment in areas related to their academic pursuits. In turn, the participating projects are able to more easily identify and bring in new developers. Best of all, more source code is created and released for the use and benefit of all.

This is a big GSoC year for JBoss - 13 students accepted! 
Red Hat JBoss Middleware has been contributing for a couple of years already, and after beginning with 8 students in 2012 we have been hovering around that number. This year we had so much support from various projects in JBoss, and so many great proposals by students, that we were able to accept 13 students! Another warm welcome from all of us here at Red Hat!

If you come across any of the names on the project mailing-lists or IRC channels, please make sure to give them a helping hand and support them with their ambitious plans for the summer.

Find out more about recent developments and stay up to date with GSoC 2015 on the Google Open Source Blog. The students will begin posting status updates to their own blogs, which will be aggregated soon.

The List Of Projects
Find all the details for the thirteen accepted projects in the following table. Please welcome all of our students and a very hearty "Thank you!" goes out from me to all the mentors! Good luck and success for the next months!

Implement a Database Migration system as a Forge addon
Student: Wisem Zrellli
Mentor: George Gastaldi
Project: JBoss Forge
Database migration is an essential part of almost every web-based application. The project consists of creating a Forge addon to support database migration and ensure that all schemas are in sync with the code. This goes along with the Forge philosophy of building Java EE applications in a fast, productive, and joyful way. The addon will wrap several features offered by the Liquibase migration tool and will mainly support JPA entities.

Hawkular Android Client
Student: Artur Dryomov
Mentor: Daniel Passos
Project: Hawkular
Mobile phones, especially smartphones, came into our lives very quickly. We use them every day, sometimes more often than our computers. At the moment there is no comfortable way to use Hawkular on Android; there is only a web interface. This suggests that some work should be done to provide a new way for users to interact with Hawkular. My work is to implement this idea in an Android application for Hawkular.

jBPM on Android
Student: Supun Athukorala
Mentor: Kris Verlaenen
Project: jBPM
jBPM is a flexible Business Process Management (BPM) suite which can be accessed via a web-based workbench, but not by mobile users. The idea of the project is therefore to create a mobile UI for the jBPM console where mobile users can interact with some of its features. The jBPM core engine itself is a lightweight workflow engine which can run on Android as well. So apart from the mobile UI, a prototype of jBPM on Android will also be created.

Hibernate Search Tools
Student: Bocharov Dmitry
Mentor: Sanne Grinovero
Project: Hibernate
Hibernate Search is a powerful project with a lot of possibilities. However, it needs instruments that allow quick experimentation and provide an easy way to inspect indexes. The aim of this project is to create a number of such tools for Hibernate Search.

Docker Addon for JBoss Forge 2
Student: Devanshu Singh
Mentor: George Gastaldi
Project: JBoss Forge
The idea is to create a Docker addon which will facilitate support for Docker technologies for Forge users. It aims to provide features like an API to support image and container operations, creation of Docker containers for new projects, and a Dockerfile linter, all enclosed inside a Forge 2 addon.

Keycloak Certificate Management System
Student: Giriraj Sharma
Mentor: Stian Thorgersen
Project: Keycloak
Mobile, the Internet of Things, and Bring Your Own Device in general introduce a bigger demand for Public Key Infrastructure. The idea is to enhance Keycloak with a Certificate Authority to issue certificates for users, applications, and devices. An interesting extension to the project will be support for automatic certificate management via the Automated Certificate Management Environment (ACME) specification. If feasible, this will also delegate to Let's Encrypt for public domains.

Application for "Make Ceylon scriptable"
Student: Miguel Gordian
Mentor: Stephane Epardaud
Project: Ceylon
Making Ceylon scriptable with a CLI.

Java to Ceylon Code Converter
Student: Rohit Mohan
Mentor: Stephane Epardaud
Project: Ceylon
This project aims at converting Java code to Ceylon easily, through the Ceylon IDE, as a standalone utility, and as a Ceylon command-line plugin. Java code will be converted to Ceylon by copy-pasting it into the Ceylon IDE, and the converter will also be available as a separate tool that can be used from the command line.

Dynamic visual BPMN2 Diff tool for jBPM Web Designer
Student: Roman Procopenco
Mentor: Tihomir Surdilovic
Project: jBPM 
A visual diff tool for the jBPM Web Designer. The tool will provide change-tracking graphs that give users an immediate idea of the changes made to a business process. It will offer different options to help users understand those changes, such as a comparison of the whole graph as well as a comparison between two sub-parts of the process.

Application Development with jBPM and MGWT
Student: Rodrigo Osmar Garcete
Mentor: Mauricio Salatino
Project: jBPM 
I'll improve the design of the application by doing two things: 1. Migrate the existing application to version 2.0 of MGWT. 2. Add new functional features that support the mobile world of devices in a clear and transparent way.

Hibernate Search with any JPA implementor
Student: Martin Braun
Mentor: Sanne Grinovero
Project: Hibernate
I've been around the Hibernate Search project for around 1-1.5 years now, starting with "dumb" questions, and now I want to contribute actual source code and pay back for the nice help I got :). Basically, my proposal is to implement support for any JPA provider in Hibernate Search, to remove the necessity of Hibernate ORM when using it.

Hawkular - pluggable data processors for metrics
Student: Aakarsh Agarwal
Mentor: Heiko W. Rupp
Project: Hawkular
Hawkular Metrics deals with computation of values and operations on raw data. This project aims to develop an interface for plugins that improve the performance of Hawkular Metrics, making it more dependable and dynamic, and extending the scope of its usage in operating on data sets. Plugins are needed to apply statistical algorithms to the data and compute the necessary functions. Such plugins may be plugged in at runtime whenever the user wants.

WebPush support for mobile cloud services
Student: Idel Pivnitskiy
Mentors: Matthias Wessendorf, Sébastien Blanc
Project: AeroGear
AeroGear WebPush is a proof-of-concept implementation of the WebPush protocol specification. It allows maintaining a single HTTP/2 connection which can serve as many client applications as needed. It also enables a service worker to receive notifications even if the target application of those notifications is not currently active. It would be perfect to add support for mobile cloud services and try it with IoT devices!

Wednesday, April 22, 2015

JavaOne 2015 - Tips And Recommendations For Your Submission.

13:00 Wednesday, April 22, 2015 Posted by Markus Eisele
Everybody knows JavaOne. It feels like it's been there forever. And even if we have had our ups and downs, and the location isn't exactly what we want it to be, and San Francisco is expensive, and and and. It is the number one premium conference for all kinds of Java. And being part of the program committee ("Java, DevOps, and the Cloud" and "Java and Server-Side Development") again this year makes me proud. This is my personal call to action for you: if you haven't considered submitting something to JavaOne yet, time is running out. The CfP closes on April 29th, and the different review teams of the individual tracks are eagerly awaiting all your awesome submissions.
We can brag as much as we want, but JavaOne would be nothing without all the great speakers. That is why we need your help to make sure that the 2015 edition will be even more awesome than past ones. Here are some ideas and recommendations for the undecided.

What Do I Want To Hear From You About Cloud?
The evolution of service-related enterprise Java standards has been underway for more than a decade, and in many ways the emergence of cloud computing was almost inevitable. Whether you call your current service-oriented development “cloud” or not, Java offers developers unique value in cloud-related environments such as software as a service (SaaS) and platform as a service (PaaS). The Java Virtual Machine is an ideal deployment environment for new microservice and container application architectures that deploy to cloud infrastructures. And as Java development in the cloud becomes more pervasive, enabling application portability can lead to greater cloud productivity.
As this track covers everything from service oriented development and architecture approaches to continuous delivery and DevOps, I expect a lot of different kinds of proposals to come in here. If you want my eyes to catch your proposal here are some ideas:
  • Microservices are cool. I get that. But there's more to it than just buzzword bingo. What I am looking for are some real world ideas or at least something that you tried out. Don't just try to explain what they are (I dropped the famous Fowler slide from my presentations some time ago already). Try to explain what they solve for you and why you've just not gone down the typical Java EE road. And of course, there's stuff like OSGi and Vert.x which also might be a suitable way to do microservices in Java. Surprise me with your experiences.
  • Containers are cool. And Docker is one of them. There's a lot more. And speaking about Containers isn't exactly Java related. For me it will not be enough if you just Docker-ize everything. Please make sure to link your container proposal to Java. This can be anything about introductory content, or how to make the most out of containers as a Java developer. Even in this particular area, I think it is most important to stress your real experiences. Show me your code; your story.
  • PaaS is cool. Oh yes. And we're going down the cloud road further over the next couple of years. Please don't just pitch a product. Don't just tell me how to use OpenShift, Cloud Foundry, or Spring Cloud. That is nothing I want to hear. There are readmes and documentation out there. Show me what you did with the PaaS of your choice. Tell me about your choice and let me know what worked and what didn't. Found out about something that is extremely rough? Or very easy to do? Got some best practices to share? That's what I am looking for.
  • DevOps is cool. Sort of. Not many of us do it. Enterprises have a hard time with it. What made you look into this topic? What worked, and why? Was technology a key to your success? Tell me more about how you made it work and in which context.
What Makes A Good Server Side Track Submission?
Java Platform, Enterprise Edition (Java EE) is the standard in community-driven enterprise software. Developed using contributions from industry experts, commercial and open source organizations, Java user groups, and countless individuals, Java EE offers developers a rich enterprise software platform. And believe it or not, I've been working with Java EE on different containers for almost 15 years now. So, what can surprise me in terms of a good presentation?
  • Java EE 6 isn't cool anymore. We've been there before. If you want to talk about this or even earlier versions, JavaOne might be too fancy for you. We are years into Java EE 7 already, and I think I have heard everything one can say about earlier versions.
  • Java EE 8 is for spec leads and innovators. And this is pretty true. If you really want to talk about something that is hopefully going to be announced at JavaOne 2016, you had better be an expert group member, part of an Adopt-a-JSR initiative, or an active committer on an open source project that strives to deliver an early implementation. I might consider other community activities which want to help shape EE 8, but beyond this point it might be a bit tricky to get an EE 8 talk accepted this year with my vote on it.
  • Java EE 7 is where the music plays. This is what we are hopefully using today. Whether you want to talk about an individual specification or the full stack, showcase your app or product based on it, or share a migration story or some real production war stories: this is right up my alley.
  • Deployment War Stories are yesterday. We want success stories. Tell us what worked, what didn't, and how you solved it. Surprise me with an entertaining talk about how you managed to do enterprise releases more often than twice a year. How do you package and deliver your application? How have you been able to implement a DevOps workflow? Crossing the container bridge here, you might consider adding the Cloud and Container track as a second option.
  • Security is your wildcard. Because it's my favorite topic. If you solved some complex rights-and-roles requirement and did it with decent performance, or if you came up with a highly secure multi-factor authentication solution, I would love to hear about that.
  • Product pitches are for beginners. We do know better, don't we? Wanna talk about JBoss EAP or WebSphere or WebLogic? I'm sure there's a conference for that. I want to hear people talking about open source and community-driven projects: how they contributed to them and how they used them successfully. 
More generally, there are some good write-ups about what your submission should look like. Arun did a great summary, and I think some basic tips are also included on the JavaOne website. Please keep in mind that the program committee members might not know you. And we all invest double-digit hours into reviewing all the amazing submissions. So please make it as easy as possible for us, and try to walk in our shoes a bit before submitting.

Good Luck! We're nothing without you! Keep trying and give us your best! I can't wait to see what you come up with. Submit your proposals today. Time is running out!

Tuesday, April 21, 2015

Time to toss out Java 7 - JBoss EAP 6.4 is here!

12:00 Tuesday, April 21, 2015 Posted by Markus Eisele
What a great start to the week. JBoss EAP 6.4 was released, and among a ton of technical enhancements and new features, the biggest is that Java 8 has been added to the list of supported configurations. And this includes both the Oracle JDK and the IBM JDK.
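To give a feel for what that support means in practice, here is a minimal, self-contained sketch (not taken from the release notes) of the kind of Java 8 language features — lambdas, method references, and the Streams API — that application code deployed on EAP 6.4 can now use:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Demo {
    public static void main(String[] args) {
        // Illustrative data only; any application collection works the same way.
        List<String> servers = Arrays.asList("WildFly", "EAP", "AS7");

        String joined = servers.stream()
                .filter(s -> s.length() > 3)       // lambda expression
                .map(String::toUpperCase)          // method reference
                .collect(Collectors.joining(", ")); // Streams collector

        System.out.println(joined); // prints "WILDFLY"
    }
}
```

None of this requires anything EAP-specific; the point is simply that, with Java 8 now in the supported configurations, such code can run on a supported platform.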

Java SE 7 End of Public Updates Notice
After April 2015, Oracle will no longer post updates of Java SE 7 to its public download sites. Existing Java SE 7 downloads already posted as of April 2015 will remain accessible in the Java Archive on the Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download in order to continue receiving public updates and security enhancements. This means that Java 7u79 and 7u80 are the last public releases of Java 7. So I guess we're just in time with Java 8 support.

WildFly vs. EAP - A Symbiotic Relationship
The so-called upstream community project WildFly is the basis for the commercial (yet open source) JBoss Enterprise Application Platform product. While WildFly continues the Java EE journey, implementing the latest and greatest iteration of the spec, introducing a host of great new features, and striving for even better performance, the mission for EAP is long-term and strategic: JBoss EAP follows up with a clear focus on enterprise-level performance and stability, long-term maintenance, and first-class professional support.

To download JBoss EAP as a developer, you must have an account. You also need to accept the terms and conditions of the JBoss Developer Program, which provides $0 subscriptions for development use only. Read more about the JBoss Developer Program.

For further information and details refer to the full JBoss EAP 6.4.0 documentation suite.