Enterprise grade Java.
You'll read about Conferences, Java User Groups, Java, Integration, Reactive, Microservices and other technologies.

Tuesday, December 20, 2016

No, being wary doesn't hurt Java. A comment about Java licensing.

10:19 Tuesday, December 20, 2016
Screenshot from the Oracle Website (source)
So, Oracle wants to make money from Java. And The Register published a very polarising piece with a super catchy title about it. According to their sources, "Oracle is massively ramping up audits of Java customers it claims are in breach of its licences". While the Twitter-verse went ballistic over people criticising Oracle's behaviour, I want to take a minute to recap why I am against this method and to clarify that normal developers and users have nothing to fear!

People complaining about Oracle are hurting the community
You have known me in and around the Oracle-sphere for many years. And Twitter is at the moment probably the most important way to get the latest news from me. I rarely post on this blog lately, because there is so much to do and I already write a bunch of posts for my employer.
The tweet I sent was basically the title of The Register article, and replies and reactions implied that just by spreading a FUD article, I was hurting the community. Let's look into the details.

Are Java developers affected?
NOTE: First of all, I am not a lawyer. This isn't meant to be legal advice! If you are in doubt about your compliance with the BCL, contact a licensing lawyer, your local Oracle User Group or Oracle.

NO. We are covered. The Binary Code License (BCL) explicitly mentions:

"Oracle grants you a non-exclusive, non-transferable, limited license without fees to reproduce internally and use internally the Software complete and unmodified for the purpose of designing, developing, and testing your Programs."
(BCL, April 2013)

There are some cases you should be aware of.

"You may not use the Commercial Features for running Programs, Java applets or applications in your internal business operations or for any commercial or production purpose, or for any purpose other than as set forth in Sections B, C, D and E of these Supplemental Terms."
(BCL, April 2013)

- If you use a commercial feature in your local environment and stage it to test, you might not be covered.
- If you access any of the JMX MBeans under oracle.jrockit.management and bea.jrockit.management to enable observation of a running JVM, independently of the monitoring solution, you are not covered.
- Flight Recorder, Mission Control and everything mentioned in the PDF linked below are commercial features, and you can only use them on your local machine.

A complete list of commercial features can be found in Table 1-1 on page 5 of the Java SE product edition description (PDF). As a general rule of thumb, make sure NOT to use the -XX:+UnlockCommercialFeatures option.
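As a quick self-check, you can inspect the flags your own JVM was started with. This is a minimal sketch using only the JDK's management API; the class name is mine, and the check simply looks for the exact flag string mentioned above.

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class CommercialFlagCheck {

    // Returns true if the commercial-features unlock flag is present
    // among the given JVM startup arguments.
    public static boolean commercialFeaturesUnlocked(List<String> jvmArgs) {
        return jvmArgs.contains("-XX:+UnlockCommercialFeatures");
    }

    public static void main(String[] args) {
        // The RuntimeMXBean exposes the arguments this JVM was launched with.
        List<String> jvmArgs =
                ManagementFactory.getRuntimeMXBean().getInputArguments();
        System.out.println("Commercial features unlocked: "
                + commercialFeaturesUnlocked(jvmArgs));
    }
}
```

Run it with the same flags as your application to see whether the unlock flag slipped into your startup scripts.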

This mostly affects companies which are already Oracle customers and have access to the most interesting commercial features (e.g. the MSI Enterprise installer). Funnily enough, WebLogic for example includes a Java SE license (random product link).

I'm still afraid. What are the alternatives?
There are various alternatives. First of all, there is the OpenJDK itself. Windows builds are a little tricky, as they are not available directly from the project; only the latest development releases are available for Windows. But you can also get binaries from Azul (Zulu) and Red Hat. If you are a Red Hat customer of any JBoss Middleware product, you also get support from them.

Why do I dislike what Oracle does?
I do like that Oracle is sponsoring Java development, and I acknowledge that they invest significant manpower in the project. But that doesn't give them carte blanche to get away with everything.
My main point of criticism is that Oracle makes it easy to accidentally use commercial features. And yes, as people on Twitter pointed out, you should just read the license and know about it. But I believe that separation of concerns is a good design decision. Instead of directing potential customers and users from the OpenJDK site to java.oracle.com with the comment "which are based largely on the same code", without even distantly mentioning that these builds contain commercial features which you aren't allowed to use, it would be easier to have separate commercial and open source builds.

If you click around the Oracle Java website, you find a couple of hints at licenses. But they don't give you the full picture.
Java SE Licenses overview page (source)

And there are plenty more examples. The number one search result from (my personalized) Google for "Java License Business" leads to this page, for example. Speaking of misleading information, I think this is a good example. If license conditions aren't well known, it's also easier to change them. Which probably hasn't been the case, since the linked BCL PDF was last updated on 02 April 2013.

My personal opinion is that it can't be healthy for the community to stop improving. And improvements don't come from silence. Unfortunately, Oracle doesn't have a great history of listening to its communities, which also might lead to some catchy and inappropriate headlines from time to time. Nevertheless, let's stay wary and keep communicating things that could improve. It'll help the community more than it hurts in the long run: open communication is a sign of a healthy community, and the way it deals with feedback and criticism is the gauge of its values.

Friday, September 16, 2016

Replacing legacy Java EE application servers with microservices and containers

18:39 Friday, September 16, 2016
Lightbend recently ran a survey with more than 2,000 JVM developers, and the results have just been published. The survey was launched to discover correlations between development trends and IT infrastructure trends, how organizations at the forefront of digital transformation are modernizing their applications, and real production usage break-downs of today's most buzzed-about emerging developer technologies. While you can download the complete results from the official website, I would love to highlight some things I found very interesting, especially around containers and microservices.

Lightweight Containers Are Democratising Infrastructure and Challenging the Old Guard Java EE App Servers
The momentum around containers has built much more quickly than many anticipated. People are looking at containers as the great hope for infrastructure portability they've been chasing for a long time. I was always interested in learning how containers are actually used by developers in the wild. The bottom line is that containers are really happening in production right now. What types of applications people are putting into containers is the million dollar question. Today it's primarily greenfield applications, with far fewer examples of legacy applications being modernized for containers in production. This is the reason everybody is looking for more lightweight approaches to running their applications on the JVM without the overhead of Java EE servers. The survey has more details about which kinds of containers and orchestration models are in use.

Microservices and Fast Data Are Driving Application Modernization Efforts
Microservices-based architectures advocate building a system from a collection of small, isolated services, each of which owns its data and is independently isolated, scalable and resilient to failure. Services integrate with other services in order to form a cohesive system that's far more flexible than legacy monolithic applications. But how is this taken into production? Are people already building those systems, or is this just hype? A good share of the respondents already run a microservices-based system in production. And as I've said in my talks before, the driver is mostly the need for real-time data handling and streaming requirements.

The survey reveals a lot more detail, and I strongly suggest that you look into it. One thing is for sure: the changing requirements put on today's architectures can't easily be met by just creating new applications on old platforms. And even Java EE is starting to adopt those new principles, as JavaOne will hopefully show in a couple of days. I'll keep you posted.

Wednesday, August 31, 2016

Join me for the Reactive Microservices Roadshow

10:00 Wednesday, August 31, 2016
What are the benefits of the Reactive Manifesto and the microservices approach, especially for those who want to fundamentally modernize their business? I will discuss this at the free "Reactive Microservices Roadshow" event on 28 September in Berlin. The event is hosted by codecentric AG and Lightbend. There will be three talks which walk you through the most important parts.



Reactive Microservices with Akka (Heiko Seeberger)
The Reactive Manifesto defines essential qualities which modern systems need to have in order to cope with today's requirements: Responsiveness, which is the cornerstone of usability and utility, requires resilience and elasticity; all of these are based upon asynchronous messaging. In this talk we look at the meaning of "Reactive" for a microservice architecture, for individual services and for their collaboration.
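To make the "based upon asynchronous messaging" point concrete, here is a tiny plain-JDK sketch (class and method names are mine, not from the talk): the caller hands off a request and receives a future instead of blocking, which is what keeps a service responsive under load.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncGreeter {

    // A hypothetical service call that responds asynchronously:
    // the caller is never blocked while the reply is produced.
    public static CompletableFuture<String> greet(String name) {
        return CompletableFuture.supplyAsync(() -> "Hello, " + name + "!");
    }

    public static void main(String[] args) {
        // Register a callback; nothing blocks while the reply is computed.
        CompletableFuture<Void> done =
                greet("Reactive").thenAccept(System.out::println);
        done.join(); // block only at the very edge of the program
    }
}
```

Akka takes the same idea much further with actors and supervised failure handling, but the non-blocking hand-off shown here is the common foundation.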

AutoScout24 goes Microservices (Christian Deger)
Fed up with stop-and-go in your data center? Why not shift into overdrive and pull into the fast lane? Learn how AutoScout24, the largest online car marketplace Europe-wide, is building its Autobahn in the cloud: reinventing itself by making a radical transition from monoliths to microservices, from .NET on Windows to Scala on Linux, from data center to AWS, and from "built by devs, run by ops" to a DevOps mindset.

One microservice is no microservice—they come in systems! (Markus Eisele)
Building a complete system out of individual Microservices is not as simple as we're being told. While Microservices-based Architecture continues to draw more and more attention we're also starting to learn about the trade-offs and drawbacks. Individual Microservices are fairly easy to understand and implement, but they only make sense as systems, and it is in-between the services that the most challenging (and interesting) problems arise—here we are entering the world of distributed systems.

Find a German event description on the codecentric.de website. Looking forward to interesting discussions! Make sure to REGISTER for FREE TODAY!

Lightbender at JavaOne 2016

07:20 Wednesday, August 31, 2016
It's only 16 short days until the Java community starts traveling to San Francisco again. The annual family gathering at JavaOne is the place to be, and it's time to get a little excited about attending once again. This year, Lightbend is a bronze sponsor and I am excited to be one of the featured speakers! A lot is going to happen during the week, and you have to plan your schedule accordingly to get the most out of it.

The sessions by Lightbend engineers are a must-attend.

Monitoring Reactive Microservices [CON1091] with Henrik Engström (@h3nk3)
Tuesday, Sep 20, 4:00 p.m. - 5:00 p.m. | Parc 55—Market Street
Reactive applications are the next major evolution of the internet. They allow for applications to be responsive, scalable, and resilient by building on a fully event-driven foundation. However, at the same time, this way of architecting systems introduces some new issues. One of these issues is how to monitor this type of system. This session covers the traditional monitoring approach and different ways to monitor asynchronous applications and finally looks at the way Lightbend has chosen to build a monitoring tool for reactive applications. After this presentation, developers will have a better understanding of how to monitor microservices in a reactive architecture.

End-to-End Reactive Streams, from Socket to Business [CON1852] with Konrad Malawski (@ktosopl)
Thursday, Sep 22, 11:30 a.m. - 12:30 p.m. | Hilton—Continental Ballroom 1/2/3
The Reactive Streams specification, along with its TCK and various implementations such as Akka Streams, is coming closer and closer with the inclusion of the RS types in JDK 9. Using an example Twitter-like streaming service implementation, this session shows why this is a game changer in terms of how you can design reactive streaming applications by connecting pipelines of back-pressured asynchronous processing stages. The presentation looks at the example from two perspectives: a raw implementation and an implementation addressing a high-level business need.

Stay Productive While Slicing Up the Monolith [CON6472] with myself (@myfear)
Tuesday, Sep 20, 11:00 a.m. - 12:00 p.m. | Parc 55—Mission
With microservices-based architectures, developers are left alone with provisioning and continuous delivery systems, containers and resource schedulers, frameworks and patterns to slice monoliths. How to efficiently develop them without having to provision complete production-like environments locally by hand? How to run microservices-based systems on local development machines, managing provisioning and orchestration of hundreds of services from a command-line tool without sacrificing productivity enablers. New buzzwords, frameworks, and hyped tools have made Java developers forget what it means to be productive. This session shows how much fun it can be to develop large-scale microservices-based systems. Understand the power of a fully integrated microservices development environment.

One Microservice Is No Microservice: They Come in Systems [CON6471] with myself (@myfear)
Wednesday, Sep 21, 1:00 p.m. - 2:00 p.m. | Parc 55—Embarcadero
Building a complete system out of individual microservices is hard. Microservices-based architecture is gaining attention, but there are trade-offs and drawbacks. Individual microservices are fairly easy to understand and implement, but they make sense only as systems; it’s between services that the most-challenging problems arise—in distributed systems. Slicing a system into REST services and wiring them back together with synchronous protocols and traditional enterprise tools means failure. This session distills the essence of microservices-based systems and covers a new development approach to microservices that gets you started quickly with a guided, minimalistic approach on your machine and takes you to a productive scaled-out microservices-based on the Oracle Cloud system with hundreds of services.

The Cloud-Natives Are RESTless [CON2514] Panel session with Konrad Malawski (@ktosopl)
Wednesday, Sep 21, 8:30 a.m. - 9:30 a.m. | Parc 55—Powell I/II
Representational State Transfer—the REST architecture—has served us well for the past 15 years as a style of cross-language distributed computing that is web-friendly. REST is simple and cacheable and is implemented over the original protocol for the web, good ole HTTP. For many use cases, the synchronous, request/response nature of REST fits perfectly. What are the alternatives to REST for event-based Java microservices? What reactive frameworks should Java developers learn and use in their services and overall application architecture? What synchronous cross-language alternatives should Java engineers use for high-performance, non-HTTP distributed computing in 2016 and beyond? Attend this session to find out.

I am looking forward to meeting all the amazing peers from the Java community! Find more information on the official JavaOne website and on the JavaOne blog, and also make sure to follow @JavaOneConf on Twitter. And don't forget to follow @myfear and @lightbend for more fun and games and raffles and stuff during JavaOne!

Friday, August 5, 2016

Remote JMS with WildFly Swarm

09:16 Friday, August 5, 2016
I'm blogging about WildFly Swarm again? The short version is: I needed a test for remote JMS access and refused to set up something complex like a complete application server. The idea was to have a simple WildFly Swarm application with a queue and a topic configured. Both should be accessible remotely from a standalone Java application. While the topic receives messages, a message-driven bean (MDB) dumps the output to the console. The queue is filled with random text+timestamp messages by a singleton timer bean.
It turned out that WildFly Swarm can do it, but for now only in the snapshot release.

The code
Find the complete code in my GitHub repository. It's not the most beautiful thing I have written, but it shows you the complete configuration of Swarm with the relevant security settings, and the construction of the queue and the topic. In short, the MessagingFraction needs the relevant security settings with remote access enabled, and it also needs to define the remote topic. The NamingFraction needs to enable the remote naming service, and finally the ManagementFraction needs to define an authorization handler.

How to run the example
To run the server, you can just use 'mvn wildfly-swarm:run'. After the startup, you see the timer bean starting to emit messages to the queue:

2016-08-05 08:44:48,003 INFO  [sample.SampleQueueTimer] (EJB default - 5) Send: Test 1470379488003
2016-08-05 08:44:49,005 INFO  [sample.SampleQueueTimer] (EJB default - 6) Send: Test 1470379489005

If you point your browser to http://localhost:8080/ you can trigger a single message being sent to the topic. This also gets logged to the console:

2016-08-05 08:44:36,220 INFO  [sample.SampleTopicMDB] (Thread-250 (ActiveMQ-client-global-threads-859113460)) received: something

The real magic happens when you look at the standalone Java client. It performs the relevant JNDI lookups, creates the JMS connection with user and password, the session and the producer, and finally produces and sends a text message.
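For illustration, the client-side setup looks roughly like this. The factory class, URL, JNDI names and credentials below are the usual WildFly remote-naming conventions and are assumptions on my part; check the linked repository for the exact values. The JMS calls themselves are shown as comments because they need the WildFly client libraries on the classpath.

```java
import java.util.Properties;
import javax.naming.Context;

public class JmsClientSetup {

    // Builds the JNDI environment for a remote lookup against the
    // Swarm server. All values here are placeholders/assumptions.
    public static Properties remoteNamingProps() {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
        env.put(Context.SECURITY_PRINCIPAL, "admin");      // hypothetical user
        env.put(Context.SECURITY_CREDENTIALS, "password"); // hypothetical password
        return env;
    }

    public static void main(String[] args) {
        Properties env = remoteNamingProps();
        System.out.println("Connecting to " + env.getProperty(Context.PROVIDER_URL));
        // With the WildFly client libraries available, you would continue with:
        //   Context ctx = new InitialContext(env);
        //   ConnectionFactory cf =
        //       (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        //   Connection con = cf.createConnection("admin", "password");
        //   Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        //   Queue queue = (Queue) ctx.lookup("jms/queue/myQueue");
        //   session.createProducer(queue).send(session.createTextMessage("hello"));
    }
}
```

The repository contains the working version with the actual queue names and credentials matching the Swarm configuration.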
More about "why the hell does he need Java EE again" in some upcoming blog posts ;)

Credits
A super big thank you goes out to Ken Finnigan, who fixed the issue I ran into literally overnight!

Monday, August 1, 2016

Build and deploy microservices the modern way

15:53 Monday, August 1, 2016
There's been a lot of buzz from me lately around microservices and containers. All those efforts were directed towards today's public announcement by Lightbend and Mesosphere. If you are interested in learning more about how traditional architectures are beginning to evolve very quickly to embrace microservices architecture and various cloud and hybrid-cloud deployment models, I would love to invite you to listen to the recording of my recent webinar with Aaron Williams from Mesosphere. Find the slides on SlideShare; the recording is embedded below.

The traditional model that enterprises run their businesses on has typically been delivered as monolithic applications running in a virtualized, on-premise infrastructure. We’ve seen how public and private cloud technologies have changed everything, but if the applications are not designed, or re-designed, appropriately, then it is impossible to take advantage of the advances in both distributed application services and hybrid infrastructure. Consequently, we will show how enterprise architects are looking to microservices architecture and technologies like Mesosphere DC/OS as a means to modernize their legacy applications.

This webinar introduces Lagom, a new framework specifically designed to help developers modernize legacy Java EE applications into systems of microservices, and then discusses exactly what is required to run these distributed systems at enterprise scale with DC/OS.

Friday, July 15, 2016

CQRS with Java and Lagom

14:36 Friday, July 15, 2016
I had the pleasure of speaking at the Chicago Java User Group about how Lagom implements CQRS, the Command Query Responsibility Segregation pattern. Thankfully, there is a recording, and I also published the slides on SlideShare.

Abstract:
As soon as an application becomes even moderately complex, CQRS and an event-sourced architecture start making a lot of sense. The talk focuses on:
- the challenges and tactics of separating the write model from the query model in a complex domain,
- how commands naturally lead to events and to an event-based system, and
- how events get projected into useful, eventually consistent views.
Event Sourcing is one of those things that you really need to push through at the beginning (much like TDD) and that, once understood and internalized, will change the way you architect a system. This talk introduces you to the basic concepts and problem spaces to solve.
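The command-to-event-to-view flow from the abstract can be sketched in a few lines of plain Java. This is not Lagom's API; all class names are illustrative. A command expresses intent, the handler validates it and appends an immutable event to the log, and a projection folds events into a read-side view.

```java
import java.util.ArrayList;
import java.util.List;

public class CqrsSketch {

    // Write side: a command expresses intent...
    public static final class AddItemCommand {
        public final String item;
        public AddItemCommand(String item) { this.item = item; }
    }

    // ...and a successfully handled command becomes an immutable event.
    public static final class ItemAddedEvent {
        public final String item;
        public ItemAddedEvent(String item) { this.item = item; }
    }

    // Event sourcing: the event log is the source of truth, not the state.
    public static final List<ItemAddedEvent> eventLog = new ArrayList<>();

    // Read side: an eventually consistent view derived from the events.
    public static final List<String> readView = new ArrayList<>();

    public static void handle(AddItemCommand cmd) {
        if (cmd.item.isEmpty()) {
            throw new IllegalArgumentException("empty item");
        }
        ItemAddedEvent evt = new ItemAddedEvent(cmd.item);
        eventLog.add(evt);   // persist the fact, not the current state
        project(evt);        // in a real system this runs asynchronously
    }

    static void project(ItemAddedEvent evt) {
        readView.add(evt.item);
    }

    public static void main(String[] args) {
        handle(new AddItemCommand("book"));
        System.out.println(readView);
    }
}
```

In Lagom the projection really is asynchronous, which is exactly where the "eventually consistent views" from the abstract come from.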

Thanks again for hosting me, CJUG! It was a real pleasure!

Tuesday, June 7, 2016

How Cloud, Containers, Microservices and you can help Charity!

14:01 Tuesday, June 7, 2016
No, this isn't a customer success story. And this isn't the typical post about some awesome stuff that you can do with technology. This time it is actually about knowledge of a different kind. Lightbend would love to know how you use the cloud, containers and microservices in your daily work; how enterprises are adopting these technologies; and, last but not least, how influential the decisions made in development are on the final system architecture choices.

Take the quick ten minute survey and don't forget to share the link with friends, colleagues and your contacts.

The results will be published next month (July) on the Lightbend tech blog!
Lightbend is donating to charity for each completed survey! It's going to be either #yeswecode, SeniorNet or Mouse, and you can select the cause at the end of the survey. Make your voice heard and your answers count! We are aiming to break 3,000 respondents, which unlocks a total donation of $1,500!

Share the link
https://lightbend.qualtrics.com/SE/?SID=SV_aaXOYeDecDEMoRv

Sunday, May 15, 2016

Modern Architecture with Containers, Microservices, and NoSQL

14:19 Sunday, May 15, 2016
On Tuesday, May 10, 2016 I had the pleasure of joining Arun Gupta (Couchbase), Mano Marks (Docker) and Shane Johnson (Couchbase) for a great webinar with ADTMag. You can watch the complete replay for free after registration. This post highlights some of the most prominent findings and provides a brief write-up.

After a short introduction by Arun Gupta of the main business drivers behind the new architectures and of the panel, it was time for Mano Marks (@manomarks) from Docker to give an overview of the container hype. With applications changing from centralised server installations that very rarely got updated to loosely coupled services with a high update frequency running on many small servers, containers provide a standard format which is easily portable between environments. Docker provides a great ecosystem around its products and is a solid foundation for applications following the new principles.

After Mano, it was my part. I gave an overview of where we came from in terms of monolithic applications and why they survived so long, including their advantages. With the introduction of microservices, or better "right-sized" services, we finally start to build systems for flexibility and resiliency, not just efficiency and robustness. The relevant aspects of a successful microservices architecture are plentiful and not easily achieved by using a single framework. You also have to respect the architecture, software design, methodology and organisation, and embrace distributed-systems thinking. I introduced the audience to some available decomposition strategies and also gave a very quick rundown of the Lightbend microservices framework Lagom.

Shane finished the presentation part of the webinar with an overview of the capabilities of Couchbase Server and how it supports application modernisation and microservices-based architectures. The following Q&A with all the panelists tried to answer some of the most pressing audience questions.

The whole webinar runs for an hour, and it is packed with the latest information around modern architectures. This is pretty much the most recent information you can get on the topic, from top speakers in the field. If you have nothing to do on a rainy weekend, I highly recommend watching it!

Wednesday, May 11, 2016

Developing Reactive Microservices with Java - My new free mini-book!

20:26 Wednesday, May 11, 2016
I am very happy to announce that I finished another O'Reilly mini-book a couple of weeks ago. After the success of the very first edition, which introduced you to the overall problem space of microservices, and the amazing book by Jonas Bonér about the architecture of reactive microservice systems, it was about time to share a little more about how to implement them in Java. I am very proud to have had Jonas write the foreword for this one, and that I was able to write another 50+ pages in such a short time. The book uses Lagom as a framework to walk you through the service API, the persistence API and how to work with Lagom services. I can't wait to hear your feedback and get you to try out Lagom. Here is the complete abstract, and you'll find some further readings and links at the very bottom of the post. Did I mention it is free to download? It is!

Abstract:
With microservices taking the software industry by storm, traditional enterprises running large, monolithic Java EE applications have been forced to rethink what they’ve been doing for nearly two decades. But how can microservices built upon reactive principles make a difference?

In this O’Reilly report, author Markus Eisele walks Java developers through the creation of a complete reactive microservices-based system. You’ll learn that while microservices are not new, the way in which these independent services can be distributed and connected back together certainly is. The result? A system that’s easier to deploy, manage, and scale than a typical Java EE-based infrastructure.

With this report, you will:

  • Get an overview of the Reactive Programming model and basic requirements for developing reactive microservices
  • Learn how to create base services, expose endpoints, and then connect them with a simple, web-based user interface
  • Understand how to deal with persistence, state, and clients
  • Use integration technologies to start a successful migration away from legacy systems

The detailed example in this report is based on Lagom, a new framework that helps you follow the requirements for building distributed, reactive systems. Available on GitHub as an Apache-licensed open source project, this example is freely available for download.


Markus Eisele is a Developer Advocate at Lightbend. He has worked with monolithic Java EE applications for more than 16 years, and now gives presentations at leading international tech conferences on how to evolve these applications into microservices-based architectures. Markus is the author of Modern Java EE Design Patterns (O’Reilly).


Tuesday, April 12, 2016

Integration Architecture with Java EE and Spring

14:27 Tuesday, April 12, 2016
The O'Reilly Software Architecture Conference in New York happens this week. And I had the pleasure of giving a tutorial together with Josh Long about how to integrate Java EE and Spring. We've been joking about this one for a while. The super stupid, biased view of both technologies which some people have in mind is something that has bugged both of us for a while. Another important reason for this talk was that we both care about the modernisation of old applications. There is so much legacy software out there that is easily 10+ years old. And you find those legacy applications in both technologies. This is why we wanted to help people understand how to modernise them and survive the transition phase.

A little history about Spring and Java EE
The first part of the talk caught up on a little historical background of both technologies: where they came from, how they evolved, and how they arrived at the state they are in today. Both have evolved significantly since their inception, and the question of what to choose today can easily be answered with a single sentence: "Choose the right tool for the right job". But you can even mix and match, for many reasons.

Spring on Java EE
There is a broad space of problems where you might think about using Spring on top of Java EE. While EE has been around and evolved a lot, we had to learn that you can't really innovate in a standards body. This has led to more than just a handful of features that are to be desired if you build a reasonably modern application. Some of those gaps are in the security space (social logins), NoSQL integration and enterprise integration in general. And while you are free to pick from Java EE open or closed source offerings to close them, Spring most often has an answer in the family, which makes it easy to use the same programming model and have an integrated offering. Plus, the Spring framework has a very long tail: Spring Framework 4 runs on Servlet 2.5+ (2006!!), Java EE 6 (2009) and Java 6+. This makes it very easy to use modern features even on the most outdated base platform. Find the demo code in my GitHub repository and enjoy how easy it is to deploy a Spring WAR to a Java EE server and just use the APIs.

Java EE on Spring
But you can also turn this around and use Java EE APIs with Spring. The reasons you might want to do this are plenty: it can be a first migration step towards Spring while simply re-using some of your old code. Plus, you want to use standards where standards make sense and where everything else would be too invasive. Examples include JTA, JPA, JSR 303, JSR 330, JCA, JDBC, JMS, Servlets, etc.
And there is also an example app which you can run as a Spring Boot based fat JAR while using (mostly) Java EE APIs in it.

Technical Integration and Microservices
The last part of the presentation touched on technical integration between two systems and the technologies supported in both worlds. We also talked about microservices designs and answered a bunch of questions over the course of the three hours.
I really enjoyed it and have to admit that Josh is an amazing presenter; I learned a hell of a lot over the last couple of days working with him! It's a pleasure to know you, Josh! Make sure to follow him on Twitter: @starbuxman.

Thursday, April 7, 2016

Your first Lagom service - getting started with Java Microservices

09:07 Thursday, April 7, 2016
I've been heads-down writing my next O'Reilly report and haven't had enough time to blog in a while. Time to catch up and give you a real quick start with the new microservices framework named Lagom. It is different from what you might know from Java EE or other application frameworks. And this is both a challenge and an opportunity to learn something new. If you can wait a couple more days, register to be notified when my new report becomes available and learn everything about the story behind Lagom and how to get started. I will walk you through an example application and introduce the main concepts in more detail than I could in a blog post. This post is for the impatient who want to get started today and figure everything out themselves.

Some background
Microservices are everywhere these days, and more and more is being unveiled about what it takes to build a complex distributed system with the existing middleware stacks. There are far better alternatives and concepts for implementing an application as a microservices-based architecture. The core concepts of reactive microservices were introduced by Jonas Bonér in his report Reactive Microservices Architecture, which is available for free after registration. Lagom is the implementation of the described concepts. It uses technologies that you might have heard about but probably rarely used before as a Java EE developer: mainly Akka and Play. But for now, let's just forget about them, because Lagom provides a great abstraction on top and gives you everything you need to get started.

Prerequisites
Have Activator and Java 8 installed. Activator is something you probably also haven't heard about. It is built on top of sbt and helps you get started with your projects and much more. A Lagom system is typically made up of a set of sbt builds, each build providing multiple services. The easiest way to get started with a new Lagom system is to create a new project using the lagom Activator template. No need for anything else right now. You probably want to have an IDE installed; IntelliJ or Eclipse should be fine for now.

Setting up your first project
Time to get to see some code. Let's generate a simple example from the lagom-java template:
$ activator new first-lagom lagom-java

Change into the newly generated folder "first-lagom" and issue the sbt command to create an Eclipse project.
$ activator eclipse

A bunch of dependencies are downloaded, and after the successful execution you can open Eclipse and use the Import Wizard to import Existing Projects into your Workspace. Note that if you are using the Immutables library with Eclipse, you need to set this up, too.

Lagom includes a development environment that lets you start all your services by simply typing runAll in the activator console. Open the terminal and cd to your Lagom project:
$ activator runAll
The output looks similar to this:
[info] Loading project definition from /Users/myfear/projects/first-lagom/project
[info] Set current project to first-lagom (in build file:/Users/myfear/projects/first-lagom/)
[info] Starting embedded Cassandra server
........
[info] Cassandra server running at 127.0.0.1:4000
[info] Service locator is running at http://localhost:8000
[info] Service gateway is running at http://localhost:9000
[info] Compiling 2 Java sources to /Users/myfear/projects/first-lagom/helloworld-api/target/scala-2.11/classes...
[info] Compiling 1 Java source to /Users/myfear/projects/first-lagom/hellostream-api/target/scala-2.11/classes...
[info] Compiling 2 Java sources to /Users/myfear/projects/first-lagom/hellostream-impl/target/scala-2.11/classes...
[info] Compiling 6 Java sources to /Users/myfear/projects/first-lagom/helloworld-impl/target/scala-2.11/classes...
[info] application - Signalled start to ConductR
[info] application - Signalled start to ConductR
[info] Service hellostream-impl listening for HTTP on 0:0:0:0:0:0:0:0:26230
[info] Service helloworld-impl listening for HTTP on 0:0:0:0:0:0:0:0:24266
[info] (Services started, use Ctrl+D to stop and go back to the console...)
Now go and try out your first service by visiting http://localhost:9000/api/hello/World. You're all set for the next blog posts, where I am going to walk you through the example in more detail. If you can't wait, go ahead and read the Lagom Getting Started guide.
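To give you a feel for what you just called: at its core, a Lagom service call is an asynchronous function from request to response. Here is a minimal pure-JDK sketch of that idea; note that the ServiceCall interface below is a hand-rolled stand-in for illustration, not Lagom's real com.lightbend.lagom.javadsl.api.ServiceCall:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class HelloSketch {

    // Hand-rolled stand-in for Lagom's ServiceCall: an asynchronous
    // Request -> Response function. Illustration only.
    @FunctionalInterface
    interface ServiceCall<Request, Response> {
        CompletionStage<Response> invoke(Request request);
    }

    // A "hello" call in the spirit of the one the lagom-java template generates.
    static ServiceCall<String, String> hello() {
        return name -> CompletableFuture.completedFuture("Hello, " + name + "!");
    }

    public static void main(String[] args) {
        String reply = hello().invoke("World").toCompletableFuture().join();
        System.out.println(reply); // prints: Hello, World!
    }
}
```

The real service interface additionally carries a descriptor that maps such calls to paths like /api/hello/:id, which is what the development environment wired up for you above.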

Thursday, March 24, 2016

Free Mini Book about Reactive Microservices

15:06 Thursday, March 24, 2016
Working with and talking about microservices has been my focus for a while now. Since I started working at Lightbend I have had the pleasure to work with some amazing people. One of them is Jonas Bonér, and it has been a real pleasure helping with the creation of this little mini book about reactive microservices. Many of the concepts described in this book are the foundation for our newly open-sourced microservices framework Lagom. It eases your start into the world of JVM-based microservices and reactive systems and introduces all the important aspects you should have a basic understanding of.

And the best part is that it is available for free at lightbend.com.

Written for architects and developers who must quickly gain a fundamental understanding of microservice-based architectures, this free O’Reilly report explores the journey from SOA to microservices, discusses approaches to dismantling your monolith, and reviews the key tenets of a Reactive microservice:

  • Isolate all the Things
  • Act Autonomously
  • Do One Thing, and Do It Well
  • Own Your State, Exclusively
  • Embrace Asynchronous Message-Passing
  • Stay Mobile, but Addressable
  • Collaborate as Systems to Solve Problems

And here is the full abstract:
Still chugging along with a monolithic enterprise system that’s difficult to scale and maintain, and even harder to understand? In this concise report, Lightbend CTO Jonas Bonér explains why microservice-based architecture that consists of small, independent services is far more flexible than the traditional all-in-one systems that continue to dominate today’s enterprise landscape.

Reactive Microservices Architecture Download
You’ll explore a microservice architecture, based on Reactive principles, for building an isolated service that’s scalable, resilient to failure, and combines with other services to form a cohesive whole. Specifically, you’ll learn how a Reactive microservice isolates everything (including failure), acts autonomously, does one thing well, owns state exclusively, embraces asynchronous message passing, and maintains mobility.

Bonér also demonstrates how Reactive microservices communicate and collaborate with other services to solve problems. Get a copy of this exclusive report and find out how to bring your enterprise system into the 21st century.

Jonas Bonér is Founder and CTO of Lightbend, inventor of the Akka project, co-author of the Reactive Manifesto and a Java Champion. Learn more at: http://jonasboner.com.

Friday, March 11, 2016

Review: "Learning Akka" by Jason Goodwin

08:43 Friday, March 11, 2016
Haven't done a review in a while. It's time to dive a little deeper into the technical portfolio of Lightbend. Today it is Akka, and a book with this title is an ideal way to start with a new technology. And for all my Java readers: rest assured that all examples in this book are in Java 8 (and in Scala).
A big "Thank you!" to Packt Publishing who provided the book to me for review.

Abstract
Software today has to work with more data, more users, more cores, and more servers than ever. Akka is a distributed computing toolkit that enables developers to build correct concurrent and distributed applications using Java and Scala with ease, applications that scale across servers and respond to failure by self-healing. As well as simplifying development, Akka enables multiple concurrency development patterns with particular support and architecture derived from Erlang’s concept of actors (lightweight concurrent entities). Akka is written in Scala, which has become the programming language of choice for development on the Akka platform.

Learning Akka aims to be a comprehensive walkthrough of Akka. This book will take you on a journey through all the concepts of Akka that you need in order to get started with concurrent and distributed applications and even build your own.

Beginning with the concept of Actors, the book will take you through concurrency in Akka. Moving on to networked applications, this book will explain the common pitfalls in these difficult problem areas while teaching you how to use Akka to overcome these problems with ease.

The book is an easy to follow example-based guide that will strengthen your basic knowledge of Akka and aid you in applying the same to real-world scenarios.

Book: "Learning Akka"
Language: English
Paperback: 274 pages
Release Date: December 30, 2015
ISBN-10: 1784393002
ISBN-13: 978-1784393007

The Author
Jason Goodwin (GitHub: jasongoodwin) is a developer who is primarily self-taught. His entrepreneurial spirit led him to study business at school, but he started programming when he was 15 and always had a high level of interest in technology. This interest led his career to take a few major changes away from the business side and back into software development. His journey has led him to working on high-scale distributed systems. He likes to create electronic music in his free time.

He was first introduced to an Akka project at a Scala/Akka shop—mDialog—that built video ad insertion software for major publishers. The company was eventually acquired by Google. He has also been an influential technologist in introducing Akka to a major Canadian telco to help them serve their customers with more resilient and responsive software. He has experience teaching Akka and functional and concurrent programming concepts to small teams there. He is currently working via Adecco at Google.

The Content
Take away the preface, index, and praise and you end up with 216 pages of plain content, divided into nine chapters.
Chapter 1: Starting Life as an Actor gives an introduction to the Akka Toolkit and Actor Model in general. It covers everything you need to know to get started including the setup of the development environment.
Chapter 2: Actors and Concurrency introduces you to the reactive design approach. It covers the anatomy of, creation of, and communication with an actor, together with the tools and knowledge necessary to deal with asynchronous responses and how to work with Futures (placeholders for results).
Chapter 3: Getting the Message Across helps you to understand the details of message delivery mechanisms in Akka. That includes different messaging patterns.
Chapter 4: Actor Lifecycle – Handling State and Failure introduces you to the actor's life cycle and explains what happens when an actor encounters an exceptional state and how you can change its state to modify its behaviour.
Chapter 5: Scaling Up guides you through how Akka can help us scale up more easily to make better use of our hardware, with very little effort.
Chapter 6: Successfully Scaling Out – Clustering comes in handy when you reach the physical limits of a single machine. Learn what happens when you hit the limit of a physical host and need to distribute the work across multiple machines.
Chapter 7: Handling Mailbox Problems digs deeper into what happens when you start to hit the limits of your actor system and how to describe how your system should behave in those situations.
Chapter 8: Testing and Design examines some general approaches to design and testing in greater detail.
Chapter 9: A Journey's End highlights a few outstanding features and modules that you may want to be aware of, and some next steps.

Writing and Style
The author thoughtfully explores all the content in every chapter and has created a great resource for everybody who wants to start with the Akka toolkit. Sentences are a little long from time to time, and it is a technical book, but it is absolutely readable, also for non-native speakers.
Every chapter includes links to further resources and a little homework for you to do. Testing and test design are covered in a separate chapter but are also present in code samples throughout the complete book.

Conclusion and recommendation
This book gives both the introductory reader and the intermediate or advanced reader an understanding of basic distributed computing concepts and demonstrates how to use Akka to build fault-tolerant, horizontally scalable distributed applications that communicate over a network. With all the examples present in both languages (Java 8 and Scala), it is the ideal entry point for a Java developer to dive into Akka and get a first idea of the concepts. It does not simply copy the documentation, and it covers many of the important topics and approaches you should understand to successfully build applications with Akka. But be aware that this book only gets you up to speed quickly; to fully understand the toolkit you should follow the further reading advice at the end of each chapter. Don't forget to use the above codes to get 50% off the eBook or 25% off the printed edition. Because the recommendation is: buy it!

Thursday, March 10, 2016

Microservices trouble? Lagom is here to help. Try it!

09:55 Thursday, March 10, 2016
The cake is baked. We're proud to announce that the new Apache-licensed microservices framework Lagom is available on GitHub! While other frameworks focus on packaging and instance startup, Lagom redefines the way Java developers build microservice-based applications. Services are asynchronous. Intra-service communication is managed for you. Streaming is available out of the box. Your microservices are resilient by nature. And you can program in the language you love most: Java.

What is Lagom? And what does Lagom mean?
Lagom (pronounced [ˈlɑ̀ːɡɔm]) is a Swedish word meaning just right, sufficient. And as such it helps you build microservice-based applications more easily. Instead of having to find your own answers to how to effectively develop, debug, and run tens of different services on your machine, you can finally focus on what really is important: the implemented business logic. Lagom takes care of the rest for you and eventually helps you stage and run your application in production. The design is based on three main principles:
  1. It is asynchronous by default.
  2. It favours distributed persistent patterns, in contrast to the traditional centralised database.
  3. It places a high emphasis on developer productivity.

How do I get started?
Read through the quick setup documentation and watch the 11-minute getting started video by Mirco Dotta, who shows you that development is already familiar: use your favorite IDE and favorite dependency injection tools. You leverage the old to build something new.

How can you give feedback?
That is easy. We're open source and have a couple of channels you can use to get in touch with the project. Start by subscribing to the mailing list and reach out to us in the Gitter Lagom chat. We're also monitoring questions on StackOverflow tagged with Lagom.
And don't forget to follow @Lagom on Twitter for the latest information.


Wednesday, February 24, 2016

Taking off the red fedora. Hello Lightbend!

07:00 Wednesday, February 24, 2016
Almost two years at Red Hat JBoss Middleware have been a tremendous journey for me. Getting to know so many amazing and gifted people to work with on all kinds of Java EE and integration products and projects really made me realize there is far more talent in open source communities than anywhere else.
You've known and seen me at different conferences and Java User Group meetups, read my blogs, or followed me on Twitter. While I've been talking about middleware for many years, you'll continue to hear me talk about enterprise-grade Java going forward, focused on education about the latest trends in building enterprise systems in a reactive way with Java.

And as such I'm very happy to announce that I am joining Lightbend as of March 1st as their new Developer Advocate. Follow +Lightbend and @Lightbend. The importance of Java and the need to build today's enterprise-grade systems differently will both be a big part of my future topics. If you have particular wishes or questions already, I'm happy to answer them via my Twitter handle @myfear, a comment on my blog, or maybe in a complete blog post.

My journey into containers and microservices architectures will also continue. Going forward I will continue to educate more about how microservices architectures can integrate and complement existing platforms, and will also talk about how to successfully build resilient applications with Java.

I have looked in the mirror every morning and asked myself: "If today were the last day of my life, would I want to do what I am about to do today?" And whenever the answer has been "No" for too many days in a row, I know I need to change something.
Steve Jobs 

Make sure to subscribe to JBoss Weekly and @jbossdeveloper for latest updates about JBoss or to @rhdevelopers to learn all about the Red Hat Developers Program.

Wednesday, January 20, 2016

Running Any Docker Image On OpenShift Origin

16:07 Wednesday, January 20, 2016
I've been using OpenShift for a while now, for many reasons. First of all, I don't want to build my own Docker and Kubernetes environment on Windows, and second, I like the simple installation. After the Christmas holidays I decided to upgrade my machine to Windows 10. While I like the look and feel, it broke quite a bit of my networking and container installations, including the Docker and OpenShift environments. Now that I have everything up and running again, it is time to follow the microservices path a little more. The first thing is to actually get OpenShift up and running and set up a development environment in which we can simply push Docker images to it without having to use any of the Source-2-Image or OpenShift build mechanisms.

Installing the all-in-one-VM
Download the all-in-one-VM image and import it into the vagrant box. This image is based on OpenShift Origin and is a fully functioning OpenShift instance with an integrated Docker registry. The intent of this project is to allow web developers and other interested parties to run OpenShift V3 on their own computer. Given the way it is configured, the VM will appear to your local machine as if it were running somewhere off the machine. Which is exactly what I need to show you around OpenShift and introduce some more features. If you need a little more assistance, follow method 2 in this earlier blog post.
I also assume that you have docker-machine running. You can install it via the Docker Toolbox.

First steps in OpenShift
Fire up the machine via vagrant up and point your browser to https://localhost:8443/. Accept the certificate warning and enter admin/admin as login. You're now browsing the admin console. Let's create a new project to play around with:
oc login https://localhost:8443
# enter admin/admin as the credentials

oc new-project myfear --description="Testing random Docker Images on OpenShift" --display-name="Myfears test project"

# Add admin role to user myfear
oc policy add-role-to-user myfear admin -n myfear
The first thing to do is to actually get a MySQL database up and running. I want to use it in subsequent blog posts, and it's a good test to see if everything is working. Get the two JSON files from my github repository and execute the following commands:
oc create -f mysql-pod.json
oc create -f mysql-service.json
Go back to your browser, select the myfear project, and see the mysql service up and running with one pod.

Using the OpenShift Registry
You just witnessed how OpenShift pulled the mysql image and started a pod with a container on it. Obviously this image came from the built-in registry. But how can one actually upload a Docker image to the internal OpenShift registry? SSH into the vagrant machine and look around a bit:
vagrant ssh
docker ps
You see a lot of running containers, and one of them is running the openshift/openshift-registry-proxy. This little gem forwards port 5000 from the internal Docker registry to the vagrant VM. Open VirtualBox and look at the port forwarding rules there: another rule forwards port 5000 from the guest to the host. This means the internal OpenShift Docker registry is already exposed by default. But how do we push something there? The docker client requires a Docker host to work, and the OpenShift Docker daemon isn't exposed externally, so you can't just point your docker client at it.
This means you need another Docker host on your machine which is configured to use the OpenShift Docker registry as an insecure external registry. I'm using docker-machine here because it is extremely easy to create new Docker hosts with it.
docker-machine create -d virtualbox dev
After a couple of seconds your "dev" VM is created and started. Now we need to find out what the host system's IP address is from the dev box. SSH into the machine and get the IP of the default gateway:
docker-machine ssh dev
$ ip route | grep default

> 10.0.2.2
Now we need to stop the machine and add the IP address we found to the insecure registry part of the configuration:
docker-machine stop dev
edit ~/.docker/machine/machines/dev/config.json
# Add the found ip address plus the registry port to the HostOptions => EngineOptions =>  InsecureRegistry array
Afterwards it should look like this:
 "InsecureRegistry": [
                "10.0.2.2:5000"
   ]
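For orientation, the nesting described in the comment above would look roughly like this in config.json (surrounding fields elided; the exact sibling fields depend on your docker-machine version):

```json
{
  "HostOptions": {
    "EngineOptions": {
      "InsecureRegistry": [
        "10.0.2.2:5000"
      ]
    }
  }
}
```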
Time to restart the dev VM and get the docker client configured for it:
docker-machine start dev
FOR /f "tokens=*" %i IN ('docker-machine env dev --shell cmd') DO %i
That's it for now. One important thing: the internal registry is secured, and we need to log in to it. Get the login token for the user "myfear" with the following commands:
oc login -u myfear
oc whoami -t
This will return something cryptic like: dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY. Now login to the registry:
docker login -u myfear -p dV2Dep7vP8pJTxNGk5oNJuqXdPbdznHrxy5_7MZ4KFY -e markus@someemail.org 10.0.2.2:5000
Make sure to use the correct username and token. You get a success message, and your login credentials are saved in the central config.json.

Build and push the custom image
Time to finally build the custom image and push it. I am using Roland's docker-maven-plugin again. If you want to learn more about it, there is an older blog post about it. You can also find the code in this github repository. Compare the pom.xml and make sure to update the docker.host and docker.registry properties
  <docker.host>tcp://192.168.99.101:2376</docker.host>
  <docker.registry>10.0.2.2:5000</docker.registry>
and the <authConfig> section with the correct credentials. Build the image with:
mvn clean install docker:build docker:push
If you run into an issue with the maven plugin not being able to build the image, you may need to pull the jboss/base-jdk:8 image manually first:
docker pull jboss/base-jdk:8
Let's check if the image was successfully uploaded by using the console and navigating to the overview => image streams page.
And in fact, the image is listed. Now we need to start a container with it and expose the service to the world:
oc new-app swarm-sample-discovery:latest --name=swarm
oc expose service swarm --hostname=swarm-sample-discovery.local
Please make sure to add the hostname mapping to your hosts file or DNS configuration (pointing to 127.0.0.1). As you can see, I am no longer using the Docker image tag but the image stream name; OpenShift converted this internally.
Time to access the example via the browser http://swarm-sample-discovery.local:1080/rs/customer.
If you're wondering about the port, go back to the VirtualBox configuration and check the NAT section. The all-in-one VM actually assumes that you have something running on port 80 already and maps the VM's port 80 to host port 1080.
The application does very little for now, but I will use this in subsequent blog-posts to dig a little into service discovery options.
The console overview shows the two services with one pod each.


That's it for today. Thanks again to Roland for his help with this. Let me know if you run into issues and whether you want to know something else about OpenShift and custom images.

Tuesday, January 12, 2016

A Refresher - Top 5 Java EE 7 Frontend

19:57 Tuesday, January 12, 2016
The series continues. After the initial overview and Arjan's post about the most important backend features, I am now very happy to have Ed Burns (@edburns) finish the series with his favorite Java EE 7 frontend features.

Thanks to Markus Eisele for giving me the opportunity to guest post on his very popular blog. Markus and I go way back to 2010 or so, but I've not yet had the pleasure of guest posting. Markus asked me to cover the Java EE 7 Web Tier. Since EE 7 is a mature release of a very mature platform, much has already been published about it. Rather than rehash what has come before, I'm going to give my opinion about what I think are the important bits and a very high-level overview of each.

If you're interested in learning more about this first-hand, please consider attending my full day training at JavaLand 2016.  I'm giving the training with modern finance and HTML5 expert Oliver Szymanski.  For details, please visit the javaland website.

First, a bit of historical perspective. Markus asked me to write about the Java EE 7 Web Tier. Let's take a look at that term, "web tier" or "presentation tier" as it is also called. If one is to believe the hype surrounding newer ideas such as microservices, the term itself is starting to sound a bit dated because it implies a three-tier architecture, with the other two tiers being "business logic" and "persistence". Surely three tiers is not micro enough, right? Well, the lines between these tiers are becoming more and more blurred over time as enterprises tinker with the allocation of responsibilities in pursuit of delivering the most business value with their software. In any case, Java EE has always been a well-integrated collection of enterprise technologies for the Java platform, evolved using a consensus-based open development practice (the Java Community Process, or JCP) with material participation from leading stakeholders. The "web tier" of this platform is really just the set of technologies that one might find useful when developing the "web tier" of your overall solution. This is a pretty big list:

WebSocket 1.0 JSR-356
JavaServer Faces 2.2 JSR-344
Servlet 3.1 JSR-340
JSON Processing 1.0 JSR-353
REST (JAX-RS) 2.0 JSR 339
Bean Validation 1.1 JSR-349
Contexts and Dependency Injection 1.1 JSR-346
Dependency Injection for Java 1.0 JSR-330
Concurrency Utilities for Java EE 1.0 JSR-236
Expression Language 3.0 JSR-341

For the purposes of this blog entry, let's take a look at the first five: WebSocket, JSF, Servlet, JSON, and JAX-RS. While the second five are surely essential for a professional web tier, it is beyond the scope of this blog entry to look at them.

WebSocket
JSF and WebSocket are the only two Java EE 7 specs that have a direct connection to the W3C HTML5 specification. In the case of WebSocket, there are actually three different standards bodies in play. WebSocket, the network protocol, is specified by RFC 6455 from the IETF. WebSocket, the JavaScript API, is specified as a sub-spec of HTML5 from the W3C. WebSocket, the Java API, is specified by the JCP under JSR-356. In all aspects of WebSocket, the whole point is to provide a message-based, reliable, full-duplex client-server connection.

JSR-356 lets you use WebSocket in both client and server capacities from your Java SE and EE applications.

On the server side, it allows you to expose a WebSocket endpoint such that browsers can connect to it using their existing support for the WebSocket JavaScript API and network protocol. You declare your endpoints to the system either by annotating some POJOs, or by imperatively calling bootstrapping APIs from Java code, say from a ServletContextListener. Once the connection is established, the server can send and receive messages from/to any number of clients that happen to be connected at the same time. The runtime automatically handles connection setup and teardown.

The WebSocket Java client API allows Java SE applications to talk to WebSocket endpoints (Java or otherwise) by providing a Java analog to the W3C JavaScript WebSocket API.

JavaServer Faces (JSF)
In JSF 2.2 we added many new features, but I will only cover three of them here.

HTML5 Friendly Markup enables writing your JSF pages in almost pure HTML (must be well formed), without the need for the XML namespaces that some see as clumsy and difficult to understand. This is possible because the underlying HTML Basic JSF RenderKit (from JSF 1.0) provides all the necessary primitives to adopt mapping conventions from an arbitrary piece of HTML markup to a corresponding JSF UIComponent. For example, this is a valid JSF form:

        <form jsf:id="form">
           <input jsf:id="name" type="tel" jsf:value="#{complex.name}" />
           <progress jsf:id="progress" max="3" value="#{complex.progress}" />
        </form>

The only catch is the need to flag the element as a JSF component by use of a namespaced attribute.  This means you must declare at least one namespace in the <html> tag:

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:jsf="http://xmlns.jcp.org/jsf">

Faces Flows is a standardization of the page flow concept from ADF Task Flows and Spring Web Flow. Flows give you the ability to group pages together that have some kind of logical connection and need to share state. A flow defines a logical scope that becomes active when the flow is entered and is made available for garbage collection when the flow is exited. There is a rich syntax for describing flows: how they are entered and exited, how they relate to each other, how they pass parameters to each other, and more. There are many conveniences provided thanks to the Flows feature being implemented on top of Contexts and Dependency Injection (CDI). Flows can be packaged as jar files and included in your web application, enabling modularization of sub-sections of your web app.

Just as Flows enable modularizing behavior, Resource Library Contracts (RLC) enable modularizing appearance. RLC provides a very flexible skinning system that builds on Facelets and lets you package skins in jar files.

Servlet
The most important new feature in Servlet 3.1 is the added support for non-blocking IO. This builds on top of the major feature of Servlet 3.0 (from Java EE 6): asynchronous processing. The rapid rise of reactive programming indicates that Java apps can no longer afford to block for IO, ever. The four concerns of reactive programming (responsiveness, elasticity, resiliency, and being event-based) are founded on this premise. Prior to non-blocking IO in Servlet 3.1, it was very difficult to avoid blocking in Servlet apps.

The basic idea is to allow the Servlet runtime to call your application back when IO can be done safely without blocking.  This is accomplished by virtue of new listener interfaces, ReadListener and WriteListener, instances of which can be registered with methods on ServletInputStream and ServletOutputStream.
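To make the callback inversion concrete, here is a pure-JDK sketch. The ReadListener below is a simplified stand-in for the real javax.servlet.ReadListener (which also has an onError method and works against a ServletInputStream), with a plain thread playing the role of the container:

```java
import java.util.concurrent.CountDownLatch;

public class NonBlockingSketch {

    // Simplified stand-in for javax.servlet.ReadListener; illustration only.
    interface ReadListener {
        void onDataAvailable(String chunk); // container: "you can read without blocking"
        void onAllDataRead();               // container: "the request body is complete"
    }

    public static void main(String[] args) throws Exception {
        StringBuilder body = new StringBuilder();
        CountDownLatch done = new CountDownLatch(1);

        ReadListener listener = new ReadListener() {
            public void onDataAvailable(String chunk) { body.append(chunk); }
            public void onAllDataRead() { done.countDown(); }
        };

        // Simulated container thread: pushes data as it arrives, then signals end of stream.
        new Thread(() -> {
            for (String chunk : new String[] { "Hello, ", "World" }) {
                listener.onDataAvailable(chunk);
            }
            listener.onAllDataRead();
        }).start();

        done.await(); // the application never sits blocked inside a read() call
        System.out.println(body); // prints: Hello, World
    }
}
```

The shape is the same in a real servlet: you register the listener, return from service(), and the container invokes your callbacks as bytes become readable.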

When you add this feature to the async-io feature added in Servlet 3.0, it is possible to write Servlet based apps that can proudly sport the "We Are Reactive" banner.

JSON
From the outside perspective, the ability to parse and generate JSON in Java is certainly nothing new. Even before Java EE 7, there were many solutions to this basic need. Hewing close to the principle that standards are not for innovation, but to confer special status upon existing ideas, the JSON support in Java EE 7 provides the capability to parse and generate JSON with a simple Java API. Reading can be done in a streaming fashion with JsonParser, or in a bulk fashion using JsonReader. Writing can be done in a streaming fashion with JsonGenerator, or in a bulk style with JsonBuilderFactory and JsonWriter.

JAX-RS
It is hard to overstate the importance of REST to the practice of modern enterprise software development for non-end-user facing software. I'd go so far as to say that gone are the days when people go to the javadoc (or JSDoc or appledoc, etc.) to learn how to use an API. Nowadays, if your enterprise API is not exposed as a RESTful web service, you probably will not even be considered. JAX-RS is how REST is done in Java. JAX-RS has been a part of Java EE since Java EE 6, but it received the 2.0 treatment in Java EE 7. The big-ticket features in 2.0 include:

  •  Client support
    In my opinion, the most useful application of this feature is using JUnit to do automated testing of RESTful services without having to resort to curl from continuous integration. Of course, you could use it for service-to-service interaction as well.
  •  Seamless integration with JSON
    In most cases a simple @Produces("application/json") annotation on your HTTP method endpoint is sufficient to output JSON. Data arriving at your service in JSON format is also automatically made available to you in an easy-to-consume format from Java.
  •  Asynchronous support (Reactive again)
    This feature gives you the ability to hand off the processing required to generate a response to another thread, allowing the original thread to return immediately so no blocking happens. The async thread is free to respond when it is ready.
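The first and third bullets can be sketched together. This assumes a JAX-RS 2.0 implementation on the classpath; the URL, paths, and payloads are illustrative:

```java
// Sketch of the JAX-RS 2.0 client API and server-side async support
// (the URL, paths, and payloads are illustrative).
import java.util.concurrent.Future;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

public class JaxRsSketch {

    // Server side: hand off response generation to another thread.
    @Path("/reports")
    public static class ReportResource {
        @GET
        @Produces("application/json")
        public void generate(@Suspended final AsyncResponse ar) {
            // Real code would use a managed executor instead of a raw Thread.
            new Thread(new Runnable() {
                public void run() {
                    ar.resume("{\"status\":\"done\"}");
                }
            }).start();
        }
    }

    // Client side: the part that makes curl-free JUnit tests possible.
    public static void main(String[] args) throws Exception {
        Client client = ClientBuilder.newClient();

        // Synchronous call.
        String body = client.target("http://localhost:8080/api/reports")
                            .request("application/json")
                            .get(String.class);

        // Asynchronous call: the calling thread is not blocked.
        Future<String> future = client.target("http://localhost:8080/api/reports")
                                      .request("application/json")
                                      .async()
                                      .get(String.class);
        System.out.println(future.get());
        client.close();
    }
}
```

The same fluent client chain with `.async()` swapped in is all it takes to move from a blocking to a non-blocking invocation.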

Naturally, this only scratches the surface of the Java EE 7 web tier. For more details, the official Java EE 7 launch webinars are a great place to start.

I hope to see you at JavaLand!

Thank you Ed for taking the time to write this post. If you haven't already, now is the time to play around with Java EE 7. Here are some resources to get you started with JBoss EAP 7 and WildFly:

Thursday, January 7, 2016

Get Up to Speed with Microservices in 8 hours

Everybody is talking about microservices these days, and Red Hat is doing some very cool developer events around the world. The latest one happened at the beginning of November last year. The amazing speaker lineup starts with special guest speaker Tim Hockin from the Google Cloud Management team, technical lead and co-founder of Kubernetes, along with Red Hat's James Strachan and Claus Ibsen. James created the Groovy programming language and is also a member of the Apache Software Foundation and a co-founder of a number of other open source projects such as Apache ActiveMQ, Apache Camel, and Apache ServiceMix. Claus works on open source integration projects such as Apache Camel, fabric8, and hawtio, and is the author of the Camel in Action books. Tim, James, Claus, and many more talk on areas such as Kubernetes for Java developers, microservices with Apache Camel, and mobile-centric architecture.

The complete eight-hour playlist is available for free on YouTube, and I just want to pick out some of my personal favorites.

Microservices in the Real World by Christian Posta
Beyond the many technology challenges of introducing microservices, organizations need to also adapt their existing development and operations processes and workflows to reap the bigger benefits of a microservices architecture, including continuous-delivery-style application delivery. This session reviews challenges a number of large enterprises have faced in looking to adopt microservices and looks at how they've adapted on their ongoing journey. It also covers some of the end architectures these companies used as they incorporated these new architectural approaches and technologies with their existing people, skills, and processes.


WildFly Swarm : Microservices Runtime by Lance Ball
With lightweight microservices dominating the dev chatter these days, traditional Java EE developers have spent a lot of time looking in the mirror and asking themselves, "Does my application look fat in this container?" or "How can I leverage my existing Java EE bits in a lightweight microservice?" or "What if I had Just Enough App Server™ to leverage the power and standards of Java EE, but did it all with a slimmed down, self-contained runnable that is easy to deploy and manage?". Well, maybe not that last one.

Enter WildFly Swarm. WildFly Swarm deconstructs the WildFly Java application server into fine-grained parts, allowing the selective reconstitution of those parts, together with your application, into a self-contained executable: an "uberjar". The resulting artifact contains just enough Java EE to support whatever subset of the traditional APIs your application requires.
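In code, the hello-world shape of this looks roughly like the following. This is a sketch under assumptions: it needs the WildFly Swarm JAX-RS fraction and ShrinkWrap as dependencies, and MyResource is a hypothetical JAX-RS resource class, not something from the talk:

```java
// Sketch of a WildFly Swarm main class (MyResource is hypothetical;
// requires the jaxrs fraction on the classpath).
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.wildfly.swarm.Swarm;
import org.wildfly.swarm.jaxrs.JAXRSArchive;

public class Main {
    public static void main(String[] args) throws Exception {
        Swarm swarm = new Swarm();
        // Build the deployment programmatically instead of shipping a WAR.
        JAXRSArchive deployment = ShrinkWrap.create(JAXRSArchive.class);
        deployment.addClass(MyResource.class);
        swarm.start().deploy(deployment);
    }
}
```

Packaged with the Swarm Maven plugin, this main class plus your resource classes become the single runnable jar the paragraph above describes.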

This talk introduces WildFly Swarm and shows you how it helps you bring your existing Java EE investments into the world of lightweight, easily deployed microservices. As a bonus, it shows you how WildFly Swarm helps you easily take advantage of other state-of-the-art components such as Keycloak, Ribbon, and Hystrix, integrating them seamlessly in your application.


Logging and Management for Microservices by Jimmi Dyson
Logging is a key part of making microservices work for you. This session helps you look at logs in a different way in order to understand what your systems are doing and how they're interacting, so you can fix problems quickly and improve performance. You will understand the problems in collecting logs from your distributed microservices and discuss how to centralize them to get real value out of this goldmine of information.


Microservices Workflow: CI/CD by James Rawlings
We all know that in the real world there is more to developing than writing lines of code. This session explores how fabric8 has evolved to provide a platform that supports not only the development of microservices but also working with them, taking an idea from inception right through to running in a live environment.

With popular trends such as DevOps, we know that it is the culture of an organization that gives you greater agility and a better chance of success. Being able to communicate effectively with your cross-functional teams increases productivity, reduces social conflicts, and establishes the all-important feedback loops.

We look at how fabric8 provides out-of-the-box integration for hosted git services in Gogs, as well as agile project management with Taiga and social tools such as Let's Chat and Slack, to enable intelligent, extendable automation using Hubot, while providing a platform that is designed for the new generation of microservices teams.

It also covers the integration of popular logging and metric tools that are prerequisites to continuous improvement. We need to understand not only how the platform is operating but also gain greater visibility into how it's being used. Being able to visualize how teams communicate in and outside of their unit can be seen as a first step to measuring the culture within an organization. This can be extremely useful in identifying early signs of internal silos developing, as well as learning from high-performing teams.



Look at the complete playlist on YouTube and find out more about the event and the other sessions on the redhat.com microservices developer day website.

Monday, January 4, 2016

How DevOps And Microservices Accelerate Application Delivery

DevOps and microservices accelerate application delivery
Happy new year everybody! While I'm officially still on vacation, I'd quickly like to point you to a recent DevOps and Microservices article in the German DOAG Business News. You can download your copy directly via this link (PDF, ~300 KB).

The Business News is a DOAG trade journal publication and is published four times a year. It covers business-relevant topics from a semi-technical perspective. Please be aware that the linked PDF is in German. If you want to learn more, I recommend the following articles: