Enterprise-grade Java.
You'll read about Conferences, Java User Groups, Java, Integration, Reactive, Microservices and other technologies.

Friday, January 30, 2015

JDBC Realm and Form Based Authentication with WildFly 8.2.0.Final, Primefaces 5.1 and MySQL 5

From time to time I look at the most popular content on my blog and try to address your needs as best I can. So, reading my blog is one way for my fellow readers to drive the content. Another way is to reach out to me in the comments or via email. Today, I am going to revamp my JDBC Realm example with Primefaces and update it to the latest WildFly server.

Preparations
The first step is to download and unzip the latest WildFly 8.2.0.Final (I'm using the Java EE7 Full & Web Distribution) to a location of your choice. Also make sure that you have the MySQL Community Server (5.6.22) installed, up and running. I'm going to use NetBeans 8.0.2 because the version number just fits nicely with WildFly :) You should also download the latest MySQL Connector/J (5.1.34).

Some Basics
WildFly 8.x uses a combination of PicketBox and JAAS as the client- and server-side security mechanism. The configuration is completely covered by the so-called security subsystem. The security subsystem operates on a security context associated with the current request; this security context then exposes a number of capabilities from the configured security domain to the relevant container: an authentication manager, an authorization manager, an audit manager and a mapping manager. More details can be found in the WildFly documentation. The good news is that you don't have to dig into all the details at once, because I am focusing on the one element of configuration which is really needed for deployment-specific setup: the security domains. The configuration needs to be done in the relevant server configuration file (standalone.xml / standalone-full.xml / etc.). By default, the "other", "jboss-ejb-policy" and "jboss-web-policy" security domains are configured. Please note that these are relevant for your applications, and I am going to look into the configuration in a minute. If you are looking for a way to secure other interfaces, you want to look into how to secure the management interfaces. I have a recent blog post about how to set up SSL/TLS with Undertow.

Add A User
As I'm going to use the admin console quite a bit, make sure to add an admin user to the management realm first. The %JBOSS_HOME%/bin/add-user.sh|.bat script will guide you through that.
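If you prefer not to click through the interactive prompts, the script can also take its arguments directly. A rough sketch (user name and password are placeholders; the realm shown is the default management realm anyway):

$JBOSS_HOME/bin/add-user.sh -u 'admin' -p 'Admin#2015!' -r ManagementRealm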

WildFly and MySQL
To use database-persisted usernames/passwords/roles for authentication and authorization in your application, the first thing you need is a database. And this needs to be deployed and configured in WildFly. There are two ways to install a JDBC 4-compliant driver: either deploy it like any other application package, or install it as a module. Any JDBC 4-compliant driver will automatically be recognized and installed into the system by name and version. For non-compliant drivers, please refer to the WildFly documentation.

Extract the mysql-connector-java-5.1.34-bin.jar from the archive, go to the WildFly admin console, select "Deployments" and press the "Add" button. Now select the jar and enable it on the last wizard step. Then switch to "Configuration" > Connector > Datasources and press the "Add" button. Enter a name ("UserDS") and a JNDI name ("java:jboss/datasources/UserDS"). On the next page, select the detected driver "mysql-connector-java-5.1.34-bin.jar_com.mysql.jdbc.Driver_5_1", and in the last step configure the connection URL, user and password for the instance. In my case the schema name is "wildfly" and the connection URL "jdbc:mysql://localhost:3306/wildfly".
After you have successfully tested the connection, go ahead and enable it. Now you're ready to configure the rest.
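If you prefer scripting over clicking through the console, the same steps can be done with the jboss-cli. A hedged sketch using the names from above; the file path and the MySQL credentials are assumptions about your environment:

$JBOSS_HOME/bin/jboss-cli.sh --connect

# deploy the driver jar (alternatively, install it as a module)
deploy /path/to/mysql-connector-java-5.1.34-bin.jar

# create and enable the datasource
data-source add --name=UserDS --jndi-name=java:jboss/datasources/UserDS --driver-name=mysql-connector-java-5.1.34-bin.jar_com.mysql.jdbc.Driver_5_1 --connection-url=jdbc:mysql://localhost:3306/wildfly --user-name=wildfly --password=secret
data-source enable --name=UserDS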

Preparing The Database
Before I dive further into the security domain configuration, the database needs some tables for us to work with. At the very minimum, those should be able to hold login names, passwords and roles. But the Database login module, which I'm going to use here, is very flexible and allows you to configure the SQL queries that return those. This means you can re-use an existing application user database containing all kinds of user-relevant information and just specify the SQL needed for the login module to return the correct information. In this example, it is going to be a very simple setup with two tables which contain exactly the minimum information needed by the Database login module.
CREATE TABLE Users(username VARCHAR(255) PRIMARY KEY, passwd VARCHAR(255))
CREATE TABLE UserRoles(username VARCHAR(255), role VARCHAR(32)) 

Just to be clear here: this is a very simple example. It doesn't contain many checks on the database level and is the most simplistic model you could have. I wouldn't use this in production without adding some foreign keys and further constraints to it.
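For production, a slightly hardened variant might look like the following sketch; the exact constraints are an assumption and depend on your data model:

CREATE TABLE Users (
    username VARCHAR(255) NOT NULL PRIMARY KEY,
    passwd   VARCHAR(255) NOT NULL
);

CREATE TABLE UserRoles (
    username VARCHAR(255) NOT NULL,
    role     VARCHAR(32)  NOT NULL,
    PRIMARY KEY (username, role),
    FOREIGN KEY (username) REFERENCES Users (username) ON DELETE CASCADE
);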
Pre-filling the tables with at least one user for test purposes is the next step. In order to do that, we need to decide on the MessageDigest algorithm that should be used. There are many samples on the web which try to imply that MD5 is a feasible way of hashing anything. This is not true. It has to be at least SHA-256 or higher. JDK 8 introduced SHA-512, but this does not seem to work with this version of WildFly, so I'm falling back to SHA-256. So, we need a way to hash the password with SHA-256 before we can add a user. Thankfully, there is a nice little tool buried in PicketBox, and you can just use it via the command line:

java -cp %JBOSS_HOME%\modules\system\layers\base\org\picketbox\main\picketbox-4.0.21.Final.jar org.jboss.security.Base64Encoder <password> <MessageDigest>

And the output is the Base64-encoded hash. For the password "admin" with the MessageDigest "SHA-256" this is: jGl25bVBBBW96Qi9Te4V37Fnqchz/Eu4qB9vKrRIqRg=
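If you want to double-check that value with plain JDK APIs, here is a minimal sketch (java.util.Base64 requires JDK 8; class and variable names are mine):

import java.security.MessageDigest;
import java.util.Base64;

public class HashCheck {
    public static void main(String[] args) throws Exception {
        // SHA-256 digest of the plain-text password, Base64 encoded
        byte[] digest = MessageDigest.getInstance("SHA-256").digest("admin".getBytes("UTF-8"));
        System.out.println(Base64.getEncoder().encodeToString(digest));
        // prints: jGl25bVBBBW96Qi9Te4V37Fnqchz/Eu4qB9vKrRIqRg=
    }
}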

Now it's time to do some inserts into the Database:
INSERT INTO `wildfly`.`Users` (`username`, `passwd`) VALUES ('myfear', 'jGl25bVBBBW96Qi9Te4V37Fnqchz/Eu4qB9vKrRIqRg=');

INSERT INTO `wildfly`.`UserRoles` (`username`, `role`) VALUES ('myfear', 'ADMIN');
This was the last step outside of WildFly. Back to the server configuration and on to the sample application.

Configuring The Security Domain in WildFly
Make sure your WildFly instance is shut down and open the configuration XML (e.g. standalone.xml) for editing. Now find the <security-domains> tag and add a new security domain to it:
<security-domain name="secureDomain" cache-type="default">
    <authentication>
        <login-module code="Database" flag="required">
            <module-option name="dsJndiName" value="java:jboss/datasources/UserDS"/>
            <module-option name="principalsQuery" value="select passwd from Users where username=?"/>
            <module-option name="rolesQuery" value="select role, 'Roles' from UserRoles where username=?"/>
            <module-option name="hashAlgorithm" value="SHA-256"/>
            <module-option name="hashEncoding" value="base64"/>
        </login-module>
    </authentication>
</security-domain>
Start your instance, and we'll shortly see if everything is working. Go fork the SimpleJDBCRealmWildFly repository on my GitHub account and open it in NetBeans.

Adjusting The WebApplication
You'll notice that there isn't a lot of application-specific stuff to see in this web application. It contains two different folders in the Web Pages folder, "admin" and "users". The "admin" folder should be protected, and this is done in the web.xml by adding the relevant <security-constraint>. The <auth-constraint> holds the role-name "ADMIN". Compare the complete web.xml for details (a sketch follows below) and make sure to check back with my older posting about how everything works in detail if you have questions. The only thing that is still open is how to link the deployment to the security domain "secureDomain". This is done in the jboss-web.xml descriptor.
<jboss-web>
    <security-domain>secureDomain</security-domain>
</jboss-web>
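For completeness, here is a rough sketch of the relevant web.xml entries; the URL pattern and the login/error page names are assumptions derived from the description above, not copied from the sample:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Admin pages</web-resource-name>
        <url-pattern>/admin/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>ADMIN</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/login.xhtml</form-login-page>
        <form-error-page>/loginError.xhtml</form-error-page>
    </form-login-config>
</login-config>
<security-role>
    <role-name>ADMIN</role-name>
</security-role>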
That's about all the magic it needs to get started. If you now try to access the admin section of the sample app you are prompted with a login-form.

What about Role-Group Mapping?
This is a very simple example, and I decided not to add role-group mapping. This common concept allows you to further abstract developer roles from administrative/operative roles in production. There are some ways to actually do this, and I will follow up with a more detailed post about how to add it soon. For now, make sure to use the same case for both the <role-name> element in the web.xml and the database role entry for the user. In this example, both are written in capital letters: "ADMIN".

Troubleshooting Tips
You will run into trouble, for many reasons. Caching is one: if you change a role name in the database, you will most likely not see an update if you have already authenticated a user. You can remove the cache-type="default" attribute from the security-domain definition and run with no cache.
Error messages are another helpful tool. For security reasons, not much is logged at INFO level. Make sure to add the security logger and change the log level to TRACE for the console handler in the logging subsystem:

<logger category="org.jboss.security">
    <level name="TRACE"/>
</logger>
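The console handler also caps what actually reaches your terminal, so its level needs to be lowered as well. A sketch, assuming the default handler name CONSOLE:

<console-handler name="CONSOLE">
    <level name="TRACE"/>
    <!-- keep the existing formatter -->
</console-handler>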

Even more helpful is the "incognito" function of your favorite browser. It prevents you from running with the same credentials over and over again when all you wanted to do is use a different account. This could also easily be solved with a logout, but that is another great topic for a future post.

Monday, January 26, 2015

SSL with WildFly 8 and Undertow

I've been working my way through some security topics around WildFly 8 and stumbled upon some configuration options that are not very well documented. One of them is the TLS/SSL configuration for the new web subsystem Undertow. There's plenty of documentation for the older web subsystem, and it is indeed still available to use, but here is a short how-to for configuring it the new way.

Generate a keystore and self-signed certificate 
The first step is to generate a certificate. In this case, it's going to be a self-signed one, which is enough to show how to configure everything. I'm going to use the plain Java way of doing it, so all you need is the keytool that ships with your Java runtime. Keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates, and to cache certificates. Keytool stores the keys and certificates in what is called a keystore. By default the Java keystore is implemented as a file, and it protects private keys with a password. A keystore contains the private key and any certificates necessary to complete a chain of trust and establish the trustworthiness of the primary certificate.

Please keep in mind, that an SSL certificate serves two essential purposes: distributing the public key and verifying the identity of the server so users know they aren't sending their information to the wrong server. It can only properly verify the identity of the server when it is signed by a trusted third party. A self signed certificate is a certificate that is signed by itself rather than a trusted authority.
Switch to a command line and execute the following command, which has some defaults set and also prompts you to enter some more information.
$>keytool -genkey -alias mycert -keyalg RSA -sigalg MD5withRSA -keystore my.jks -storepass secret  -keypass secret -validity 9999

What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  myfear
What is the name of your organization?
  [Unknown]:  eisele.net
What is the name of your City or Locality?
  [Unknown]:  Grasbrun
What is the name of your State or Province?
  [Unknown]:  Bavaria
What is the two-letter country code for this unit?
  [Unknown]:  ME
Is CN=localhost, OU=myfear, O=eisele.net, L=Grasbrun, ST=Bavaria, C=ME correct?
  [no]:  yes

Make sure to put your desired hostname into the "first and last name" field; otherwise you might run into issues while permanently accepting this certificate as an exception in some browsers. Chrome doesn't have an issue with that, though.
The command generates a my.jks file in the folder it is executed in. Copy this to your WildFly configuration directory (%JBOSS_HOME%/standalone/configuration).
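If you want to verify what ended up in the keystore before wiring it into WildFly, a quick check looks like this (store password as chosen above):

keytool -list -v -keystore my.jks -storepass secret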

Configure The Additional WildFly Security Realm
The next step is to configure the new keystore as a server identity for SSL in the WildFly security-realms section of the standalone.xml (if you're using the -ha or other variants, edit those accordingly).
<management>
    <security-realms>
        <!-- ... -->
        <security-realm name="UndertowRealm">
            <server-identities>
                <ssl>
                    <keystore path="my.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="mycert" key-password="secret"/>
                </ssl>
            </server-identities>
        </security-realm>
        <!-- ... -->
    </security-realms>
</management>

And you're ready for the next step.

Configure Undertow Subsystem for SSL
If you're running with the default-server, add the https-listener to the undertow subsystem:
<subsystem xmlns="urn:jboss:domain:undertow:1.2">
    <!-- ... -->
    <server name="default-server">
        <!-- ... -->
        <https-listener name="https" socket-binding="https" security-realm="UndertowRealm"/>
        <!-- ... -->
    </server>
</subsystem>

That's it, now you're ready to connect to the SSL port of your instance: https://localhost:8443/. Note that you get a privacy error (compare the screenshot), because the certificate is self-signed. If you use a fully signed certificate, you usually get a PEM file from the certificate authority. In this case, you need to import it into the keystore. This Stack Overflow thread may help you with that.
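A rough sketch of such an import, assuming the authority handed you a chain file ca.pem and your signed certificate mycert.pem (file names are placeholders; the alias must match the key pair generated above):

keytool -importcert -trustcacerts -alias ca -file ca.pem -keystore my.jks -storepass secret
keytool -importcert -alias mycert -file mycert.pem -keystore my.jks -storepass secret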

Friday, January 23, 2015

Developer Interview (#DI 12) - Henryk Konsek (@hekonsek) about Camel on Docker

Fridays seem to be the Developer Interview day. Today I welcome Henryk Konsek (@hekonsek). Henryk is a software engineer at Red Hat (JBoss) who has been working with Java-related technologies for many years. His area of expertise is middleware and integration technologies. He authored the "Instant Apache ServiceMix How-to" book at Packt and is working with Red Hat customers on all kinds of solutions around integration technologies.

We've had a great chat about Apache Camel, Fabric, MongoDB, Docker and Microservices. If you want to learn more, follow his blog or watch his work on GitHub.

Sit back, relax and get a #Coffee+++! Thanks, Henryk for taking the time!


The source code of the demo can be found on GitHub and Henryk was so kind to provide a general overview diagram about the demo he was doing:

Thursday, January 22, 2015

About WildFlies, Camel and Large Enterprise Projects

Just wanted to quickly publish my slides from the recent JDK.io talks about WildFlies, Apache Camel, Java EE and large enterprise projects.
Thanks to all the attendees for great questions and the attention.
JDK.io is the two-day conference of the Danish Java User Group. The venue is pretty unique, as it is the IT University, an amazing building with a unique atmosphere in the session rooms. Check out their website for more information.

See you soon somewhere. Check out my upcoming talks!





Wednesday, January 21, 2015

Getting Started With OpenShift - A Quick Hands-On Introduction To OpenShift

Did you know that there is a free ebook about OpenShift? Free, as in free beer? You'll learn the steps necessary to build, deploy, and host a complete real-world application on OpenShift, without having to read long, detailed explanations of the technologies involved.
Though the book uses Python, application examples in other languages are available on GitHub. If you can build web applications, use a command line, and program in Java, Python, Ruby, Node.js, PHP or Perl, you're ready to get started.
You can even run your own JBoss WildFly or EAP server on it. The book is available in mobi and PDF format, and the download is a slim 11 MB.

It was written by Steve Pousty (@TheSteve0, visit his blog) and Katie Miller (@codemiller, visit her website). Steve Pousty is a Developer Advocate at Red Hat. Having earned a Ph.D. in Ecology, he’s been mapping since the late 1980s and building applications for over 15 years. Steve has spoken widely on everything from developer evangelism to auto-scaling applications in the cloud. Katie Miller, an OpenShift Developer Advocate at Red Hat, is a polyglot programmer with a penchant for Haskell. A former newspaper journalist, Katie co-founded the Lambda Ladies group for women in functional programming.

Download your free copy today and get started with Red Hat's PaaS offering.

Further Readings:
OpenShift Developers
Getting Started Guide


Tuesday, January 20, 2015

NoSQL with Hibernate OGM - Part one: Persisting your first Entities

The first final version of Hibernate OGM is out and the team has recovered a bit from the release frenzy. So they thought about starting a series of tutorial-style posts which give you the chance to get started easily with Hibernate OGM. Thanks to Gunnar Morling (@gunnarmorling) for creating this tutorial.

Introduction
Don't know what Hibernate OGM is? Hibernate OGM is the newest project under the Hibernate umbrella and allows you to persist entity models in different NoSQL stores via the well-known JPA API.
We'll cover these topics in the following weeks:
  • Persisting your first entities (this instalment)
  • Querying for your data
  • Running on WildFly
  • Running with CDI on Java SE
  • Store data into two different stores in the same application
If you'd like us to discuss any other topics, please let us know. Just add a comment below or tweet your suggestions to us.
In this first part of the series we are going to set up a Java project with the required dependencies, create some simple entities and write/read them to and from the store. We'll start with the Neo4j graph database and then we'll switch to the MongoDB document store with only a small configuration change.

Project set-up 
Let's first create a new Java project with the required dependencies. We're going to use Maven as a build tool in the following, but of course Gradle or others would work equally well.
Add this to the dependencyManagement block of your pom.xml:

...
<dependencyManagement>
    <dependencies>
        ...
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <type>pom</type>
            <version>4.1.1.Final</version>
            <scope>import</scope>
        </dependency>
            ...
    </dependencies>
</dependencyManagement>
...
This will make sure that you are using matching versions of the Hibernate OGM modules and their dependencies. Then add the following to the dependencies block:

...
<dependencies>
    ...
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-neo4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.jboss.jbossts</groupId>
        <artifactId>jbossjta</artifactId>
    </dependency>
    ...
</dependencies>
...
The dependencies are:
  • The Hibernate OGM module for working with an embedded Neo4j database; This will pull in all other required modules such as Hibernate OGM core and the Neo4j driver. When using MongoDB, you'd swap that with hibernate-ogm-mongodb.
  • JBoss' implementation of the Java Transaction API (JTA), which is needed when not running within a Java EE container such as WildFly
The domain model
Our example domain model is made up of three classes: Hike, HikeSection and Person.

There is a composition relationship between Hike and HikeSection, i.e. a hike comprises several sections whose life cycle fully depends on the Hike. The list of hike sections is ordered; this order needs to be maintained when persisting a hike and its sections.
The association between Hike and Person (acting as hike organizer) is a bi-directional many-to-one/one-to-many relationship: one person can organize zero or more hikes, whereas one hike has exactly one person acting as its organizer.

Mapping the entities
Now let's map the domain model by creating the entity classes and annotating them with the required meta-data. Let's start with the Person class:

@Entity
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(mappedBy = "organizer", cascade = CascadeType.PERSIST)
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}
The entity type is marked as such using the @Entity annotation, while the property representing the identifier is annotated with @Id.
Instead of assigning ids manually, Hibernate OGM can take care of this, offering several id generation strategies such as (emulated) sequences, UUIDs and more. Using a UUID generator is usually a good choice as it ensures portability across different NoSQL datastores and makes id generation fast and scalable. But depending on the store you work with, you also could use specific id types such as object ids in the case of MongoDB (see the reference guide for the details).
Finally, @OneToMany marks the organizedHikes property as an association between entities. As it is a bi-directional association, the mappedBy attribute is required to specify the side of the association which is in charge of managing it. Specifying the cascade type PERSIST ensures that persisting a person will automatically cause its associated hikes to be persisted, too.
Next is the Hike class:

@Entity
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}
Here the @ManyToOne annotation marks the other side of the bi-directional association between Hike and Person (the organizer). As HikeSection is supposed to be dependent on Hike, the sections list is mapped via @ElementCollection. To ensure the order of sections is maintained in the datastore, @OrderColumn is used. This will add one extra "column" to the persisted records which holds the order number of each section.
Finally, the HikeSection class:

@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}
Unlike Person and Hike, it is not mapped via @Entity but using @Embeddable. This means it is always part of another entity (Hike in this case) and as such also has no identity on its own. Therefore it doesn't declare any @Id property.
Note that these mappings would look exactly the same had you been using Hibernate ORM with a relational datastore. And indeed that's one of the promises of Hibernate OGM: make the migration between the relational and the NoSQL paradigms as easy as possible!

Creating the persistence.xml
With the entity classes in place, one more thing is missing: JPA's persistence.xml descriptor. Create it under src/main/resources/META-INF/persistence.xml:

<?xml version="1.0" encoding="utf-8"?>

<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
    version="2.0">

    <persistence-unit name="hikePu" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

        <properties>
            <property name="hibernate.ogm.datastore.provider" value="neo4j_embedded" />
            <property name="hibernate.ogm.datastore.database" value="HikeDB" />
            <property name="hibernate.ogm.neo4j.database_path" value="target/test_data_dir" />
        </properties>
    </persistence-unit>
</persistence>
If you have worked with JPA before, this persistence unit definition should look very familiar to you. The main difference to using the classic Hibernate ORM on top of a relational database is the specific provider class we need to specify for Hibernate OGM: org.hibernate.ogm.jpa.HibernateOgmPersistence.
In addition, some properties specific to Hibernate OGM and the chosen back end are defined to set:
  • the back end to use (an embedded Neo4j graph database in this case)
  • the name of the Neo4j database
  • the directory for storing the Neo4j database files
Depending on your usage and the back end, other properties might be required, e.g. for setting a host, user name, password etc. You can find all available properties in a class named <BACK END>Properties, e.g. Neo4jProperties, MongoDBProperties and so on.

Saving and loading an entity
With all these bits in place, it's time to persist (and load) some entities. Create a simple JUnit test shell for doing so:

public class HikeTest {

    private static EntityManagerFactory entityManagerFactory;

    @BeforeClass
    public static void setUpEntityManagerFactory() {
        entityManagerFactory = Persistence.createEntityManagerFactory( "hikePu" );
    }

    @AfterClass
    public static void closeEntityManagerFactory() {
        entityManagerFactory.close();
    }
}
The two methods manage an entity manager factory for the persistence unit defined in persistence.xml. It is kept in a field so it can be used for several test methods (remember, entity manager factories are rather expensive to create, so they should be initialized once and be kept around for re-use).
Then create a test method persisting and loading some data:

@Test
public void canPersistAndLoadPersonAndHikes() {
    EntityManager entityManager = entityManagerFactory.createEntityManager();

    entityManager.getTransaction().begin();

    // create a Person
    Person bob = new Person( "Bob", "McRobb" );

    // and two hikes
    Hike cornwall = new Hike(
            "Visiting Land's End", new Date(), new BigDecimal( "5.5" ),
            new HikeSection( "Penzance", "Mousehole" ),
            new HikeSection( "Mousehole", "St. Levan" ),
            new HikeSection( "St. Levan", "Land's End" )
    );
    Hike isleOfWight = new Hike(
            "Exploring Carisbrooke Castle", new Date(), new BigDecimal( "7.5" ),
            new HikeSection( "Freshwater", "Calbourne" ),
            new HikeSection( "Calbourne", "Carisbrooke Castle" )
    );

    // let Bob organize the two hikes
    cornwall.setOrganizer( bob );
    bob.getOrganizedHikes().add( cornwall );

    isleOfWight.setOrganizer( bob );
    bob.getOrganizedHikes().add( isleOfWight );

    // persist organizer (will be cascaded to hikes)
    entityManager.persist( bob );

    entityManager.getTransaction().commit();

    // get a new EM to make sure data is actually retrieved from the store and not Hibernate's internal cache
    entityManager.close();
    entityManager = entityManagerFactory.createEntityManager();

    // load it back
    entityManager.getTransaction().begin();

    Person loadedPerson = entityManager.find( Person.class, bob.getId() );
    assertThat( loadedPerson ).isNotNull();
    assertThat( loadedPerson.getFirstName() ).isEqualTo( "Bob" );
    assertThat( loadedPerson.getOrganizedHikes() ).onProperty( "description" ).containsOnly( "Visiting Land's End", "Exploring Carisbrooke Castle" );

    entityManager.getTransaction().commit();

    entityManager.close();
}
Note how both actions happen within a transaction. Neo4j is a fully transactional datastore which can be controlled nicely via JPA's transaction API. Within an actual application one would probably work with a less verbose approach for transaction control. Depending on the chosen back end and the kind of environment your application runs in (e.g. a Java EE container such as WildFly), you could take advantage of declarative transaction management via CDI or EJB. But let's save that for another time.
Having persisted some data, you can examine it using the nice web console that comes with Neo4j. The following shows the entities persisted by the test:



Hibernate OGM aims for the most natural mapping possible for the datastore you are targeting. In the case of Neo4j as a graph datastore this means that any entity will be mapped to a corresponding node.
The entity properties are mapped as node properties (see the black box describing one of the Hike nodes). Any property types not natively supported will be converted as required; e.g. that's the case for the date property, which is persisted as an ISO-formatted String. Additionally, each entity node has the label ENTITY (to distinguish it from nodes of other types) and a label specifying its entity type (Hike in this case).
Associations are mapped as relationships between nodes, with the association role being mapped to the relationship type.
Note that Neo4j does not have the notion of embedded objects. Therefore, the HikeSection objects are mapped as nodes with the label EMBEDDED, linked with the owning Hike nodes. The order of sections is persisted via a property on the relationship.
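If you want to poke at the data yourself in the Neo4j browser, a hedged Cypher query along these lines should show the persisted graph (labels as described above):

// every node written by Hibernate OGM
MATCH (n:ENTITY) RETURN n;

// the hikes together with their related nodes
MATCH (h:Hike)-[r]->(other) RETURN h, r, other;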

Switching to MongoDB
One of Hibernate OGM's promises is to allow using the same API, namely JPA, to work with different NoSQL stores. So let's see how that holds up and make use of MongoDB which, unlike Neo4j, is a document datastore and persists data in a JSON-like representation. To do so, first replace the Neo4j back end with the following one:

...
<dependency>
    <groupId>org.hibernate.ogm</groupId>
    <artifactId>hibernate-ogm-mongodb</artifactId>
</dependency>
...
Then update the configuration in persistence.xml to work with MongoDB as the back end, using the properties accessible through MongoDBProperties to give host name and credentials matching your environment (if you don't have MongoDB installed yet, you can download it here):

...
<properties>
    <property name="hibernate.ogm.datastore.provider" value="mongodb" />
    <property name="hibernate.ogm.datastore.database" value="HikeDB" />
    <property name="hibernate.ogm.datastore.host" value="mongodb.mycompany.com" />
    <property name="hibernate.ogm.datastore.username" value="db_user" />
    <property name="hibernate.ogm.datastore.password" value="top_secret!" />
</properties>
...
And that's all you need to do to persist your entities in MongoDB rather than Neo4j. If you now run the test again, you'll find the following BSON documents in your datastore:

# Collection "Person"
{
    "_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "firstName" : "Bob",
    "lastName" : "McRobb",
    "organizedHikes" : [
        "a78d731f-eff0-41f5-88d6-951f0206ee67",
        "32384eb4-717a-43dc-8c58-9aa4c4e505d1"
    ]
}

# Collection Hike
{
    "_id" : "a78d731f-eff0-41f5-88d6-951f0206ee67",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Visiting Land's End",
    "difficulty" : "5.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        {
            "sectionNo" : 0,
            "start" : "Penzance",
            "end" : "Mousehole"
        },
        {
            "sectionNo" : 1,
            "start" : "Mousehole",
            "end" : "St. Levan"
        },
        {
            "sectionNo" : 2,
            "start" : "St. Levan",
            "end" : "Land's End"
        }
    ]
}
{
    "_id" : "32384eb4-717a-43dc-8c58-9aa4c4e505d1",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Exploring Carisbrooke Castle",
    "difficulty" : "7.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        {
            "sectionNo" : 1,
            "start" : "Calbourne",
            "end" : "Carisbrooke Castle"
        },
        {
            "sectionNo" : 0,
            "start" : "Freshwater",
            "end" : "Calbourne"
        }
    ]
}
Again, the mapping is very natural and just as you'd expect when working with a document store like MongoDB. The bi-directional one-to-many/many-to-one association between Person and Hike is mapped by storing the referenced id(s) on either side. When loading back the data, Hibernate OGM will resolve the ids and allow you to navigate the association from one object to the other.
Element collections are mapped using MongoDB's capabilities for storing hierarchical structures. Here the sections of a hike are mapped to an array within the document of the owning hike, with an additional field sectionNo to maintain the collection order. This allows an entity and its embedded elements to be loaded very efficiently via a single round trip to the datastore.

Wrap-up
In this first instalment of NoSQL with Hibernate OGM 101 you've learned how to set up a project with the required dependencies, map some entities and associations and persist them in Neo4j and MongoDB. All this happens via the well-known JPA API. So if you have worked with Hibernate ORM and JPA in the past on top of relational databases, it never was easier to dive into the world of NoSQL.
At the same time, each store is geared towards certain use cases and thus provides specific features and configuration options. Naturally, those cannot be exposed through a generic API such as JPA. Therefore Hibernate OGM lets you make use of native NoSQL queries and configure store-specific settings via its flexible option system.
You can find the complete example code of this blog post on GitHub. Just fork it and play with it as you like.
Of course storing entities and getting them back via their id is only the beginning. In any real application you'd want to run queries against your data and you'd likely also want to take advantage of some specific features and settings of your chosen NoSQL store. We'll come to that in the next parts of this series, so stay tuned!

Monday, January 19, 2015

DevNation - Call For Papers, Program Committee and Raising Excitement

DevNation Pictures from 2014
You have heard about DevNation before, haven't you? It is a 3-day technical, open source, polyglot conference for full-stack application developers and maintainers. The inaugural edition was held last year in San Francisco and delivered a promising start. You can find my trip report on this blog. While I've just been one among others in this incredible speaker lineup, the one thing that changed for me after joining Red Hat is that I now actually have the chance to help shape what DevNation looks like. And it is a real pleasure to work with the whole team on making it even better this year.

Call For Papers - Open Until 28th of January
But first things first: we want you! Give us the best talk you have. DevNation is not just a Red Hat or JBoss conference; it is about all things relevant to software development and operations that you can imagine. Not only Java, but all languages that matter today. If you have something to say about Enterprise Applications, Front-End, Mobile Development, Big Data, Application Integration, DevOps, Continuous Delivery, Performance, Tuning, Platform Development or other cool stuff that you want people to be excited about, this is the place to talk about it. Don't wait any longer: send us your best; the call for papers for the 2015 edition is open until January 28th! More information about the location and highlights from last year can be found on the devnation.org website.



Program Committee - Independent, Open, Experienced
One of the bigger changes this year is that we're going to have an experienced, well-known and Red Hat-external program committee supporting the selection process. As the head of it, I am going to work closely with:

Simon Maple (@sjmaple)
Rabea Gransberger (@rgransberger)
Christian Kaltepoth (@chkal)
David Blevins (@dblevins)
Tonya Rae Moore (@TonyaRaeMoore)
Joel Tosi (@joeltosi)

Together we will shape the best program we've ever had and set a high bar for all following editions. If you have questions or ideas regarding talks, feel free to discuss them with any of us; reach out over Twitter or send us an email.

Raising Even More Excitement
The location will be the Hynes Convention Center in Boston. So there will be plenty of space for all the amazing sessions that you're going to see. And we also have even more cool things planned: hacking events, Birds-of-a-Feather sessions, an evening event, keynotes, plenty of room for networking and discussions, and even more which we're going to announce shortly on the official website.


Friday, January 16, 2015

Developer Interview (#DI 11) - Stuart Douglas (@stuartwdouglas) about WildFly9 and Undertow

You know that I am a Java EE guy. And I love looking into what comes up with the latest servers. JBoss is working on WildFly 9 these days, and one particular area that has always caught my interest is scaling, clustering and failover. So, this is a great chance to look at what the new version of Undertow will have to offer. And it is my pleasure to welcome Stuart Douglas to my developer interview series.

Stuart (@stuartwdouglas) is a Senior Software Engineer at Red Hat working on the WildFly and JBoss EAP application servers. In his 4 years at Red Hat, Stuart has worked on many parts of the server, including EJB, Servlets and WebSockets. Stuart currently leads the Undertow project, an embeddable high-performance web server used by WildFly.

Sit back, relax and get a #Coffee+++! Thanks Stuart for taking the time!



Resources:
The GitHub Repository mentioned in the recording. And for a better understanding, this is a topology diagram of what Stuart built.


Thursday, January 15, 2015

Kickstart on API Management with JBoss Apiman 1.0

The JBoss apiman project hit its first public milestone release (1.0.0.Final) recently, making it the perfect time to go out and have a look at it! Now that the first public release is out the door, we’re planning on iterating quickly on new features and bug fixes.  You should expect to see apiman community releases at least monthly.

Getting Started with apiman
So how can you get started with apiman?  I’m thrilled you asked!  There are already a number of articles and videos discussing apiman functionality and concepts.  So let’s start with some links:
The 1.0 release of apiman can be easily run as a standalone server, running on WildFly 8 out of the box.  However, the runtime component (policy engine) can also be embedded into other projects. This is useful if you want to add API Management functionality to your existing API platform.

There’s Already a 1.0.1?
We didn't waste any time resting on our 1.0.0.Final laurels! We got right back to work after the first release and added a bunch of new stuff (and fixed a few bugs, for good measure). Early in January we came out with 1.0.1.Final, which adds a bunch of stuff, including:
  • Public Services (services that can be invoked without a Service Contract)
  • Support for multiple API Gateways (although a single gateway usually makes the most sense)
  • Retiring of Services and Applications (removed from the API Gateway)
  • New Policy:  Ignore Resources (use regular expressions to prevent access to specific parts of your API)
  • Version cloning (no longer a need to re-create all your configuration when making a new version of a Service, App, or Plan)
  • First stab at a highly scalable vert.x based API Gateway
(read more)

What are Public Services?
One of the new features that some users will find really helpful is the concept of a “Public” Service.  A public service is one that can be invoked without a Service Contract.  In fact, if you only use public services in apiman then there isn’t any reason to create Applications!  This can be very useful if you are only looking to add policies to your services, but not interested in tracking which applications are invoking it.
(read more)

Why is Version Cloning Important?
An important feature of apiman is the ability to have multiple versions of Plans, Services, and Apps. But whenever a new version of one of these entities is created, it is often necessary to tweak only a small part of the configuration. For example, if a new version of a Service is released into production, a new version of it may need to be created in apiman. But all of the policies and plans probably still apply to the new version; only the Service Implementation endpoint details may have changed. Now you can clone all of this information whenever you create a new version, saving you the hassle of re-entering all of that config. Just clone what you had and change what you need.
(read more)

Why Have a vert.x Gateway?
For many users, having the API Gateway running in WildFly 8 is no problem.  We can handle a lot of load using WildFly, and scaling it up to moderate usage levels isn’t too hard.  However, asynchronous systems are designed to scale out to very heavy load, so we designed our runtime Policy Engine to have an asynchronous API to take advantage of these types of systems.  The latest version introduces an asynchronous API Gateway based on the very nice vert.x platform.
We’ll be doing a lot more work on this in the future, but for now it’s a great start and very exciting! We’re hoping that this solution will eventually be used in very large deployments (once we work out some of the details).

Wednesday, January 14, 2015

Developer Interview (#DI 10) - Gorkem Ercan (@gorkemercan) about Mobile Dev with JBDS and Cordova

New Year, new developer interviews. Yesterday evening I had the pleasure to talk to Görkem Ercan (@gorkemercan, blog), a Toronto-based software engineer with Red Hat. He has many years of experience working on software projects with technologies ranging from enterprise and mobile Java to Symbian and Qt C++. He specializes in providing tools and APIs for developers. He works on JBoss Developer Studio (JBDS) and is focused on the Cordova tooling. After my first experiences with mobile development with the Devoxx keynote team, I thought it might be a good idea to look into what JBDS offers and whether he could get me excited about it. I can tell you one thing: he made it.

Sit back, relax and get a #Coffee+++! Thanks, Görkem, for taking the time!

Tuesday, January 13, 2015

Pushing the Limits - Howto use AeroGear Unified Push for Java EE and Node.js

At the end of 2014 the AeroGear team announced the availability of the Red Hat JBoss Unified Push Server on xPaaS. Let's take a closer look!

Overview
The Unified Push Server allows developers to send native push messages to Apple's Push Notification Service (APNS) and Google's Cloud Messaging (GCM). It features a built-in administration console that makes it easy for developers to create and manage push-related aspects of their applications for any mobile development environment. It includes client SDKs (iOS, Android, & Cordova) and a REST-based sender service with an available Java sender library. The following image shows how the Unified Push Server enables applications to send native push messages to APNS and GCM:

Architecture
The xPaaS offering is deployed in a managed EAP container, while the server itself is based on standard Java EE APIs like:
  • JAX-RS 
  • EJB 
  • CDI 
  • JPA 
Another critical component is Keycloak, which is used for user management and authentication. The heart of the Unified Push Server is its public RESTful endpoints. These services are the entry point for all mobile devices as well as for 3rd-party business applications when they want to issue a push notification to be delivered to the mobile devices registered with the server.

Backend integration
Being based on the JAX-RS standard makes integration with any backend platform very easy. It just needs to speak HTTP...
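As a rough illustration (not part of the original post), a sender request could look like the following curl call; the endpoint path and payload shape are assumptions based on the Unified Push Server's REST sender API and may differ between versions:

curl -u "{pushApplicationId}:{masterSecret}" \
    -H "Content-Type: application/json" \
    -X POST -d '{"message": {"alert": "Hello from curl!", "sound": "default"}}' \
    http://localhost:8080/ag-push/rest/sender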

Java EE
The project has a Java library to send push notification requests from any Java-based backend. The fluent builder API is used to set up the integration with the desired Unified Push Server; with the help of CDI we can extract that into a very simple factory:

@Produces
public PushSender setup() {
  // return the configured sender so it can be injected elsewhere
  return DefaultPushSender.withRootServerURL("http://localhost:8080/ag-push")
    .pushApplicationId("c7fc6525-5506-4ca9-9cf1-55cc261ddb9c")
    .masterSecret("8b2f43a9-23c8-44fe-bee9-d6b0af9e316b")
    .build();
}

Next we need to inject the `PushSender` into a Java class which is responsible for sending a push request to the Unified Push Server:

@Inject
private PushSender sender;
...
public void sendPushNotificationRequest() {
   ...
   UnifiedMessage unifiedMessage....;
   sender.send(unifiedMessage);
}

The API for the `UnifiedMessage` leverages the builder pattern as well:

UnifiedMessage unifiedMessage = UnifiedMessage.withMessage()
    .alert("Hello from Java Sender API!")
    .sound("default")
    .userData("foo-key", "foo-value")
    ...
    .build();


Node.js
Being a RESTful server does not limit the integration to traditional platforms like Java EE. AeroGear also has a Node.js library. Below is a short example of how to send push notifications from a Node.js-based backend:

// setup the integration with the desired Unified Push Server
var agSender = require( "unifiedpush-node-sender" ),
    settings = {
        url: "http://localhost:8080/ag-push",
        applicationId: "c7fc6525-5506-4ca9-9cf1-55cc261ddb9c",
        masterSecret: "8b2f43a9-23c8-44fe-bee9-d6b0af9e316b"
    };

// build the push notification payload:
message = {
    alert: "Hello from Node.js Sender API!",
    sound: "default",
    userData: {
        "foo-key": "foo-value"
    }
};

// send it to the server:
agSender.Sender( settings ).send( message, options ).on( "success", function( response ) {
    console.log( "success called", response );
});


What's next ?
The Unified Push Server on xPaaS supports Android and iOS at the moment, but the AeroGear team is looking to enhance the service for more mobile platforms. The community project currently supports the following platforms:
  • Android
  • Chrome Packaged Apps
  • iOS
  • SimplePush / Firefox OS
  • Windows 
There are plans for adding support for Safari browser and Amazon's Device Messaging (ADM).

Getting started
To see the Unified Push Server in action, check out the video below:

The xPaaS release comes with different demos for Android, iOS and Apache Cordova clients as well as a Java EE based backend demo. You can find the downloads here.
More information can be found on the Unified Push homepage.
You can reach out to the AeroGear team via IRC or email.
Have fun and enjoy!

If you find some more time and need a #coffee+++ make sure to watch the developer interview with Matthias about Openshift, Aerogear and how to bring Java EE to Mobiles.

_______________________
This is a guest post by Matthias Wessendorf (@mwessendorf, blog). He is working at Red Hat where he is leading the AeroGear project. Previously, he was the PMC Chair of the Apache MyFaces project. Matthias is a regular conference speaker. Thank you, Matthias!

Monday, January 12, 2015

Java EE, Docker, WildFly and Microservices on Docker

If one thing survived all the New Year parties, it is Docker. It was hot at the end of 2014 and it looks like it is getting even hotter in 2015. And Red Hat is one of the key drivers behind the adoption of this amazing container technology. This is a short summary blog post about a bunch of resources to get you started with Java EE, WildFly and Microservices on Docker.

Get A First Impression - Introduction to Docker
Red Hat's developer blog offers a practical introduction to Docker. If the number of articles, meetups, talk submissions at different conferences, tweets, and other indicators is taken into consideration, then it seems like Docker is going to solve world hunger. This is how Arun Gupta starts his introductory blog post about Docker basics.

Take the Lab - Docker for Java Developers 
This lab offers developers an intro-level, hands-on session with Docker, from installation (including boot2docker on Windows/Mac), to exploring Docker Hub, to crafting their own images, to adding Java apps and running custom containers. This is a BYOL (bring your own laptop) session, so bring your Windows, OSX, or Linux laptop and be ready to dig into a tool that promises to be at the forefront of our industry for some time to come.
The Docker Common Commands Cheatsheet by Arun might also help a bit here.

Learn More - About how to use Docker on Windows with Maven
As many middleware developers are running Windows, I thought I give it a try myself and also give some more tips along the way about how to build and run images with the least possible amount of struggle with Docker containers, hosts and guests and command line options.

Get Your Hands Dirty - Working With Docker Images
Now that you've learned how to manage the basics, it is time to create your own images the Docker way and push them to the registry. If you're struggling with multiple images and dependencies on your machine, it is handy to know how to remove them.

Take the Lab Again - Docker with WildFly and Java EE7 HOL
The Java EE 7 Hands-on Lab has been delivered all around the world and is a pretty standard application that shows design patterns and anti-patterns for a typical Java EE 7 application. Arun took some time to Docker-ize it. Learn how to use it and take it again.

Setting Up Your Own WildFly - On Docker Of Course
A pretty standard setup is to have different containers for your database and your Java EE server. Learn how to set up MySQL and WildFly in separate containers and link them. Or jump directly into setting up a WildFly cluster on OpenShift Origin v3 (which is full of Docker).
You can also use a WildFly version which contains Apache Camel as a subsystem instead of a plain WildFly on Origin. But you can of course also use it in plain Docker.
And you also need to know how to expose the WildFly admin console to the host (see the sketch below).
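A hedged sketch of what that typically boils down to with the official jboss/wildfly image (ports and bind addresses are the usual defaults; you still need to add a management user inside the image first):

docker run -it -p 8080:8080 -p 9990:9990 jboss/wildfly \
    /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0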

Learn How To Test on Docker - Java EE with Arquillian and Cube
Everything is set up now and you know how to operate your Docker containers and images, so it is time to get your hands on tests. Arquillian supports Docker with the Cube extension.

If I missed an important link, I would be happy to hear about it; please leave your comments or experiences. Also, please let me know if there's something special you are missing.

Friday, January 9, 2015

JBoss Data Virtualization 6.1 Beta Now Available

JBoss Data Virtualization (JDV) is a data integration solution that sits in front of multiple data sources and allows them to be treated as a single source. To do that, it offers data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable and unified logical data models accessible through standard SQL (JDBC, ODBC, Hibernate) and/or web services (REST, OData, SOAP) interfaces.
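To make that concrete, querying a virtual database (VDB) over JDBC looks roughly like this sketch; the driver and URL format come from the underlying Teiid project, and the VDB name, table, host, port and credentials are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdvQuery {
    public static void main(String[] args) throws Exception {
        // Teiid/JDV JDBC URL pattern: jdbc:teiid:<vdb-name>@mm://<host>:<port>
        try (Connection c = DriverManager.getConnection(
                "jdbc:teiid:CustomerVDB@mm://localhost:31000", "user", "password");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT * FROM Customers")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}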
Yesterday the latest 6.1 Beta was made available for download. It focuses on three major areas which are Big Data, Cloud and Development and Deployment Improvements.

Big Data Improvements
In addition to the Apache Hive support released in 6.0, 6.1 will also offer support for Cloudera Impala for fast SQL query access to data stored in Hadoop.
Also new in 6.1 is the support for Apache Solr as a data source.  With Apache Solr you are able to take advantage of enterprise search capabilities for organized retrieval of structured and unstructured data. Another area of improvements is the updated support for MongoDB as a NoSQL data source. This was already introduced as a Technical Preview in 6.0 and will be fully supported in 6.1.
The JBoss Data Grid support has been further expanded and brings the ability to perform writes in addition to reads. With 6.1 it is also possible to take advantage of JDG Library mode as an embedded cache in addition to the support as a remote cache that was previously available.
Newly introduced in this release is Apache Cassandra support, which is included as an unsupported technical preview.

Cloud Improvements
The OpenShift cartridge for 6.1 will be updated with a new WebUI that focuses on ease of use for web and mobile developers. This lightweight user interface allows users to quickly access a library of existing data services, or create one of their own in a top-down manner.
Besides that, the support for the Salesforce.com (SFDC) API has been improved. It now supports the Bulk API with a better RESTful interface and better resource handling, and is now able to handle very large data sets. Finally, the 6.1 version brings full support for JBoss Data Virtualization on Amazon EC2 and Google Compute Engine.

Productivity and Deployment Improvements
The consistent, centralized security capabilities across multiple heterogeneous data sources got even better with a security audit log dashboard that can be viewed in the dashboard builder. It works with JDV's RBAC feature and displays who has been accessing what data and when.
Besides the large set of already supported data sources, JDV already allowed you to create custom integrations, called translators. Those have been reworked, and developer productivity improved through features such as archetype templates that can be used to generate a starting Maven project for custom development. When the project is created, it will contain the essential classes and resources to begin adding custom logic.
JDV 6.1 will provide support for the Azul Zing JVM. Azul Zing is optimized for Linux server deployments and designed for enterprise applications and workloads that require any combination of large memory, high transaction rates, low latency, consistent response times or high sustained throughput. Support for MariaDB as a data source has been added. The Excel support has been further extended and allows reading Microsoft Excel documents on all platforms by using the Apache POI connector.

Find out more:

Thursday, January 8, 2015

Integrating Microservices With Apache Camel

Just a short heads-up that there is an interesting and free webinar coming up with Red Hat's Christian Posta (@christianposta) about how to use patterns from SOA to build out intelligent routing systems with Apache Camel, plus centralized management, service discovery, versioning, and tooling support from JBoss Fuse.



Date: Wednesday, January 21, 2015
Time: 16:00 UTC | 11:00 am (New York) | 5:00 pm (Paris) | 9:30 pm (Mumbai)
Duration: 60 minutes

There is still plenty of time to register!

Christian is a Principal middleware architect at Red Hat and specializes in messaging-based enterprise integrations with high scalability and throughput demands. He has been in development for over 10 years, covering a wide range of domains–from embedded systems to UI and UX design and lots of integration in between.

He's passionate about software development, loves solving tough technical problems, and enjoys learning new languages and programming paradigms. Christian's preferred languages are Python and Scala, but he spends a lot of time writing Java. He is a committer on the Apache Camel, Apache ActiveMQ, and Apache Apollo projects, as well as a PMC member on ActiveMQ.

If you want to know more, I recorded a developer interview with him about fabric8. You can follow his Twitter stream or read his blog if you want even more timely, first-hand information.