About software development for the enterprise. Focus on Java EE and more general Java platforms.
You'll read a lot about Conferences, Java User Groups, Java EE, Integration, AS7, WildFly, EAP and other technologies that hit my road.

Monday, January 26, 2015

SSL with WildFly 8 and Undertow

12:02 Monday, January 26, 2015 Posted by Markus Eisele
I've been working my way through some security topics along WildFly 8 and stumbled upon some configuration options that are not very well documented. One of them is the TLS/SSL configuration for the new web subsystem, Undertow. There's plenty of documentation for the older web subsystem, which is indeed still available to use, but here is a short how-to for configuring it the new way.

Generate a keystore and self-signed certificate 
The first step is to generate a certificate. In this case, it's going to be a self-signed one, which is enough to show how to configure everything. I'm going to use the plain Java way of doing it, so all you need is the JRE's keytool. Java keytool is a key and certificate management utility: it allows users to manage their own public/private key pairs and certificates, and to cache certificates. Keytool stores the keys and certificates in what is called a keystore, which by default is implemented as a file and protects private keys with a password. A keystore contains the private key and any certificates necessary to complete a chain of trust and establish the trustworthiness of the primary certificate.

Please keep in mind that an SSL certificate serves two essential purposes: distributing the public key and verifying the identity of the server, so users know they aren't sending their information to the wrong server. A certificate can only properly verify the identity of the server when it is signed by a trusted third party. A self-signed certificate is a certificate that is signed by itself rather than by a trusted authority.
Switch to a command line and execute the following command, which has some defaults set and also prompts you to enter some more information.
$>keytool -genkey -alias mycert -keyalg RSA -sigalg SHA256withRSA -keystore my.jks -storepass secret -keypass secret -validity 9999

What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  myfear
What is the name of your organization?
  [Unknown]:  eisele.net
What is the name of your City or Locality?
  [Unknown]:  Grasbrun
What is the name of your State or Province?
  [Unknown]:  Bavaria
What is the two-letter country code for this unit?
  [Unknown]:  ME
Is CN=localhost, OU=myfear, O=eisele.net, L=Grasbrun, ST=Bavaria, C=ME correct?
  [no]:  yes

Make sure to put your desired "hostname" into the "first and last name" field; otherwise you might run into issues when permanently accepting this certificate as an exception in some browsers. Chrome doesn't have an issue with that, though.
The command generates a my.jks file in the folder in which it is executed. Copy it to your WildFly configuration directory (%JBOSS_HOME%/standalone/configuration).
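
If you want to double-check what ended up in the keystore, you can list its content at any time (standard keytool usage, with the store password chosen above):

$>keytool -list -v -keystore my.jks -storepass secret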

Configure The Additional WildFly Security Realm
The next step is to configure the new keystore as a server identity for SSL in the security-realms section of WildFly's standalone.xml (if you're using the -ha or other variants, edit those accordingly).
 <management>
    <security-realms>
        <!-- ... -->
        <security-realm name="UndertowRealm">
            <server-identities>
                <ssl>
                    <keystore path="my.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="mycert" key-password="secret"/>
                </ssl>
            </server-identities>
        </security-realm>
        <!-- ... -->
    </security-realms>
</management>

And you're ready for the next step.

Configure Undertow Subsystem for SSL
If you're running with the default-server, add the https-listener to the undertow subsystem:
 <subsystem xmlns="urn:jboss:domain:undertow:1.2">
    <!-- ... -->
    <server name="default-server">
        <!-- ... -->
        <https-listener name="https" socket-binding="https" security-realm="UndertowRealm"/>
        <!-- ... -->
    </server>
</subsystem>

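If you prefer the command line over editing standalone.xml by hand, the same two changes should also be doable via the jboss-cli. Treat this as an untested sketch; the resource addresses and attribute names are assumptions based on the WildFly 8 management model:

$>bin/jboss-cli.sh --connect
/core-service=management/security-realm=UndertowRealm:add()
/core-service=management/security-realm=UndertowRealm/server-identity=ssl:add(keystore-path=my.jks, keystore-relative-to=jboss.server.config.dir, keystore-password=secret, alias=mycert, key-password=secret)
/subsystem=undertow/server=default-server/https-listener=https:add(socket-binding=https, security-realm=UndertowRealm)
:reload
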
That's it: now you're ready to connect to the SSL port of your instance at https://localhost:8443/. Note that you'll get a privacy error (compare the screenshot), because the certificate is self-signed. If you use a fully signed certificate, you'll usually get a PEM file from the certificate authority. In this case, you need to import it into the keystore. This Stackoverflow thread may help you with that.
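For the sake of completeness, here is a minimal sketch of such an import, assuming the CA handed you mycert.pem and your private key lives in mykey.pem (both file names are placeholders). openssl first bundles key and certificate into a PKCS12 file, which keytool can then merge into the existing keystore:

$>openssl pkcs12 -export -in mycert.pem -inkey mykey.pem -out mycert.p12 -name mycert
$>keytool -importkeystore -srckeystore mycert.p12 -srcstoretype PKCS12 -destkeystore my.jks -deststorepass secret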

Friday, January 23, 2015

Developer Interview (#DI 12) - Henryk Konsek (@hekonsek) about Camel on Docker

12:00 Friday, January 23, 2015 Posted by Markus Eisele
Fridays seem to be the Developer Interview day. Today I welcome Henryk Konsek (@hekonsek). Henryk is a software engineer at Red Hat (JBoss) who has been working with Java-related technologies for many years. His area of expertise is middleware and integration technologies. He authored the "Instant Apache ServiceMix How-to" book at Packt and is working with Red Hat customers on all kinds of solutions around integration technologies.

We've had a great chat about Apache Camel, Fabric, MongoDB, Docker and Microservices. If you want to learn more, follow his blog or watch his work on GitHub.

Sit back, relax and get a #Coffee+++! Thanks, Henryk, for taking the time!


The source code of the demo can be found on GitHub, and Henryk was kind enough to provide a general overview diagram of the demo he was doing:

Thursday, January 22, 2015

About WildFlies, Camel and Large Enterprise Projects

12:00 Thursday, January 22, 2015 Posted by Markus Eisele
Just wanted to quickly publish my slides from the recent JDK.io talks about WildFlies, Apache Camel, Java EE and large enterprise projects.
Thanks to all the attendees for great questions and the attention.
JDK.io is the two-day conference of the Danish Java User Group. The venue is pretty unique: the IT University, an amazing building with a unique atmosphere in the session rooms. Check out their website for more information.

See you soon somewhere. Check out my upcoming talks!





Wednesday, January 21, 2015

Getting Started With OpenShift - A Quick Hands-On Introduction To OpenShift

12:00 Wednesday, January 21, 2015 Posted by Markus Eisele
Did you know that there is a free ebook about OpenShift? Free, like in free beer? You’ll learn the steps necessary to build, deploy, and host a complete real-world application on OpenShift, without having to read long, detailed explanations of the technologies involved.
Though the book uses Python, application examples in other languages are available on GitHub. If you can build web applications, use a command line, and program in Java, Python, Ruby, Node.js, PHP, or Perl, you’re ready to get started.
You can even run your own JBoss WildFly or EAP server on it. The book is available in mobi and PDF formats, and the download is a slim 11 MB.

It was written by Steve Pousty (@TheSteve0, visit his blog) and Katie Miller (@codemiller, visit her website). Steve Pousty is a Developer Advocate at Red Hat. Having earned a Ph.D. in Ecology, he’s been mapping since the late 1980s and building applications for over 15 years. Steve has spoken widely on everything from developer evangelism to auto-scaling applications in the cloud. Katie Miller, an OpenShift Developer Advocate at Red Hat, is a polyglot programmer with a penchant for Haskell. A former newspaper journalist, Katie co-founded the Lambda Ladies group for women in functional programming.

Download your free copy today and get started with Red Hat's PaaS offering.

Further Readings:
OpenShift Developers
Getting Started Guide


Tuesday, January 20, 2015

NoSQL with Hibernate OGM - Part one: Persisting your first Entities

12:00 Tuesday, January 20, 2015 Posted by Markus Eisele
The first final version of Hibernate OGM is out and the team has recovered a bit from the release frenzy. So they thought about starting a series of tutorial-style blog posts which give you the chance to get started easily with Hibernate OGM. Thanks to Gunnar Morling (@gunnarmorling) for creating this tutorial.

Introduction
Don't know what Hibernate OGM is? Hibernate OGM is the newest project under the Hibernate umbrella and allows you to persist entity models in different NoSQL stores via the well-known JPA API.
We'll cover these topics in the following weeks:
  • Persisting your first entities (this instalment)
  • Querying for your data
  • Running on WildFly
  • Running with CDI on Java SE
  • Store data into two different stores in the same application
If you'd like us to discuss any other topics, please let us know. Just add a comment below or tweet your suggestions to us.
In this first part of the series we are going to set up a Java project with the required dependencies, create some simple entities and write/read them to and from the store. We'll start with the Neo4j graph database and then we'll switch to the MongoDB document store with only a small configuration change.

Project set-up 
Let's first create a new Java project with the required dependencies. We're going to use Maven as a build tool in the following, but of course Gradle or others would work equally well.
Add this to the dependencyManagement block of your pom.xml:

...
<dependencyManagement>
    <dependencies>
        ...
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <type>pom</type>
            <version>4.1.1.Final</version>
            <scope>import</scope>
        </dependency>
        ...
    </dependencies>
</dependencyManagement>
...
This will make sure that you are using matching versions of the Hibernate OGM modules and their dependencies. Then add the following to the dependencies block:

...
<dependencies>
    ...
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-neo4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.jboss.jbossts</groupId>
        <artifactId>jbossjta</artifactId>
    </dependency>
    ...
</dependencies>
...
The dependencies are:
  • The Hibernate OGM module for working with an embedded Neo4j database; this will pull in all other required modules such as Hibernate OGM core and the Neo4j driver. When using MongoDB, you'd swap it for hibernate-ogm-mongodb.
  • JBoss' implementation of the Java Transaction API (JTA), which is needed when not running within a Java EE container such as WildFly
The domain model
Our example domain model is made up of three classes: Hike, HikeSection and Person.

There is a composition relationship between Hike and HikeSection, i.e. a hike comprises several sections whose life cycle is fully dependent on the Hike. The list of hike sections is ordered; this order needs to be maintained when persisting a hike and its sections.
The association between Hike and Person (acting as hike organizer) is a bi-directional many-to-one/one-to-many relationship: one person can organize zero or more hikes, whereas one hike has exactly one person acting as its organizer.

Mapping the entities
Now let's map the domain model by creating the entity classes and annotating them with the required meta-data. Let's start with the Person class:

@Entity
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(mappedBy = "organizer", cascade = CascadeType.PERSIST)
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}
The entity type is marked as such using the @Entity annotation, while the property representing the identifier is annotated with @Id.
Instead of assigning ids manually, Hibernate OGM can take care of this, offering several id generation strategies such as (emulated) sequences, UUIDs and more. Using a UUID generator is usually a good choice as it ensures portability across different NoSQL datastores and makes id generation fast and scalable. But depending on the store you work with, you also could use specific id types such as object ids in the case of MongoDB (see the reference guide for the details).
Finally, @OneToMany marks the organizedHikes property as an association between entities. As it is a bi-directional association, the mappedBy attribute is required for specifying the side of the association which is in charge of managing it. Specifying the cascade type PERSIST ensures that persisting a person will automatically cause its associated hikes to be persisted, too.
Next is the Hike class:

@Entity
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}
Here the @ManyToOne annotation marks the other side of the bi-directional association between Hike and Person. As HikeSection is supposed to be dependent on Hike, the sections list is mapped via @ElementCollection. To ensure the order of sections is maintained in the datastore, @OrderColumn is used. This will add one extra "column" to the persisted records which holds the order number of each section.
Finally, the HikeSection class:

@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}
Unlike Person and Hike, it is not mapped via @Entity but using @Embeddable. This means it is always part of another entity (Hike in this case) and as such also has no identity on its own. Therefore it doesn't declare any @Id property.
Note that these mappings would look exactly the same had you been using Hibernate ORM with a relational datastore. And indeed, that's one of the promises of Hibernate OGM: make the migration between the relational and the NoSQL paradigms as easy as possible!

Creating the persistence.xml
With the entity classes in place, one more thing is missing: JPA's persistence.xml descriptor. Create it under src/main/resources/META-INF/persistence.xml:

<?xml version="1.0" encoding="utf-8"?>

<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
    version="2.0">

    <persistence-unit name="hikePu" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

        <properties>
            <property name="hibernate.ogm.datastore.provider" value="neo4j_embedded" />
            <property name="hibernate.ogm.datastore.database" value="HikeDB" />
            <property name="hibernate.ogm.neo4j.database_path" value="target/test_data_dir" />
        </properties>
    </persistence-unit>
</persistence>
If you have worked with JPA before, this persistence unit definition should look very familiar to you. The main difference to using the classic Hibernate ORM on top of a relational database is the specific provider class we need to specify for Hibernate OGM: org.hibernate.ogm.jpa.HibernateOgmPersistence.
In addition, some properties specific to Hibernate OGM and the chosen back end are defined. They set:
  • the back end to use (an embedded Neo4j graph database in this case)
  • the name of the Neo4j database
  • the directory for storing the Neo4j database files
Depending on your usage and the back end, other properties might be required, e.g. for setting a host, user name, password etc. You can find all available properties in a class named <BACK END>Properties, e.g. Neo4jProperties, MongoDBProperties and so on.
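As a side note, these properties don't have to live in persistence.xml: JPA also accepts them programmatically when bootstrapping the factory. A minimal sketch, assuming the constant names in Neo4jProperties match the keys used above:

// override or add back-end properties at bootstrap time
Map<String, Object> properties = new HashMap<>();
properties.put( Neo4jProperties.DATABASE_PATH, "target/test_data_dir" );

EntityManagerFactory entityManagerFactory =
        Persistence.createEntityManagerFactory( "hikePu", properties );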

Saving and loading an entity
With all these bits in place, it's time to persist (and load) some entities. Create a simple JUnit test shell for doing so:

public class HikeTest {

    private static EntityManagerFactory entityManagerFactory;

    @BeforeClass
    public static void setUpEntityManagerFactory() {
        entityManagerFactory = Persistence.createEntityManagerFactory( "hikePu" );
    }

    @AfterClass
    public static void closeEntityManagerFactory() {
        entityManagerFactory.close();
    }
}
The two methods manage an entity manager factory for the persistence unit defined in persistence.xml. It is kept in a field so it can be used by several test methods (remember, entity manager factories are rather expensive to create, so they should be initialized once and kept around for re-use).
Then create a test method persisting and loading some data:

@Test
public void canPersistAndLoadPersonAndHikes() {
    EntityManager entityManager = entityManagerFactory.createEntityManager();

    entityManager.getTransaction().begin();

    // create a Person
    Person bob = new Person( "Bob", "McRobb" );

    // and two hikes
    Hike cornwall = new Hike(
            "Visiting Land's End", new Date(), new BigDecimal( "5.5" ),
            new HikeSection( "Penzance", "Mousehole" ),
            new HikeSection( "Mousehole", "St. Levan" ),
            new HikeSection( "St. Levan", "Land's End" )
    );
    Hike isleOfWight = new Hike(
            "Exploring Carisbrooke Castle", new Date(), new BigDecimal( "7.5" ),
            new HikeSection( "Freshwater", "Calbourne" ),
            new HikeSection( "Calbourne", "Carisbrooke Castle" )
    );

    // let Bob organize the two hikes
    cornwall.setOrganizer( bob );
    bob.getOrganizedHikes().add( cornwall );

    isleOfWight.setOrganizer( bob );
    bob.getOrganizedHikes().add( isleOfWight );

    // persist organizer (will be cascaded to hikes)
    entityManager.persist( bob );

    entityManager.getTransaction().commit();

    // get a new EM to make sure data is actually retrieved from the store and not Hibernate's internal cache
    entityManager.close();
    entityManager = entityManagerFactory.createEntityManager();

    // load it back
    entityManager.getTransaction().begin();

    Person loadedPerson = entityManager.find( Person.class, bob.getId() );
    assertThat( loadedPerson ).isNotNull();
    assertThat( loadedPerson.getFirstName() ).isEqualTo( "Bob" );
    assertThat( loadedPerson.getOrganizedHikes() ).onProperty( "description" ).containsOnly( "Visiting Land's End", "Exploring Carisbrooke Castle" );

    entityManager.getTransaction().commit();

    entityManager.close();
}
Note how both actions happen within a transaction. Neo4j is a fully transactional datastore which can be controlled nicely via JPA's transaction API. Within an actual application one would probably work with a less verbose approach for transaction control. Depending on the chosen back end and the kind of environment your application runs in (e.g. a Java EE container such as WildFly), you could take advantage of declarative transaction management via CDI or EJB, as sketched below. But let's save the details for another time.
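Just to give you an idea, here is a minimal sketch of the declarative variant as an EJB. HikeService is a made-up class for illustration, and the persistence unit would need to be switched to transaction-type="JTA" for this to run inside the container:

@Stateless
public class HikeService {

    @PersistenceContext(unitName = "hikePu")
    private EntityManager entityManager;

    // the container opens and commits a JTA transaction around this method
    public void organizeHike(Person organizer, Hike hike) {
        hike.setOrganizer( organizer );
        organizer.getOrganizedHikes().add( hike );
        entityManager.persist( organizer );
    }
}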
Having persisted some data, you can examine it using the nice web console that comes with Neo4j. The following shows the entities persisted by the test:



Hibernate OGM aims for the most natural mapping possible for the datastore you are targeting. In the case of Neo4j as a graph datastore this means that any entity will be mapped to a corresponding node.
The entity properties are mapped as node properties (see the black box describing one of the Hike nodes). Any property types not natively supported will be converted as required; e.g. that's the case for the date property, which is persisted as an ISO-formatted string. Additionally, each entity node has the label ENTITY (to distinguish it from nodes of other types) and a label specifying its entity type (Hike in this case).
Associations are mapped as relationships between nodes, with the association role being mapped to the relationship type.
Note that Neo4j does not have the notion of embedded objects. Therefore, the HikeSection objects are mapped as nodes with the label EMBEDDED, linked with the owning Hike nodes. The order of sections is persisted via a property on the relationship.

Switching to MongoDB
One of Hibernate OGM's promises is to allow using the same API - namely, JPA - to work with different NoSQL stores. So let's see how that holds up and make use of MongoDB which, unlike Neo4j, is a document datastore and persists data in a JSON-like representation. To do so, first replace the Neo4j back end with the following one:

...
<dependency>
    <groupId>org.hibernate.ogm</groupId>
    <artifactId>hibernate-ogm-mongodb</artifactId>
</dependency>
...
Then update the configuration in persistence.xml to work with MongoDB as the back end, using the properties accessible through MongoDBProperties to set the host name and credentials matching your environment (if you don't have MongoDB installed yet, you can download it here):

...
<properties>
    <property name="hibernate.ogm.datastore.provider" value="mongodb" />
    <property name="hibernate.ogm.datastore.database" value="HikeDB" />
    <property name="hibernate.ogm.datastore.host" value="mongodb.mycompany.com" />
    <property name="hibernate.ogm.datastore.username" value="db_user" />
    <property name="hibernate.ogm.datastore.password" value="top_secret!" />
</properties>
...
And that's all you need to do to persist your entities in MongoDB rather than Neo4j. If you now run the test again, you'll find the following BSON documents in your datastore:

# Collection "Person"
{
    "_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "firstName" : "Bob",
    "lastName" : "McRobb",
    "organizedHikes" : [
        "a78d731f-eff0-41f5-88d6-951f0206ee67",
        "32384eb4-717a-43dc-8c58-9aa4c4e505d1"
    ]
}

# Collection Hike
{
    "_id" : "a78d731f-eff0-41f5-88d6-951f0206ee67",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Visiting Land's End",
    "difficulty" : "5.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        {
            "sectionNo" : 0,
            "start" : "Penzance",
            "end" : "Mousehole"
        },
        {
            "sectionNo" : 1,
            "start" : "Mousehole",
            "end" : "St. Levan"
        },
        {
            "sectionNo" : 2,
            "start" : "St. Levan",
            "end" : "Land's End"
        }
    ]
}
{
    "_id" : "32384eb4-717a-43dc-8c58-9aa4c4e505d1",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Exploring Carisbrooke Castle",
    "difficulty" : "7.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        {
            "sectionNo" : 1,
            "start" : "Calbourne",
            "end" : "Carisbrooke Castle"
        },
        {
            "sectionNo" : 0,
            "start" : "Freshwater",
            "end" : "Calbourne"
        }
    ]
}
Again, the mapping is very natural and just as you'd expect it when working with a document store like MongoDB. The bi-directional one-to-many/many-to-one association between Person and Hike is mapped by storing the referenced id(s) on either side. When loading back the data, Hibernate OGM will resolve the ids and allow you to navigate the association from one object to the other.
Element collections are mapped using MongoDB's capabilities for storing hierarchical structures. Here the sections of a hike are mapped to an array within the document of the owning hike, with an additional field sectionNo to maintain the collection order. This allows an entity and its embedded elements to be loaded very efficiently via a single round-trip to the datastore.
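If you want to verify that for your own test run, a quick query in the mongo shell will do; the collection name is taken from the dump above, and the organizer id will of course differ in your datastore:

> db.Hike.find({ "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0" }).pretty()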

Wrap-up
In this first instalment of NoSQL with Hibernate OGM 101 you've learned how to set up a project with the required dependencies, map some entities and associations, and persist them in Neo4j and MongoDB. All this happens via the well-known JPA API. So if you have worked with Hibernate ORM and JPA on top of relational databases in the past, it has never been easier to dive into the world of NoSQL.
At the same time, each store is geared towards certain use cases and thus provides specific features and configuration options. Naturally, those cannot all be exposed through a generic API such as JPA. Therefore Hibernate OGM lets you make use of native NoSQL queries and configure store-specific settings via its flexible option system.
You can find the complete example code of this blog post on GitHub. Just fork it and play with it as you like.
Of course storing entities and getting them back via their id is only the beginning. In any real application you'd want to run queries against your data and you'd likely also want to take advantage of some specific features and settings of your chosen NoSQL store. We'll come to that in the next parts of this series, so stay tuned!

Monday, January 19, 2015

DevNation - Call For Papers, Program Committee and Raising Excitement

12:00 Monday, January 19, 2015 Posted by Markus Eisele
DevNation Pictures from 2014
You have heard about DevNation before, haven't you? It is a 3-day technical, open source, polyglot conference for full-stack application developers and maintainers. The inaugural edition was held last year in San Francisco and delivered a promising start; you can find my trip report on this blog. While I was just one among many in this incredible speaker lineup, the one thing that changed for me since working for Red Hat is that I now actually have the chance to help shape what DevNation looks like. And it is a real pleasure to work with the whole team on making it even better this year.

Call For Papers - Open Until 28th of January
But first things first: we want you! Give us the best talk you have. DevNation is not just a Red Hat or JBoss conference; it is about all things relevant to software development and operations that you can imagine. Not only Java, but all the languages that matter today. If you have something to say about Enterprise Applications, Front-End, Mobile Development, Big Data, Application Integration, DevOps, Continuous Delivery, Performance, Tuning, Platform Development or other cool stuff that you want people to be excited about, this is the place to talk about it. Don't wait any longer: send us your best; the call for papers for the 2015 edition is open until January 28th! More information about the location and highlights from last year can be found on the devnation.org website.



Program Committee - Independent, Open, Experienced
One of the bigger changes this year is that we're going to have an experienced, well-known program committee from outside Red Hat supporting the selection process. As its head, I am going to work closely with:

Simon Maple (@sjmaple)
Rabea Gransberger (@rgransberger)
Christian Kaltepoth (@chkal)
David Blevins (@dblevins)
Tonya Rae Moore (@TonyaRaeMoore)
Joel Tosi (@joeltosi)

Together we'll shape the best program we've ever had and set a high bar for all following editions. If you have questions or ideas regarding talks, feel free to discuss them with any of us; reach out over Twitter or send us an email.

Raising Even More Excitement
The location will be the Hynes Convention Center in Boston, so there will be plenty of space for all the amazing sessions you're going to see. And we also have even more cool things planned: hacking events, Birds-of-a-Feather sessions, an evening event, keynotes, plenty of room for networking and discussions, and even more, which we're going to announce shortly on the official website.


Friday, January 16, 2015

Developer Interview (#DI 11) - Stuart Douglas (@stuartwdouglas) about WildFly9 and Undertow

12:00 Friday, January 16, 2015 Posted by Markus Eisele
You know that I am a Java EE guy. And I love looking into what comes up with the latest servers. JBoss is working on WildFly 9 these days, and one particular area that has always caught my interest is scaling, clustering and failover. So this is a great chance to look at what the new version of Undertow will have to offer. And it is my pleasure to welcome Stuart Douglas to my developer interview series.

Stuart (@stuartwdouglas) is a Senior Software Engineer at Red Hat working on the WildFly and JBoss EAP application servers. In his four years at Red Hat, Stuart has worked on many parts of the server, including EJB, Servlets and WebSockets. Stuart currently leads the Undertow project, an embeddable high-performance web server used by WildFly.

Sit back, relax and get a #Coffee+++! Thanks, Stuart, for taking the time!



Resources:
The GitHub Repository mentioned in the recording. And for a better understanding, this is a topology diagram of what Stuart built.


Thursday, January 15, 2015

Kickstart on API Management with JBoss Apiman 1.0

12:00 Thursday, January 15, 2015 Posted by Markus Eisele
The JBoss apiman project hit its first public milestone release (1.0.0.Final) recently, making it the perfect time to go out and have a look at it! Now that the first public release is out the door, we’re planning on iterating quickly on new features and bug fixes.  You should expect to see apiman community releases at least monthly.

Getting Started with apiman
So how can you get started with apiman?  I’m thrilled you asked!  There are already a number of articles and videos discussing apiman functionality and concepts.  So let’s start with some links:
The 1.0 release of apiman can be easily run as a standalone server, running on WildFly 8 out of the box.  However, the runtime component (policy engine) can also be embedded into other projects. This is useful if you want to add API Management functionality to your existing API platform.

There’s Already a 1.0.1?
We didn't waste any time resting on our 1.0.0.Final laurels!  We got right back to work after the first release and added a bunch of new stuff (and fixed a few bugs, for good measure).  Early in January we came out with 1.0.1.Final, which adds a number of features, including:
  • Public Services (services that can be invoked without a Service Contract)
  • Support for multiple API Gateways (although a single gateway usually makes the most sense)
  • Retiring of Services and Applications (removed from the API Gateway)
  • New Policy:  Ignore Resources (use regular expressions to prevent access to specific parts of your API)
  • Version cloning (no longer a need to re-create all your configuration when making a new version of a Service, App, or Plan)
  • First stab at a highly scalable vert.x based API Gateway
(read more)

What are Public Services?
One of the new features that some users will find really helpful is the concept of a “Public” Service.  A public service is one that can be invoked without a Service Contract.  In fact, if you only use public services in apiman then there isn’t any reason to create Applications!  This can be very useful if you are only looking to add policies to your services, but are not interested in tracking which applications are invoking them.
(read more)

Why is Version Cloning Important?
An important feature of apiman is the ability to have multiple versions of Plans, Services, and Apps.  But whenever a new version of one of these entities is created, it is often necessary to tweak only a small part of the configuration.  For example, if a new version of a Service is released into production, then a new version of it may need to be created in apiman.  But all of the policies and plans probably still apply to the new version - only the Service Implementation endpoint details may have changed.  Now you can clone all of this information whenever you create a new version, saving you the hassle of re-entering all of that config.  Just clone what you had and change what you need.
(read more)

Why Have a vert.x Gateway?
For many users, having the API Gateway running in WildFly 8 is no problem.  We can handle a lot of load using WildFly, and scaling it up to moderate usage levels isn’t too hard.  However, asynchronous systems are designed to scale out to very heavy load, so we designed our runtime Policy Engine to have an asynchronous API to take advantage of these types of systems.  The latest version introduces an asynchronous API Gateway based on the very nice vert.x platform.
We’ll be doing a lot more work on this in the future, but for now it’s a great start and very exciting! We’re hoping that this solution will eventually be used in very large deployments (once we work out some of the details).

Wednesday, January 14, 2015

Developer Interview (#DI 10) - Gorkem Ercan (@gorkemercan) about Mobile Dev with JBDS and Cordova

12:00 Wednesday, January 14, 2015 Posted by Markus Eisele
New year, new developer interviews. Yesterday evening I had the pleasure of talking to Görkem Ercan (@gorkemercan, blog), a Toronto-based software engineer with Red Hat. He has many years of experience working on software projects with different technologies, ranging from enterprise and mobile Java to Symbian and Qt C++, and he specializes in providing tools and APIs for developers. He works on JBoss Developer Studio (JBDS) and is focused on the Cordova tooling. After my first experiences with mobile development with the Devoxx keynote team, I thought it might be a good idea to look into what JBDS offers and whether he could get me excited about it. I can tell you one thing: he did.

Sit back, relax and get a #Coffee+++! Thanks, Görkem, for taking the time!

Tuesday, January 13, 2015

Pushing the Limits - Howto use AeroGear Unified Push for Java EE and Node.js

12:00 Tuesday, January 13, 2015 Posted by Markus Eisele
At the end of 2014 the AeroGear team announced the availability of the Red Hat JBoss Unified Push Server on xPaaS. Let's take a closer look!

Overview
The Unified Push Server allows developers to send native push messages to Apple's Push Notification Service (APNS) and Google's Cloud Messaging (GCM). It features a built-in administration console that makes it easy for developers to create and manage the push-related aspects of their applications for any mobile development environment. It includes client SDKs (iOS, Android, & Cordova) and a REST-based sender service with an available Java sender library. The following image shows how the Unified Push Server enables applications to send native push messages to Apple's Push Notification Service (APNS) and Google's Cloud Messaging (GCM):

Architecture
The xPaaS offering is deployed in a managed EAP container, while the server itself is based on standard Java EE APIs like:
  • JAX-RS 
  • EJB 
  • CDI 
  • JPA 
Another critical component is Keycloak, which is used for user management and authentication. The heart of the Unified Push Server is its public RESTful endpoints. These services are the entry point for all mobile devices, as well as for third-party business applications that want to issue a push notification to be delivered to the mobile devices registered with the server.

Backend integration
Being based on the JAX-RS standard makes integration with any backend platform very easy. It just needs to speak HTTP...
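
To illustrate the point, here is what a push request could look like with plain curl. Treat this as a sketch: the /rest/sender path and the payload shape are assumptions mirroring what the Java sender library (shown below) sends, with the push application id and master secret acting as HTTP Basic credentials:

curl -u "c7fc6525-5506-4ca9-9cf1-55cc261ddb9c:8b2f43a9-23c8-44fe-bee9-d6b0af9e316b" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"message": {"alert": "Hello from curl!"}}' \
  http://localhost:8080/ag-push/rest/sender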

Java EE
The project has a Java library to send push notification requests from any Java-based backend. A fluent builder API is used to set up the integration with the desired Unified Push Server; with the help of CDI we can extract that into a very simple factory:

@Produces
public PushSender setup() {
  PushSender defaultPushSender = DefaultPushSender.withRootServerURL("http://localhost:8080/ag-push")
    .pushApplicationId("c7fc6525-5506-4ca9-9cf1-55cc261ddb9c")
    .masterSecret("8b2f43a9-23c8-44fe-bee9-d6b0af9e316b")
    .build();
  return defaultPushSender; // make the configured sender available for injection
}

Next, we inject the `PushSender` into a Java class which is responsible for sending a push request to the Unified Push Server:

@Inject
private PushSender sender;
...
public void sendPushNotificationRequest() {
   ...
   UnifiedMessage unifiedMessage = ...; // built via the builder API shown below
   sender.send(unifiedMessage);
}

The API for the `UnifiedMessage` leverages the builder pattern as well:

UnifiedMessage unifiedMessage = UnifiedMessage.withMessage()
    .alert("Hello from Java Sender API!")
    .sound("default")
    .userData("foo-key", "foo-value")
    ...
    .build();


Node.js
Being a RESTful server does not limit the integration to traditional platforms like Java EE. The AeroGear team also offers a Node.js library. Below is a short example of how to send push notifications from a Node.js-based backend:

// setup the integration with the desired Unified Push Server
var agSender = require( "unifiedpush-node-sender" ),
    settings = {
        url: "http://localhost:8080/ag-push",
        applicationId: "c7fc6525-5506-4ca9-9cf1-55cc261ddb9c",
        masterSecret: "8b2f43a9-23c8-44fe-bee9-d6b0af9e316b"
    };

// build the push notification payload:
var message = {
    alert: "Hello from Node.js Sender API!",
    sound: "default",
    userData: {
        "foo-key": "foo-value"
    }
};

// send it to the server:
var options = {}; // optional send options
agSender.Sender( settings ).send( message, options ).on( "success", function( response ) {
    console.log( "success called", response );
});


What's next ?
The Unified Push Server on xPaaS supports Android and iOS at the moment, but the AeroGear team is looking to enhance the service for more mobile platforms. The community project currently supports the following platforms:
  • Android
  • Chrome Packaged Apps
  • iOS
  • SimplePush / Firefox OS
  • Windows 
There are plans for adding support for the Safari browser and Amazon's Device Messaging (ADM).

Getting started
To see the Unified Push Server in action, check out the video below:

The xPaaS release comes with different demos for Android, iOS and Apache Cordova clients as well as a Java EE based backend demo. You can find the downloads here.
More information can be found on the Unified Push homepage.
You can reach out to the AeroGear team via IRC or email.
Have fun and enjoy!

If you find some more time and need a #coffee+++, make sure to watch the developer interview with Matthias about OpenShift, AeroGear and how to bring Java EE to mobile devices.

_______________________
This is a guest post by Matthias Wessendorf (@mwessendorf, blog). He works at Red Hat, where he is leading the AeroGear project. Previously, he was the PMC Chair of the Apache MyFaces project. Matthias is a regular conference speaker. Thank you, Matthias!