Friday, February 26, 2010

Low Hanging Fruits in Optimizing Java EE Performance

Alois Reitbauer from dynaTrace shared a presentation about Java EE performance. It's a rather general overview, but it gives a great summary of things that can go wrong.
Of course, you can track them down with a couple of different tools from different vendors ;)

Thursday, February 25, 2010

JVM 1.5 GC Tuning and WebLogic Server - Part II: Garbage Collection Algorithms

You now know a bit more about heap sizes (part I). Next is to dive into the actual garbage collection algorithms.

Choosing the garbage collection algorithm
The Java HotSpot JVM (up to version 5) includes four garbage collectors. All the collectors are generational. Memory in the Java HotSpot virtual machine is organized into three generations: a young generation, an old generation, and a permanent generation. When the young generation fills up, a young generation collection (sometimes referred to as a minor collection) of just that generation is performed. When the old or permanent generation fills up, what is known as a full collection (sometimes referred to as a major collection) is typically done.
Commonly, the young generation is collected first, because it is usually the most efficient way to remove garbage from the young generation. Then what is referred to as the old generation collection algorithm for a given collector is run on both the old and permanent generations.
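If you want to see which of these generational collectors your VM actually picked, the standard java.lang.management API exposes one MXBean per collector. A minimal sketch (the class name is mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ShowCollectors {
    public static void main(String[] args) {
        // Each MXBean represents one collector; typically one for the young
        // generation (e.g. "Copy") and one for the old (e.g. "MarkSweepCompact").
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + " collections=" + gc.getCollectionCount()
                    + " timeMs=" + gc.getCollectionTime());
        }
    }
}
```

Run it with different -XX collector flags and the bean names change accordingly.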

Serial Collector
With the serial collector, both young and old collections are done serially (using a single CPU), in a stop-the-world fashion. That is, application execution is halted while collection is taking place.

In the J2SE 5.0 release, the serial collector is automatically chosen as the default garbage collector on machines that are not server-class machines. The VM autodetects this based on the number of processors and the amount of physical memory. You can also explicitly request a server or client VM by using the
-server or -client
option.

The serial collector can be explicitly requested by using the
-XX:+UseSerialGC
command line option.
As you can guess, this is NOT the right collector for high-volume server-side applications running on 64-bit Linux with a lot of heap. If you compare this behaviour to the problem that was described in the first part, I bet you now know the cause for it. But looking at the WebLogic startup script makes it clear: this is not the garbage collection algorithm that is used. So, let's go on.

Parallel/Throughput Collector
But what to do with bigger applications? It is simple: try the
parallel collector. It is also known as the throughput collector and was developed to take advantage of available CPUs rather than leaving most of them idle while only one does garbage collection work.
The parallel collector uses a parallel version of the young generation collection algorithm utilized by the serial collector. It is still a stop-the-world collector, but it performs the young collection in parallel on multiple CPUs. The old generation is still collected using the same algorithm as the serial collector, and the behaviour is still the same. If you have bigger heaps, this is probably also not the right collector for you. Applications that can benefit from the parallel collector are those that do not have pause time constraints, since infrequent, but potentially long, old generation collections will occur.

The parallel collector is automatically chosen as the default garbage collector on server-class machines. It can be explicitly requested with the following option: -XX:+UseParallelGC
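The parallel collector sizes its worker pool based on the number of CPUs the VM detects; you can check that number from code. A tiny sketch (the class name is mine):

```java
public class CpuCount {
    public static void main(String[] args) {
        // On J2SE 5.0 the parallel collector defaults its GC thread count
        // to the number of available processors (tunable via -XX:ParallelGCThreads).
        System.out.println("processors=" + Runtime.getRuntime().availableProcessors());
    }
}
```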

Parallel Compacting Collector
The parallel compacting collector was introduced in J2SE 5.0 update 6. It enhances the parallel collector with a new algorithm for old generation garbage collection.
With the parallel compacting collector, the old and permanent generations are collected in a stop-the-world, but mostly parallel fashion with sliding compaction.
As with the parallel collector, the parallel compacting collector is beneficial for applications that are run on machines with more CPUs. In addition, the parallel operation of old generation collections makes it more suitable for applications that have pause time constraints. You can even change the number of threads used for garbage collection via the following option:
-XX:ParallelGCThreads=n
If you want to use this collector, you have to specify the following option:
-XX:+UseParallelOldGC

Concurrent Mark-Sweep (CMS)/low-latency Collector
For many applications, end-to-end throughput is not as important as fast response time. Young generation collections do not typically cause long pauses. However, old generation collections, though infrequent, can impose long pauses, especially when large heaps are involved. To address this issue, the HotSpot JVM includes a collector called the concurrent mark-sweep (CMS) collector.
The CMS collector collects the young generation in the same manner as the parallel collector. Most of the collection of the old generation using the CMS collector is done concurrently with the execution of the application. The CMS collector is the only collector that is non-compacting. That is, after it frees the space that was occupied by dead objects, it does not move the live objects to one end of the old generation. Another disadvantage the CMS collector has is a requirement for larger heap sizes than the other collectors. Unlike the other collectors, the CMS collector does not start an old generation collection when the old generation becomes full. Instead, it attempts to start a collection early enough so that it can complete before that happens. Otherwise, the CMS collector reverts to the more time-consuming stop-the-world algorithm used by the parallel and serial collectors. To avoid this, the CMS collector starts at a time based on statistics regarding previous collection times and how quickly the old generation becomes occupied. It will also start a collection if the occupancy of the old generation exceeds something called the initiating occupancy. The value of the initiating occupancy is set by the option
-XX:CMSInitiatingOccupancyFraction=n
where n is the percentage of the old generation size, defaulting to 68.
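To make the initiating-occupancy rule concrete, here is a small sketch of the percentage check (the class, method and sample sizes are mine, not JVM internals):

```java
public class CmsTrigger {
    // Mirrors the initiating-occupancy rule: start a concurrent collection
    // once used/total in the old generation reaches the threshold (default 68%).
    static boolean shouldStartCms(long usedOld, long totalOld, int initiatingOccupancyPercent) {
        return usedOld * 100 >= (long) initiatingOccupancyPercent * totalOld;
    }

    public static void main(String[] args) {
        long total = 2048; // MB of old generation (hypothetical)
        System.out.println(shouldStartCms(1300, total, 68)); // 1300/2048 = ~63% -> false
        System.out.println(shouldStartCms(1500, total, 68)); // 1500/2048 = ~73% -> true
    }
}
```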

Compared to the parallel collector, the CMS collector decreases old generation pauses at the expense of longer young generation pauses, some reduction in throughput, and extra heap size requirements. Use the CMS collector if your application needs shorter garbage collection pauses and can afford to share processor resources with the garbage collector while the application is running. Applications with a relatively large set of long-lived data running on multi-CPU machines tend to benefit from the CMS collector.

If you want the CMS collector to be used, you must explicitly select it by specifying the option
-XX:+UseConcMarkSweepGC
An incremental mode can be enabled via
-XX:+CMSIncrementalMode

Further options
Now you basically know everything about the four garbage collection algorithms. To be honest, there are plenty of additional options and tuning methods implemented in the JVM. You can, for example, tune the behaviour of the parallel collectors to fit defined goals (pause time, throughput or footprint) or even enable special tuning parameters for particular operating systems. A complete list of Java VM options is available online.
From now on, you are going to explore your system from a runtime view. There is one basic principle that should guide you:
"Premature optimization is the root of all evil." (Donald Knuth)
There is only one reason for switching or tuning the JVM GC in general: you are running into problems. Most commonly this is a "java.lang.OutOfMemoryError" exception. But very long stop-the-world GC pause times could also force you to get your hands on it.

More tools
I already pointed you to some diagnostic and monitoring tools in the first part. But there are more available. If you are on Unix/Linux based systems, you can take advantage of jmap and jstat. Both provide in-depth details about your running JVMs and can assist you in digging deeper into the problem. HPROF and HAT can also assist you.

If you look around for further documents and best practices, you will notice a lot of stuff out there. The best place to start are the following two documents. Choose the one fitting your JVM version:
- Tuning Garbage Collection with the 5.0 Java[tm] Virtual Machine
- Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning

Wednesday, February 24, 2010

JVM 1.5 GC Tuning and WebLogic Server - Part I: Heap

Java SE is used for a wide variety of applications, ranging from small applets on desktops to web services and Java EE applications on large servers. In support of this diverse range of deployments, the HotSpot virtual machine implementation provides multiple garbage collectors, each designed to satisfy different requirements. However, users, developers and administrators that need high performance are burdened with the extra step of selecting the garbage collector that best meets their needs, because the standard is not always the best choice. This is all about tuning and configuring the JVM to fit your needs: what to do and where to look. If you read on, I assume that you have a basic understanding of memory management in the JVM. If not, you can read more about it in the Memory Management in the Java HotSpot Virtual Machine whitepaper (PDF).
Because this is a slightly complex area, I sliced this article into two parts. This is part one. All about sizing the heap.

All the above is especially true for Java 1.5 (they are still out there ;)) and for very large systems with a lot of hits/second. We recently came across a nice system: 16 cores, 32 GB of RAM, all running on 64-bit Linux. It should have been a sufficient box for our small WebLogic Server (10.0) running on top of it. But starting with a lot of heap and the default -server JVM in place showed some very nasty effects. Everything went smoothly up to a certain point, where the JVM started with full garbage collection. A single run took up to 69.6645960 secs. Wow. Disappointing. What happened? What next?

The first step is to understand what happens. Find out everything you need. Have a look at my previous post Application Server unresponsive or stuck? Take a deeper look!. Gather all the information necessary. If nothing points you in a different direction, have a deeper look at the JVM.
There are some tools out there. Let's start with JConsole. You enable access to it by adding the following parameter to your JVM:

-Dcom.sun.management.jmxremote
Now you can have a more detailed look at what's happening.

If you are looking at the above example, you can see the system doing a stop-the-world garbage collection ("Full Garbage Collection"). In this case, it freed about 3GB of heap and took nearly 30 seconds. If you do not have any "fancy" tools at hand, you probably need this small Java option:

-verbose:gc

It prints every action of the garbage collector out to the console. This should be enough for analysis, even if you do not have such a nice graphic.

[GC 325407K->83000K(776768K), 0.2300771 secs]
[Full GC 267628K->83769K(776768K), 1.8479984 secs]

In this example, you see a minor collection and a major one. The numbers before and after the arrow indicate the combined size of live objects before and after garbage collection. The number in parentheses is the total available space, not counting the space in the permanent generation, which is the total heap minus one of the survivor spaces. The minor collection took about a quarter of a second.
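If you want to analyse such log lines in bulk, a simple regular expression is enough to pull the numbers out. A sketch (the class name and pattern are mine):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogLine {
    // Matches lines like "[GC 325407K->83000K(776768K), 0.2300771 secs]"
    // or "[Full GC 267628K->83769K(776768K), 1.8479984 secs]".
    static final Pattern P = Pattern.compile(
            "\\[(Full )?GC (\\d+)K->(\\d+)K\\((\\d+)K\\), ([\\d.]+) secs\\]");

    public static void main(String[] args) {
        Matcher m = P.matcher("[Full GC 267628K->83769K(776768K), 1.8479984 secs]");
        if (m.find()) {
            long freedK = Long.parseLong(m.group(2)) - Long.parseLong(m.group(3));
            System.out.println("full=" + (m.group(1) != null)
                    + " freedK=" + freedK
                    + " pauseSecs=" + m.group(5));
        }
    }
}
```

Feed it your -verbose:gc output line by line and you can chart freed memory and pause times without any graphical tool.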
If you need more information, the

-XX:+PrintGCDetails

flag prints additional information about the collections. An example of the output for the J2SE Platform version 1.5 using the serial garbage collector:

[GC [DefNew: 64575K->959K(64576K), 0.0457646 secs] 196016K->133633K(261184K), 0.0459067 secs]

At each garbage collection the virtual machine chooses a threshold number of times an object can be copied before it is tenured. This threshold is chosen to keep the survivors half full. It is helpful to show this threshold and the ages of objects in the new generation. Enable it using the following option:

-XX:+PrintTenuringDistribution

The output looks like this:

Desired survivor size 1015808 bytes, new threshold 1 (max 15)
- age 1: 2031616 bytes, 2031616 total

What next? Could someone have made a mistake in here? If you are concerned that your application might have hidden calls to System.gc() buried in libraries, you should invoke the JVM with the

-XX:+DisableExplicitGC

option to prevent calls to System.gc() from triggering a garbage collection. If this does not change anything, move on.

All you have to do now is change the generation sizes and choose the correct garbage collection algorithm. Of course, you could also think about moving to a different JVM first. Besides HotSpot you can also look at JRockit. In my case, I will stick to HotSpot.

Choosing the Heap Size
The JVM heap size determines how often and how long the VM spends collecting garbage. An acceptable rate for garbage collection is application-specific. If you set a large heap size, full garbage collection is slower, but it occurs less frequently. If you set your heap size in accordance with your memory needs, full garbage collection is faster, but occurs more frequently. The goal of tuning your heap size is to minimize the time that your JVM spends doing garbage collection while maximizing the number of clients that your application can handle at a given time.
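A quick way to verify what heap the running VM actually got is the Runtime API: maxMemory() roughly corresponds to -Xmx and totalMemory() to the currently committed heap. A sketch (the class name is mine):

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // When -Xms equals -Xmx, committed should match max right from startup.
        System.out.println("max=" + rt.maxMemory() / mb + "MB"
                + " committed=" + rt.totalMemory() / mb + "MB"
                + " free=" + rt.freeMemory() / mb + "MB");
    }
}
```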
As a rule of thumb, you should have 4GB per WebLogic instance running on a 64-bit OS. You set this with the following options. Set them to a multiple of 1024 that is greater than 1MB. Setting min and max to the same value avoids resizing the heap during runtime.

-Xms4096m
-Xmx4096m

You should adjust the size of the young generation to be one-fourth of the maximum heap size. Again, this is a multiple of 1024 that is greater than 1MB.
Increasing the young generation becomes counterproductive at half the total heap or less (whenever the young generation guarantee cannot be met).

-XX:NewSize=1024m
-XX:MaxNewSize=1024m

The new generation area is divided into three sub-areas: Eden, and two survivor spaces that are equal in size. Configure the ratio of the Eden/survivor space size with the following option:

-XX:SurvivorRatio=n
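To see what a given -XX:SurvivorRatio means in concrete numbers: each survivor space gets newSize / (ratio + 2) and Eden gets the rest. A small sketch (the class name and example values are mine):

```java
public class YoungGenLayout {
    public static void main(String[] args) {
        long newSizeMb = 1024;  // matches -XX:NewSize=1024m from above
        int survivorRatio = 8;  // hypothetical -XX:SurvivorRatio value
        // Each survivor space is newSize / (ratio + 2); Eden takes the remainder.
        long survivorMb = newSizeMb / (survivorRatio + 2);
        long edenMb = newSizeMb - 2 * survivorMb;
        System.out.println("eden=" + edenMb + "MB, each survivor=" + survivorMb + "MB");
    }
}
```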
Part II will cover the garbage collection algorithms which best fit your needs. Stay tuned.

Tuesday, February 23, 2010

Article in german iX Magazin (03/2010)

Sorry about that, some personal advertising. The German iX magazin publishes a new article of mine about JPA 2.0 in the upcoming 03/2010 issue.

Abstract (translated from German):
Java Persistence API 2.0
Advanced mapping: "Überholtes Getriebe" on page 114
With the release of Java EE 6, the integrated Java Persistence API got a round version number. Even though there is still room for improvement, the revised interface for object-relational mapping significantly reduces many of the problems of this discipline.

Saturday, February 20, 2010

Eclipselink 1.2.0 JPA, Spring 2.5.6, MySQL 5.4, JTA and Oracle WebLogic

I came across a nice post showing how to implement a simple URL shortener called "shorty". Really nice. But the technology decision was not that awesome. I am ok with Hibernate. But Struts? No. Thanks :) That reminds me of good old times. And they are gone.

Let's modernize the software stack a bit. We give it a try with the help of:
- Oracle WebLogic
- EclipseLink 1.2.0 (1.2.0.v20091016-r5565)
- Spring 2.5.6
- MySQL 5.4
- Maven
- Eclipse

My first approach was to migrate shorty completely to the new platform. After playing around a bit, I discovered some things to take care of when running it on WLS. Therefore I'll stick to the basic JTA/JPA/WLS parts and I am not going to re-implement the complete web tier. I am not using Spring 3.x here, because it still is not available :)

Create your project with maven
The first step is to have a simple project setup. Maven is the best place to start. Create a new Maven project:

mvn archetype:create \
-DartifactId=shortywls

Next is to update the dependencies. We basically need some Java EE 5 jars, Spring and some web components. An explicit EclipseLink dependency is not needed, because we are going to stick to JPA 1.0. Using WLS makes this your default JPA provider! Add the following lines to your pom.xml:
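The dependency listing of the original post did not survive; as a rough sketch it could look like the following (artifact coordinates and versions are my assumptions, not the original listing):

```xml
<dependencies>
  <!-- Java EE 5 APIs, provided by WebLogic at runtime (assumed coordinates) -->
  <dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>5</version>
    <scope>provided</scope>
  </dependency>
  <!-- Spring 2.5.6 as used throughout this post -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring</artifactId>
    <version>2.5.6</version>
  </dependency>
</dependencies>
```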


We are still missing a place to put our Java files. Change to your newly created project directory and add a src\main\java folder to it. Now you are ready to go. This is the point where it is best to start developing in your favorite IDE. If you are using Eclipse, let Maven do the preparation:

mvn eclipse:eclipse

After that, you only have to import the newly created project into Eclipse.

Setup your Weblogic domain
Before we actually start implementing, we have to configure our infrastructure a bit. First, create your WebLogic domain. Use whatever you like. I love to take advantage of the Configuration Wizard (win). Find it in the start menu or browse to %WLS_HOME%\wlserver_10.3\common\bin\config.exe. When you have entered the basics, create the domain and fire it up. Next, access the WebLogic Server Administration Console (http://localhost:7001/console). Make sure you have your MySQL instance up and running and have created a database and a user with all needed rights prior to the next steps. Now browse to Summary of Services - JDBC - JDBC Data Sources and choose "New".
Enter a name and JNDI name for your new DataSource and choose MySQL as database type and the appropriate driver. Click "Next" and choose the transaction options. Another "Next" click guides you to the connection properties. Enter your database name, hostname and port together with your username. A last "Next" click adds a "Test Configuration" button to the line on the top. Click it and make sure the message says: "Connection test succeeded." After that, you have to restart your instance. Do this, even if the console tries to make you believe that this is not necessary.

If you are wondering why we did not add a dependency to the mysql-connector or a jar to the domain/server directory: it is quite simple. WLS already ships with a mysql-connector-java-commercial-5.0.3-bin.jar. You can find it in %WLS_HOME%\wlserver_10.3\server\ext\jdbc\mysql.

configuring shortywls
Let's start with the JPA configuration. Add a persistence.xml to src\main\resources.
Insert the EclipseLink PersistenceProvider and your jta-data-source reference. If you would like to know what EclipseLink is doing, also add the needed logging level. Don't forget to tell EclipseLink about its target server. In our case "WebLogic_10".

<persistence-unit name="shortyWeb" transaction-type="JTA">
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
  <jta-data-source><!-- JNDI name of your DataSource --></jta-data-source>
  <properties>
    <property name="eclipselink.target-server" value="WebLogic_10" />
    <property name="eclipselink.logging.level" value="FINEST" />
  </properties>
</persistence-unit>

That's all. We are done with JPA :) The next bigger part is the basic configuration for Spring. The first step is to add the needed Spring features to the web.xml located in src\main\webapp\WEB-INF: first the place for your applicationContext.xml, followed by the ContextLoaderListener and the Spring DispatcherServlet, completed by the servlet mapping. Don't forget to add a shortyAppDispatcher-servlet.xml to your src\main\webapp\WEB-INF.


If this is done, you have to take a closer look at the specific Spring configuration. Add the configured applicationContext.xml to src\main\webapp\WEB-INF. That's where all the magic happens. I'll focus on the JPA/JTA parts here. The rest is basic Spring configuration; you can find great documentation on the SpringSource website.

There are some WebLogic specific tweaks you have to do. In terms of load-time weaving you have to use the WebLogicLoadTimeWeaver:

<context:load-time-weaver
weaver-class="org.springframework.instrument.classloading.weblogic.WebLogicLoadTimeWeaver" />

The <tx:jta-transaction-manager /> does a self discovery of the transaction manager of the actual server. Since we are running on WLS, this forces Spring to use the org.springframework.transaction.jta.WebLogicJtaTransactionManager. The <tx:annotation-driven /> lets you use the org.springframework.transaction.annotation.* annotations.

<tx:jta-transaction-manager />
<tx:annotation-driven transaction-manager="transactionManager" />

If you have the JTA transaction manager and the load-time weaving in place, you need to add the jpaVendorAdapter. No WebLogic specific magic in here, only the reference to the database platform and some debug information.

<bean id="jpaVendorAdapter"
  class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter">
  <property name="databasePlatform"
    value="org.eclipse.persistence.platform.database.MySQLPlatform" />
  <property name="generateDdl" value="true" />
  <property name="showSql" value="true" />
</bean>

The last bit to tie all parts together is the EntityManagerFactory. Spring JPA offers three ways of setting up a JPA EntityManagerFactory. The only working solution here is the LocalContainerEntityManagerFactoryBean. If you try to use the LocalEntityManagerFactoryBean or the JNDI lookup, you will experience an "Error binding to externally managed transaction" exception.
It has a reference to the jpaVendorAdapter and the persistenceUnitName.

<bean id="entityManagerFactory"
  class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="jpaVendorAdapter" ref="jpaVendorAdapter" />
  <property name="persistenceUnitName" value="shortyWeb" />
</bean>

Implementing shortywls
Now we are done with the configuration :) Switch back to Eclipse and do the implementation work.
You can basically stick to the original implementation. Adapting it to Spring, you need at least a DAO which defines the transactional attributes on its methods.
Happy coding!

Tuesday, February 16, 2010

Application Server unresponsive or stuck? Take a deeper look!

If you work on Java EE projects, you have probably seen this before. You have a big machine: some GB of RAM, a couple of cores. And a "normal" application. Nothing to worry about. Profiling is unremarkable and everything works fine on your dev and test environments. This can change when you do your first load test runs.
What the hell is happening? To figure this out, you have to take a deeper look at all parts of the system. Here is a very brief overview about what and where to look. Don't get me wrong, this is not a handbook or a guide. You have to attend more than a couple of optimizing sessions to be able to fully track down the problems and solve them. Anyway, this is a good overview and could be extended into some kind of checklist.

Your infrastructure
The first thing is to make a map of your infrastructure. If you are doing load tests and you experience any kind of too-low throughput or even unresponsive applications, you have to make sure you know everything about your setting. Ask yourself questions like:
- How is the network structure? How fast is the network? How is the load?
- Which components are in between your load agents and your server? (switches, router, dispatcher, httpd, etc.)
- What about the database? (Separate machine? Separate network? How is its load?)
- What about your appserver? (How many cores? How much RAM? How many HDDs? eth cards?)
- How is the cluster setup? (Loadbalancing? Failover?)
There is probably much more information you should have. Try to figure out as much as possible. Even the slightest piece of information is valuable.

Before taking a deeper look at the details, I strongly advise you to request full access to the systems you are going to examine. You should, for example, have transparent access to any ports (JMX/Debug), the shell, the httpd status monitor, the appserver management utilities and so on. Without being able to gather all required information during the runs, you will not be able to find a solution. Working in such a setting is pointless and could even guide you in the wrong direction.
This should be no problem for systems up to the integration stage. If you have to solve problems occurring in already productive systems, you should think about different approaches. Being on-site working with operations is probably the best approach here. But let's stick to the test environment here.

Reproduce the situation
If you know everything about your infrastructure and have access to every relevant component in it, you should take some effort in reproducing the situation. I have seen different cases where this was not too easy. Try to play around with the load scenarios. Try different combinations of use cases, different load, shorter ramp-up times, try to overload the system. Without this, you will not be able to solve the issue.

Collect metrics
If you finally reproduced your situation, you can start collecting the relevant metrics. The first approach is to do this without any changes to the system. Depending on the infrastructure, there are a couple of things to look at:
- Appserver console/monitoring (JVM, DB Pools, Thread Usage, Pending Requests, and more)
- Apache mod_status, mod_proxy (Thread Usage, Dispatching Behaviour, and more)
- Database Monitoring (Connections, Usage, Load, Performance, and more)
- System monitoring (I/O, Network, HDD, CPU, and more)

The No. 1 suspects are always any kind of external resources. So, you should look at the DB connections first. After that, look at the system resources: heap, memory, CPU and so on. Depending on your findings, you are able to eliminate the bottleneck.

Extended metrics
If the basic metrics did not show any problems, you have to dig deeper. This is the point, where you start enabling external monitoring and extended tracing.
- Enable JMX Management Agent and connect via JConsole or your favorite JMX monitor
- Enable verbose GC output
- Enable extended diagnostic in your appserver (e.g. Oracle Weblogic WLDF)
- Use other visualizing/tracing tools available

set screws
If you have all your metrics, you are basically on your own. There is nothing like a cookbook for solving your problems. But you did not really expect this, right? :)
Anyway, there are a couple of things to do. First is to identify the resource that is causing the trouble. You do this by watching out for any hint of full or close-to-full resource usage. This could be a connection pool or the JVM heap. The simplest case is to experiment with increasing the size.
Some of the extended metrics support you in identifying more special situations (e.g. stuck threads, memory leak).
If none of the above works, you are going to become a specialist in optimizing or performance tuning for your environment. This means you have to look at the product documentation and other information around to find the things to change.

time-consuming team game
Anyway, this is a team game. A time-consuming one. You have to work closely with operations, the dev team and the guys doing the load test. It is not too unusual that it takes some time. A typical load test lasts about 60 minutes. Including ramp up and down, analysis, configuration changes and redeployment, it could last 2 hours. Given an eight-hour work day, this gives you time for four runs. Not too many, if you do not have a clue where to look.

Friday, February 12, 2010

JavaFX Platform the Official Rich Client Technology for 2010 Winter Games

Oracle announced on February 9, 2010 that JavaFX(TM) and Java(TM) platforms are being used as the Official Rich Client Technology by the Vancouver Organizing Committee for the 2010 Olympic and Paralympic Winter Games (VANOC).

Sport fans around the globe can now explore the historical Winter Games medal results through an innovative JavaFX application, Medal Wheel, available at

This is awesome news for the JavaFX community. Oracle is actively pushing it and, to be honest, I like the fast UI :)

BUT: Even JavaFX does not prevent you from making mistakes. As a German, I had to look for our own statistics. And did not find them. Looking deeper at this shows that there is a functional error in it :)
Look at the following screens and compare reality to the geo-view.

Thursday, February 11, 2010

CfP JavaOne, Oracle Develop and Oracle OpenWorld 2010

Ok. The run has started. At least partly. The first Call for Papers (CfP) is online.

Starting with the 2010 JavaOne Call for Papers: submissions have been open since yesterday.
The 2010 JavaOne Call for Papers is open from February 10 through 11:59 pm PDT March 14.
Submit your paper online.

Next will be the CfP for Oracle OpenWorld and Oracle Develop. The CfP submissions will also be available online. The CfP lasts from March 2-21, 2010. So, get ready for this!
The submission site is already here.

Wednesday, February 10, 2010

Java EE Development Environment - Rollout for large teams

Everybody comes across this issue from time to time. You have a fresh and exciting project with a couple of developers. The skills of the team members differ, and you have to deliver a complete setup of the development environment for all of them. This is probably no big deal if you have up to five members. You just decide what components to use and write a small howto which everybody can follow. Running around solves the rest of the open issues. But this gets much harder if the team grows beyond this. And this is a setting you can still find, even if more and more projects get smaller and use agile methods ;)
If you are not willing to spend weeks on team-internal support for setup and configuration, you have to find the right approach. I am going to summarize some thoughts about this in the following parts.

Basic Requirements
Let's look at what is needed for a minimal developer setup.

- Applicationserver (Binaries and Configuration)
- Database (Binaries and Configuration)
- Build Tool (Binaries and Configuration)
- CVS/SVN/Whatever Client (Binaries)
- Integrated Development Environment incl. Plugins (Binaries)
- Basic Project Setup (Configuration)

For almost every single part in this hopefully not too incomplete list, you have some kind of binary that needs to be installed on a developer's desktop and some kind of configuration. It is highly recommended to keep any project specific configurations within your source code repository. The following thoughts only apply to the binary installs.

Preliminary work
The bigger the team gets, the more you are in need of a detailed plan for what you are going to do and use. This includes everything, beginning with the basic decision about the Java EE appserver up to the single plugins for the IDE used by the developers. Basic rules are:
- Be as near as possible to the future production environment. If this is not possible, think about staging, possibly arising problems and how to avoid them.
- Find the right balance between the number of plugins for your IDE and make sure they work together smoothly.
- Find the right build tool. Even if already commodity, I still like Maven. But this adds more infrastructure to your projects. (e.g., proxy, company repository)
- Think about the software design and architecture up front. You have to have an idea about which modules you will need and which parts of the team should work on them. (There is much more in/behind the team issues in a project. But I am not going to cover them here and now;))

If all this is done, you can think about the rollout. It highly depends on the basic setting. Are you using Windows based systems for development or Unix/Linux? Does your project have special infrastructural dependencies (e.g., SSO, host) that cannot be mocked? Make a complete list of all things that could influence the development and choose one, or even combine, the following approaches.
And by the way, it is always good to catch up with the most experienced members of your team to discuss your preferred solution:

The "all-in-one-solution" Rookie's Workplace
I love to call it this way. The name stands for a single image of the complete environment. It could be achieved using any kind of VM solution out there. We experiment with VMware, but there are a lot of other products available. The only task here is to set up the virtual machine and install everything the way you would like it as a developer.
After this, you have to roll out the VM runtime to the dev PCs and ship the image.

Implementation cost: probably some days
Rollout cost: should be around half a day per team member
Advantages: Very easy rollout. Highly predetermined setup.
Disadvantages: VM performance (?), no easy incremental update, cost of the VM solution

The "bit-by-bit-solution" Hacker's Workplace
The complete opposite of the previous approach. You define all the parts used and roll out a document containing the install and setup instructions. Place the binaries on some kind of network share or provide download links and version information.

Implementation cost: probably one day
Rollout cost: easily more than a day per team member
Advantages: hardly any rollout effort, highly configurable, easy to update
Disadvantages: Cost of setup within the team, error-prone, learning curve

The "best-of-breed-solution" Developer's Workplace
If you don't like either of the above, you need a combination. Such combinations are most often called "best-of-breed" solutions, derived from a couple of projects. This is where you start to cut the problem into pieces. Which parts of your setup are project-related? Which parts change often? Which parts are common in your company? Depending on this, you have a much broader range of options to choose from. Some examples:

Software distribution for common software
IDE, build tools and source control clients are good examples of common software you could probably put into the (already in place) software distribution system of your company. This reduces the complexity of your rollout to basically two components: the app server and the database.

Implementation cost: not your budget :-)
Rollout cost: not your budget :-)
Advantages: stable and standardized
Disadvantages: probably not the software versions you would like to have.

Option central server
This is an unusual but valid solution. You set up a central server instance and enable it for multiple developers. This can be done in different ways and is highly dependent on the application/DB server you are going to use. You could, for example,
a) have separate domains for each developer (something like a multi project server)
b) use individual application deployments (beware of naming problems)
c) have separate databases or schemas
d) have separate table names

To get an idea what this is all about, I recommend reading "Gone fishing for Glassfish" by Sidsel Jensen.

Implementation cost: Depending on your org. probably not your budget :-)
Rollout cost: none
Advantages: stable and standardized, guaranteed operation, SLAs, central infrastructure, capable of big deployments with lots of data
Disadvantages: Depending on your org (hardware cost, monthly cost), hardly any flexibility

Option de-central server
The most common setting. Every developer gets their own database and app server on their local hard disk. How easy this is depends on the products used. If you are going to use WebSphere and DB2, you probably have to have bigger hardware at hand and it takes slightly more time than using MySQL and Tomcat :-D But this should be no problem at all. The biggest issue is rolling out the project-specific configuration. In most cases it is promising to think about a scripting approach. Nearly all app servers have some kind of command-line interface you can use. Or you can even use a scripting environment. Arun Gupta posted an example for GlassFish v2 in his blog.
The database should not need any special setup or configuration for the development environment at all. You can think about moving it to the central software distribution or try to find the silent install option.
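As a rough sketch of such a scripting approach, here is a hypothetical per-developer setup script built on GlassFish's asadmin CLI. GF_HOME, the domain name, ports, and the JDBC pool/resource names are all placeholders you would adapt to your project:

```shell
#!/bin/sh
# Hypothetical developer-workplace setup script for the scripting approach.
# GF_HOME, DOMAIN and the JDBC properties are placeholders - adjust them
# to your project before running.
GF_HOME=${GF_HOME:-/opt/glassfish}
DOMAIN=projectdev

if [ -x "$GF_HOME/bin/asadmin" ]; then
    # create and start a fresh domain for this developer
    "$GF_HOME/bin/asadmin" create-domain --adminport 4848 "$DOMAIN"
    "$GF_HOME/bin/asadmin" start-domain "$DOMAIN"
    # register the project database as a JDBC resource
    "$GF_HOME/bin/asadmin" create-jdbc-connection-pool \
        --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
        --restype javax.sql.DataSource \
        --property user=dev:password=dev:databaseName=projectdb ProjectPool
    "$GF_HOME/bin/asadmin" create-jdbc-resource \
        --connectionpoolid ProjectPool jdbc/projectDS
else
    echo "asadmin not found under $GF_HOME - set GF_HOME to your GlassFish install"
fi
```

Checked into the repository next to the sources, a script like this turns the per-developer rollout into a single command instead of a page of manual instructions.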

Implementation cost: Probably some days
Rollout cost: highly dependent on the products you use. One to five days per developer.
Advantages: stable and standardized but flexible
Disadvantages: Bigger hardware required, cost of setup within the team, error-prone, learning curve

Option embedded server
To be honest, I don't like the central approach. And having some three-blue-character company's software stack on my notebook is also something I really don't even want to dream of ;) If you have the time to prepare it, you could think about the embedded server approach. There are some containers out there that can run in embedded mode. If you think about H2 DB, OpenJPA or GlassFish, it is definitely an option to have a local startup class for all needed containers. In most cases you are forced to develop on components far away from the productive environment. Therefore you have to strictly stick to the Java EE standards your container provides. If you also run a different DB in production, this gets even more complex.
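A minimal sketch of such a local startup class, using embedded H2 as the example container. The class name and structure are made up for illustration; it reaches H2's public org.h2.tools.Server API via reflection so it still compiles and runs when the H2 jar is not (yet) on the classpath:

```java
// DevEnvironment.java - hypothetical local startup class for the embedded approach.
// Boots an embedded H2 TCP server via reflection; the same pattern would apply
// to other embeddable containers (e.g. an embedded app server).
public class DevEnvironment {

    public static void main(String[] args) {
        startEmbeddedH2();
        // startEmbeddedAppServer(); // same idea for the container of your choice
    }

    static void startEmbeddedH2() {
        try {
            Class<?> server = Class.forName("org.h2.tools.Server");
            // equivalent to: Server tcp = Server.createTcpServer("-tcpPort", "9092");
            Object tcp = server
                    .getMethod("createTcpServer", String[].class)
                    .invoke(null, (Object) new String[] { "-tcpPort", "9092" });
            tcp.getClass().getMethod("start").invoke(tcp);
            System.out.println("embedded H2 started on port 9092");
        } catch (ClassNotFoundException e) {
            System.out.println("H2 not on the classpath - skipping embedded DB");
        } catch (Exception e) {
            throw new RuntimeException("could not start embedded H2", e);
        }
    }
}
```

With a class like this, "setting up the environment" means checking out the project and running one main class, which is exactly what makes the embedded approach attractive despite the staging risks.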

Implementation cost: Probably some days
Rollout cost: should be around half a day per team member
Advantages: stable and standardized but very flexible
Disadvantages: learning curve, staging problems

Friday, February 5, 2010

New Features of JSR 317 (JPA 2.0)

Short summary presentation about the new features in JPA 2.0.

Thursday, February 4, 2010

GlassFish vs. WebLogic - 10 visions for the future

The Sun/Oracle merger raised some questions about the future of different components. One of interest to me is the GlassFish Application Server. Besides Oracle WebLogic, it is the second Java EE application server in Oracle's portfolio.
Not much concrete has been said about the future coexistence of the two. But some postings, slides and webcasts are around. Time to summarize them and draw some conclusions. To be honest: none of the thoughts here are confirmed by anybody, especially not by Oracle! I don't know if the described things will happen, and I don't have any detailed insight into either product's timeline or roadmap. Happy to discuss everything and read about your thoughts.

1) "GlassFish continues as the Java EE reference implementation and as an open source project"
This statement is totally clear: nothing will change. GlassFish will continue as an open source project, and there will be a new RI for each of the coming Java EE versions.

2) GlassFish software licensing
Most components of the GlassFish platform are available under a dual license consisting of the Common Development and Distribution License (CDDL) v1.0 and the GNU Public License (GPL) v2. Details for GFv2 can be found on the GF Wiki. This will stay the same for most of the modules, except for those making their way into WebLogic Server. I expect these to be at least the following three: Metro, Jersey, Grizzly.

3) Equinox will NOT be the OSGi platform for the WebLogic DM Server
As presented at last year's OOW (WebLogic Server Architecture, OSGi and Java Enterprise Edition, Adam Leftik and Jeff Trent), the Equinox platform has some drawbacks (lacks a generic application framework, application isolation, RFC-133 context class loader). Therefore I expect the WebLogic DM Server to use something else. I don't know if this will have any effect on GF. It is possible that the OSGi platform of GF will change, too.

4) There will be NO GlassFish v3 with clustering capabilities
Slide #15 of the Oracle + Sun: Application Server webcast states that GF will be for productive and agile development, while WLS is the availability and scalability solution. Therefore v2 was the last GF with clustering facilities.

5) Metro, Jersey and Grizzly will make it into WebLogic 11g
As mentioned by Thomas Kurian in the strategy webcast, these are great assets from the GF family, and I believe all three projects will make it into WLS.

6) There will be tool support for migrating GF Apps to WLS
The complete development-to-production staging process will be addressed by upcoming Oracle solutions. JDeveloper and/or OEPE will have plugins/support for automatic migration of GF apps to WLS. The WLS split deployment directory structure will also be enhanced with staging features. There will probably also be new Maven plugins supporting development and production builds with GF and WLS.

7) Embedded GlassFish will be bundled with the OEPE
Being the development platform of the future, OEPE will obviously bundle an embedded GF at some point.

8) JDeveloper will get support for GF
Being Oracle's own development platform, JDeveloper could get built-in support for GF development.

9) NetBeans will become the Java ME IDE
Having more and more GF support in JDeveloper and OEPE leads to a further specialization of NetBeans. It will become the Java ME IDE of the future.

10) There will be a complete ADF implementation for GF
ADF will become available on GF, too.

Monday, February 1, 2010

Need new boxes ;) Oracle, please help ;)

As you might have read: I was switching office rooms today. Somehow, I really do not like moving at all. But one good thing happened. I found some (very) old boxes and put them on my cabinet. Do you remember them? I absolutely need new ones :)
@Oracle: Can you help? :-D