Thursday, December 17, 2009

Weblogic 11g calling GlassFish v3, Foreign JNDI Provider

And here is the second post about WLS/GF interoperability.
After answering the question of how to call an EJB deployed on Weblogic 11g (WLS 10.3.2.0) from GlassFish v3, I now wanted to know how to do it the other way around.

Obviously it would work the same way I did it on the GF side. However, there is another feature built into Weblogic that makes this easier for the developer. It is called Foreign JNDI. Foreign JNDI is an API that allows you to access objects on a remote JNDI tree without having to connect directly to the remote tree. It enables you to link to a JNDI tree on another server or provider, including, but not limited to, WebLogic Server, or a JNDI tree in a Java program. Once you have configured Foreign JNDI, you can use an object that lives somewhere else with the same ease as an object bound in your WebLogic server instance. This again uses RMI-IIOP communication. Ok. Let's start.

You need an EJB to deploy to GF. Of course, this has to be an EJB 3.x bean. I wrote a quite simple one called GFTest. It has one business method, public void sayHelloOnServer(), which writes a text to stdout.
And again, you need EJB 2.x style remote interfaces for this.
That should look similar to this:

@Stateless(mappedName = "GFTest")
@Remote(GFTestRemote.class)
@RemoteHome(GFTestRemoteHome.class)
public class GFTest implements GFTestRemote {

    public void sayHelloOnServer() {
        System.out.println("Hello on the server!"); // writes a text to stdout
    }
}

GFTestRemote is the EJB 3.x business interface. GFTestRemoteHome is the EJB 2.x EJBHome which creates the GFTestRemoteObject EJBObject. It is best to put all interfaces into a separate client project.
If everything is ready, deploy the EJB. In my previous post I recommended running the appc compiler to generate stubs and skeletons. As proven today, this is not really needed. You only need the interface classes in your client project. That is totally sufficient.
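For completeness, the EJB 2.x style interfaces could look like this (a minimal sketch; the method signatures are assumed from the description above, not taken from the actual project):

```java
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;

// EJB 2.x remote home, the interface the WebLogic client looks up
public interface GFTestRemoteHome extends EJBHome {
    GFTestRemoteObject create() throws CreateException, RemoteException;
}

// EJB 2.x remote component interface carrying the business method
public interface GFTestRemoteObject extends EJBObject {
    void sayHelloOnServer() throws RemoteException;
}
```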

Now let's move over to the Weblogic administration console (http://localhost:7001/console) and configure the foreign JNDI Provider. The steps to take are described in the admin console online help.
First you have to create a foreign JNDI provider (Summary of Services / Summary of Foreign JNDI Providers / New) and give it a name you like. Now click on the newly created provider and configure it. The settings are as follows:
Initial Context Factory: weblogic.jndi.WLInitialContextFactory
Provider URL: iiop://your_host:your_port (standard case is iiop://localhost:3700)

Next is to create a foreign JNDI object link. Switch from the General to the Links tab and click New. Give it any name you like; it is for administrative purposes only. Then configure the following settings:
Local JNDI Name: The local binding name for the object (in my case GFTest2)
Remote JNDI Name: corbaname:iiop:your_host:your_port#remote_binding_name (in my case corbaname:iiop:localhost:3700#GFTest).

After that, you have to restart the server instance.

Start your favorite IDE for your Weblogic projects and set up a client project. Again, I am one of those web guys and therefore I set up a simple Dynamic Web Project with OEPE. The remote interfaces should be available to the project (client.jar or classes folder).

The only thing you have to do now is to get an InitialContext and look up your EJBHome.

InitialContext context = new InitialContext();
GFTestRemoteHome home = (GFTestRemoteHome) context.lookup("GFTest2");

The only thing left to do is to create your GFTestRemoteObject on which to call your business method.

GFTestRemoteObject object = home.create();
object.sayHelloOnServer();

Now you are done! Compared to the manual lookup this is extremely simple and does not force you to take care of connection handling, URLs and JNDI names in the code at all.

GlassFish v3 calling Weblogic 11g

I was playing around with interoperability problems lately. The question was: How to call an EJB deployed on Weblogic 11g (WLS 10.3.2.0) from GlassFish v3? What looks simple at first glance is not that easy.
Here are the steps to take.

First of all, it is not advisable to put any WLS libraries into GF's classloader. Therefore you should stick to the most basic communication method available: RMI-IIOP. Ok. Let's start.

You need an EJB to deploy to WLS. Of course, this has to be an EJB 3.0 bean. I wrote a quite simple one called Logger. It has one business method, void logString(String message), which takes a string and writes it to stdout.
If you are going to use RMI-IIOP, you have to have EJB 2.x style remote interfaces.
That should look similar to this:


@Stateless(mappedName = "Logger")
@Remote(LoggerRemote.class)
@RemoteHome(LoggerRemoteHome.class)
public class Logger implements LoggerRemote {

    public void logString(String message) {
        System.out.println(message); // writes the string to stdout
    }
}


LoggerRemote is the EJB 3.x business interface. LoggerRemoteHome is the EJB 2.x EJBHome which creates the LoggerRemoteObject EJBObject. Put all interfaces into a separate client project (ejb-client-jar).
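The EJB 2.x style interfaces themselves could look like this (a minimal sketch; signatures assumed from the description, not copied from the actual ejb-client-jar):

```java
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;

// EJB 2.x remote home interface (this is what gets looked up via corbaname)
public interface LoggerRemoteHome extends EJBHome {
    LoggerRemoteObject create() throws CreateException, RemoteException;
}

// EJB 2.x remote component interface carrying the business method
public interface LoggerRemoteObject extends EJBObject {
    void logString(String message) throws RemoteException;
}
```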
If everything is ready, deploy the EJB. Now it is time to compile the stubs and skeletons for your ejb-client-jar.
Take your jar or ear and run it through the weblogic.appc compiler.

java weblogic.appc ejbinteropEAR.ear

This will generate the ejbinteropClient.jar with all needed additional classes. Start your favorite IDE for your GF projects and set up a client project. I badly needed to try some Servlet 3.0 features and wrote a small
@WebServlet(name = "InterobTest", urlPatterns = {"/InterobTest"}) servlet for that.
The ejbinteropClient.jar needs to be in your project's classpath.
The first thing to do is to build an InitialContext.

// Build Properties object
Properties h = new Properties();
h.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.cosnaming.CNCtxFactory");
h.put(Context.PROVIDER_URL, "iiop://localhost:7001");
h.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
h.setProperty("org.omg.CORBA.ORBInitialPort", "7001");
// Get InitialContext with Properties
InitialContext ic = new InitialContext(h);

Now you are halfway through. Next, look up your remote object:

Object home = ic.lookup(JNDI_NAME);

This took some time. Ok, not having the java.net forums online for a few days really did not speed this up.
Anyway, the magic in this is the JNDI_NAME.
It is constructed out of several parts:


String JNDI_NAME =
    "corbaname:iiop:localhost:7001#" +
    "Logger#ejbinterop.LoggerRemoteHome";


The first part is the static part containing the host and port of the target WLS instance. The second is the binding name of your RemoteHome interface in WebLogic Server's JNDI tree.



The only thing left to do is to narrow your RemoteHome from the stub class

LoggerRemoteHome loggerHome = (LoggerRemoteHome)
    PortableRemoteObject.narrow(home, LoggerRemoteHome.class);

and create your LoggerRemoteObject on which to call your business method.

LoggerRemoteObject obj = loggerHome.create();
obj.logString("Testlog");


Now you are done! A few simple lines of code, which could give you grey hair if you are missing any part.
For me it was quite helpful to turn on all related debug settings in WebLogic Server. Go to Environment / Servers / Your Server / Debug and enable all needed scopes and attributes. If the call is successful, you see some detailed debug information on stdout (don't forget to change the log level!).

Wednesday, December 16, 2009

Oracle Season's Greetings - Peace, Hope, Joy and Happy Holidays

Got a very nice eCard today. It contained a link to a small flash movie. And there were more to discover on Oracle.com. Just wanted to share them with you, because I love them so much. Great art!

Have a perfect holiday and I wish you all the best for 2010!



Software Quality: JSF Component Libraries - Checkstyle Results

This is part V of the software quality analysis series about common JSF component libraries. After a more detailed look at possible bugs (discovered by FindBugs) we will take a deeper look at how the projects comply with coding conventions.
The analysis was done with Checkstyle 5.0. Checkstyle is a development tool that helps programmers write Java code that adheres to a coding standard. It automates the process of checking Java code to spare humans this boring (but important) task.

A note as preface: Checkstyle is highly configurable and can be made to support almost any coding standard. I did NOT look at any project settings. Checkstyle was configured to use the Standard Checks which do cover the Sun coding conventions.


PrimeFaces
With the smallest codebase, PrimeFaces has the smallest amount of findings in general: 18,762 places in 334 files. Only 61 of them are considered errors. This makes 0.18 problems per file.
The 61 errors occur within two categories (EmptyBlockCheck, IllegalCatchCheck). Randomly looking at the details does not reveal any real issue.

The classes with the highest number of findings can be found among the generated files (jsf-plugin). The handcrafted ones are:
- org.primefaces.component.datatable.DataTableRenderer.java (265)
- org.primefaces.component.calendar.CalendarRenderer.java (217)
- org.primefaces.component.chart.UIChart.java (181)


RichFaces
The next biggest codebase is provided by RichFaces: 713 files with 38,587 findings, 552 of them considered errors. That is an average of 0.77 problems per file, the worst result of the three. The errors separate into three categories (EmptyBlockCheck, IllegalCatchCheck, IllegalThrowsCheck). Some spot tests on the results did not reveal any bigger problems, which is not surprising at all because FindBugs would already have shown them.

The classes with the highest number of findings are located in the ajax4jsf package.
- org.ajax4jsf.org.w3c.tidy.TidyUtils.java (632)
- org.ajax4jsf.renderkit.RendererUtils.java (593)
- org.ajax4jsf.xml.serializer.ToStream.java (574)

The hotspots from the RichFaces package:
- java.org.richfaces.model.StackingTreeModel.java (207)
- org.richfaces.json.JSONObject.java (189)
- org.richfaces.json.JSContentHandler.java (159)


ICEFaces
The biggest codebase is provided by ICEFaces: 877 files with a surprisingly low number of findings, "only" 22,959 with a total of 287 errors. That makes an average of 0.33 problems per file.
The errors separate into the already known three categories (EmptyBlockCheck, IllegalCatchCheck, IllegalThrowsCheck).

The hotspots from the ICEFaces package:
- com.icesoft.faces.component.inputfile.InputFile.java (387)
- com.icesoft.faces.component.selectinputdate.SelectInputDateRenderer.java (378)
- com.icesoft.faces.component.paneltabset.PanelTabSet.java (357)

Conclusion:
With an average of 0.18 problems per file, this round goes to PrimeFaces, even if the total number of warnings is quite high. It is followed by ICEFaces and RichFaces, even though the latter is the only project with code checks in place. Somewhat surprising results. The hotspots can be found within the components. This is true for every library except RichFaces. What is also true is that the most complex components are more likely the ones with the biggest problems.

Monday, December 14, 2009

Software Quality: JSF Component Libraries - Findbugs Results

This is part IV of the software quality analysis series about common jsf component libraries.
Today we are looking at the findbugs results for them in more detail.
To make this more transparent to you, I am running the Java Webstart version of findbugs. Feel free to give it a try on your own version of the codebase of the projects.



PrimeFaces
You find a total of 126 errors within the PrimeFaces codebase. Eight of them are classified as high priority; 118 have normal priority. If you look at the eight high priority errors, you find one possible multithreading bug and seven malicious code vulnerabilities. The malicious code vulnerabilities arise from fields that are mutable arrays: final static field references to arrays which can be accessed by malicious code or by accident from another package. You find all occurrences in the same utility class, org.primefaces.util.HTML.
The multithreading bug can be found in org.primefaces.component.media.player.MediaPlayerFactory. The static Map players is lazily initialized but not synchronized. After the field is set, the object stored in that location is further accessed. The setting of the field is visible to other threads as soon as it is set. If the further accesses in the method that set the field serve to initialize the object, then you have a very serious multithreading bug, unless something else prevents any other thread from accessing the stored object until it is fully initialized.
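A common way to fix this kind of unsynchronized lazy initialization, sketched here on a hypothetical factory (not PrimeFaces' actual code, and with made-up map entries), is the initialization-on-demand holder idiom: the JVM guarantees the holder class is initialized exactly once, and only on first access.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MediaPlayerFactorySketch {

    // Holder class: loaded (and its static field initialized) only on the
    // first call to getPlayers(); class initialization is thread-safe per the JLS.
    private static class Holder {
        static final Map<String, String> PLAYERS = createPlayers();
    }

    private static Map<String, String> createPlayers() {
        Map<String, String> m = new ConcurrentHashMap<>();
        m.put("quicktime", "QuickTimePlayer"); // hypothetical entries
        m.put("flash", "FlashPlayer");
        return m;
    }

    public static Map<String, String> getPlayers() {
        return Holder.PLAYERS; // fully initialized before any thread sees it
    }

    public static void main(String[] args) {
        // Every caller sees the same, fully built map.
        System.out.println(getPlayers() == getPlayers()); // prints "true"
    }
}
```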

The normal priority bugs divide into all categories: five bad practice warnings; 82 correctness warnings (all "null pointer dereference in method on exception path", which could be false warnings); three more malicious code vulnerability warnings (stored references exposing internal representation); and one performance bug in org.primefaces.component.messages.MessagesRenderer.encodeEnd(FacesContext, UIComponent), where the value of a Map entry is accessed using a key that was retrieved from a keySet iterator. It is more efficient to use an iterator on the entrySet of the map, to avoid the Map.get(key) lookup. Last but not least there are 19 dodgy style warnings, of which 18 are useless control flow statements, which look like this for example:

if(resourceHolder != null) {
}
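The MessagesRenderer performance finding mentioned above comes down to the following pattern; iterating over the entrySet avoids one Map.get(key) lookup per iteration (a self-contained sketch, not the actual renderer code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetSketch {

    // Inefficient: one extra hash lookup per iteration via map.get(key)
    static int totalViaKeySet(Map<String, Integer> map) {
        int sum = 0;
        for (String key : map.keySet()) {
            sum += map.get(key); // the lookup FindBugs complains about
        }
        return sum;
    }

    // Efficient: key and value come out of the iterator together
    static int totalViaEntrySet(Map<String, Integer> map) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("info", 2);
        m.put("warn", 3);
        // Both variants produce the same result; only the cost differs.
        System.out.println(totalViaKeySet(m) == totalViaEntrySet(m)); // prints "true"
    }
}
```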


Conclusion:
That is not bad at all. Even if we take into account that the codebase of PrimeFaces is the smallest of the three, this is not the place to blame anyone for the results! And remember, this is still a SNAPSHOT version!


RichFaces
The RichFaces code base produces 291 errors: 15 classified as high priority, 261 normal priority. Seven high priority bugs fall into the bad practice category (possible problems around serialization); these will most likely only be seen in clustered failover scenarios. There are five correctness warnings (possible infinite loop, null pointer dereferences, a problem with equals, and a bad comparison of a signed byte). There are also two malicious code vulnerabilities (mutable static fields) and one possible multithreading bug (wait() with two locks held). Things to look at and to remove.

The normal priority bugs also divide into all categories. Hotspots are the 91 performance warnings; most of them result in unnecessary object creation and therefore memory consumption. Second on the list are the possible malicious code vulnerabilities with a count of 70, followed by 47 bad practice warnings and 41 dodgy style warnings.

Conclusion:
With nearly the same code size as PrimeFaces this is also a good result. Having seen more RichFaces projects lately, I would be happy if someone took a deeper look at the performance warnings. Memory consumption seems to be an issue.



ICEFaces
The biggest code base is provided by ICEFaces. That it has the most FindBugs warnings is no surprise at all: 373 in total, 106 high priority warnings followed by 267 normal priority warnings.
Bad practice warnings (49) and malicious code vulnerabilities (47) cover most of the bugs. As the only candidate, ICEFaces is ignoring exceptions in three places. That is bad.


} catch (Exception e) { }


But you can also find some serialization issues. All malicious code vulnerability warnings relate to mutable static fields; the majority of them should be final but are not. Three correctness warnings, two possible multithreading issues and five dodgy style warnings make up the rest of the high priority bugs. All more or less smaller things to change.

The normal priority bugs are led by the performance warnings (77), followed by bad practices (44) and malicious code vulnerability warnings (46). What is impressive are the 80 dodgy style warnings. The last bigger category are the possible multithreaded correctness warnings (17).

Conclusion:
Compared with the other two candidates, this looks like a lot at first sight. But the codebase is by far the biggest, and therefore it is not too bad at all.

Bottom line
Compared with each other, all code bases look identical in terms of FindBugs: you find more or less all categories in some lines of any candidate. What is notable is that the bigger the code base gets, the more performance issues you find. I expected the bigger frameworks to watch out for them more carefully. The bug density for all is nearly the same.
Only looking at the metrics itself does not give you a clue about the quality. You need to take a deeper look at every single warning and check its severity.
What is interesting to some extent is that it is hard to set up FindBugs for the projects manually. The source folders are quite distributed along the development structure and you have to spend some time setting it up.
Only RichFaces has included some quality checks (PMD, FindBugs, Checkstyle) in its nightly builds. PrimeFaces and ICEFaces are still in need of this.

Thursday, December 10, 2009

Celebrating GlassFish v3 final

Roberto Chinnici wrote about this on his blog a few days ago.

The final release will happen on December 10, when GlassFish v3 will be available.

Today finally is the day! What we all have been waiting for for so long is here.
Download GlassFish v3 (the Java EE 6 reference implementation)

About Java EE 6:
Java EE provides a standard for portable, robust, multi-tiered server side applications. Java EE 6, improves on the Java EE 5 developer productivity features, breaks the "one size fits all" approach with Web Profiles, adds extensibility, and more. GlassFish v3 delivers the modularity, extensibility and rightsizing capabilities of the new Java EE 6 platform and provides a lightweight, modular, and extensible platform for your Web and Enterprise applications.

Major new features:
  • Profiles
  • Pruning
  • Pluggability/Extensibility
  • Continuing push for ease of development

My Java EE 6 Articles:
Watch out for the German iX magazine issue no. 1/2010. It will contain my introductory article about Java EE 6. Available from 17.12.09.



Manifesto for Cloud Computing published

Found this recently and what should I tell you: I love it! :-)

Manifesto for Cloud Computing
  • Cloudy approach with any kind of process over foggy substance-driven rain
  • Warm front anticyclones over Agile SaaS
  • Extrinsic data-driven decoupling over bug-driven intrinsic minimalism
  • Extremely pragmatic cloud beans over generic security coffee beans

Want to participate? Learn more? Visit http://www.cloud-manifesto.org/

Wednesday, December 9, 2009

Software Quality: JSF Component Libraries - The Candidates, #primefaces, #icefaces, #richfaces

We did a lot of theory about static code analysis in part I and part II of my software quality posts.
Now it is time to start with the real work. In this post (part III) we will look at the candidates in more detail. I have chosen to examine three of the most popular JSF component libraries today.

Preparation and preface
In preparation for the analysis I did a checkout from the project repositories. After that I set up the local build and made sure that all projects could be compiled and packaged locally.
The review itself was done with some tools. One is the already mentioned msg java measuring station (JMP). The second one is XDepend, for which I was thankfully given a license by its creator (thanks for that!!). I am not going to publish the complete reports (if the project leads are interested, I will hand them out, of course. Let me know!). The goal of this series is to provide a brief overview of the projects and highlight some hotspots. This is not going to be a beauty contest, nor am I going to bash anybody with this. It should give my readers a brief understanding of static code analysis and the typical findings.
The NCSS results from the JMP and XDepend vary a bit in some places. This is because XDepend does a more complete analysis than the JMP does. Furthermore, the configuration of the JMP is not that straightforward. This gets even worse if you have lots of subprojects (e.g. RichFaces, one project per component...). Therefore I decided to focus on the core projects with the JMP analysis and do a complete analysis with XDepend.

PrimeFaces
PrimeFaces is an open source component suite for JavaServer Faces featuring a rich set of 70+ Ajax-powered JSF components. The additional TouchFaces module features a UI kit for developing mobile web applications.
It is a quite fresh and new library which is growing very quickly. It caught my attention because I have been using FacesTrace for some time.
With every result I publish here about PrimeFaces, you should keep in mind that this is a SNAPSHOT release and not intended for productive use. Therefore it may not be representative.

Basics:
Version: 1.0.0-SNAPSHOT (readonly checkout at 27.11.09)
Jar Name: primefaces-1.0.0-SNAPSHOT.jar
Jar Size: 1.60 MB (1,682,956 bytes)
Dependent libraries: 30 with a total size of 7.56 MB (7,929,734 bytes)
JMP included subprojects: complete primefaces-read-only
XDepend jars: primefaces-1.0.0-SNAPSHOT.jar



Size (JMP):
NCSS - Lines of code: 8340
Number of packages: 86
Number of classes (w/o inner classes): 160
Number of functions: 796

Metrics (Xdepend):
Number of ByteCode instructions: 74741
Number of lines of code: 15784
Number of lines of comment: 245
Percentage comment: 1%
Number of jars: 1
Number of classes: 306
Number of types: 324
Number of abstract classes: 3
Number of interfaces: 17
Number of value types: 0
Number of exception classes: 0
Number of annotation classes: 0
Number of enumerations classes: 1
Number of generic type definitions: 0
Number of generic method definitions: 0
Percentage of public types: 99.07%
Percentage of public methods: 85.04%
Percentage of classes with at least one public field: 26.54%

RichFaces
RichFaces is a component library for JSF and an advanced framework for easily integrating AJAX capabilities into business applications.
It is one of the more mature libraries around, mature in terms of age. I have seen it in a couple of projects and it is the favorite implementation for some of my employer's customers.
This is the latest GA version of RichFaces.

Basics:
Version: 3.3.3 (readonly checkout at 27.11.09)
Jar Name: richfaces-api-3.3.3-SNAPSHOT.jar and richfaces-impl-3.3.3-SNAPSHOT.jar
Jar Size: 1.65 MB (1,737,826 bytes)
Dependent libraries: 20 with a total size of 4.50 MB (4,726,588 bytes)
JMP included subprojects: framework/impl and framework/api
XDepend jars: richfaces-api-3.3.3-SNAPSHOT.jar and richfaces-impl-3.3.3-SNAPSHOT.jar



Size (JMP):
NCSS - Lines of code: 9186
Number of packages: 21
Number of classes (w/o inner classes): 225
Number of functions: 1526

Metrics (Xdepend):
Number of ByteCode instructions: 166694
Number of lines of code: 35227
Number of lines of comment: 34938
Percentage comment: 49%
Number of jars: 2
Number of classes: 790
Number of types: 938
Number of abstract classes: 69
Number of interfaces: 133
Number of value types: 0
Number of exception classes: 11
Number of annotation classes: 1
Number of enumerations classes: 15
Number of generic type definitions: 0
Number of generic method definitions: 0
Percentage of public types: 74.09%
Percentage of public methods: 74.73%
Percentage of classes with at least one public field: 9.49%


ICEfaces
ICEfaces is a leading open source Ajax framework. It is more than an Ajax JSF component library; it is a J2EE Ajax framework for developing and deploying rich enterprise applications (REAs).
To be honest, I have not seen it in the wild too much. A fellow Twitter follower suggested including it, and I thought three is better than two :)
This is the latest GA version of ICEFaces.

Basics:
Version: 1.7.1 (readonly checkout at 27.11.09)
Jar Name: icefaces.jar and icefaces-comps.jar
Jar Size: 2.64 MB (2,770,861 bytes)
Dependent libraries: 42 with a total size of 10.4 MB (10,993,491 bytes)
JMP included subprojects: icefaces/core and icefaces/component
XDepend jars: icefaces.jar and icefaces-comps.jar



Size (JMP):
NCSS - Lines of code: 38109
Number of packages: 67
Number of classes (w/o inner classes): 521
Number of functions: 5606

Metrics (Xdepend):
Number of ByteCode instructions: 177269
Number of lines of code: 41710
Number of lines of comment: 16161
Percentage comment: 27%
Number of jars: 2
Number of classes: 664
Number of types: 714
Number of abstract classes: 29
Number of interfaces: 50
Number of value types: 0
Number of exception classes: 5
Number of annotation classes: 0
Number of enumerations classes: 0
Number of generic type definitions: 0
Number of generic method definitions: 0
Percentage of public types: 82.77%
Percentage of public methods: 82.52%
Percentage of classes with at least one public field: 14.15%


This is all for now. Part IV will cover the details of Checkstyle, FindBugs and Simian for all candidates. Stay tuned.

Software Quality: The Basics II - CCD, ACD, Ca, Ce, I, A, I/A and D

The second part of the theory behind static code analysis takes a more detailed look at the remaining metrics not already covered in part I.
I am not sure if I should bother you with all this theory at all. But to me it seems necessary to talk about the basics before presenting the results for the individual JSF frameworks.

Part I introduced the analysis results and metrics from quite popular developer tools like JavaNCSS, Checkstyle and FindBugs.
These are useful during development and can give you a brief idea about the code base in general in terms of implemented source quality. Up to now, we only know how much the developers care about maintainability and readability of their sources. And we possibly know a little about the general development skills of the teams, because we know how many bug patterns they have taken care of.
What is still missing are metrics to judge the software design. But of course you can analyze software with this scope, too. Even if the results are far less concrete than the already presented ones, they are not less important. This post will only give you a very brief summary. If you want to know more, you have to take the time and look at all the great theory behind it. And of course you have to look at many projects and findings to know what the results are going to tell you!
A good place to start is the XDepend homepage. Besides the fact that they deliver an awesome tool, you also find many details about the collected metrics there.
Some of the analysis I am going to do with the JSF Frameworks are generated by XDepend.

Cumulative Component Dependency (CCD)
The metric of metrics. CCD is the sum, over all components Ci in a subsystem, of the number of components needed in order to test each Ci incrementally. This counts both direct and indirect dependencies.

Impact:
If you try to change code which has a high CCD, you probably have to change a lot of other dependent classes too.
This is error-prone and will generate new bugs in your software.

Threshold:
The major design goal is to keep the CCD low in general.

Derived metrics are the average component dependency (ACD) and the normalized cumulative component dependency (NCCD), which is the CCD divided by the CCD of a perfectly balanced binary dependency tree with the same number of components. The CCD of a perfectly balanced binary dependency tree of n components is (n+1) * log2(n+1) - n.
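As a quick sanity check of the balanced-tree formula, here is a small sketch (the class and method names are mine, and the system CCD in main is a made-up example value):

```java
public class CcdSketch {

    // CCD of a perfectly balanced binary dependency tree of n components:
    // (n + 1) * log2(n + 1) - n
    static double balancedTreeCcd(int n) {
        return (n + 1) * (Math.log(n + 1) / Math.log(2)) - n;
    }

    // NCCD = CCD of the real system divided by the balanced-tree CCD
    static double nccd(double systemCcd, int components) {
        return systemCcd / balancedTreeCcd(components);
    }

    public static void main(String[] args) {
        // A balanced binary tree of 7 components: (8 * 3) - 7 = 17
        System.out.println(Math.round(balancedTreeCcd(7))); // prints "17"
        // An NCCD well above 1 indicates heavier coupling than the ideal tree.
        System.out.println(Math.round(nccd(34.0, 7))); // prints "2"
    }
}
```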


Average Component Dependency (ACD)
Expresses the average component (equivalent to 'compilation unit') coupling. It is the sum of all component dependencies divided by the number of components (SIZE):
Average Component Dependency = CCD(N)/N (N is the number of components).

Impact:
An ACD of 20 indicates, for example, that on average a component depends directly and indirectly (transitively) upon 19 other components, plus 1 for itself.

Threshold:
The major design goal is to keep the ACD low in general. It should not exceed 20.


Afferent Coupling (Ca)
There are a number of coupling metrics available. Two of the well-known ones are Afferent Coupling (Ca) and
Efferent Coupling (Ce). These integer-based metrics represent a count of related objects.
Ca counts how many classes outside a package depend on classes inside it.

Impact:
A high Afferent Coupling signals an architectural maintenance issue: the package has too much responsibility.

Threshold:
There is no threshold for Ca in general. Being used by other packages is not bad in general. But you should keep in mind who is using what.

Efferent Coupling (Ce)
Ce counts how many classes in other packages the classes of a package need in order to compile successfully.

Impact:
A high Efferent Coupling signals an architectural maintenance issue: the package is not independent enough.

Threshold:
There is no threshold for Ce in general. Using other packages is not bad in general.
But you should keep in mind who is using what.


Instability (I)
The ratio of efferent coupling (Ce) to total coupling. I = Ce / (Ce + Ca).

Impact:
This metric is an indicator of the package's resilience to change. If I is near 1, the package is instable: few packages depend on it, so you can change it with hardly any effect on dependent classes. If I is near 0, the package is stable: changes will have a big impact on other packages.

Threshold:
The range for this metric is 0 to 1.


Abstractness (A)
The ratio of the number of internal abstract types (i.e. abstract classes and interfaces) to the number of internal types.

Impact:
This metric is an indicator of the package's abstractness. If A is near 0, you have a completely concrete package without any abstract types. If A is near 1, you have only abstract types.


Threshold:
The range for this metric is 0 to 1.


Distance (D) and I/A-Diagrams
D stands for Distance from the main sequence: the perpendicular normalized distance of an assembly from the idealized line A + I = 1 (called the main sequence).
This metric is an indicator of the assembly's balance between abstractness and stability.


Impact:
An assembly squarely on the main sequence is optimally balanced with respect to its abstractness and
stability. Ideal assemblies are either completely abstract and stable (I=0, A=1) or completely
concrete and instable (I=1, A=0).


Threshold:
The range for this metric is 0 to 1, with D=0 indicating an assembly that is coincident with the
main sequence and D=1 indicating an assembly that is as far from the main sequence as possible.
D should have a value between -0.6 and 0.7.

If you draw a diagram from this, you can see if an assembly is in the zone of pain (I and A both close to 0)
or in the zone of uselessness (I and A both close to 1).
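Putting the formulas for I, A and D together in a small sketch (the coupling and type counts below are hypothetical, not taken from any of the analyzed libraries):

```java
public class DesignMetricsSketch {

    // Instability: I = Ce / (Ce + Ca)
    static double instability(int ce, int ca) {
        return (double) ce / (ce + ca);
    }

    // Abstractness: A = abstract types / all types
    static double abstractness(int abstractTypes, int allTypes) {
        return (double) abstractTypes / allTypes;
    }

    // Distance from the main sequence: D = |A + I - 1|
    static double distance(double a, double i) {
        return Math.abs(a + i - 1.0);
    }

    public static void main(String[] args) {
        // Hypothetical package: 8 outgoing and 2 incoming dependencies,
        // 1 abstract type out of 5.
        double i = instability(8, 2);  // 0.8 -> rather instable
        double a = abstractness(1, 5); // 0.2 -> mostly concrete
        // A + I = 1, so this package sits right on the main sequence.
        System.out.println(distance(a, i)); // prints "0.0"
    }
}
```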





That's all theory for now. The third part will start delivering the results of the code analysis for PrimeFaces, RichFaces and ICEFaces. Stay tuned.

Tuesday, December 8, 2009

Software Quality: The Basics I - preface for looking at primefaces, richfaces and icefaces

As announced in my previous post, I am going to take a deeper look at some of the most popular JSF frameworks these days. A deeper look means I am going to do some static code analysis on them. Before publishing the details, I have to give you a brief introduction to what static code analysis is and what to expect from the results.

Wikipedia defines static code analysis like this:
Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software [...].
In most cases the analysis is performed on some version of the source code [...].
The term is usually applied to the analysis performed by an automated tool,
with human analysis being called program understanding, program comprehension or code review.
To make it short: static analysis typically finds mistakes, but some of the mistakes don't matter at all. What is most important is to find the intersection of stupid and important. Furthermore, it highly depends on the context whether bugs matter or not. Static code analysis, at best, might catch 5-10% of the software quality problems in code. This may extend to 80+% for certain specific defects, but overall it is not the magic bullet you are looking for. Anyway, using static analysis is cheaper and more effective than any other technique for catching bugs. If you want to catch more bugs, you need to take a full-blown approach to testing (see picture one, taken from a JavaOne presentation by William Pugh (FindBugs lead)).



My employer has kindly given me permission to use the msg java measuring station (let's call it JMP for short) to do the analysis (Thanks Rainer!). And a co-worker of mine is kindly supporting me in hunting bugs in it and doing the configuration (Thanks Jochen!).
The JMP is a collection of popular code analysis tools. Three tools focus on static code analysis, one on architectural compliance and one tries to determine test coverage.
Besides the plain results from the tools, I will also try to add some expert views (code review :-)).

What metrics are covered?
The JMP covers all metrics generated by the individual tools: an enormous count of about 52 different numbers to interpret. To make this more convenient for the readers, I picked the most common ones.
Having a part I available indicates that there will be a part II :) If you are looking for CCD, ACD, Ca, Ce, I, A, I/A and D, you will have to wait for the next post.

Non Commenting Source Statements (NCSS)
Determines the complexity of methods, classes and files by counting the non-commenting source statements (NCSS). Statements for NCSS are not statements as specified in the Java Language Specification but include all kinds of declarations too.
Roughly speaking, NCSS is approximately equivalent to counting ';' and '{' characters in Java source files.
The NCSS for a class is the sum of the NCSS of all its methods, the NCSS of its nested classes and the number of member variable declarations.
The NCSS for a file is the sum of the NCSS of all its top-level classes, the number of imports and the package declaration.

Impact:
Overly large methods and classes are hard to read and costly to maintain. A large NCSS number often means that a method or class has too many responsibilities and/or functionalities that should be decomposed into smaller units.

Threshold:
Derived from this, experience suggests default thresholds like the following:

The maximum count of classes per package must not exceed 40.
The maximum count of functions per class must not exceed 20.
The maximum NCSS per function must not exceed 25.
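The counting rule above ("count ';' and '{'") can be sketched naively in Java. This illustrates the rule, not the JavaNCSS tool itself, and it ignores plenty of lexical corner cases (char literals, nested constructs):

```java
public class NcssSketch {
    /** Naive NCSS approximation: count ';' and '{' outside comments and strings. */
    static int ncss(String source) {
        String stripped = source
            .replaceAll("(?s)/\\*.*?\\*/", "")             // strip block comments
            .replaceAll("//[^\n]*", "")                    // strip line comments
            .replaceAll("\"(\\\\.|[^\"\\\\])*\"", "\"\""); // empty out string literals
        int count = 0;
        for (char c : stripped.toCharArray()) {
            if (c == ';' || c == '{') count++;
        }
        return count;
    }
    public static void main(String[] args) {
        String snippet = "class A { int x = 1; void f() { x++; } }";
        System.out.println(ncss(snippet)); // 2 '{' + 2 ';' = 4
    }
}
```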

Cyclomatic Complexity Number (CCN)
CCN is also known as the McCabe metric. It measures the complexity of classes and files by counting control flow statements like 'if', 'for', 'while', etc. in methods. Whenever the control flow of a method splits, the CCN counter is incremented by one. Each method has a minimum value of 1 by default.

Impact:
Overly complex methods and classes are hard to understand and test, and costly to maintain. A high CCN often indicates methods or classes that carry too many responsibilities and possibly a flawed design.

Threshold:
A CCN below 10 is quite normal. The maximum count must not exceed 25.
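The "increment per control-flow split" rule can be illustrated with a rough keyword count. Real tools parse the AST; the keyword list here is a simplified assumption and will miscount keywords inside strings or comments:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CcnSketch {
    // Branch points: keywords plus short-circuit operators, each adds one path.
    private static final Pattern BRANCH =
        Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    static int ccn(String methodBody) {
        int ccn = 1; // every method has at least one execution path
        Matcher m = BRANCH.matcher(methodBody);
        while (m.find()) ccn++;
        return ccn;
    }

    public static void main(String[] args) {
        String body = "if (a && b) { for (int i = 0; i < n; i++) {} }";
        System.out.println(ccn(body)); // 1 + if + && + for = 4
    }
}
```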


FindBugs total warnings and density
FindBugs is a program to find bugs in Java programs. It looks for instances of "bug patterns": code instances that are likely to be errors. A complete list of bug patterns is available at http://findbugs.sourceforge.net/bugDescriptions.html.
The metric itself is the density: the number of defects per thousand lines of non-commenting source statements.

Impact:
The total density sums up the overall impression of the code. Code with a high density most probably contains many errors.

Threshold:
The quality of this metric depends on the distribution of the bugs across the categories. Most projects align around a density of 10.
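The density calculation described above (warnings per thousand NCSS) boils down to one line. The input numbers below are made up for illustration and are not taken from the tool output:

```java
public class DefectDensity {
    /** Defect density: warnings per thousand non-commenting source statements. */
    static double density(int warnings, int ncss) {
        return warnings / (ncss / 1000.0);
    }
    public static void main(String[] args) {
        // Hypothetical project: 150 warnings on 12000 NCSS
        System.out.printf("%.2f%n", density(150, 12000)); // prints 12.50
    }
}
```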

There are several categories of bugs reported by this tool, so it is not only the metric that counts but every single bug. Looking only at the metric is not enough; with this tool you should take a deeper look at the reported bugs.
First, review the correctness warnings. Developers will want to fix most of the high and medium priority correctness warnings reported. Once you have reviewed those, you might want to look at some of the other categories.
Next come the bad practice warnings, which are violations of recommended and essential coding practice. Examples include hashCode/equals problems, the Cloneable idiom, dropped exceptions and Serializable problems.
Dodgy warnings cover code that is confusing, anomalous, or written in a way that lends itself to errors. Examples include dead local stores, switch fall-through, unconfirmed casts, and redundant null checks of values known to be null.


CheckStyle Errors
Checkstyle reviews Java code for its compliance with coding standards. The standard checks are applicable to general Java coding style; optional checks are available for Java EE artifacts.

The single metric generated here is the number of errors the tool finds. This is highly dependent on the configuration used with Checkstyle. The configuration used for analyzing the JSF libraries is based on the complete set of Sun's Java Coding Standards without any tailoring. It is not practical to follow all of them, but for a comparison this should be a good place to start. Even if some projects have tailored some of the checks, it should become visible if and how the projects care about code style.

Impact:
Code conventions are important to programmers for a number of reasons: 80% of the lifetime cost of a piece of software goes to maintenance, and hardly any software is maintained for its whole life by the original author.
Following coding conventions improves the readability of the software, allowing engineers to understand new code more quickly and thoroughly.

Threshold:
Depending on the real project setting and its commitment to the set of checks, this should find no errors at all.
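For illustration, a minimal Checkstyle configuration in the format used by the 4.x releases might look like the following. The module selection is my own example, not the JMP's actual configuration:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.2//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_2.dtd">
<!-- Illustrative subset of the Sun coding-convention checks -->
<module name="Checker">
    <module name="TreeWalker">
        <module name="JavadocMethod"/>
        <module name="ConstantName"/>
        <module name="WhitespaceAround"/>
        <module name="MethodLength">
            <property name="max" value="150"/>
        </module>
    </module>
</module>
```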


The second part will cover CCD, ACD, Ca, Ce, I, A, I/A and D. Stay tuned.

Monday, December 7, 2009

Software Quality: JSF Component Libraries

Working for a larger software company has some advantages. One of them is having a software quality checker at hand that can easily be configured to run on any Java project. What is called "msgJavaMessplatz" (Java measuring station) builds on quite popular tools.

- JavaNCSS - A Source Measurement Suite for Java
JavaNCSS is a simple command line utility which measures two standard source code metrics for the Java programming language. The metrics are collected globally, for each class and/or for each function.

- Checkstyle 4.4
Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard.

- FindBugs
FindBugs uses static analysis to look for bugs in Java code.

- Simian v 2.2.17
Simian (Similarity Analyser) identifies duplication in Java Code.

- Dependometer
Dependometer performs a static analysis of physical dependencies within a software system.

- Cobertura
Cobertura is a tool that calculates the percentage of code accessed by tests.

This had been planned for quite some time. Today, I finally managed to set up the testing environment and run a few quality checks with the first candidates.
To give you a brief overview of the candidates, here are the basic metrics in terms of size and quality. I will compile more detailed results during the week and publish selected results. So stay tuned for more ....

PrimeFaces 1.0.0-SNAPSHOT
RichFaces 3.3.X
ICEfaces 1.7.1

Metric                      ICEfaces   RichFaces   PrimeFaces
Package Depth                      7           5            5
Type Inheritance                   6           4            3
NCSS                           38087        9186         8340
# Classes                        520         225          160
# of Functions                  5601        1526          796
# Packages                        66          21           86
# design rule violations         243          39           25
# import rule violations         127          15            6
FindBugs Total Warnings          760         173          217
FindBugs Density                13.2        3.69         9.13


Screenshot of the "msgJavaMessplatz".
..oO(you can not buy the tool ... )

Friday, December 4, 2009

WLS 10.3.2.0, JScaLite TechPreview .. examples ... working :)!

I had some quite disappointing trials running the new JScaLite examples shipped with the Oracle WebLogic Server 11g R1 Patch Set 1 (WLS 10.3.2.0).
It seems as if my calls were heard at Oracle, and they started to write some blog posts about the new WebLogic SCA container tech preview.
Raghav Srinivasan posted "Getting Started with SCA", and this finally gave me the missing hints about what to do to get the examples running.

Here is the complete HowTo:

Follow the steps already described in my previous post:
1) Install WLS 10.3.2.0 (with examples)
2) setup environment variables
3) change example.properties
4) deploy the weblogic-sca-1.0.war
5) point the war files to the library (weblogic.xml)

Now, here comes the music:
6) Change the spring-context.xml files
one located in both example war files:
- JSca_GetTotQty_EAR\JSca_GetTotQty_WAR\WEB-INF\classes\META-INF\jsca
- JSca_GetTotPrice_EAR\JSca_GetTotPrice_WAR\WEB-INF\classes\META-INF\jsca

Replace the <beans tag at the beginning with the following:

<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca"
xmlns:wlsb="http://xmlns.oracle.com/weblogic/weblogic-sca-binding"
xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://xmlns.oracle.com/weblogic/weblogic-sca
http://xmlns.oracle.com/weblogic/weblogic-sca/1.0/weblogic-sca.xsd
http://xmlns.oracle.com/weblogic/weblogic-sca-binding
http://xmlns.oracle.com/weblogic/weblogic-sca-binding/1.0/weblogic-sca-binding.xsd">

Then replace all <binding elements with <wlsb:binding.

7) Now you are ready to build the examples (ant build)
8) and deploy them (ant deploy)

Working behind a corporate proxy leads to a fancy warning:

<Warning> <org.springframework.beans.factory.xml.XmlBeanDefinitionReader> <BEA-000000> <Ignored XML validation warning
org.xml.sax.SAXParseException: schema_reference.4: Failed to read schema document
'http://xmlns.oracle.com/weblogic/weblogic-sca-binding/1.0/weblogic-sca-wlsb:binding.xsd',
because 1) could not find the document; 2) the document could not be read;
3) the root element of the document is not <xsd:schema>.

This happens because the WLS does not have access to the needed schema files. Therefore the deployment takes quite some time until the timeouts occur.

Anyhow, if you switch your configuration and set a proxy in your setDomainEnv.bat/.sh JAVA_PROPERTIES,
like this: -Dhttp.proxyHost=myproxyserver.com -Dhttp.proxyPort=80

you run into another problem, a SAX parser exception:

org.xml.sax.SAXParseException: White spaces are required between publicId and systemId

That is not too funny, as the deployment will fail with this. So you'd better accept the warnings and get the app deployed :)


9) You can run the examples from the command line (ant run). This opens up your favorite browser and
points it to the URL http://<host>:<port>/ShoppingCartCtx/ShoppingCart

Tuesday, December 1, 2009

Java EE 6 Specification summary - Zoom Text

Having the final Java EE 6 in place, it is time for a nice presentation of all the included technologies. Here you are. This was done with a little help from the "ZoomText Tool" provided by Timo Elliott. Navigate and have fun. Java EE 6 includes 38 JSRs.


Breaking news: Java EE 6 is done!

As posted 30 minutes ago by Roberto, the Java EE 6 spec is finally done!

Breaking news: Java EE 6 is done! The final approval ballot closed 9 minutes ago.
http://jcp.org/en/jsr/results?id=5025
(source: http://twitter.com/robc2/status/6229390753)

Intel and SAP abstained from the vote. SpringSource did not vote at all.
The Apache Software Foundation voted no. All others voted yes. Therefore the Executive Committee for SE/EE has approved the final approval ballot.

As usual, there are many complaints about the licensing model (missing "full license terms"). This is also the basis for the ASF vote.

On 2009-11-30 Apache Software Foundation voted No with the following comment:
The Apache Software Foundation's vote is based on the point of view that this spec lead - Sun - is in violation of the JSPA

http://www.apache.org/jcp/sunopenletter.html

and therefore shouldn't be allowed to lead other JSRs until the above matter is resolved.

This vote is not a comment on the technical merits of the JSR. If not for the issue of the spec lead, the ASF would have otherwise voted "yes".

IBM complained that the newest JSRs were included very late into the umbrella specification.

With the exception of the JSR 330 and JSR 299 injection support defined by the EE 6 platform, we believe that this new specification brings value to the industry. We remain concerned that the injection support defined by the platform will create unnecessary difficulties for the community. IBM will continue to support both expert groups in the development of a single integrated and extensible injection programming model.