Posts Tagged ‘java’

Using system properties with Spring

This is actually pretty easy to pull off. First you must use a property-placeholder like this:

<context:property-placeholder location="" system-properties-mode="FALLBACK"/>

The system-properties-mode attribute supports the following options:

  • FALLBACK – Indicates placeholders should be resolved against any local properties and then against system properties
  • NEVER – Indicates placeholders should be resolved only against local properties and never against system properties
  • OVERRIDE – Indicates placeholders should be resolved first against system properties and then against any local properties
  • ENVIRONMENT – Indicates placeholders should be resolved against the current Environment and against any local properties. This is the default.

The location attribute is optional, so depending on your needs you can skip configuring it.

There are some other optional parameters available including:

  • ignore-resource-not-found – Specifies if failure to find the property resource location should be ignored. Default is “false”, meaning that if there is no file in the location specified an exception will be raised at runtime.
  • ignore-unresolvable – Specifies if failure to find the property value to replace a key should be ignored. Default is “false”, meaning that this placeholder configurer will raise an exception if it cannot resolve a key. Set to “true” to allow the configurer to pass on the key to any others in the context that have not yet visited the key in question.

A system property can then be utilized like this:

<property name="username" value="#{systemProperties.username}"/>

The # is not a mistake; it marks a Spring expression, which here reads the value from the implicit systemProperties bean.
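To see what FALLBACK mode means in practice, here is a plain-Java sketch of the lookup order; the resolve helper and the property names are made up for illustration, not part of Spring's API:

```java
import java.util.Properties;

public class SystemPropsDemo {
    // FALLBACK in miniature: prefer the local value, else fall back to the system property.
    static String resolve(Properties local, String key) {
        String value = local.getProperty(key);
        return value != null ? value : System.getProperty(key);
    }

    public static void main(String[] args) {
        Properties local = new Properties();
        local.setProperty("app.name", "demo");
        System.setProperty("app.owner", "alice");

        System.out.println(resolve(local, "app.name"));  // prints demo (local wins)
        System.out.println(resolve(local, "app.owner")); // prints alice (system property fallback)
    }
}
```

OVERRIDE mode simply flips the two lookups, checking System.getProperty first.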


Hudson Startup Trigger 0.1

Last night I released a trigger for Hudson that allows a build to be triggered when Hudson first starts up. While I don’t have any use for it, it was created in response to HUDSON-3669.

Hopefully someone will be able to make good use of it.


Google App Engine (OKCJUG lighting talk slides)

Here are my slides for the OKC Java User Group lightning talks tomorrow on Google App Engine. Enjoy!


Issues with IntelliJ IDEA 9 M1 (Maia) in Linux and OSX (build #10372)

Unlike the goddess, IntelliJ 9 M1 isn't shy about being buggy

IntelliJ 9 (codenamed Maia) looks promising with lots and lots of great features. There seems to be an endless list of newly supported technologies, tweaks, and usability features.

Maia has been superb in Mac OSX Snow Leopard. Unfortunately, my Ubuntu 9.04 desktop is an entirely different story.

A big feature of Maia that I’m looking forward to is background file indexing. It sounds like a great idea: being able to edit and browse projects instantly while you wait for the indexing to finish. The catch is that advanced browsing and editing features are not available until after indexing finishes.

In both OSX and Linux I ran into issues with the background file indexing.

Comparatively, my experience with background file indexing on OSX was less severe, so I’ll start there. As when I normally load a project with IntelliJ, a loading dialog pops up with a status bar and I can watch the names of files zoom by as they’re being indexed. Unlike previous versions of IntelliJ, there’s a button to put the indexing into the background.

Instantly I clicked the button. I mean, why waste time, right? Unfortunately, browsing was completely unusable, with everything being sloooooow. In IntelliJ’s defense, I have only booted up IntelliJ once so far on my Mac, and this is a pre-release. (I usually just leave IntelliJ running, and beyond this, everything has been wonderful.)

The issues with Ubuntu were a bit more troubling. To my surprise, the background indexing did not bring the UI to a screeching halt like it did on my Mac. What I wasn’t prepared for was worse.

With background indexing appearing to run smoothly, I was pumped to get the most out of the new feature. Immediately I dug into the directory with the project’s JSPs. Annoyingly, every minute or so the listing of files would disappear and be rebuilt. After a few times, I realized the background indexing would finish, then start right back up a minute later.

This restarting of the background indexing went on for about a half hour. I was busy with some activities that didn’t require a PC, so I just let it do its thing, hoping it would stop. Of course the constant indexing did not stop, and the directory listing did not stop reloading.

Giving up, I restarted IntelliJ, and everything has loaded just fine every time since.

The next two problems showed up intermittently in Linux.

First, on more than one occasion and seemingly at random, if I had two files open in two tabs, selecting the unselected tab would do nothing except show that the second tab was selected. The displayed file contents would not show up. I could open additional files and switch to them just fine, but I could never get the other file’s contents to show up even then. Closing and reopening the misbehaving tabs fixed the issue each time.

The next issue was more troubling: I would simply lose the ability to edit files. I could type until I was blue in the face and text would not show up. The inability to edit files cropped up frequently and was not solvable without restarting IntelliJ.

After poking around at IntelliJ IDEA 9 M1 on Linux, I gave up and reverted to IntelliJ 8. I am still successfully using IntelliJ 9 M1 on OSX, but my usage has been very light lately due to other events.

I sincerely hope IntelliJ 9 gets to a stable point. IntelliJ is such a time saver that I really do not want to program in Java with anything less feature-full.

Bottom line: The new features in Maia are wonderful. Feel free to download and use Maia, just don’t expect to be throwing out IntelliJ 8 just yet.


UnsatisfiedLinkError: checkExtVer when unit testing gwt-ext

I ran into an issue while trying to test some innocent looking GWT-EXT code for a Google Web Toolkit app:

    public void onActivate( com.gwtext.client.widgets.Panel panel ) {
        if ( !( panel instanceof MyCustomGwtExtPanel ) ) {
            throw new IllegalArgumentException( "Panel must be a MyCustomGwtExtPanel." );
        }

        // ... snip ...
    }

The test

I had one test to test that an exception is thrown:

    @Test(expectedExceptions = IllegalArgumentException.class)
    public void testOnActivate_unexpectedPanel() {
        Panel panel = new Panel();
        new TripReportsPanelListener().onActivate( panel );
    }

Test failures

It was so simple, I guess it just had to fail:

Caused by: java.lang.UnsupportedOperationException: ERROR: GWT.create() is only usable in client code! It cannot be called, for example, from server code. If you are running a unit test, check that your test case extends GWTTestCase and that GWT.create() is not called from within an initializer or constructor.
… 23 more

By using GWTMockUtilities.disarm in the setup and GWTMockUtilities.restore in the teardown, I made some progress but ended up with this error instead:

java.lang.UnsatisfiedLinkError: checkExtVer
at com.gwtext.client.widgets.Component.checkExtVer(Native Method)
at com.gwtext.client.widgets.Component.&lt;init&gt;(

The solution

The solution is right there in the stacktrace from Google.

I extended GWTTestCase but continued to get the same errors. The reason is simple enough, but has bitten me a couple of times before. GWTTestCase is written using the old JUnit 3 style of tests. My tests were using TestNG annotations, which tells TestNG to run the tests as TestNG tests. Consequently the setup and teardown methods in GWTTestCase never get called. JUnit behaves the same way if you mix and match JUnit 3 and JUnit 4 style test declarations.

It’s been a frustrating experience with UnsatisfiedLinkErrors and NoClassDefFoundErrors, but I eventually got what should be a simple unit test working. It sucks that my seemingly innocent unit test has so many dependencies. It also stinks that GWT must be started up to run the tests, increasing the feedback loop by 10-15 seconds.


Five code commenting anti-patterns

Bad comments can be a little bit of a problem, even if they appear harmless.

In some projects (most?) comments multiply like tribbles, creeping up everywhere. Sometimes the comments are especially useful, but other times useless comments are destructive to maintainability. Contrary to some beliefs, bad and inaccurate comments are not mostly harmless.

Bad comments take many forms. There are five forms that I think could be considered anti-patterns:

1. Programming 101 comments

Consider the following code snippet:

// Print "Hello World" to
// standard output without
// a new line.
System.out.print("Hello World");

No doubt this sort of commenting is useful to students in a class, but in the real world even non-Java programmers can figure this one out. Programming 101 comments are most commonly used by students and recent graduates after learning a new language feature or library.

2. Stating the obvious comments

Consider the following code snippet:

// can use express checkout
if (shoppingCart.getItems().size() > 10) {
    System.out.println("Please be courteous; don't use express checkout.");
}

Consider stating-the-obvious comments a code smell. In fact we can improve this code by making it self-documenting. Instead of writing a comment, make the code say it instead:

public static final int MAX_EXPRESS_CHECKOUT_ITEMS = 10;

if (!canUseExpressCheckout(shoppingCart)) {
    System.out.println("Please be courteous; don't use express checkout.");
}

public boolean canUseExpressCheckout(ShoppingCart shoppingCart) {
    return shoppingCart.getItems().size() <= MAX_EXPRESS_CHECKOUT_ITEMS;
}
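For completeness, here’s a compilable version of that refactoring. The ShoppingCart below is a minimal stand-in for whatever cart class you actually have, not a real library class:

```java
import java.util.ArrayList;
import java.util.List;

public class ExpressCheckout {
    public static final int MAX_EXPRESS_CHECKOUT_ITEMS = 10;

    // Minimal stand-in for the post's ShoppingCart.
    static class ShoppingCart {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        List<String> getItems() { return items; }
    }

    static boolean canUseExpressCheckout(ShoppingCart cart) {
        return cart.getItems().size() <= MAX_EXPRESS_CHECKOUT_ITEMS;
    }

    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();
        for (int i = 0; i < 12; i++) {
            cart.add("item" + i);
        }
        // The intent reads directly off the method name; no comment required.
        if (!canUseExpressCheckout(cart)) {
            System.out.println("Please be courteous; don't use express checkout.");
        }
    }
}
```

Note the named constant and the named predicate carry all the information the original comment did.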

3. Comment Me! comments

“Comment Me!” comments are the lazy man’s way of commenting. If you’re getting paid by the number of lines or the number of comments, then this type of commenting is great. Otherwise, it’s a waste of your time to write them and a waste of time for the person who has to read this junk.

Don't let your code look like this.

4. Code memorial comments

Code memorial comments proclaim “The code is dead. Long live the dead code!” Not exactly the kind of proclamation any new king would like to hear.

Rather than letting bad or good code die and receive a proper burial in your favorite code repository, these comments are embalmed and enshrined for all to see. When old code is enshrined, it takes away precious screen real estate from the living code.

I’ve heard several defenses of this anti-pattern: requirements keep changing, we’ll just have to put it back, so keep it around to make it easy to put back.

I suspect the reason for holding onto dead code is deeper than any of that. Imagine spending hours or days working on a problem. No doubt it can be emotional to throw away your hard work.

Delete the old code anyhow. The new found cleanliness will be extremely liberating. Besides, you can always lay flowers by its grave in your version control system’s history.

5. I WAS HERE comments

Comments can take on the form of being a replacement for version control features (like CVS’ annotate command). These can take on forms similar to this:

// ARL 010109 begin
... some code here ...
// ARL 010109 end

There are several problems with I WAS HERE comments. First, it clutters code unnecessarily. Second, it duplicates functionality that already exists in version control systems. Third, it’s error prone.

the cake is a lie.

The comment is a lie.

Comments always lie. If not now, they will soon. Code often changes but the comments do not. I have seen countless cases of comments that lied literally the second they were committed.

Bad or incorrect documentation is worse than no documentation. I suppose Stack Overflow isn’t exactly the best place to reach a consensus, but the comments and stories are interesting. In my not-so-humble opinion, misleading documentation – especially documentation that appears authoritative – really sucks.

Striving for clean code, simple code (low cyclomatic complexity), and self-documenting code (good class, method, and variable names plus unit tests) does wonders to lessen the need for comments.


Hudson and the Sonar plugin fail: MavenInstallation NoSuchMethodError

No. Not this Hudson.

We ran into an interesting and less than informative error when configuring Maven with our Hudson installation. Maven worked great, as expected, but the Sonar plugin stopped working and was causing builds to fail.

The error message wasn’t terribly helpful:

FATAL: hudson.tasks.Maven$MavenInstallation.forNode(Lhudson/model/Node;Lhudson/model/TaskListener;)Lhudson/tasks/Maven$MavenInstallation;
java.lang.NoSuchMethodError: hudson.tasks.Maven$MavenInstallation.forNode(Lhudson/model/Node;Lhudson/model/TaskListener;)Lhudson/tasks/Maven$MavenInstallation;
at hudson.plugins.sonar.SonarPublisher.getMavenInstallationForSonar(
at hudson.plugins.sonar.SonarPublisher.executeSonar(
at hudson.plugins.sonar.SonarPublisher.perform(
at hudson.model.AbstractBuild$AbstractRunner.performAllBuildStep(
at hudson.model.AbstractBuild$AbstractRunner.performAllBuildStep(
at hudson.model.Build$RunnerImpl.post2(
at hudson.model.AbstractBuild$
at hudson.model.ResourceController.execute(

A little Googling turned up just two hits.

One result was helpful: it said the Sonar plugin is compatible with Hudson 1.306+. Currently we’re running 1.303. We’re not exactly far behind, but apparently far enough behind.

Backing up Hudson

There is a backup plugin for Hudson, and the plugin would be ideal, but in case just installing the plugin screws something up, best to do a manual backup first.

The easiest way to manually back up Hudson is to just copy your Hudson working directory. However, space is limited for us, so a more selective backup was necessary. This script seemed to back up the most important configuration files (it wouldn’t make for a pretty recovery, but it’d work):


# assumes HUDSON_HOME and NEWHUDSON_HOME are already set
for job in "$HUDSON_HOME"/jobs/*; do
        echo "Processing $job"
        name=$(basename "$job")
        mkdir -p "$NEWHUDSON_HOME/jobs/$name"
        cp "$job/config.xml" "$NEWHUDSON_HOME/jobs/$name/config.xml"
done

All aboard!

All aboard the fail boat

The upgrade appeared to go well, but after manually starting the Windows service, I got an error. Amusingly, the hudson.err.log showed some slight inconsistencies:

Jul 27, 2009 2:38:03 PM hudson.model.UpdateCenter$DownloadJob run
INFO: Installation successful: hudson.war
Invalid or corrupt jarfile C:\hudson\hudson.war

Hudson the Butler can’t make up his mind; it’s claiming success before imploding on itself.

Hudson recuperated

Skimming around, very annoyed that my butler had blatantly lied to me, I noticed hudson.war was sitting at only 2 MB. Yah, that can’t be right.

Luckily the fix was easy: manually download the newest hudson.war and replace the messed-up version.

It turns out I really did not need to back up Hudson. Though naturally, if I hadn’t backed up Hudson, I would have needed my backup!

Upgrading to Hudson 1.317 solved the mysterious java.lang.NoSuchMethodError error. I would not have thought configuring a Maven installation rather than using Hudson’s default would cause issues. Go figure.


Unit testing private methods using reflection (and other solutions)

Unit testing isn't for dummies

When it comes to unit testing, you might find yourself wanting to test private methods. Here are four solutions, some much better than others.

1. Don’t test private methods (refactor!)

If you find yourself needing to test private methods, your code is trying to tell you something – listen up!

Unit tests should test the behavior of classes and not the implementation details. Unit tests should be able to naturally cover your private methods. If you find this difficult or impossible to do, consider it a code smell.

Testing implementation will become a barrier to refactoring since you will be unable to change the implementation without updating tests. As Charles Miller put it “The amount of work you have to do to improve your code becomes the amount needed to change the private methods, _plus_ that required to change the tests. As such, you’re less likely to make the improvement.”

If you choose to test private methods keep in mind you are likely taking on some technical debt.

Pros: You’re not testing private methods. Best solution. Results in better design.
Cons: Can sometimes be tricky to implement, especially in legacy applications.

2. Use reflection

Thanks to reflection, access levels are more of a suggestion than a requirement. With a little bit of code, we can access any private method (or field for that matter).

First use reflection to change the access level:

MySuperCoolDao mySuperCoolDao = new MySuperCoolDao();
// get the method by name "isThisCool" and signature
Method isThisCoolMethod = mySuperCoolDao.getClass()
        .getDeclaredMethod( "isThisCool", String.class );
// make the private method accessible
isThisCoolMethod.setAccessible( true );

Then we use the newly accessible method:

boolean isWindowsCool = (Boolean) isThisCoolMethod.invoke( mySuperCoolDao, "Windows" );
boolean isUbuntuCool = (Boolean) isThisCoolMethod.invoke( mySuperCoolDao, "Ubuntu" );

assertFalse( "Windows is not cool...", isWindowsCool );
assertTrue( "Ubuntu is very cool", isUbuntuCool );

A big negative to this technique is maintainability. In fact this happened to me just today: I refactored away a private method and, as expected, the test failed. Unlike other solutions (like the package-level solution below), the failure was at runtime – not compile time.

Pros: Relatively easy and quick. Does not break encapsulation.
Cons: Tests become busier. More difficult to keep tests updated with production code. Security settings could prevent this technique from working.
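Putting the reflection steps together, here’s a self-contained sketch; MySuperCoolDao and isThisCool are stand-ins, as in the snippets above:

```java
import java.lang.reflect.Method;

public class ReflectionAccessDemo {
    static class MySuperCoolDao {
        private boolean isThisCool(String os) {
            return !"Windows".equals(os);
        }
    }

    public static void main(String[] args) throws Exception {
        MySuperCoolDao dao = new MySuperCoolDao();
        // look up the private method by name and parameter types
        Method isThisCool = dao.getClass().getDeclaredMethod("isThisCool", String.class);
        // bypass the private access modifier
        isThisCool.setAccessible(true);

        System.out.println(isThisCool.invoke(dao, "Windows")); // prints false
        System.out.println(isThisCool.invoke(dao, "Ubuntu"));  // prints true
    }
}
```

Note that invoke returns Object, so the result needs a cast (or autounboxing via Boolean) when you assert on it.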

3. Use package level access

This solution is very easy: simply remove private to get the (default) package-level access. As long as your tests are in the same package as the production code, no problem. Since it’s good practice for your unit tests to be in the same package (but in a separate source folder), this works well.
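A quick sketch of the idea; WordUtil and capitalize are invented names for illustration:

```java
public class WordUtil {
    // package-private (no modifier) instead of private:
    // a test class in the same package can call this directly.
    static String capitalize(String word) {
        if (word == null || word.isEmpty()) {
            return word;
        }
        return Character.toUpperCase(word.charAt(0)) + word.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(capitalize("ubuntu")); // prints Ubuntu
    }
}
```

The test then calls WordUtil.capitalize(...) with no reflection required, and a refactoring that removes the method fails at compile time rather than at runtime.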

Right out of college, this was the first way I learned to handle these situations. Almost every method was package level. It never really smelled quite right, but the positives outweighed the negatives. In hindsight it might not have been the best solution, but there are far worse things.

Pros: Quick and easy.
Cons: Breaks encapsulation. Classes become a little noisier (although not as bad as using public).

4. Mix production and test code

This is the worst of all the solutions, so I hesitate to even bring it up. By making your test class an inner class in your production code, your tests will have access to private methods. Unfortunately this solution leaves your production code dependent on your test code.

Challenge: is there a legitimate reason to do this? I can’t think of one.

Pros: At least you’re writing tests! Weaksauce, I know.
Cons: Production code contains more than production code.

Quick update: Chad Bradley informed me that it is possible to test private methods using Groovy. I would be hesitant to use the technique without knowing for certain whether it’s a bug that will get fixed or an actual language feature.


Forking JUnit tests in Ant (watch out!)

Turtles are surprisingly fast.

I’ve noticed for a while our unit tests run very fast through my IDE but take forever on our build box. At first I attributed this to our severely overloaded build box, but I was wrong.

In our particular case the tests take 5 minutes 3 seconds to run through Ant while forking for each test class. Very not cool.

If you’re using fork to run your JUnit tests in Ant, there are two attributes to be concerned with: fork and forkmode.

The forkmode attribute is the big one here. Possible values are “perTest” (default), “perBatch”, and “once”.

It turns out that “perTest” is the default, meaning forking is done for each test class. While that might be what you want, it can make your tests significantly slower. Using the “once” option instead means forking happens just once; all your tests are run together in a single JVM.

After switching the forkmode to “once”, our test run time plummeted to 33 seconds, for a net gain of about 4.5 minutes. As Borat would say, “very niiiice”. (33 seconds is still longer than it should be, but that’s for another post.)
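For reference, the relevant Ant configuration looks something like this. This is a sketch, not our actual build file; the classpath refid, report directory, and fileset layout are invented:

```xml
<!-- fork="true" runs the tests in a separate JVM;
     forkmode="once" forks that JVM only once for the whole batch
     instead of once per test class (the "perTest" default). -->
<junit fork="true" forkmode="once" printsummary="true">
  <classpath refid="test.classpath"/>
  <formatter type="xml"/>
  <batchtest todir="build/test-reports">
    <fileset dir="build/test-classes" includes="**/*Test.class"/>
  </batchtest>
</junit>
```

The other value, “perBatch”, forks once per batchtest/test element, a middle ground between the two.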

There are some very good technical reasons to run all your tests in the same JVM. As Chris pointed out, it might reveal some issues with your production code or test code:

Having the tests run each in its own JVM also covers up problems. You can create whatever sort of mess you like and it’ll all be swept away before the next test runs. While setting the forkmode to “once” I found a database connection leak in some test code. This sort of problem would be much more visible if all the test code was running in a single JVM, and it reminded me of Martin Fowler’s advice on testing resource pools.

Of course Martin Fowler comes through again with good advice.


Solved: Digester.getParser: org.xml.sax.SAXNotRecognizedException

I ran into a nice little problem trying to get RichFaces deployed on Oracle’s OC4J:

	at oracle.xml.jaxp.JXSAXParserFactory.setFeature(
	at org.apache.commons.digester.parser.XercesParser.configureXerces(
	at org.apache.commons.digester.parser.XercesParser.newSAXParser(
        ... snip ...

From a little bit of Googling, I found several people running into this problem or something very similar – but frustratingly, not a whole lot of solutions. One solution that did not work for me (but was confirmed by some to do the trick) was to run the OC4J standalone server with the following parameters:


Luckily a co-worker (who is waaaaaaay too humble for his own good and won’t let me use his name here) figured out that adding the following to orion-application.xml does the trick:


		<remove-inherited name="oracle.xml"/>
		<remove-inherited name=""/>
		<remove-inherited name="oracle.toplink"/>


Oracle has details on what the above does in their documentation on the OC4J class loading framework. Good luck reading that; it looks terribly boring. :-)