I was working on a technical presentation titled “Performance Engineering on the Java Platform”. The presentation emphasizes various techniques and tools used for performance optimization and tuning. While preparing it, I googled for examples showing a memory leak in a Java application. I couldn’t find any convincing examples, and moreover, discussions in Java forums were touting “there is no such thing as a memory leak in Java”, thanks to its automatic memory management. That kept me wondering… Is it true? Is a memory leak really impossible in Java?
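For the record, a memory leak in the Java sense is easy to produce: objects the application will never use again can stay strongly reachable, so the garbage collector is never allowed to reclaim them. Here is a minimal sketch — the class and cache names are mine, purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical cache illustrating a Java "memory leak": every buffer
// stays strongly reachable from a static collection, so the garbage
// collector can never reclaim it, even though the application is done with it.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    static void handleRequest(int requestId) {
        // Pretend each request caches a buffer and never evicts it.
        CACHE.add(new byte[1024]);
    }

    static int cachedEntries() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest(i);
        }
        // All 1000 buffers are still reachable via CACHE, so none can be collected.
        System.out.println("Entries still held: " + cachedEntries());
    }
}
```

The JVM never reclaims these buffers because reachability, not usefulness, is what the collector sees — which is exactly why “leaks” survive automatic memory management.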
I have worked with various profiling tools like JProfiler, Introscope, Rational Quantify, etc. But lately, I am hooked on JConsole, a JMX-compliant performance monitoring tool available since JDK 5. The good thing about this tool is that it doesn’t need a license, is very simple to set up, and is straight to the point. It lets you monitor memory pools, class-loading status, and thread status, and also manage MBeans.
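As a reference point, JConsole can attach to a Tomcat JVM once the JMX agent is enabled. A typical setup looks something like this — the port number and the disabled authentication/SSL are illustrative only, not something to leave on in production:

```shell
# Example only: enable the JMX agent so JConsole can attach remotely.
# Port 9004 is arbitrary; secure auth/SSL before using this beyond a dev box.
export CATALINA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9004 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```

With these options set before starting Tomcat, JConsole can connect to `localhost:9004` and show the memory pools, including Perm Gen, live.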
I picked one of my current projects, which runs on Tomcat 5.0.28 and uses Spring IoC and the XStream serialization library, as a case study for this presentation. I started monitoring this app using JConsole and captured the memory usage and statistics across various scenarios and use cases. During this exercise, I had to re-deploy the app to Tomcat many times. And, to my surprise, voila… Tomcat was broken, and it spat out the most dreadful “OutOfMemoryError: PermGen space” in its console.

Initially, I didn’t know what to blame, but I listed out the usual suspects. Maybe the application code has some memory leaks, maybe Spring is unnecessarily instantiating all the beans, or maybe XStream, which does XML serialization using reflection, is the culprit. I started analyzing carefully from all those angles. I wrote a simple stress/load test using an ANT script, which would just un-deploy and deploy the app to Tomcat 25 times. While the script was running, I monitored the “Perm Gen” memory pool usage. Theoretically, when the app is un-deployed, the “Perm Gen” memory pool usage should come down; but it wasn’t coming down, and the usage climbed higher and higher with every un-deploy and deploy cycle. The same was true of class unloading: only a trivial number of classes were unloaded during the whole test run.

The PermGen pool is used by the JVM to store metadata about classes and methods. This strongly suggested that something was wrong in Tomcat’s class loaders: they were not releasing or unloading the classes.
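The failure mode can be sketched in plain Java. Any reference that outlives the web application — here simulated by a container-scoped registry; all names are hypothetical — pins the registered object and its Class, and in a real container that pins the webapp’s defining ClassLoader too, so every class it loaded stays in PermGen:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the classloader-leak pattern. CONTAINER_REGISTRY stands in for
// container- or shared-library-scoped state that survives webapp undeploy.
public class ClassLoaderPin {
    static final Map<String, Object> CONTAINER_REGISTRY =
            new HashMap<String, Object>();

    public static void main(String[] args) {
        // Simulate a per-webapp classloader (a real container uses one per app).
        ClassLoader webappLoader =
                new ClassLoader(ClassLoaderPin.class.getClassLoader()) {};

        // Webapp code registers an object into container-scoped state.
        // In a real leak, this instance's Class was defined by webappLoader.
        Object webappObject = new Object() {};
        CONTAINER_REGISTRY.put("listener", webappObject);

        // "Undeploy": the container drops its own reference to the loader...
        webappLoader = null;
        System.gc();

        // ...but the registry entry is still strongly reachable, so the object,
        // its Class, and (in a real container) the defining ClassLoader and all
        // of its classes can never be collected out of PermGen.
        System.out.println("Still pinned: "
                + (CONTAINER_REGISTRY.get("listener") != null));
    }
}
```

This is the shape of the Tomcat 5.0.28 problem described above: as long as one such reference survives each un-deploy, every deploy cycle adds another full set of classes to PermGen until the pool overflows.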
So, what’s the solution? I could increase the maximum perm size by setting the -XX:MaxPermSize JVM option. Is that going to be a permanent fix? Obviously not: if Tomcat dies after 24 successive deployments now, then after I increase the max perm size it would just break at the (24+n)th deployment. I started doing some research online regarding this issue. There were recommendations like using JDK 1.6 instead of JDK 1.5 (I was using jdk1.5.0_06 for this exercise). I tried that, but the result was the same. Then I thought, let’s try the latest version of Tomcat (at the time of this writing, v6.0.10 is the latest stable release). I ran my test script with the same application code against the new version of Tomcat. Wow… the test passed: the memory pool usage and the number of loaded classes went up and down as they were supposed to, and there was NO “OutOfMemoryError”. Tomcat 5.0.28 apparently had a bug in its “WebAppClassLoader”, which was unnecessarily holding references to dynamically loaded classes.
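For completeness, the band-aid looks like this — 256m is an arbitrary example value, and it only delays the failure rather than fixing the leak:

```shell
# Raising the PermGen ceiling buys more deploy cycles, nothing more.
export CATALINA_OPTS="-XX:MaxPermSize=256m"
```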
Here are the lessons learned… No software is 100% bug-free. I have been using Tomcat for a long time, and it’s one of the strongest and most stable open-source products I have ever used. But still, you can run into things like this.
Garbage collection is just a “Machine Intelligence” process. Never underestimate “Human Intelligence” in favor of “Machine Intelligence”.