Busting PermGen Myths
In my previous post I explained what can cause java.lang.OutOfMemoryError: PermGen space crashes. Now it is time to talk about possible solutions to the problem. Or, more precisely, about what the Internet suggests as possible solutions. Unfortunately, I can only say that I felt my inner Jamie Hyneman from MythBusters awakening when going through the different “expert opinions” on the subject.
I googled for the current common knowledge about ways to solve java.lang.OutOfMemoryError: PermGen space crashes and went through a couple dozen of the most relevant results. Fortunately, most of the suggestions have already been distilled into this thread on the well-respected StackOverflow. As you can see, the thread is truly popular and has some highly voted answers. But the irony is that the whole thread contains exactly zero solutions I could recommend myself. Well, aside from “find the cause of the memory leak”, which is absolutely correct, of course, but not particularly helpful as an answer to the question “how do I solve a memory leak?”. Let us review the suggestions put forward on the SO page, one by one.
Increase the size of the Permanent Generation
There are two possible reasons for the java.lang.OutOfMemoryError: PermGen space error.
The first is that the application server and/or the application genuinely uses so many classes that they do not fit into the default-sized Permanent Generation. This is entirely possible and, in fact, not that rare. In this case increasing the size of the Permanent Generation can really save the day. If your only problem is fitting too much furniture into too small a house, then buy a bigger house!
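For completeness, this is roughly what the “bigger house” looks like on a HotSpot JVM. The sizes are arbitrary examples, and server.jar stands in for whatever actually launches your server:

    java -XX:PermSize=256m -XX:MaxPermSize=512m -jar server.jar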
But what if your over-caring mother sends you new furniture every week? You cannot keep moving to bigger houses over and over again. That is exactly the situation with memory leaks in general, and with the classloader leaks described in my previous post in particular. Let me be clear here: no increase in Permanent Generation size will save you from a classloader leak. It can only postpone the crash, and make it harder to predict how many redeployments your server will survive.
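To make the mechanics concrete, here is a minimal sketch of how such a leak typically arises. LeakyRegistry is a hypothetical class loaded by the server’s own classloader; the listener passed to it comes from a deployed application:

    // Hypothetical class, loaded by the SERVER's classloader.
    // Its static field lives as long as the server itself does.
    public class LeakyRegistry {
        private static final java.util.List<Object> LISTENERS =
                new java.util.ArrayList<Object>();

        public static void register(Object listener) {
            // Hard reference to an application object; never removed.
            LISTENERS.add(listener);
        }
    }

If the application registers an instance of one of its own classes and never deregisters it, this single reference keeps the application’s classloader reachable after undeploy. None of the class definitions it loaded into the Permanent Generation can then be reclaimed, and every redeploy adds a fresh copy of them.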
-XX:+CMSClassUnloadingEnabled and -XX:+CMSPermGenSweepingEnabled
The most popular answer on StackOverflow was to add these two options to the server’s command line. And, they say, “maybe add -XX:+UseConcMarkSweepGC too. Just to be sure”. My first problem with these JVM flags is that there is no explanation available of what they really do. Neither in the SO answer (and I don’t like answers that tell you to do something without explaining why you should do it), nor, as far as I can tell, anywhere else on the Internet.
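For the record, the suggested incantation would look roughly like this (again with server.jar as a stand-in for your actual launcher):

    java -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled \
         -XX:+UseConcMarkSweepGC -jar server.jar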
Really, I was unable to find any documentation about these options, except for this page. But, in fact, that does not even matter: no amount of tinkering with Garbage Collector options will help you in the case of a classloader leak, because a memory leak is, by definition, a situation where the GC falls short. If there is a valid live hard reference from somewhere within your server’s classloader to an object or class of your application, the GC will never consider it garbage and will never reclaim it. Sure, all these JVM flags look very smart and magical. They may even be required in some situations. But they are certainly not sufficient, and they will not solve your Permanent Generation leak.
Switch to JRockit
The next proposition was to switch to the JRockit JVM. The rationale was that as JRockit has no Permanent Generation, one cannot run out of it. Certainly an interesting proposition. Unfortunately, it will not solve our problem either.
The only result of this “solution” will be getting a java.lang.OutOfMemoryError: Java heap space instead of the java.lang.OutOfMemoryError: PermGen space. In the absence of a separate generation for class definitions, JRockit stores them in the regular Java heap. And as long as the root cause of the leak is not fixed, those class definitions will fill even the largest heap, given enough time.
Restart the server
Yet another way to pretend that the problem is solved is to restart the application server from time to time: instead of redeploying the application, just restart the whole server. But the first time you see an application server with more than one application deployed, you will know that this is rarely possible in a production environment. And it is not really a solution anyway. It is a way of burying your head in the sand.
Rely on Tomcat’s built-in protection
This suggestion is actually not as hopeless as the previous ones: recent Tomcat versions really do try to work around classloader leaks. See for yourself in their documentation. IF you can use Tomcat as your target server, and IF your leak is one of those Tomcat can successfully fight, then maybe, just maybe, you are lucky and the problem is solved for you.
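For illustration, one visible piece of that effort is the leak prevention listener shipped with Tomcat, declared in server.xml (recent versions enable it by default, so this is only a sketch of where it lives):

    <!-- server.xml -->
    <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />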
Use your favorite profiler tool
This may be a viable solution too, but again with a couple of IFs. Firstly, you must be able to use the profiler in the affected environment, and as I have mentioned in another post, profilers impose a level of overhead that may not be acceptable in a (production) environment. Secondly, you must know how to use the profiler to extract the required information and deduce the location of the leak. And my 10+ years of experience show that this is very rarely the case.
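If a full-blown profiler is off limits in your environment, a lighter-weight variant of the same workflow is to capture a heap dump and analyze it offline, for example in Eclipse MAT, tracing which GC root is holding on to the dead classloader. Both lines below use standard HotSpot tooling; the dump path and the <pid> are placeholders:

    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof -jar server.jar
    jmap -dump:format=b,file=/tmp/dump.hprof <pid>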
So far we have not seen any definite solution to the java.lang.OutOfMemoryError: PermGen space error. A few of the proposals can be viable in some cases. But I was astounded by the fact that the majority of them were just plain invalid! You could waste days or weeks trying them out without even starting to solve the real problem: finding the rogue reference that is the root cause of the leak.
Fortunately, as of the 1.1 release, Plumbr also discovers PermGen leaks. And it tells you the very reason that keeps the classloader from being freed, sparing you the effort of hunting down the leak yourself. So next time you face the java.lang.OutOfMemoryError: PermGen space message, download Plumbr and get rid of the problem for good.