You cannot predict the way you die

April 16, 2014 by Vladimir Šor Filed under: Memory Leaks

After spending a day with yet another Heisenbug which seemed to change its shape whenever I got close to the cause, I thought my lessons learned from the case could be worth sharing.

To demonstrate this behaviour, I wrote a simplified example. In the example, I grab a Map and start adding key-value pairs into it in an infinite loop:

import java.util.Map;
import java.util.Random;

class Wrapper {
  public static void main(String args[]) throws Exception {
    // System properties are backed by a raw Hashtable, so the unchecked
    // put() of an Integer key compiles against the raw Map type
    Map map = System.getProperties();
    Random r = new Random();
    while (true) {
      map.put(r.nextInt(), "value");
    }
  }
}

As you might guess, compiling and executing the code above cannot end well. And indeed, when launched with:

java -Xmx100m -XX:+UseParallelGC Wrapper

I am facing the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message in my shell. But when launched with a different heap size or a different GC, my Mac OS X 10.9.2 with Oracle Hotspot JDK 1.7.0_45 chooses to die differently.

For example, when running with a smaller heap, such as the following:

java -Xmx10m -XX:+UseParallelGC Wrapper

the application dies with the more familiar “java.lang.OutOfMemoryError: Java heap space” message, thrown on Map resize.

And when running with a garbage collection algorithm other than ParallelGC, such as -XX:+UseConcMarkSweepGC or -XX:+UseG1GC, the error is caught by the default exception handler and arrives without a stack trace, as the heap is exhausted to the point where the stack trace cannot even be filled in on Exception creation:

My Precious:examples vladimir$ java -Xmx100m -XX:+UseConcMarkSweepGC Wrapper

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"

Moral of the story? You cannot choose the way your application is going to die when resources run low, so do not base your expectations on a specific sequence of events. As the example above shows, you can fail in three completely different ways:

  1. Failing the safety check built into the GC: whenever the JVM spends more than 98% of its time in GC with little to show for it (less than 2% of the heap reclaimed), it surrenders with the java.lang.OutOfMemoryError: GC overhead limit exceeded message.
  2. Failing to allocate memory for the next operation: whenever the next operation tries to allocate more memory than is currently available in the heap, you will receive a “java.lang.OutOfMemoryError: Java heap space” message.
  3. And you might have constructed a situation where memory is used up to the extent that the JVM fails to create a new OutOfMemoryError() instance, fill in its stack trace and send the output to the print stream. In this case the error is caught by the UncaughtExceptionHandler rather than by regular control flow. This handler, as the name states, steps into play when a thread is about to terminate due to an uncaught exception. In such a case the Java Virtual Machine queries the thread for its UncaughtExceptionHandler and invokes the handler’s uncaughtException method.
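To see the third mechanism in isolation, here is a small sketch of the UncaughtExceptionHandler machinery the JVM falls back on. The class name, the helper method and the deliberately-thrown RuntimeException are my own invention for illustration; the OutOfMemoryError case above works the same way, just with an error the handler receives instead of a stack-traced exception:

```java
public class HandlerDemo {
  // Spawns a thread that dies with an uncaught exception and records
  // what its UncaughtExceptionHandler is handed by the JVM
  static String runAndCapture() throws InterruptedException {
    final StringBuilder seen = new StringBuilder();
    Thread t = new Thread(() -> {
      throw new RuntimeException("boom"); // never caught in the thread itself
    });
    // The JVM invokes this in the dying thread, just before it terminates
    t.setUncaughtExceptionHandler((thread, e) ->
        seen.append("handled: ").append(e.getMessage()));
    t.start();
    t.join(); // join() guarantees the handler has already run
    return seen.toString();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(runAndCapture());
  }
}
```

Running it prints “handled: boom” — the exception never propagates out of main, which is exactly why the OutOfMemoryError in the CMS/G1 runs above shows up via the handler rather than as a normal stack trace.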


So whenever you think you have it covered by catching errors that indicate a lack of resources, think again. The system can be in such a fragile state that the symptoms you thought you could rely upon disappear or change, leaving you with the same bedazzled face I have worn for the last 12 hours.
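The kind of defensive catch this warning is aimed at might look like the sketch below (the class name is hypothetical; the oversized allocation is just a way to trigger an allocation-failure OOM quickly). The catch fires here because this is failure mode number two, but it would never see the error in the third scenario, where the JVM routes it to the UncaughtExceptionHandler instead:

```java
public class CatchOome {
  public static void main(String[] args) {
    try {
      // Request roughly 17 GB in a single allocation, far beyond any
      // default heap, so the JVM fails fast with an OutOfMemoryError
      long[] huge = new long[Integer.MAX_VALUE - 8];
      System.out.println(huge.length); // keep 'huge' reachable
    } catch (OutOfMemoryError e) {
      // Only allocation-failure OOMs are guaranteed to land here
      System.out.println("caught: " + e.getClass().getSimpleName());
    }
  }
}
```

It prints “caught: OutOfMemoryError” on a default-sized heap — but that is precisely the kind of symptom you cannot rely on once the heap is too exhausted to even construct the error properly.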

If you made it this far, I can only recommend subscribing to our Twitter feed. We post insights into our performance tuning life on a weekly basis. And we keep the marketroids away from the channel. Title aside, honestly, this dying thing in the header is due to our marketing dudes.