

Stacktraces are telling the truth. But not the whole truth.

November 21, 2013 by Vladimir Šor Filed under: Java

Our company is all about making the cause of software errors transparent to developers and operations. As opposed to alternative solutions, we surface the location of the problem, pointing you towards the offending line in the source code. Even though we are currently best known for our capabilities in detecting memory leaks, we are expanding into other areas as well. To give you a hint about our research directions, we decided to share them via three examples.

The examples boil down to the JVM's ability to give meaningful stacktraces. On many occasions the stacktrace does contain all the information required to fix the problem. In other situations it only surfaces the symptoms, without a clue as to what could be causing the underlying problem. Let me illustrate this with three examples triggering the following infamous error messages:

  • java.lang.OutOfMemoryError: unable to create new native thread
  • java.io.IOException: Too many open files in system
  • java.lang.OutOfMemoryError: Java heap space

All the examples are illustrated with simple code snippets making the underlying problem easier to understand.

Too Many Threads

static void createTooManyThreads() {
	try {
		for (int i = 0; i < TOO_MANY; i++) {
			Thread t = new Thread(new Runnable() {
				public void run() {
					try {
						Thread.sleep(Long.MAX_VALUE); // keep the thread alive
					} catch (InterruptedException e) {
						// interrupted - let the thread exit
					}
				}
			});
			t.start();
		}
	} catch (OutOfMemoryError e) {
		e.printStackTrace();
	}
}

In the code above, we keep launching threads until we hit the system limit and encounter the “java.lang.OutOfMemoryError: unable to create new native thread” message. What is wrong with knowing that the problem is related to the thread limit being exhausted? Let's take a closer look at the stacktrace:

java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at eu.plumbr.demo.MisleadingStacktrace.createTooManyThreads(MisleadingStacktrace.java:34)
	at eu.plumbr.demo.MisleadingStacktrace.main(MisleadingStacktrace.java:16)

The problem is staring right into our face – we are being told where the camel's back was broken by adding the last straw. At the same time, we do not have a clue whose fault it is that the camel was already loaded to the full extent. If the error message also showed the call stacks that were consuming threads before our attempt to launch one more resulted in the stacktrace above, it would make the lives of developers a lot easier.
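You can approximate that missing context yourself: the standard Thread.getAllStackTraces() API returns every live thread together with its current call stack, which is often enough to see who is hogging the threads. A minimal sketch:

```java
import java.util.Map;

public class ThreadDumpDemo {
	public static void main(String[] args) {
		// Thread.getAllStackTraces() snapshots every live thread and its
		// current call stack - exactly the context the OutOfMemoryError
		// above leaves out.
		Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
		for (Map.Entry<Thread, StackTraceElement[]> e : traces.entrySet()) {
			System.out.println(e.getKey().getName());
			for (StackTraceElement frame : e.getValue()) {
				System.out.println("\tat " + frame);
			}
		}
	}
}
```

Running it in the catch block of the sample above would list all the threads already started, not just the one that failed to start. A `jstack <pid>` thread dump gives you the same view from outside the process.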

But let's look at the same problem – resource consumption – from another angle:

Too many open files

Again, let's start with the sample code:

static void createTooManyFiles() {
	try {
		for (int i = 0; i < TOO_MANY; i++) {
			File f = new File(PATH_TO_FILE + i + ".txt");
			OutputStream s = new FileOutputStream(f);
			s.write(i); // the stream is intentionally never closed
		}
	} catch (IOException e) {
		e.printStackTrace();
	}
}

The sample tries to create many files and write just one integer into each of the files without closing the previous ones. And again, running the code above results in a not-too-helpful stacktrace:

java.io.IOException: Too many open files in system
	at java.io.UnixFileSystem.createFileExclusively(Native Method)
	at java.io.File.createNewFile(File.java:947)
	at eu.plumbr.demo.MisleadingStacktrace.createTooManyFiles(MisleadingStacktrace.java:45)
	at eu.plumbr.demo.MisleadingStacktrace.main(MisleadingStacktrace.java:17)

The same problem is now disguised differently – we do get the message that we just tried to open one file too many, but who has opened all the other files, stressing the JVM to the extent where it cannot complete the run?
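For completeness – on JDK 7, which the sample was run on, the leak itself is easy to avoid with try-with-resources, which closes each stream as soon as the loop body exits. A minimal sketch, using temp files instead of the sample's PATH_TO_FILE constant:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseFilesDemo {
	public static void main(String[] args) throws IOException {
		// Same work as the sample, but each stream is closed at the end
		// of every iteration, so file descriptors never pile up.
		for (int i = 0; i < 3; i++) {
			File f = File.createTempFile("demo" + i, ".txt");
			f.deleteOnExit();
			try (OutputStream s = new FileOutputStream(f)) {
				s.write(i);
			} // s.close() is invoked here automatically
		}
		System.out.println("done");
	}
}
```

Of course, this only prevents the leak in your own code – it does not help you find the culprit when the descriptors are exhausted by code you do not control.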

If you are still not convinced – take a look at the third example, our current bread and butter:

Too much memory consumed

The code sample is simple again – we take a data structure and keep increasing it until the available heap is exhausted:

static void createTooLargeDataStructure() {
	try {
		List<Integer> l = new ArrayList<Integer>();
		for (int i = 0; i < TOO_MANY; i++) {
			l.add(i);
		}
	} catch (OutOfMemoryError e) {
		e.printStackTrace();
	}
}

Running the code gives you the infamous java.lang.OutOfMemoryError: Java heap space message, which again is difficult to interpret if the data structure in question was populated from several different locations in the source code.
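As a rough first diagnostic, the Runtime API can at least tell you how full the heap is at any moment; finding out who populated the structure still requires a heap dump (for example via the standard -XX:+HeapDumpOnOutOfMemoryError JVM flag) and an analysis tool. A minimal sketch:

```java
public class HeapUsageDemo {
	public static void main(String[] args) {
		// Runtime gives a coarse view of heap occupancy: how many bytes
		// are currently in use and how large the heap may grow.
		Runtime rt = Runtime.getRuntime();
		long used = rt.totalMemory() - rt.freeMemory();
		System.out.println("used bytes: " + used);
		System.out.println("max bytes: " + rt.maxMemory());
	}
}
```

Printing these figures inside the loop above would show the heap filling up, but it still would not tell you which allocation sites are responsible – that is exactly the gap the stacktrace leaves open.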

I know that all the fellow C developers now shrug helplessly, because in their world all of the above would just take the form of a segfault, but if we have come this far, why couldn't we do better? I am sure we can, and that's what we are about to do – find the performance bottlenecks for you.

If you are interested in the full code sample, it can be downloaded here. The code was run on my MacBook Pro with OS X 10.9 on JDK 7u25. After this, make sure you do not miss future updates on interesting content – subscribe to our Twitter feed.




In many cases you can assume that the JRE classes, or the classes of the framework you use, are not to blame. So, looking at the stacktrace, you can see that the last two calls are in java.* – whereas the previous calls are from your classes. You can start searching there. 🙂


Sure, with these simple examples it's easy to start troubleshooting. Once you get a larger application with several places wastefully consuming resources, the stack traces are not so obvious anymore.

Vladimir Šor

Today I encountered a useless and confusing stack trace. Two lines of code:

BufferedInputStream in = new BufferedInputStream(Main.class.getResourceAsStream("/text-file.txt"));
while (in.available() > 0) {

throws the following Exception when in.available() gets called:

Exception in thread "main" java.io.IOException: Stream closed
at java.io.BufferedInputStream.getInIfOpen(BufferedInputStream.java:151)
at java.io.BufferedInputStream.available(BufferedInputStream.java:399)
at my.package.Main.main(Main.java:21)

Finally I discovered that the file I was trying to read did not exist (it actually had a different extension). For the first two minutes I tried to read my 5 lines of code and understand what had closed that stream.

Priit Liivak
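The root cause of the trace in the comment above is that Class.getResourceAsStream() returns null when the resource is missing, and BufferedInputStream accepts the null without complaint – the failure only surfaces later as the misleading "Stream closed". Checking for null up front gives an honest error instead. A small sketch (the resource name is made up, so it is deliberately missing here):

```java
import java.io.BufferedInputStream;
import java.io.InputStream;

public class ResourceCheckDemo {
	public static void main(String[] args) {
		// getResourceAsStream() returns null for a missing resource;
		// wrapping that null in a BufferedInputStream defers the failure
		// to the first read/available() call as "Stream closed".
		InputStream raw = ResourceCheckDemo.class.getResourceAsStream("/no-such-file.txt");
		if (raw == null) {
			System.out.println("resource not found");
			return;
		}
		BufferedInputStream in = new BufferedInputStream(raw);
		System.out.println("resource found");
	}
}
```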