

Most popular memory configurations

March 26, 2013 by Vladimir Šor Filed under: Java

This post is the third in a series in which we publish statistical data about Java installations. The dataset originates from the free Plumbr installations out there, totalling 1,024 different environments, collected during the past six months.

The first post in the series analyzed the foundation – the OS the JVM runs on, whether the infrastructure is 32- or 64-bit, and which JVM vendor and version were used. The second post focused on the different application servers used. The one you are reading now sheds some light on the heap sizes used by Java applications.

For the task at hand we dug into the 1,024 environments we had gathered statistics from. 662 of those had overridden the default maximum heap size by setting the -Xmx parameter. In addition, 414 had decided that the default permanent generation size was not appropriate and had specified -XX:MaxPermSize themselves.
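If you want to check what your own JVM actually ended up with, the following minimal sketch (the class name HeapReport is ours) reads the effective limits back through the standard Runtime and java.lang.management APIs. Note that the permanent generation is exposed as a memory pool only on pre-Java-8 HotSpot, and the exact pool name varies by collector:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Prints the limits the JVM actually ended up with. Launch it e.g. with:
//   java -Xmx512m -XX:MaxPermSize=256m HeapReport
// (the flag values above are just an example; -XX:MaxPermSize only applies
// to HotSpot JVMs before Java 8, where the permanent generation still exists)
public class HeapReport {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use, as set by -Xmx (or the default)
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxHeap / (1024 * 1024));

        // On pre-Java-8 HotSpot the permanent generation shows up as a memory
        // pool; its name varies by collector ("Perm Gen", "PS Perm Gen", ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax();
            if (pool.getName().contains("Perm Gen") && max > 0) {
                System.out.printf("%s max: %d MB%n", pool.getName(), max / (1024 * 1024));
            }
        }
    }
}
```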

First, let's take a look at the heap sizes. The data, when sampled and summarized, took the following form:

[Chart: Java memory used – distribution of maximum heap sizes]
When interpreting the data, the nine in the 64MB column should be read as: “There were nine environments in the sample with 32MB < maximum allowed heap size <= 64MB.”

But what makes this graph interesting are the outliers – more specifically, the environments with either very large or very small heaps, as the results do not correspond well to our initial assumption that 75% of the environments would be using between 0.5GB and 4GB of heap. Our guess was close to reality – 70% of the environments are indeed in this range. But the rest was surprising. Only 4% ran on bigger heaps, with the largest heap in our sample set to 42GB.

The surprise lies within the remaining 26% of the JVMs running with heaps smaller than 512MB. As a matter of fact, 95% of those are running on 256MB or less. Our initial guess was that these had to be non-Java EE applications. But the results did not confirm the guess – instead of -client switches and libraries indicating desktop applications, we found a mixture of everything, including some WebLogic instances running with 256MB. If you can tell us how you manage to run your Java EE apps on such small heaps, we are all eager to listen.

The second data set in our hands was the permgen. We summarized the 414 environments containing this information:

[Chart: Java maximum permgen – distribution of -XX:MaxPermSize values]
Again, when interpreting the data – looking at the eight samples in the 2,048MB column, we had eight environments running with -XX:MaxPermSize larger than 1,024MB and smaller than or equal to 2,048MB.

There are some surprises in this diagram as well. First – why do 160 people think that exactly 256MB is the best possible size for the permgen? That constitutes roughly 40% of the environments. And those two who think 258MB is better – are you just bad at calculating powers of two, or have you done some real fine-tuning? The extremes are also interesting – if you can describe to me the applications that are fine with less than a megabyte of permgen, or that would require 6GB of it, I would again be all ears.

While interpreting the data we also found some confirmation of our assumption that people solve java.lang.OutOfMemoryError: PermGen space errors by throwing more RAM at the problem. It just cannot be that 25% of the environments need 512MB or more of permgen for anything other than covering up permgen leaks.
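To show what we mean by a permgen leak, here is a deliberately broken minimal sketch – the class name, some.jar and LoadedClass are all placeholders of ours, not code from any of the surveyed environments:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Deliberately broken: every iteration loads the same class through a fresh
// classloader and keeps the loader strongly reachable, so the class metadata
// piles up in the permanent generation of a pre-Java-8 HotSpot JVM until
// java.lang.OutOfMemoryError: PermGen space is thrown. Run with a small
// permgen, e.g. -XX:MaxPermSize=32m, to see it quickly.
public class PermGenLeak {
    static final List<ClassLoader> keepAlive = new ArrayList<ClassLoader>();

    public static void main(String[] args) throws Exception {
        // "some.jar" and "LoadedClass" are placeholders for any jar/class on disk
        URL jar = new URL("file:some.jar");
        while (true) {
            URLClassLoader loader = new URLClassLoader(new URL[]{jar}, null);
            loader.loadClass("LoadedClass"); // a new copy of the class metadata every time
            keepAlive.add(loader);           // the "leak": loaders are never released
        }
    }
}
```

Application server redeployments leak in essentially this shape: each redeploy brings a fresh classloader, and anything that keeps a reference to the old one pins its classes in permgen.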

If you managed to get this far, you are most likely interested in our forthcoming posts as well. Stay tuned by subscribing to our Twitter feed.


Comments

We are working with Hazelcast nodes where each node has a max of 128GB of heap. We are using multiple nodes (12 to 24) to create huge amounts of (risk-exposure) matrices. Then we aggregate them using different distributed rulesets. We usually have a fill-up factor of 60% to 80% of the heap and have never, ever experienced a GC pause of over 3-5 seconds – and even that is very rare. The machines have 48 cores, so the garbage collector works like a charm (even if we trigger System.gc() via profiling tools, it takes 15 seconds max for a stop-the-world pause).

The times of “heaps bigger than 8GB are unusable” are long over… The trick seems to be to invest in data structures rather than overly complex object graphs. Avoid garbage, and if it can't be avoided, create it in its own context (so it can be removed quickly and does not linger around too long).

jens Kleemann
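To make the commenter's point concrete, here is a minimal sketch of ours (the class name and element count are illustrative) contrasting a flat primitive array with a boxed object graph:

```java
import java.util.ArrayList;
import java.util.List;

// Ten million values both ways: the primitive array is one contiguous
// allocation with no per-element headers, while the boxed list creates
// roughly one small Long object per element for the collector to track.
// The boxed variant may need a bigger heap, e.g. -Xmx1g.
public class FlatVsBoxed {
    public static void main(String[] args) {
        int n = 10_000_000;

        long[] flat = new long[n];             // one allocation
        for (int i = 0; i < n; i++) {
            flat[i] = i;
        }

        List<Long> boxed = new ArrayList<Long>(n); // backing array plus ~n Long objects
        for (int i = 0; i < n; i++) {
            boxed.add((long) i);                   // autoboxing allocates outside the Long cache
        }

        System.out.println(flat[n - 1] + " " + boxed.get(n - 1));
    }
}
```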

For large enterprise applications, a heap of 2 to 4GB is common. If you need more than that, you should create more JVMs rather than increase the heap size, due to garbage collection.

Idon't Know

Indeed, this is what we also see – oftentimes it makes more sense to scale horizontally instead of vertically.

Ivo Mägi

I think it is embarrassing to be bragging that you only need to throw 1024MB of RAM away to start the JVM. I understand that 1GB is nothing compared to what you can buy (16GB is what, $100 these days?). But the needless (and sometimes needed) exponential growth of complexity in environments like this is exactly why people are coming up with exploits for Java constantly. It may not be too big to run on a computer, but it’s way too big to justify, or to secure.

David

It’s not 1024MB to start the JVM, it’s 1024MB to run the entire application, in theory forever. I don’t see how heap size relates to exploits in the JVM implementation(s); exploits are the result of programming errors, the same errors found in any piece of software, developed by anyone and anywhere.

If you’re saying that the JVM “uses a lot of memory”, feel free to educate all of us and provide meaningful examples of platform-independent programming environments that you think are better.

Peter

What is with you people making comments when you clearly have no idea how a JVM and memory settings work?

Idon't Know

Plus, I think it’s funny and embarrassing that you are measuring memory size in megabytes. Do you know what year this is? I can buy a server with 256GB of RAM for very little money. So Java with its 1024 megabytes is like the basic interpreter of servers now? Yeah, I think that fits.

Solomon Capistrano

We are just presenting the data gathered from the JVMs out there – nothing more, no smoke and mirrors while interpreting it. And we do have experience with large heaps (10+GB), which tend to lead to long GC pauses, and possibly other issues. But if, for example, your apps are completely fine with GC pauses and can be optimized purely for throughput, with latency not being important – great!

Ivo Mägi

For a large enterprise, sure, a few gigs of RAM is nothing, but there are a lot of small businesses and hobbyists who run on small VPS instances that have significantly less than a gig of RAM available. Most of the VPS instances I’m currently running only have a guarantee of 256MB of RAM (although in reality they tend to report closer to 300MB+).

Most of my professional work is done in Java, and yes, most of the servers I deploy to have at least 8GB of RAM, but for my personal site and the site of a small business I do development for, I’m specifically avoiding Java (or any JVM-based language) as I don’t think it would be performant in the 256MB of RAM I’m guaranteed access to. In my case this is an extremely interesting article because it provides some solid data on what the bare minimum environment for Java might actually be.

orclev

You’re an idiot who knows nothing about JVMs, yet you commented anyway.

Idon't Know

I did fiddle around with Java memory settings and garbage collection switches to run Minecraft (the game) with mods.

Patrice Boivin

In fact, we follow different alerts about OutOfMemoryErrors around the globe, and Minecraft is by far the most common application triggering the errors in this field. The messages from the Minecraft community were so frequent that we had to set up special filters to clear our channels…

Ivo Mägi

Run a real app – launch a JVM with a 100GB heap. Enjoy your 3-minute GC pauses. And yes, permgen will ALWAYS OOM a JVM; that’s why everyone reboots them nightly. You clearly have no experience running Java in an operational state.

Solomon Capistrano

I fail to see your point. There is a huge gap between 4GB and 100GB, so 3-minute pauses at 100GB do not explain why there are so few JVMs with heaps larger than 4GB.

If you reboot your application nightly, why do you need 6GB of permgen? And no, permgen WILL NOT always OOM – not if you take care of your code and of the code of the libraries you use, and if you report the permgen leaks you find to their authors.

iNikem

Ops people don’t have code to take care of – they have to use whatever the vendor gives them. And vendors generally put their developers in a secret castle defended by a moat with alligators, which neither you nor “support” will ever succeed in breaching. From an ops perspective, the code is immutable and developers are mythical fiends who spend their days torturing kittens and figuring out new ways to make ops people’s lives miserable.

In that context, it does often make sense to do a nightly reboot.

ghjm

Ouch – but being a developer, I concur. I was pretty much kept out of reach, which means we often deliver something of less utility than if we and the end user could actually converse.

Derek Williams

And Solomon, you are completely correct. None of us has a background in operations – it’s just ~50 years of combined development experience we have as founders. In that sense, without any irony, we would definitely be grateful if you could give some insights into the ops world, especially considering your experience with what we consider to be truly large heaps.

Ivo Mägi

I’ve been running services that process 1,000 events/sec for a continuous 1k+ hours with no Java issues; the only time we need to reboot the service is when we do maintenance on the servers. Profiling and taking care of code is quite important in Java.

anonymous coward