
Why does my JVM have access to less memory than -Xmx specifies?

February 11, 2015 by Nikita Salnikov-Tarnovski Filed under: Memory Leaks

“Hey, can you drop by and take a look at something weird”. This is how I started to look into a support case leading me towards this blog post. The particular problem at hand was related to different tools reporting different numbers about the available memory.

In short, one of the engineers was investigating the excessive memory usage of a particular application which, to his knowledge, had been given 2G of heap to work with. But for whatever reason, the JVM tooling itself could not agree on how much memory the process really had. For example, jconsole guessed the total available heap to be 1,963M while jvisualvm claimed it to be 2,048M. So which one of the tools was correct, and why was the other displaying a different number?

It was indeed weird, especially seeing that the usual suspects were eliminated – the JVM was not pulling any obvious tricks as:

  • -Xmx and -Xms were equal so that the reported numbers were not changed during runtime heap increases
  • JVM was prevented from dynamically resizing memory pools by turning off adaptive sizing policy (-XX:-UseAdaptiveSizePolicy)


Reproducing the difference

The first step toward understanding the problem was zooming in on the tooling implementation. Accessing the available-memory information via the standard APIs is as simple as calling Runtime.getRuntime().maxMemory():


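A minimal sketch of such a query, using the standard Runtime and MemoryMXBean APIs (the class name is mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MaxMemoryQuery {
  public static void main(String[] args) {
    // The Runtime view of the maximum heap the JVM will attempt to use
    long runtimeMax = Runtime.getRuntime().maxMemory();
    System.out.format("Runtime.getRuntime().maxMemory(): %,dK%n", runtimeMax / 1024);

    // The MemoryMXBean view of the same heap, as exposed over JMX
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
    long mxBeanMax = memoryBean.getHeapMemoryUsage().getMax();
    System.out.format("MemoryMXBean heap max:            %,dK%n", mxBeanMax / 1024);
  }
}
```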
And indeed, this was what the tooling at hand seemed to be using. The first step toward answering a question like this is to have a reproducible test case, so for this purpose I wrote the following snippet:

package eu.plumbr.test;

import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class HeapSizeDifferences {

  static Collection<Object> objects = new ArrayList<Object>();
  static long lastMaxMemory = 0;

  public static void main(String[] args) {
    try {
      List<String> inputArguments = ManagementFactory.getRuntimeMXBean().getInputArguments();
      System.out.println("Running with: " + inputArguments);
      while (true) {
        printMaxMemory();
        consumeSpace();
      }
    } catch (OutOfMemoryError e) {
      freeSpace();
      printMaxMemory();
    }
  }

  static void printMaxMemory() {
    long currentMaxMemory = Runtime.getRuntime().maxMemory();
    if (currentMaxMemory != lastMaxMemory) {
      lastMaxMemory = currentMaxMemory;
      System.out.format("Runtime.getRuntime().maxMemory(): %,dK.%n", currentMaxMemory / 1024);
    }
  }

  static void consumeSpace() {
    objects.add(new int[1_000_000]);
  }

  static void freeSpace() {
    objects.clear();
  }
}

The code is allocating chunks of memory via new int[1_000_000] in a loop and checking for the memory currently known to be available for the JVM runtime. Whenever it spots a change to the last known memory size, it reports it by printing the output of Runtime.getRuntime().maxMemory() similar to the following:

Running with: [-Xms2048M, -Xmx2048M]
Runtime.getRuntime().maxMemory(): 2,010,112K.

Indeed – even though I had specified 2G of heap for the JVM, the runtime is somehow unable to find 85M of it. You can double-check the math by converting the output of Runtime.getRuntime().maxMemory() to MB: 2,010,112K divided by 1024 equals 1,963M, which differs from 2,048M by exactly 85M.

Finding the root cause

After reproducing the case, I noticed that running with different GC algorithms also produced different results:

GC algorithm              Runtime.getRuntime().maxMemory()
-XX:+UseSerialGC          2,027,264K
-XX:+UseParallelGC        2,010,112K
-XX:+UseConcMarkSweepGC   2,063,104K
-XX:+UseG1GC              2,097,152K

Apart from G1, which consumed exactly the 2G I had given to the process, every other GC algorithm consistently seemed to lose a semi-random amount of memory.

Now it was time to dig into the JVM source code, where, in the source of CollectedHeap, I discovered the following:

// Support for java.lang.Runtime.maxMemory():  return the maximum amount of
// memory that the vm could make available for storing 'normal' java objects.
// This is based on the reserved address space, but should not include space
// that the vm uses internally for bookkeeping or temporary storage
// (e.g., in the case of the young gen, one of the survivor
// spaces).
virtual size_t max_capacity() const = 0;

The answer was rather well hidden, I have to admit. But the hint was still there for the truly curious minds to find: in some cases, one of the survivor spaces may be excluded from the heap size calculation.

[Figure: Java heap and permgen structure]

From here it was smooth sailing: turning on GC logging revealed that, with a 2G heap, the Serial, Parallel and CMS algorithms all size the survivor spaces to exactly the missing difference. For example, for the ParallelGC run above, GC logging demonstrated the following:

Running with: [-Xms2g, -Xmx2g, -XX:+UseParallelGC, -XX:+PrintGCDetails]
Runtime.getRuntime().maxMemory(): 2,010,112K.

... rest of the GC log skipped for brevity ...

 PSYoungGen      total 611840K, used 524800K [0x0000000795580000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 524800K, 100% used [0x0000000795580000,0x00000007b5600000,0x00000007b5600000)
  from space 87040K, 0% used [0x00000007bab00000,0x00000007bab00000,0x00000007c0000000)
  to   space 87040K, 0% used [0x00000007b5600000,0x00000007b5600000,0x00000007bab00000)
 ParOldGen       total 1398272K, used 1394966K [0x0000000740000000, 0x0000000795580000, 0x0000000795580000)

from which you can see that the Eden space is sized at 524,800K, both survivor spaces (from and to) at 87,040K, and the old space at 1,398,272K. Adding together Eden, the old space, and one of the survivor spaces totals exactly 2,010,112K, confirming that the missing 85M, or 87,040K, was indeed the remaining survivor space.
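The arithmetic from the log can be double-checked with a trivial sketch:

```java
public class SurvivorSpaceMath {
  public static void main(String[] args) {
    long edenK = 524_800;        // Eden, from the GC log above
    long survivorK = 87_040;     // one survivor space (from and to are equal in size)
    long oldK = 1_398_272;       // old generation
    long reportedK = edenK + survivorK + oldK;
    System.out.format("Eden + one survivor + old: %,dK%n", reportedK); // 2,010,112K

    long xmxK = 2048L * 1024;    // -Xmx2g expressed in kilobytes
    System.out.format("Unaccounted for:           %,dK%n", xmxK - reportedK); // 87,040K
  }
}
```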



After reading this post, you are now equipped with new insight into a Java API implementation detail. The next time certain tooling visualizes the total available heap as slightly less than the Xmx-specified heap size, you will know that the difference equals the size of one of your survivor spaces.

I have to admit this fact is not particularly useful in day-to-day programming activities, but that was not the point of the post. Instead, I wrote it to describe a particular characteristic I always look for in good engineers: curiosity. Good engineers always seek to understand how and why something works the way it does. Sometimes the answer remains hidden, but I still recommend you attempt to seek it out. Eventually the knowledge built along the way will start paying dividends.



I am getting an OOM error.
By observing, I came to know that at a certain point in time the Eden space is 0 bytes.
Kindly suggest if there is any way to handle this.


Thanks for the insight. Still wondering why G1GC alone adds up to the entire heap. Is this a bug?


Hi, Sundar. If I understand the question correctly, you are wondering why G1GC has its `Runtime.getRuntime().maxMemory()` exactly equal to `Xmx`. The reason for other collectors giving something different is their implementation details, such as survivor spaces. G1 is designed differently, and thus these details are not present for it.

Gleb Smirnov

If I understand correctly, G1 still logically allocates some regions for the survivor to-space which cannot be allocated to objects, right? (Which means they should not be counted towards available memory.)



I see what you mean. Indeed, the to-space is, much like the Eden Space, just a logical set of regions. The original G1 Paper [1] has the following passage:

> First, in all modes we choose a fraction h of the total heap size M: we call h the hard margin, and H = (1 − h) * M the hard limit. Since we use evacuation to reclaim space, we must ensure that there is sufficient “to-space” to evacuate into; the hard margin ensures that this space exists.

> Therefore, when allocated space reaches the hard limit an evacuation pause is always initiated, even if doing so would violate the soft real-time goal.

The paper goes on to include some implementation details at the time of writing and some possible optimizations like figuring out the size of this “to-space” dynamically.

You can try a simple experiment to see how much memory G1 really does allow the application to use by simply trying to occupy all the heap with data and enabling detailed GC logs [2]. You will notice that the heap usage at the time of the OOME being thrown is indeed smaller than Xmx by a very small margin. A good way to dig deeper would be to look into the G1 sources in hotspot [3]. You will see a number of cases in which NULL will be returned even if the heap is not entirely used up.
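Such an experiment might look like the following sketch (the class name, chunk size, and suggested heap flags are my own choices, not from the original thread):

```java
import java.util.ArrayList;
import java.util.List;

// Run with, for example: java -Xmx256m -XX:+UseG1GC G1HeadroomDemo
// Add detailed GC logging to see what the collector does along the way.
public class G1HeadroomDemo {
  public static void main(String[] args) {
    List<int[]> objects = new ArrayList<>();
    try {
      while (true) {
        objects.add(new int[100_000]); // roughly 400K per chunk
      }
    } catch (OutOfMemoryError e) {
      Runtime rt = Runtime.getRuntime();
      // Capture usage before releasing the data; these calls allocate nothing.
      long usedAtOome = rt.totalMemory() - rt.freeMemory();
      objects = null; // release the data so the printing below can allocate again
      System.out.format("max heap:     %,dK%n", rt.maxMemory() / 1024);
      System.out.format("used at OOME: %,dK%n", usedAtOome / 1024);
    }
  }
}
```

With G1, the usage printed at the time of the OOME will typically fall slightly short of the Xmx-derived maximum, as described above.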

I would not call such behaviour a bug, as the Javadoc on `Runtime#maxMemory()` says this:

> Returns the maximum amount of memory
> that the Java virtual machine will attempt to use

As I read it, it does not mandate that the JVM will necessarily use up all of that space before an OOME is thrown. Especially if the chunk of memory that is being allocated is larger than the remaining free size.

P.S. The original G1 paper may be easier to comb through after you familiarize yourself with the general functioning of G1. There are multiple good talks and articles about it; we ourselves have a section dedicated to it in our Java Garbage Collection Handbook [4].

[1] David Detlefs, Christine Flood, Steve Heller, Tony Printezis. Garbage-First Garbage Collection
[2] https://github.com/gvsmirnov/java-perv/commit/3d9cb2b4e7c69bf112be2f5eff26c418cbb67997
[3] http://hg.openjdk.java.net/jdk9/jdk9/hotspot/file/efe1782aad5c/src/share/vm/gc/g1/g1CollectedHeap.cpp#l475
[4] https://plumbr.eu/handbook/garbage-collection-algorithms-implementations/g1

Gleb Smirnov

Hi Gleb,
Thanks a lot. Helped me to understand better G1GC.

Another question on a similar topic, now about Runtime.freeMemory().
When I call freeMemory() with G1GC, it is lower than when I use CMS.
For example, in a loop I keep allocating 500Kb and printing Runtime.freeMemory().
The result is like this:
with G1GC, free memory goes down to 1G (max heap is 4G), whereas with CMS it never goes below 3G.
Since my application has some logic (like evicting a cache) when memory utilization reaches 85%, I see this happening more often with G1GC, but with CMS it doesn't happen.


Sundar, glad to help! I can’t be certain without looking at the detailed logs, but very likely the reason is as follows. Since G1 is an incremental garbage collector, it does not collect all the garbage at once. Instead, it collects as much as it can while satisfying the pause duration goals (that’s why it’s called soft real-time). As a result, at any given moment, there may be more not-yet-collected garbage in the G1 heap than in the CMS heap. In some cases, e.g. when the allocation rate increases further, G1 has to break the soft real-time goals to keep up, perhaps even degrading to full stop-the-world GC cycles because of e.g. an evacuation failure. G1 is full of tradeoffs like this.

Gleb Smirnov

Hello, thanks for your useful post.

I have a question: after the process starts, Runtime.getRuntime().maxMemory() never changes the value it returns, is that right?

Thanks again.

Hoang-Mai Dinh

Yes, the maximum amount of memory that the JVM can potentially consume will not change during the runtime of the JVM.

Nikita Salnikov-Tarnovski

That’s not exactly true. It might slightly change because `UseAdaptiveSizePolicy` is enabled by default.


As far as I know, `UseAdaptiveSizePolicy` can change the sizes of individual generations in accordance with the actual memory usage. But Runtime.getRuntime().maxMemory() is determined by Xmx setting.
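A quick way to check this on your own JVM is a sketch like the following (whether the value moves can depend on the collector and on -XX:+UseAdaptiveSizePolicy; the class name is mine):

```java
public class MaxMemoryStability {
  public static void main(String[] args) {
    long before = Runtime.getRuntime().maxMemory();
    // Churn the heap a little so adaptive sizing has a chance to kick in
    byte[][] garbage = new byte[64][];
    for (int i = 0; i < garbage.length; i++) {
      garbage[i] = new byte[1_000_000];
    }
    long after = Runtime.getRuntime().maxMemory();
    System.out.println(before == after
        ? "maxMemory() unchanged: " + before
        : "maxMemory() changed: " + before + " -> " + after);
  }
}
```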


Thanks Nikita, another useful article.


Hi Nikita,
Interesting. I am looking at the array example for OOM error on this plumbr page:

I think the array takes only around 8 MB while Xmx allocates 12 MB and the OOM is still there. Survivor space seems to be taking up only 512 KB. So, not sure why the program would give an OOM. Works when 13MB is allocated. Any idea?


Dheeru Mundluru

Hello Dheeru. Glad we could help you 🙂