Weak, Soft and Phantom references: Impact on GC
There is an entire class of issues, affecting either the latency or the throughput of GC, caused by the use of non-strong references in the application. While such references may help to avoid an unwanted OutOfMemoryError in many cases, heavy use of non-strong references can significantly change the impact garbage collection has on the performance of your application.
Why should I care?
When using weak references, you should be aware of the way weak references are garbage-collected. Whenever GC discovers that an object is weakly reachable, that is, the last remaining references to the object are weak references, the weak reference is put onto the corresponding ReferenceQueue, and the object becomes eligible for finalization. One may then poll this reference queue and perform the associated cleanup activities. A typical example of such cleanup would be the removal of the now-missing key from a cache.
The trick here is that at this point you can still create new strong references to the object, so before it can, at last, be finalized and reclaimed, GC has to check again that it really is okay to do so.
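The mechanics described above can be sketched in a few lines. This is a minimal illustration, not code from the demo applications referenced below; the cache structure and names are hypothetical:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeakCacheSketch {
    // Hypothetical cache keyed by weak references; entries should be evicted
    // once their referents have been garbage-collected
    static final ReferenceQueue<Object> queue = new ReferenceQueue<>();
    static final Map<Reference<?>, String> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        Object key = new Object();
        WeakReference<Object> ref = new WeakReference<>(key, queue);
        cache.put(ref, "cached value");

        // While a strong reference to the key exists, the weak reference resolves
        System.out.println(ref.get() != null); // true here: key is strongly reachable

        key = null;  // drop the last strong reference
        System.gc(); // a hint only; collection may or may not happen immediately

        // Cleanup loop: poll the queue and evict entries whose keys were collected.
        // The GC enqueues the WeakReference itself, not the (already dead) referent.
        Reference<?> collected;
        while ((collected = queue.poll()) != null) {
            cache.remove(collected);
        }
    }
}
```

Note that `queue.poll()` returns null when nothing has been enqueued yet; a real cache would run this loop periodically or on a dedicated thread.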
Weak references are actually a lot more common than you might think. Many caching solutions build the implementations using weak referencing, so even if you are not directly creating any in your code, there is a strong chance your application is still using weakly referenced objects in large quantities.
When using soft references, you should bear in mind that soft references are collected much less eagerly than weak ones. The exact point at which this happens is not specified and depends on the implementation of the JVM. Typically the collection of soft references happens as a last-ditch effort before running out of memory. What this implies is that you might find yourself facing either more frequent or longer full GC pauses than expected.
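The characteristic property of a soft reference is that get() keeps returning the referent until the JVM itself decides to clear it under memory pressure. The following sketch (hypothetical names, not from the demo applications) shows the defensive pattern this forces on calling code:

```java
import java.lang.ref.SoftReference;

public class SoftCacheSketch {
    public static void main(String[] args) {
        byte[] data = new byte[1024 * 1024];
        SoftReference<byte[]> soft = new SoftReference<>(data);

        // While a strong reference exists, the soft reference cannot be cleared
        System.out.println(soft.get() == data); // true

        data = null;
        // From here on the JVM is free to clear the reference, but typically
        // does so only as a last resort before an OutOfMemoryError, which is
        // why softly-reachable objects can linger and inflate full GC pauses.
        byte[] cached = soft.get();
        if (cached == null) {
            // value was reclaimed: recompute or reload it
            cached = new byte[1024 * 1024];
        }
    }
}
```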
When using phantom references, you literally have to do manual memory management when it comes to flagging such references as eligible for garbage collection. This is dangerous, as a superficial glance at the javadoc may lead one to believe they are completely safe to use:
In order to ensure that a reclaimable object remains so, the referent of a phantom reference may not be retrieved: The get method of a phantom reference always returns null.
Surprisingly, many developers skip the very next paragraph in the same javadoc (emphasis added):
Unlike soft and weak references, phantom references are not automatically cleared by the garbage collector as they are enqueued. An object that is reachable via phantom references will remain so until all such references are cleared or themselves become unreachable.
That is right: we have to manually clear() phantom references, or risk facing a situation where the JVM starts dying with an OutOfMemoryError. The reason phantom references are there in the first place is that this is the only way to find out when an object has actually become unreachable via the usual means. Unlike with soft or weak references, you cannot resurrect a phantom-reachable object.
Give me some examples
Let us take a look at a demo application that allocates a lot of objects, which are successfully reclaimed during minor garbage collections. Bearing in mind the trick of lowering the tenuring threshold to control promotion, we could run this application with -Xmx24m -XX:NewSize=16m -XX:MaxTenuringThreshold=1 and see this in the GC logs:
2.330: [GC (Allocation Failure) 20933K->8229K(22528K), 0.0033848 secs]
2.335: [GC (Allocation Failure) 20517K->7813K(22528K), 0.0022426 secs]
2.339: [GC (Allocation Failure) 20101K->7429K(22528K), 0.0010920 secs]
2.341: [GC (Allocation Failure) 19717K->9157K(22528K), 0.0056285 secs]
2.348: [GC (Allocation Failure) 21445K->8997K(22528K), 0.0041313 secs]
2.354: [GC (Allocation Failure) 21285K->8581K(22528K), 0.0033737 secs]
2.359: [GC (Allocation Failure) 20869K->8197K(22528K), 0.0023407 secs]
2.362: [GC (Allocation Failure) 20485K->7845K(22528K), 0.0011553 secs]
2.365: [GC (Allocation Failure) 20133K->9501K(22528K), 0.0060705 secs]
2.371: [Full GC (Ergonomics) 9501K->2987K(22528K), 0.0171452 secs]
Full collections are quite rare in this case. However, if the application also starts creating weak references (-Dweak.refs=true) to these created objects, the situation may change drastically. There may be many reasons to do this, ranging from using the objects as keys in a weak hash map to allocation profiling. In any case, making use of weak references here may lead to this:
2.059: [Full GC (Ergonomics) 20365K->19611K(22528K), 0.0654090 secs]
2.125: [Full GC (Ergonomics) 20365K->19711K(22528K), 0.0707499 secs]
2.196: [Full GC (Ergonomics) 20365K->19798K(22528K), 0.0717052 secs]
2.268: [Full GC (Ergonomics) 20365K->19873K(22528K), 0.0686290 secs]
2.337: [Full GC (Ergonomics) 20365K->19939K(22528K), 0.0702009 secs]
2.407: [Full GC (Ergonomics) 20365K->19995K(22528K), 0.0694095 secs]
As we can see, there are now many full collections, and their duration is an order of magnitude longer! An obvious case of premature promotion, but a slightly tricky one. The root cause, of course, lies with the weak references. Before we added them, the objects created by the application were dying just before being promoted to the old generation. With the weak references added, the objects now stick around for an extra GC round so that the appropriate cleanup can be done on them. A simple solution would be to increase the size of the young generation by specifying -Xmx64m -XX:NewSize=32m:
2.328: [GC (Allocation Failure) 38940K->13596K(61440K), 0.0012818 secs]
2.332: [GC (Allocation Failure) 38172K->14812K(61440K), 0.0060333 secs]
2.341: [GC (Allocation Failure) 39388K->13948K(61440K), 0.0029427 secs]
2.347: [GC (Allocation Failure) 38524K->15228K(61440K), 0.0101199 secs]
2.361: [GC (Allocation Failure) 39804K->14428K(61440K), 0.0040940 secs]
2.368: [GC (Allocation Failure) 39004K->13532K(61440K), 0.0012451 secs]
The objects are now once again reclaimed during minor garbage collection.
The situation is even worse when soft references are used as seen in the next demo application. The softly-reachable objects are not reclaimed until the application risks getting an OutOfMemoryError. Replacing weak references with soft references in the demo application immediately surfaces many more Full GC events:
2.162: [Full GC (Ergonomics) 31561K->12865K(61440K), 0.0181392 secs]
2.184: [GC (Allocation Failure) 37441K->17585K(61440K), 0.0024479 secs]
2.189: [GC (Allocation Failure) 42161K->27033K(61440K), 0.0061485 secs]
2.195: [Full GC (Ergonomics) 27033K->14385K(61440K), 0.0228773 secs]
2.221: [GC (Allocation Failure) 38961K->20633K(61440K), 0.0030729 secs]
2.227: [GC (Allocation Failure) 45209K->31609K(61440K), 0.0069772 secs]
2.234: [Full GC (Ergonomics) 31609K->15905K(61440K), 0.0257689 secs]
And the king here is the phantom reference, as seen in the third demo application. Running the demo with the same set of parameters as before would give us results pretty similar to the results in the case with weak references. The number of full GC pauses would, in fact, be much smaller because of the difference in finalization described at the beginning of this section.
However, adding one flag that disables phantom reference clearing (-Dno.ref.clearing=true) would quickly give us this:
4.180: [Full GC (Ergonomics) 57343K->57087K(61440K), 0.0879851 secs]
4.269: [Full GC (Ergonomics) 57089K->57088K(61440K), 0.0973912 secs]
4.366: [Full GC (Ergonomics) 57091K->57089K(61440K), 0.0948099 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
One must exercise extreme caution when using phantom references and always clear the phantom-reachable objects in a timely manner. Failing to do so will likely end with an OutOfMemoryError. And trust us when we say that it is quite easy to fail at this: one unexpected exception in the thread that processes the reference queue, and you will have a dead application on your hands.
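The failure mode above suggests that the reference-processing thread must survive unexpected exceptions. A sketch of a defensive cleanup loop follows; the class and thread names are hypothetical, and the actual cleanup logic is application-specific:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class RefCleaner {
    // Starts a daemon thread that drains the reference queue and survives
    // unexpected exceptions instead of silently dying
    static Thread startCleaner(ReferenceQueue<Object> queue) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Reference<?> ref = queue.remove(); // blocks until enqueued
                    try {
                        // resource-specific cleanup would go here
                    } finally {
                        ref.clear(); // always clear, even if cleanup threw
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // shut down cleanly
                } catch (RuntimeException e) {
                    // log and keep going: a dead cleaner thread means
                    // phantom references pile up and an OutOfMemoryError later
                }
            }
        }, "ref-cleaner");
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

The try/finally around the cleanup step is the important part: the clear() call must happen even when the application-specific cleanup throws.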
Could my JVMs be affected?
As a general recommendation, consider enabling the -XX:+PrintReferenceGC JVM option to see the impact that different references have on garbage collection. If we add this to the application from the WeakReference example, we will see this:
2.173: [Full GC (Ergonomics) 2.234: [SoftReference, 0 refs, 0.0000151 secs]2.234: [WeakReference, 2648 refs, 0.0001714 secs]2.234: [FinalReference, 1 refs, 0.0000037 secs]2.234: [PhantomReference, 0 refs, 0 refs, 0.0000039 secs]2.234: [JNI Weak Reference, 0.0000027 secs][PSYoungGen: 9216K->8676K(10752K)] [ParOldGen: 12115K->12115K(12288K)] 21331K->20792K(23040K), [Metaspace: 3725K->3725K(1056768K)], 0.0766685 secs] [Times: user=0.49 sys=0.01, real=0.08 secs]
2.250: [Full GC (Ergonomics) 2.307: [SoftReference, 0 refs, 0.0000173 secs]2.307: [WeakReference, 2298 refs, 0.0001535 secs]2.307: [FinalReference, 3 refs, 0.0000043 secs]2.307: [PhantomReference, 0 refs, 0 refs, 0.0000042 secs]2.307: [JNI Weak Reference, 0.0000029 secs][PSYoungGen: 9215K->8747K(10752K)] [ParOldGen: 12115K->12115K(12288K)] 21331K->20863K(23040K), [Metaspace: 3725K->3725K(1056768K)], 0.0734832 secs] [Times: user=0.52 sys=0.01, real=0.07 secs]
2.323: [Full GC (Ergonomics) 2.383: [SoftReference, 0 refs, 0.0000161 secs]2.383: [WeakReference, 1981 refs, 0.0001292 secs]2.383: [FinalReference, 16 refs, 0.0000049 secs]2.383: [PhantomReference, 0 refs, 0 refs, 0.0000040 secs]2.383: [JNI Weak Reference, 0.0000027 secs][PSYoungGen: 9216K->8809K(10752K)] [ParOldGen: 12115K->12115K(12288K)] 21331K->20925K(23040K), [Metaspace: 3725K->3725K(1056768K)], 0.0738414 secs] [Times: user=0.52 sys=0.01, real=0.08 secs]
As always, this information should only be analyzed once you have identified that GC is having an impact on either the throughput or latency of your application. In such cases you may want to check these sections of the logs. Normally, the number of references cleared during each GC cycle is quite low, in many cases exactly zero. If this is not the case, however, and the application is spending a significant amount of time clearing references, or a lot of them are being cleared, then further investigation is required.
What is the solution?
When you have verified that the application actually is suffering from the mis-, ab- or overuse of weak, soft or phantom references, the solution often involves changing the application's intrinsic logic. This is very application-specific, and generic guidelines are thus hard to offer. However, here are some generic approaches to bear in mind:
- Weak references – if the problem is triggered by increased consumption of a specific memory pool, increasing the size of the corresponding pool (and possibly the total heap along with it) can help you out. As seen in the example section, increasing the total heap and young generation sizes alleviated the pain.
- Phantom references – make sure you are actually clearing the references. It is easy to dismiss certain corner cases and have the clearing thread fail to keep up with the pace at which the queue is filled, or stop clearing the queue altogether, putting a lot of pressure on GC and creating a risk of ending up with an OutOfMemoryError.
- Soft references – when soft references are identified as the source of the problem, the only real way to alleviate the pressure is to change the application's intrinsic logic.
This post is an example of the content to be published in the missing sections of our Garbage Collection Handbook during the next week. If you found the material interesting, subscribe to our RSS or Twitter feeds to get notified in time.