
Rapid Allocation Surge I
In this example, a Solr query returns over 2 million documents, and multiple such queries run concurrently. The end user, however, does not need all 2,000,000 documents. Adding simple pagination, or returning only the top 100 search results, would be enough to eliminate the OutOfMemoryError.
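As a minimal sketch of the pagination fix, assuming the application talks to Solr through the SolrJ client, it is enough to cap the rows parameter so that only a single page of results is ever fetched (the core URL, query string, and field name below are hypothetical):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PagedSearch {
    private static final int PAGE_SIZE = 100;

    public static void main(String[] args) throws Exception {
        // Hypothetical core URL
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/products").build()) {
            SolrQuery query = new SolrQuery("title:shoes"); // hypothetical query
            query.setStart(0);        // offset of the first result
            query.setRows(PAGE_SIZE); // fetch only the top 100 documents

            QueryResponse response = solr.query(query);
            // Only PAGE_SIZE documents are held in memory, not 2,000,000
            response.getResults().forEach(doc ->
                    System.out.println(doc.getFieldValue("id")));
        }
    }
}

Requesting further pages is then a matter of advancing the start offset, and the heap footprint stays proportional to the page size rather than to the full result set.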
Rapid Allocation Surge II
Here we see an attempt to load a large number of archive entries for processing. The sheer volume of data quickly fills the entire Java heap, and processing fails with an OutOfMemoryError. There are multiple ways to solve this. The simplest is to increase the amount of memory available to the Java application by setting the -Xmx parameter in proportion to the archive size. Alternatively, the work could be split into smaller chunks, or loading the whole archive into memory could be avoided altogether by processing it as a stream. Another option is to optimize the data structures so that the archive takes up less space in memory.
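As a minimal sketch of the streaming approach, assuming the archive is a ZIP file, java.util.zip.ZipInputStream lets each entry be read and discarded before the next one is fetched, so only one entry is ever held in memory (the file name and the processEntry helper are hypothetical):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ArchiveStreamer {

    public static void main(String[] args) throws IOException {
        // "data.zip" is a hypothetical archive name
        try (ZipInputStream zip = new ZipInputStream(
                new BufferedInputStream(new FileInputStream("data.zip")))) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                // Handle one entry at a time; previously processed entries
                // become garbage-collectable instead of piling up on the heap.
                processEntry(entry.getName(), zip);
                zip.closeEntry();
            }
        }
    }

    // Hypothetical handler: consumes the current entry's bytes from the stream
    private static void processEntry(String name, InputStream in) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            total += read;
        }
        System.out.println(name + ": " + total + " bytes");
    }
}

With this structure, the peak heap usage is bounded by the largest single entry rather than by the archive as a whole.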
Slowly Growing Leak
The previous examples showed a rapid increase in memory usage that resulted in an OutOfMemoryError. Another common cause is a slowly accumulating memory leak that, over time, consumes more and more of the Java heap. Here we have two memory snapshots taken from the same application, 20 hours apart:
The second snapshot was taken post-mortem, after the application died with an OutOfMemoryError. As we can see, within those 20 hours about 26,000 new JDBC connections were opened and never closed. This is a typical example of what can go wrong with a hand-rolled connection pool. In this case, switching to a robust third-party connection pool resolved the OutOfMemoryError. This, along with unbounded caches, is one of the most common slowly accumulating leaks that we see. For a more in-depth exploration of memory leaks, check out this page.
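As a minimal sketch, assuming HikariCP as the third-party pool, the pool caps the total number of open connections, and try-with-resources guarantees each connection is returned to the pool instead of leaking (the JDBC URL, credentials, and query are hypothetical):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PooledAccess {

    private static final HikariDataSource DATA_SOURCE = createPool();

    private static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        // Hypothetical connection details
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/app");
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(10); // hard cap: no runaway connection count
        return new HikariDataSource(config);
    }

    static int countUsers() throws SQLException {
        // try-with-resources returns the connection to the pool on exit,
        // even when an exception is thrown, so connections cannot leak
        try (Connection conn = DATA_SOURCE.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

The key difference from a hand-rolled pool is that connection lifecycle and the upper bound are enforced by the library, so a forgotten close() cannot accumulate into 26,000 open connections over 20 hours.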