
GC Inefficiencies

The Garbage Collection process identifies and discards unused objects to reclaim memory. While doing so, the GC occasionally stops the entire JVM. The duration of these pauses is unpredictable and can exceed tens of seconds, during which end users perceive the application as unresponsive.
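Even without a profiler, the cumulative time a JVM has spent in GC can be sampled through the standard management API. Here is a minimal sketch; the allocation loop is just an illustrative way to provoke collections, and the class name is made up for this example:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcPauseProbe {

    // Sums the cumulative time (in ms) all collectors have spent in GC so far
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long time = gc.getCollectionTime(); // -1 if this collector does not report time
            if (time > 0) {
                total += time;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long before = totalGcTimeMillis();
        byte[] sink;
        for (int i = 0; i < 1_000_000; i++) {
            sink = new byte[1024]; // allocate garbage to provoke collections
        }
        System.gc(); // a hint only; the JVM may ignore it
        System.out.println("Cumulative GC time: " + (totalGcTimeMillis() - before) + " ms");
    }
}
```

This tells you how much time went into GC overall, but not which pauses hurt users or why a particular collection failed to free memory.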
Expose GC pauses that are affecting your applications by installing Plumbr.

How Plumbr will help you

The screenshot above, taken from Plumbr, exposes the root cause of a long GC pause. The user experience was impacted during a SearchController.search() invocation in the SearchEngine JVM. Plumbr reveals that the pause took over four seconds to complete and failed to clean any significant amount of data from memory: 4.6GB worth of data remained in the heap after this major GC pause.

To understand how to proceed, one would have to find out what was actually residing inside the heap. Getting this information without Plumbr tends to be complex and expensive.
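The usual route is to take a heap dump of the running JVM and inspect it offline with a separate analyzer, for example using the JDK's own utilities (the PID and file path below are placeholders):

```shell
# Take a heap dump of live objects with jcmd
jcmd <pid> GC.heap_dump /tmp/heap.hprof

# Or with the older jmap equivalent
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
```

The resulting .hprof file can then be opened in a tool such as Eclipse MAT or VisualVM, but dumping and analyzing a multi-gigabyte heap is slow and often impractical on a production machine.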

To make matters easier for you, the Plumbr agent captures memory snapshots at carefully selected moments, e.g. when garbage collection fails to free up space in the heap. Here is what the “memory snapshots” tab shows for the previous example:

This immediately shows that we have loaded 36,729 Book objects, worth 4.6GB, from the database while processing the search query.

Expanding the full stack trace leads us to the exact line in the application code that triggered the loading:

at java.lang.Thread.run:745
at (...)
at com.example.search.SearchController.search:24
at com.example.search.SearchEngine.searchByName:24
at org.springframework.jdbc.core.JdbcTemplate.query:777
at (...)
at o.s.j.c.RowMapperResultSetExtractor.extractData:93

And we could see the method in question:

public Collection<Book> searchByName(String namePattern) {
    // Loads every matching row, including the full content column, into memory
    return jdbcTemplate.query("SELECT * FROM Book b WHERE b.name LIKE ?",
        (rs, n) -> new Book(rs.getString("id"), rs.getString("name"), rs.getString("content")),
        namePattern.replace('*', '%')
    );
}

The Solution

This case is in fact quite typical for applications that perform searching. All the user has to do to bring an application to its knees is type “*” in the search bar. The simple solution in this case would be to add pagination, and only load a small number of books at a time:

public Collection<Book> searchByName(String namePattern, int offset, int count) {
    // LIMIT offset, count restricts the result set to a single page of books
    return jdbcTemplate.query("SELECT * FROM Book b WHERE b.name LIKE ? LIMIT ?, ?",
        (rs, n) -> new Book(rs.getString("id"), rs.getString("name"), rs.getString("content")),
        namePattern.replace('*', '%'), offset, count
    );
}

Another notable point is that a mere 30K books are taking up over 4GB of space. It looks like we are loading the whole content of each book. But does the user actually need all those hundreds of thousands of words on the search results page?
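A quick back-of-the-envelope calculation backs this up: 4.6GB spread over 36,729 objects is roughly 120KB per Book, far more than an id and a name would need. A small sketch of that arithmetic (the class and method names are made up for this example):

```java
public class FootprintEstimate {

    // Average retained bytes per object, given total heap usage and object count
    public static long bytesPerObject(long totalBytes, long objectCount) {
        return totalBytes / objectCount;
    }

    public static void main(String[] args) {
        long heapBytes = 4_600_000_000L; // ~4.6GB left in the heap after the failed GC
        long books = 36_729;             // Book instances reported by the snapshot
        System.out.println(bytesPerObject(heapBytes, books) / 1024 + " KB per Book");
    }
}
```

The natural follow-up fix, then, is to select only the columns the result page actually displays and fetch the full content on demand.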