
Spring Cache – Profiling

January 31, 2019 by Ivo Mägi | Filed under: Blog, Performance, Programming

At Plumbr, we’re constantly working on making software faster and more reliable. The promise we’ve made to our customers is to avoid 100 million failures & save 100 million hours for their users per year.

While we’re making things better for engineers around the world, we’re also improving our own software. Recently I spent time tuning the performance of one particular part of the Plumbr codebase: a fairly tight loop that reads data from Kafka, performs several computations, and then writes the results to a file. After several rounds of optimization, an unexpected code path started showing up in the profiler output.

It was unexpected because this code path was already considered ‘optimized’. Originally, the method queried the database for some relatively stable data. Since hitting the database on every call is clearly suboptimal, the method’s invocation was wrapped in a cache using Spring Cache. It was therefore a surprise to see it contributing a substantial portion of the latency in the tight loop I was optimising. This led me to investigate further, and I’m sharing the findings here.

Let us take a look at a minimal example: https://github.com/iNikem/spring-cache-jmh.
It contains a trivial method:

@Cacheable("dummy")
public long annotationBased(String dummy) {
    return System.currentTimeMillis();
}
It does some work, which here is just asking for the current time, and it is annotated with Spring’s @Cacheable annotation. This results in Spring wrapping the method in a proxy and caching the result of the method invocation, using the method’s input parameters as the cache key. A very straightforward and convenient optimisation: you see a slow method, slap an annotation on it, configure your cache provider (ehcache.xml in my case), and you can pat yourself on the back.
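Conceptually, the proxy Spring generates behaves like the decorator sketched below. This is a simplified illustration, not Spring's actual implementation, and the class name is made up: it intercepts the call, uses the argument as the cache key, and only delegates to the real method on a miss.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Simplified sketch of what a caching proxy does: use the method's
// argument as the cache key and call the underlying method only on a
// cache miss. Spring's real proxy adds configurable key generation,
// cache resolution and pluggable cache providers on top of this idea.
class CachingProxySketch {
    private final Map<String, Long> cache = new ConcurrentHashMap<>();
    private final Function<String, Long> target; // the "real" method

    CachingProxySketch(Function<String, Long> target) {
        this.target = target;
    }

    long invoke(String arg) {
        // computeIfAbsent runs the target only when the key is missing
        return cache.computeIfAbsent(arg, target);
    }
}
```

A second call with the same argument never reaches the target, which is exactly why a cached timestamp stays constant between invocations.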

Compare it with code that achieves the same end result, but uses the cache manually:

public long manual(String dummy) {
    Cache.ValueWrapper valueWrapper = cache.get(dummy);
    long result;
    if (valueWrapper == null) {
        result = System.currentTimeMillis();
        cache.put(dummy, result);
    } else {
        result = (long) valueWrapper.get();
    }
    return result;
}

There is much more going on here; the actual useful work is almost lost in the accidental complexity of infrastructure-related boilerplate. Why would anyone prefer the manual work to Spring’s magic? The answer lies in this JMH benchmark and its results:

Benchmark                       Mode  Cnt    Score    Error  Units
CacheBenchmark.annotationBased  avgt    5  245.960 ± 27.749  ns/op
CacheBenchmark.manual           avgt    5   16.696 ±  0.496  ns/op
CacheBenchmark.nocache          avgt    5   44.586 ±  9.091  ns/op

As you can see, a custom solution to a specific problem runs about 15 times faster than the general-purpose one. The aim of this investigation is by no means to accuse Spring of being a slow framework! The takeaway is that Spring pre-emptively does some heavy lifting to support a broad range of general-purpose use cases. Note that we are still talking about a few hundred nanoseconds, which is a negligible difference in most scenarios. But in the rare cases when actual profiling data shows that the Spring Cache abstraction adds too much overhead, don’t be afraid to get your hands dirty and roll out a custom solution tailored to your needs.
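For instance, when the cached data is a single, relatively stable value, as with the database lookup described earlier, the custom “cache” can shrink to one volatile field, with no key generation or map lookup at all. This is a hypothetical sketch, and loadFromDb() is a stand-in for the real expensive call:

```java
// Hypothetical tailored cache for a single stable value: one volatile
// field instead of a general-purpose cache. loadFromDb() is a
// placeholder for the expensive call being cached.
class StableValueCache {
    private volatile Long cached; // null until the first load

    long get() {
        Long value = cached;
        if (value == null) {
            // Benign race: concurrent callers may each load once,
            // which is acceptable for stable, idempotent data.
            value = loadFromDb();
            cached = value;
        }
        return value;
    }

    long loadFromDb() {
        return 42L; // placeholder for the real database query
    }
}
```

A read on the hot path is now a single volatile load, about as cheap as caching gets; the trade-off is that this only works because the data is effectively immutable.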

A farewell note about the ‘nocache’ row above: in this particular case the actual work our method does is so small that adding caching actually slows it down. A perfect, albeit synthetic, example of premature optimization: don’t optimize anything until actual measurement proves the need for it. And then don’t forget to measure again after you optimize. Cheers!

Cross posting from: https://medium.com/@nikem/is-spring-cache-abstraction-fast-enough-for-you-a6a5ea1542a9

For more insights like this, you can follow me on Twitter: @iNikem
