
More about Spring Cache Performance

April 3, 2019 by Ivo Mägi Filed under: Blog Performance Programming

This is a follow up to our last post about Spring’s cache abstraction.

As engineers, you gain valuable experience by understanding the internals of some of the tools that you use. Understanding the behaviour of tools helps you become more mature when making design choices.  In this post, we describe a benchmarking experiment and the results which will help you understand Spring’s built-in annotations for caching.

Take a look at the following two methods:

@Cacheable(value = "time", key = "#p0.concat(#p1)")
public long annotationWithSpel(String dummy1, String dummy2) {
    return System.currentTimeMillis();
}

@Cacheable(value = "time")
public long annotationBased(String dummy1, String dummy2) {
    return System.currentTimeMillis();
}

Here we have two very similar methods, each annotated with the built-in @Cacheable annotation from Spring Cache. The first one includes an expression written in the Spring Expression Language (SpEL), which configures how the cache key is calculated from the method parameters. The second one relies on Spring's default behaviour, which is "all method parameters are considered as the key". Effectively, both methods result in exactly the same external behaviour.
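To see why the two variants behave identically, it helps to know that Spring's default key generation wraps all method parameters into a composite key compared by value. The class below is an illustrative plain-Java stand-in for that idea (it is not Spring's actual SimpleKey implementation, just a sketch of the concept):

```java
import java.util.Arrays;

// Illustrative stand-in for Spring's default composite cache key:
// all method parameters, compared by value.
final class ParamsKey {
    private final Object[] params;

    ParamsKey(Object... params) {
        this.params = params.clone();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ParamsKey
                && Arrays.deepEquals(params, ((ParamsKey) o).params);
    }

    @Override
    public int hashCode() {
        return Arrays.deepHashCode(params);
    }
}
```

Two calls with the same arguments produce equal keys, so the cached value is reused — which is exactly what the hand-written `#p0.concat(#p1)` expression reproduces for the two-string case.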

We ran some benchmark tests to measure their performance:

Benchmark                       Mode  Cnt     Score    Error  Units
CacheBenchmark.annotationBased  avgt    5   271.975 ± 11.586  ns/op
CacheBenchmark.spel             avgt    5  1196.744 ± 93.765  ns/op
CacheBenchmark.manual           avgt    5    16.325 ±  0.856  ns/op
CacheBenchmark.nocache          avgt    5    40.142 ±  4.012  ns/op

It turns out that the method with the manually configured cache key (the SpEL variant) runs 4.4 times slower than the one relying on default key generation! In hindsight, this outcome makes sense: the Spring framework has to parse and evaluate an arbitrarily complex expression on every call, and cycles are consumed in that computation.
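The `manual` row in the table above is the baseline without any framework machinery: no proxies, no annotations, no SpEL. The exact benchmark source is not shown in this post, but a hand-rolled cache of that kind could look roughly like this (a minimal sketch, assuming a `ConcurrentHashMap` keyed on the concatenated parameters):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hand-rolled caching: just a map lookup, no proxying or expression parsing.
class ManualTimeCache {
    private final ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();

    long manual(String dummy1, String dummy2) {
        // computeIfAbsent performs the lookup and, on a miss,
        // computes and stores the value atomically.
        return cache.computeIfAbsent(dummy1 + dummy2,
                k -> System.currentTimeMillis());
    }
}
```

This is why the manual variant clocks in at ~16 ns/op: the entire operation is a single hash-map lookup, with none of the interception overhead that the annotation-driven variants carry.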

Why are we writing this story? Well –

  1. We care deeply about software performance.
  2. Our own codebase contained a few instances where we had traded off performance for zero benefit.

You should audit your code base too. You may well find instances where a manually configured cache key merely reproduces the behaviour Spring Cache would provide by default. Jettison those and pocket the performance improvement. A definite win-win situation!

While you’re at it, give Plumbr monitoring a try. It will put bottlenecks impacting your users at the forefront and help you fix them.



The follow-up is very useful.
We observed the same problem as well.

The solution in our case was throwing away the slow Spring cache solution and using a hashmap.

We would appreciate it if you could include a reference for your experiments, be that a GitHub link or some other reproducible resource.

Viktor Reinok