
Excessive Number Of ...

Executing many similar operations during one user interaction is a common source of performance issues. Each individual operation may be short in duration, but executing an excessive number of them adds up to poor overall performance.

How Plumbr will help you

The example above is extracted from a real-world situation where 1,050 database calls were performed during a single user interaction. The total duration of these calls was 7 seconds and 992 milliseconds.

It is also visible that this is not an isolated case: during the last 30 days, 12,449 transactions have been impacted by the very same query being executed repeatedly in the same transaction context.

The information needed to fix the problem is embedded in the following two blocks. The first of these lists the SQL statements executed, along with the number of times each statement was executed, showing for example that the first SELECT query visible on the screen was executed 500 times:

500 times, 05s total
select
    identity0_.id as id8_1_,
...
from
    IDENTITY identity0_
left outer join
    ...
where
    identity0_.id=?
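
A repeated single-row SELECT like the one above is the typical signature of an N+1 access pattern. As a rough illustration only (the Identity entity and the lookup loop below are assumptions made for the example, modeled on the IDENTITY table in the generated SQL, not the actual application code), code along these lines issues one query per iterated id:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

@Entity
class Identity {
    @Id
    Long id;
    String name;
}

class IdentityLoader {

    private final EntityManager em;

    IdentityLoader(EntityManager em) {
        this.em = em;
    }

    // One SELECT ... WHERE identity0_.id=? per element:
    // 500 ids mean 500 individual round trips to the database.
    List<Identity> loadOneByOne(List<Long> ids) {
        List<Identity> result = new ArrayList<>();
        for (Long id : ids) {
            result.add(em.find(Identity.class, id)); // single-row query per call
        }
        return result;
    }
}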

The block next to it exposes the call stacks in the source code, zooming in on the exact line where the statements were executed. To keep the overhead at bay, Plumbr uses sampling and captures snapshots from only some of the calls being made. As the calls originate from multiple locations, Plumbr aggregates the snapshots into a tree-like structure, making it easy to see where most of the calls originate from:

java.lang.Thread.run():745
org.apache.tomcat.util.net.JIoEndpoint$Worker.run():446
...
org.hibernate.loader.Loader.list():2099
org.hibernate.loader.Loader.listIgnoreQueryCache():2104
...
org.hibernate.jdbc.AbstractBatcher.getResultSet():186
org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery():237
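
The aggregation idea itself is straightforward: each sampled stack is merged into a tree whose nodes carry a count, so the call paths responsible for most of the executions stand out. The sketch below is only an illustration of that idea under those assumptions, not Plumbr's actual implementation:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class CallTreeNode {

    final String frame;   // e.g. "org.hibernate.loader.Loader.list():2099"
    int samples;          // how many sampled snapshots passed through this frame
    final Map<String, CallTreeNode> children = new LinkedHashMap<>();

    CallTreeNode(String frame) {
        this.frame = frame;
    }

    // Merge one sampled stack (outermost frame first) into the tree.
    void addStack(List<String> frames) {
        samples++;
        if (frames.isEmpty()) {
            return;
        }
        CallTreeNode child = children.computeIfAbsent(frames.get(0), CallTreeNode::new);
        child.addStack(frames.subList(1, frames.size()));
    }

    // Print the tree with per-node sample counts, hottest paths included.
    void print(String indent) {
        System.out.println(indent + samples + "x " + frame);
        for (CallTreeNode child : children.values()) {
            child.print(indent + "  ");
        }
    }
}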

In addition, Plumbr gives you the full view of the transaction in which the excessive calls took place. Opening the transaction link in the header exposes the following information:

From the above it is clear that the 7,992 milliseconds spent on the 1,050 JDBC calls account for almost all of the roughly eight seconds the transaction took to complete, indicating that these calls are indeed the major source of the problem at hand.


The Solution

Solving the particular issue used in the example involved grouping the similar statements together and executing them as a single batch, reducing the number of statements sent to the database from 1,050 to a mere seven.
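
To make the idea concrete, here is a sketch of that approach, reusing the hypothetical Identity entity from the earlier example: instead of one find() per id, a single query with an IN clause fetches all the rows in one round trip. (For INSERT/UPDATE/DELETE statements, JDBC statement batching via PreparedStatement.addBatch()/executeBatch() achieves the same effect.) This is an assumed sketch, not the actual fix applied in the example application:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

class IdentityLoaderBatched {

    private final EntityManager em;

    IdentityLoaderBatched(EntityManager em) {
        this.em = em;
    }

    // One query with an IN clause replaces hundreds of single-row SELECTs.
    List<Identity> loadAllAtOnce(List<Long> ids) {
        TypedQuery<Identity> query = em.createQuery(
                "select i from Identity i where i.id in (:ids)", Identity.class);
        query.setParameter("ids", ids);
        return query.getResultList();
    }
}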

In general, solving an excessive number of similar operations involves taking one of the following paths:

  • Batching the individual calls together and executing them in one go, as in the sketch above. In such a case, the overhead related to network latency is removed: instead of thousands of individual network round trips, only one remote call is carried out.
  • Reducing the amount of data requested. In situations where the retrieved results are not immediately consumed, the data might not need to be fetched all at once. Paginating large lists so that only the data on the currently active page is fetched is one such situation (see the sketch after this list).
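
A minimal sketch of the pagination path, again reusing the hypothetical Identity entity and assuming a JPA EntityManager: only the rows for the currently visible page are fetched, instead of loading the whole list up front.

import java.util.List;
import javax.persistence.EntityManager;

class IdentityPageLoader {

    private final EntityManager em;

    IdentityPageLoader(EntityManager em) {
        this.em = em;
    }

    // Fetch only the rows needed for the currently active page.
    List<Identity> loadPage(int pageNumber, int pageSize) {
        return em.createQuery("select i from Identity i order by i.id", Identity.class)
                 .setFirstResult(pageNumber * pageSize) // skip the previous pages
                 .setMaxResults(pageSize)               // cap the result at one page
                 .getResultList();
    }
}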