Expensive JDBC Operation Monitoring – private beta
Ever since we repositioned Plumbr from a troubleshooting tool to a JVM monitoring solution, we have been receiving requests to expand the functionality and start tracking other performance bottlenecks. During the past months we have been working on the next major product update in this area and are now ready to share the results.
Say hello to JDBC monitoring. Plumbr Agents can now detect expensive JDBC operations, alert you when such operations are discovered, pinpoint the root cause in the source code and group similar incidents together.
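Under the hood this boils down to measuring how long individual JDBC calls keep a thread waiting. As a rough mental model only (the Plumbr Agent instruments the JDBC layer transparently, so your own code stays untouched), the hypothetical wrapper below shows what "detecting an expensive JDBC operation" amounts to; the class name and threshold are invented for illustration:

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Rough mental model only, not Plumbr's actual agent code: a hand-rolled wrapper
// that times a single JDBC call and reports it when it crosses a latency threshold.
public class TimedQuery {

    // Hypothetical limit for what counts as "expensive"
    private static final long THRESHOLD_MS = 1_000;

    public static ResultSet executeTimed(PreparedStatement statement, String sql) throws SQLException {
        long start = System.nanoTime();
        try {
            return statement.executeQuery();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > THRESHOLD_MS) {
                // A real agent would ship this to the monitoring backend together
                // with the call stack; printing it is enough for the illustration.
                System.err.printf("Expensive JDBC operation (%d ms): %s%n", elapsedMs, sql);
            }
        }
    }
}
```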
We have carefully polished the functionality – during the past months we have analyzed the behaviour of more than 200 different JVMs while instrumenting their JDBC operations. We have tested the functionality with all major database vendor and driver combinations to make sure the instrumentation works smoothly from a technical standpoint. This also included thorough performance testing to make sure our instrumentation carries a low enough overhead to justify production deployments.
The main challenge for us was to build a solution where the severity and root cause of the incident would be clearly communicated. When application threads are blocked on external I/O, we wanted to make the incident transparent along several dimensions (a rough sketch of how such data might be captured follows the list):
- Has this happened before or is this an isolated incident?
- How severe is the particular incident and what is the total impact on end users?
- What was the call stack of the thread when it was blocked for the expensive call?
- What was the SQL being executed?
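Answering these questions means capturing the context at the moment the thread is blocked and grouping similar incidents together. The sketch below illustrates the idea only (the class and its fields are hypothetical, not Plumbr's data model): it keys incidents by SQL text plus call site, and accumulates the occurrence count and total blocked time:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustration only, not Plumbr's data model: group expensive JDBC calls by
// SQL text plus call site so recurring incidents can be counted and their
// total impact on end users accumulated.
public class IncidentRegistry {

    // One entry per (SQL, call site) pair
    private final Map<String, Stats> incidents = new ConcurrentHashMap<>();

    public void record(String sql, long elapsedMs) {
        // The call stack tells us where in the source code the thread was blocked
        StackTraceElement[] stack = Thread.currentThread().getStackTrace();
        String callSite = Arrays.stream(stack)
                .filter(frame -> !frame.getClassName().startsWith("java.")
                        && !frame.getClassName().equals(IncidentRegistry.class.getName()))
                .findFirst()
                .map(StackTraceElement::toString)
                .orElse("unknown");

        incidents.computeIfAbsent(sql + " @ " + callSite, key -> new Stats())
                 .add(elapsedMs);
    }

    static final class Stats {
        final LongAdder occurrences = new LongAdder(); // "Has this happened before?"
        final LongAdder totalMs = new LongAdder();     // "What is the total impact?"

        void add(long elapsedMs) {
            occurrences.increment();
            totalMs.add(elapsedMs);
        }
    }
}
```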
We succeeded in building a solution that answers all the above questions. In the following screenshot you see a situation where Plumbr:
- detected an expensive JDBC operation blocking a thread for close to 9 seconds;
- confirmed that this was a recurring issue (the very same operation had stalled 127 times, for a total of 23 minutes and 31 seconds);
- showed that each wait occurred while the very same SQL query was being executed:
Equipped with this information, you can quickly triage the issue based on the severity of the problem. The next step is finding the actual root cause, where we again equip you with the necessary information:
From the above you can see that all 127 expensive JDBC operations were executing the very same query against the very same datasource. Also, you can immediately drill down to the single line in the source code executing the query – in this particular case the culprit was a prepared statement executed by the eu.plumbr.portal.incident.history.JpaProblemHistoryDao.findAccountProblems() method on line 74.
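To make that last step concrete, the flagged method could look something like the hypothetical reconstruction below (the real eu.plumbr.portal classes are not public, so the entity and query are invented for illustration). The point is that a single innocuous-looking JPA call translates into the prepared statement that was timed:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import java.util.List;

// Hypothetical reconstruction; it only illustrates the *kind* of code a
// Plumbr incident points you to, not the actual Plumbr portal source.
@Entity
class ProblemHistory {
    @Id Long id;
    Long accountId;
}

public class JpaProblemHistoryDao {

    @PersistenceContext
    private EntityManager em;

    public List<ProblemHistory> findAccountProblems(Long accountId) {
        // The JPQL below is translated into a SQL prepared statement and executed
        // over JDBC; that execution is the call that gets timed and attributed
        // to a single line in this method.
        return em.createQuery(
                "SELECT p FROM ProblemHistory p WHERE p.accountId = :accountId",
                ProblemHistory.class)
            .setParameter("accountId", accountId)
            .getResultList();
    }
}
```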
The above example illustrates the insights you start receiving from your application when using Plumbr's Expensive JDBC Operation monitoring. As we have already detected thousands of expensive JDBC operations in hundreds of different JVMs, I can only recommend finding out how your own JVMs are performing in this regard. Apply for our private beta program and start monitoring your JVMs for those evil JDBC calls!