

Looking back to 2012

January 3, 2013 by Ivo Mägi Filed under: Plumbr

This is going to be one of our rare non-technical posts. We just closed our first year and did an internal review of what we managed to deliver throughout 2012. So, our technical readers – do not be afraid. We will continue with in-depth technical articles throughout 2013.

When we were done with the Plumbr 2012 review, it looked like something we could also publish openly. So here we go – hopefully those of you building your own tech company can learn a thing or two from the next 900 words.

What did we accomplish in 2012?

Most importantly – we got strong validation from our users and customers. The problem we are solving is real, and we do reduce the pain of solving memory leaks.

But in more detail:

  • The number of supported platforms grew significantly. We started the year with only Windows and Linux boxes on our test matrix, running Tomcat and Jetty. Now the test matrix covers a whole variety of platforms, including an ancient Solaris box, which also takes care of heating the office.
  • Leak detection precision grew. Or, to be fully honest, we did not even know how to measure leak detection quality at first – not until the early September releases, when our machine learning experts defined the decision quality metric and started to measure it properly. Now we can say that we find nearly 9 out of 10 memory leaks out there, and only up to 13% of our Plumbr leak reports are false positives.
  • Overhead was reduced. Again, the year started with not-so-systematic guesses about the overhead. By now we have metrics in place tracking the overhead imposed by Plumbr, and we can safely estimate that with typical heaps (500MB – 4GB) your application consumes 5-20% more heap with Plumbr attached. We do not have enough data to draw conclusions for smaller and larger heaps, but it seems that the larger the heap, the smaller the overhead percentage we add. The CPU cycles burned when tracing all object creation and destruction were also reduced significantly. Throughput-wise, you will face 20-30% overhead on object creation.
  • We started the year with just three of us and finished with a team of six. By now we also have a data mining specialist, a UX specialist and a marketing specialist on board, in addition to the founders.
  • We started selling Plumbr. It is possible to buy a single or floating license directly from our home page, and we have closed the first enterprise deals. Our CEO did not allow me to publish the exact number of paying customers, but he did allow me to thank all of you. Early adopters make new products like Plumbr possible.
  • The content we published proved to be interesting. The most popular posts in 2012 gathered more than 10,000 unique readers. Which, considering there are 8,311,000 Java developers in the world, is not too bad.
  • Plumbr had been downloaded by 5,000 different users by the end of 2012. Not quite in the Apache Tomcat league yet, as Tomcat seems to have been downloaded more than 7,000,000 times last year. But beware, our numbers are growing!
  • 800 memory leaks were detected by Plumbr. We would hesitate a minute before drawing a quick conclusion from the numbers 5,000 and 800 and saying out loud that one in six applications out there is leaking memory. Not everyone who downloaded Plumbr actually used it, which makes the sample even smaller. What we learned is that many of our customers actually turned to us only after they had faced performance problems.
  • We have gathered 400,000+ anonymous memory statistics snapshots from thousands of different applications. Those snapshots are used to train our leak detection algorithms. With the training data set growing on a weekly basis, we are sure to deliver even better algorithms next year.
  • And most importantly – our customers have verified that we are solving a truly painful problem. Thank you for the kind emails, tweets and Skype calls. It gives us a lot more confidence going forward.

What to expect from us in 2013?

  • Further improvements in precision. Our goal is to finish 2013 with at least 93% precision on leak detection and less than 10% false positives.
  • Less overhead. We cannot state the goals out loud on this one yet. But one of our core engineers has been mumbling for weeks about some clever tricks he could pull out of his sleeve. We do have faith in him, so prepare for some good news in this regard.
  • More supported platforms. We do have a bunch of work to do on our testbed – there are still many application servers that we have not managed to add to our nightly build test cycles, as well as additional JDK versions.
  • Next problem domain(s). The most intriguing challenge for us is to expand the product offering. We want to keep Plumbr the “automated performance consultant” that gives advice and proof. We do not aim to push a ton of data at you that then needs to be interpreted by a performance guru. We acknowledge that we need to do a lot of research before we can announce the next problem domains that Plumbr can solve for you. But the experience with memory leaks has been pure joy, and we are hungry to do more. Is there anything in particular that you miss in the world of Java application performance troubleshooting / tuning? Let us know in the comments below or at support@plumbr.io
