How to read the revamped Plumbr reports
Creating a positive feedback loop is an important goal for Plumbr.
Engineers contribute to the codebase, which is deployed to production where the software is used. The outcome of that usage needs to be made visible to engineers, so that they can understand the experience they are providing to their users. Unless this loop comes full circle, engineers cannot understand the implications of their own work.
One of the ways we complete the loop is by providing monthly reports from Plumbr.
When engineers see data in Plumbr, they see it in the context of a filter. Data is collected and processed from every application that Plumbr is attached to, then filtered and presented by type (interactions/API calls) and date-time range. Within this context set by the user, the information from each application/service is presented in the product. When creating such a context/filter, users can opt to receive a monthly report for the same filter.
Monthly reports contain consolidated information from Plumbr.
How to create a report?
Reports are based on filters that you create. By creating a filter, you can automatically enable monthly reports to be sent based on the filter.
Here are the steps:
- To create a filter, click on the Filter menu.
- Then, click on the “Create new filter” button at the bottom.
- You will now be shown the various options available for creating a filter.
- You can choose to create a filter based on user interactions or APIs.
- You can choose which applications will be a part of your filter.
- There is a checkbox labeled “REPORT”. By checking the box, you will effectively be creating the corresponding monthly report as well.
- Once you’ve selected the options, click “Save filter” to finish creating the filter.
Alternatively, you can subscribe to reports from existing filters under Settings > Manage filters. If you’ve opted for a report when creating a filter, Plumbr will send a report summarizing the data corresponding to that filter. These reports contain separate sections for Availability and Performance.
The consolidated numbers are visible on the top with individual breakdowns in the sections that follow.
Plumbr defines users as the number of unique users who used your application during the month. An ‘interaction’ is any click, scroll, tap, or keystroke that triggered a server-side call from your application.
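These two definitions can be illustrated with a small sketch. Note that the event structure and field names below are hypothetical, chosen for illustration only, and are not Plumbr's actual data model:

```python
# Hypothetical event log: each entry is a user action (click, scroll, tap,
# or keystroke) that triggered a server-side call during the month.
events = [
    {"user_id": "u1", "action": "click"},
    {"user_id": "u1", "action": "scroll"},
    {"user_id": "u2", "action": "tap"},
    {"user_id": "u3", "action": "keystroke"},
    {"user_id": "u2", "action": "click"},
]

# "Users" = number of unique users who used the application this month.
monthly_users = len({e["user_id"] for e in events})

# "Interactions" = every triggering action counts, including repeats.
interactions = len(events)

print(monthly_users, interactions)  # 3 unique users, 5 interactions
```

The key distinction is that one user typically generates many interactions, which is why the report presents both numbers side by side.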
Information about Availability
The major questions that we’re answering with this section are:
1. What is the degree to which we’re affected?
2. To what degree are we fixing these defects?
3. Are we fixing the right things?
4. At what pace are the fixes being made?
The first section of the report contains information about the availability of the application. Two important pieces of information are represented.
1. The total number of errors that were observed
2. The number of fixes that were made during the month
These numbers are further explained in the context of their impact: the number of users who experienced errors during the month and the consequent number of affected interactions are shown. The next row shows the number of errors mitigated in the same period, giving the other side of the picture: the results of the efforts made by the team.
Two tables showing breakdowns of the same data follow. The first lists unresolved errors currently impacting the most users accessing the application, giving an idea of what the team should take up as top priority during the next iteration/sprint. This table also reflects the results of your team's efforts. Its columns show the volume of the impact and the duration for which each error has been affecting users.
The final part of the availability report covers trends: the same data points over the past months, which puts the health of the application and the efficiency of the team into perspective. Twelve-month trends of errors, fixes, and impacted interactions are shown in two separate sections, enabling a comparative analysis of application behaviour.
Information about Performance
Plumbr performance reports are designed to show how much time your users spend waiting for your applications to respond. For the same filters, the performance section shows which parts of the application make your users wait the most. Making users wait beyond acceptable levels gives the impression of a poorly responding application, and this part of the report helps you gauge how severe that problem is for your users.
The first part of the report* tells you the total fraction of time your users spent idle, waiting for the application to respond. It is calculated by dividing the total waiting time by the total duration of all sessions. This section also gives the response times for the median (50th percentile) and the 99.9th percentile. What do these mean? These values help you understand what the experience is like for your users: the median is the best single-value summary for a large dataset, while the 99.9th percentile communicates an extreme value, showing the range across which waiting times are spread.
* note: This is for reports based on user interactions only.
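The idle fraction and the two percentiles described above can be computed as follows. The wait times and session duration here are made-up sample values, and the 99.9th-percentile lookup uses Python's standard `statistics` module rather than Plumbr's internal method:

```python
from statistics import median, quantiles

# Hypothetical per-interaction wait times in seconds (not real Plumbr data).
wait_times = [0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.8, 1.2, 2.5, 9.0]

# Assumed combined duration of all user sessions, in seconds.
total_session_seconds = 600.0

# Fraction of session time users spent idle, waiting for a response.
idle_fraction = sum(wait_times) / total_session_seconds

# Median (50th percentile): the best single-value summary of the dataset.
p50 = median(wait_times)

# quantiles(n=1000) returns 999 cut points; index 998 is the 99.9th percentile,
# an extreme value showing how far the tail of waiting times stretches.
p999 = quantiles(wait_times, n=1000)[998]

print(f"idle fraction: {idle_fraction:.1%}")
print(f"median wait: {p50:.2f}s, 99.9th percentile: {p999:.2f}s")
```

With so few samples the median (0.55 s) and the 99.9th percentile differ sharply, which is exactly why the report shows both: averages alone would hide the worst-case experience.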
The section that follows shows you a table with the performance issues that need your attention the most. These are the issues in your application that are causing your users to wait the longest. Ideally, you should be resolving the issues within a short time period of when they first appear, before they affect a large number of users, or cause large amounts of waiting time. The three columns reflect this information and tell you if your engineering team has been resolving poorly responding parts of the application quickly enough with minimum impact.
Finally, two charts show trends over time. The first chart shows the total number of hours your users spent waiting for the application to respond. This data is split month-over-month for the past year. The second chart shows the median and 99.9th percentile waiting times of your application. These trends reflect the improvements (or degradations) in your application over the past year. You should be able to tell if the latest changes to your application are within acceptable levels of quality for your users.
Our mission at Plumbr is to help make your software faster and more reliable. We would like to help you avoid wasting your users’ time and make your software more useful for them. Understanding our reports is one critical step towards achieving this. Should you have any questions or need further clarification of any aspect, please email us at email@example.com.