The 4 Operational Metrics You Should Be Tracking

Living in a data-rich world is a blessing and a curse. Flexible monitoring systems, open APIs, and easy data visualization resources make it simple to graph anything you want, but too much data quickly becomes noisy and un-actionable.

We’ve blogged, spoken, and thought hard about what you should monitor and why from a systems perspective, but what about monitoring data on your operations performance? We worked with a large number of PagerDuty customers as we built out our new Advanced Reporting feature, including some of the most sophisticated operations teams out there. We’d like to share some specific metrics and guidelines that help teams measure and improve their operational performance.

Top Metrics to Track

1. Raw Incident Count

A spike or continuous upward trend in the number of incidents a team receives tells you two things: either that team’s infrastructure has a serious problem, or their monitoring tools are misconfigured and need adjustment.

Incident counts may rise as an organization grows, but real incidents per responder should stay constant or move downward as the organization identifies and fixes low-quality alerts, builds runbooks, automates common fixes, and becomes more operationally mature.

“We were spending lots of time closing down redundant alerts.” – Kit Reynolds, IS Product Manager, thetrainline.com

When looking at incidents, it’s important to break them down by team or service, and then drill into the underlying incidents to understand what is causing problems. Was that spike on Wednesday due to a failed deploy that caused issues across multiple teams, or just a flapping monitoring system on a low-severity service? Comparing incident counts across services and teams also helps to put your numbers in context, so you understand whether a particular incident load is better or worse than the organization average.
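The breakdown above can be sketched in a few lines. This is a minimal, hypothetical example (the services, teams, and week labels are invented, not real PagerDuty data): count incidents per service and per week so spikes stand out against each service’s baseline.

```python
from collections import Counter

# Hypothetical incident records: (service, team, ISO week the incident opened)
incidents = [
    ("checkout-api", "payments", "2014-W33"),
    ("checkout-api", "payments", "2014-W33"),
    ("search", "discovery", "2014-W33"),
    ("checkout-api", "payments", "2014-W34"),
]

# Count incidents per (service, week); a sudden jump for one service points
# to that service, while a jump across many services suggests a shared cause
# like a failed deploy.
counts = Counter((svc, week) for svc, _, week in incidents)
for (svc, week), n in sorted(counts.items()):
    print(f"{week}  {svc}: {n}")
```

The same grouping by team (the second field) gives the per-team view for comparing against the organization average.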

2. Mean Time to Resolution (MTTR)

Time to resolution is the gold standard for operational readiness. When an incident occurs, how long does it take your team to fix it?

Downtime hurts not only your revenue but also customer loyalty, so it’s critical to make sure your team can react quickly to all incidents. Major League Soccer’s fans expect its 20 web properties to be up during live matches. Justin Slattery, Director of Engineering, and his team are constantly working to improve their resolution times because “the cost of an outage in the middle of a game is incalculable.”

While resolution time is important to track, it’s hard to benchmark: companies will see variances in TTR based on the complexity of their environment, the way teams and infrastructure responsibility are organized, industry, and other factors. However, standardized runbooks, infrastructure automation, and reliable alerting and escalation policies will all help drive this number down.
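The calculation itself is simple. Here is a minimal sketch with hypothetical timestamps (PagerDuty’s API exposes created/resolved times per incident, but the data below is invented): MTTR is just the mean of resolved-at minus created-at across resolved incidents.

```python
from datetime import datetime

# Hypothetical (created_at, resolved_at) pairs pulled from an incident log
incidents = [
    ("2014-08-24T10:00:00", "2014-08-24T10:45:00"),  # 45 minutes
    ("2014-08-24T12:00:00", "2014-08-24T12:15:00"),  # 15 minutes
]

def mttr_minutes(incidents):
    """Mean time to resolution, in minutes, across resolved incidents."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, end in incidents
    ]
    return sum(durations) / len(durations)

print(mttr_minutes(incidents))  # 30.0
```

Segmenting this per team or per service, as with raw counts, makes the number comparable across your organization even when it’s hard to benchmark against other companies.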

3. Time to Acknowledgement / Time to Response

This is the metric most teams forget about: the time it takes a team to acknowledge and start work on an incident.

“Time to Respond is important because it will help you identify which teams and individuals are prepared for being on-call. Fast response time is a proxy for a culture of operational readiness, and teams with the attitude and tools to respond faster tend to have the attitude and tools to recover faster.” – Arup Chakrabarti, Operations Manager, PagerDuty

While an incident responder may not always have control over the root cause of a particular incident, one factor they are 100% responsible for is their time to acknowledgement and response. Operationally mature teams have high expectations for their team members’ time to respond, and hold themselves accountable with internal targets on response time.

If you’re using an incident management system like PagerDuty, an escalation timeout is a great way of enforcing a response time target. For example, if you decide that all incidents should be responded to within 5 minutes, then set your timeout to 5 minutes to make sure the next person in line is alerted. To gauge the team’s performance, and determine whether your target needs to be adjusted, you can track the number of incidents that are escalated.
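A sketch of how you might measure performance against such a target, using hypothetical acknowledgement data (the numbers are invented): compare each incident’s time-to-acknowledge against the 5-minute target, treating unacknowledged incidents as ones that escalated past the timeout.

```python
# Hypothetical per-incident data: seconds until first acknowledgement.
# None means no one acknowledged before the timeout, so the incident escalated.
ack_seconds = [45, 120, None, 290, 600, None]

TARGET = 5 * 60  # 5-minute response target, mirroring the escalation timeout

within_target = sum(1 for s in ack_seconds if s is not None and s <= TARGET)
escalated = sum(1 for s in ack_seconds if s is None)

print(f"acked within target: {within_target}/{len(ack_seconds)}")
print(f"escalated past timeout: {escalated}/{len(ack_seconds)}")
```

If most incidents are acknowledged comfortably inside the window, the target may be too loose; if many escalate past the timeout, it may be too aggressive for the team’s current tooling.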

4. Escalations

For most organizations using an incident management tool, an escalation is an exception – a sign that either a responder wasn’t able to get to an incident in time, or that he or she didn’t have the tools or skills to work on it. While escalation policies are a necessary and valuable part of incident management, teams should generally be trying to drive the number of escalations down over time.

There are some situations in which an escalation will be part of standard operating practice. For example, you might have a NOC, first-tier support team or even auto-remediation tool that triages or escalates incoming incidents based on their content. In this case, you’ll want to track what types of alerts should be escalated, and what normal numbers should look like for those alerts.
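One way to separate expected from unexpected escalations is to track the escalation rate per alert type against a known baseline. A minimal sketch with invented alert types and counts:

```python
# Hypothetical counts over a reporting period: alert type -> (escalated, total)
escalations = {
    "disk-full": (2, 40),
    "noc-triage": (30, 32),   # escalation is standard operating practice here
    "deploy-failed": (1, 25),
}

# Alert types where escalation is expected as part of normal practice
EXPECTED = {"noc-triage"}

for alert, (esc, total) in sorted(escalations.items()):
    rate = esc / total
    if alert in EXPECTED:
        status = "expected"
    elif rate > 0.10:          # example threshold; tune to your own baseline
        status = "investigate"
    else:
        status = "ok"
    print(f"{alert}: {rate:.0%} escalated ({status})")
```

The threshold and the expected set are assumptions you would tune per team; the point is that a high escalation rate is only a red flag for alert types where escalation isn’t part of the designed workflow.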

Track your Operations Performance with PagerDuty

“Before PagerDuty, it might take a day to respond to incidents. Now, it’s seconds.” – Aashay Desai, DevOps, Inkling.

PagerDuty has always supported extracting rich incident data through our full-coverage API, and we’ve also offered limited in-app reporting to all customers.



  • Dan Slimmon
    Posted at 3:07 pm, August 24, 2014

    Great post. Succinct and thoughtful. Most people (myself included) don’t pay enough attention to the health of their monitoring system itself, especially as it relates to the humans in the organization. This is important stuff.

    I’m wary, though, of oversimplifying the meaning of Mean-Time-To-Resolution. Yes, an ideal MTTR would be zero: every incident would end at the same instant it began. And since our MTTR is always higher than zero, it’s tempting to think that we should always be trying to drive that number down.

    But sometimes we make unambiguously good changes to our alerting that raise our MTTR! Getting rid of non-actionable probes, for example. We want every alert to signify an issue that requires human investigation, and that we haven’t seen before. But as we approach that goal by automating repairs and pruning useless alerts, it takes us longer and longer on average to investigate and resolve the issues we’re notified of. Our MTTR goes up as a result of progress.

    This just serves as a reminder that a trend in a summary statistic, like MTTR, needs to be examined in context. If your MTTR is trending down, make sure it’s not just because your raw incident count is skyrocketing!


  • Posted at 3:35 pm, August 26, 2014

    Great point re: putting MTTR in context. We definitely want to be reducing our time to resolve similar or identical alerts, as we share knowledge and automate common fixes or preventative measures. But as we improve our systems and start fixing some issues before they actually become alertable, MTTR on alertable incidents will rise.

