Best practices to make your metrics meaningful in PagerDuty

by Julie Arsenault September 11, 2014 | 4 min read

This post is the second in our series about how you can use data to improve your IT operations. Our first post was on alert fatigue.

A few weeks ago, we blogged about the key performance metrics that top Operations teams track. As we've spoken with our beta testers for Advanced Reporting, we've learned quite a bit about how teams measure Mean Time to Acknowledge (MTTA) and Mean Time to Resolve (MTTR). The way your team uses PagerDuty can have a significant impact on how these metrics look, so we wanted to share a few best practices to make them meaningful.

1. Develop guidelines for acknowledging incidents

The time it takes to respond to an incident is a key performance metric. To make your response time meaningful in PagerDuty, we recommend acknowledging an incident when you begin working on it. This practice is even more important if you're on a multi-user escalation policy: we've just released an update so that once you acknowledge an incident, your teammates are notified that they no longer need to worry about the alert.

Many high-performing Operations teams set targets for Ack time because it is one metric teams typically have a lot of control over. PagerDuty’s Team Report can show you trends in your TTA so you can see whether you are falling within your targets, and how the TTA varies with incident count.
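If you export your incident timestamps (from a report or however you pull data out of PagerDuty), checking an ack-time target is simple arithmetic. The sketch below is illustrative only: the timestamps and the five-minute target are made-up values, not anything from your account.

```python
from datetime import datetime, timedelta

# Hypothetical (created_at, acknowledged_at) pairs exported from your incident data.
incidents = [
    ("2014-09-01T02:14:00", "2014-09-01T02:16:30"),
    ("2014-09-01T09:40:00", "2014-09-01T09:52:10"),
    ("2014-09-02T17:05:00", "2014-09-02T17:07:45"),
]

TARGET = timedelta(minutes=5)  # example ack-time target, pick your own

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")

ack_times = [parse(acked) - parse(created) for created, acked in incidents]
mean_tta = sum(ack_times, timedelta()) / len(ack_times)
within_target = sum(1 for t in ack_times if t <= TARGET) / len(ack_times)

print("Mean time to acknowledge:", mean_tta)
print("Incidents acked within target: {:.0%}".format(within_target))
```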

2. Define when to resolve

We recommend resolving incidents when they are fully closed and the service has resumed fully operational status. If you’re using an API integration, PagerDuty will automatically resolve incidents when we receive an “everything is OK” message from the service. However, if you’re resolving incidents manually, make sure your team knows to resolve incidents in PagerDuty when the problem is fixed. To make incident resolution even easier, we’ll soon be releasing an update to our email-based integrations to auto-resolve incidents from email.
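How that "everything is OK" message gets sent depends on your monitoring tool, but the idea is the same across integrations: the trigger and the resolve share an incident key, so the resolve closes the incident the trigger opened. Here is a minimal sketch against the generic Events API endpoint; the service key, incident key, and descriptions are placeholders, and your own integration may already do all of this for you.

```python
import json
import urllib.request

# Generic Events API endpoint (check your integration's documentation for the exact URL).
EVENTS_URL = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
SERVICE_KEY = "YOUR_SERVICE_INTEGRATION_KEY"  # placeholder, not a real key

def send_event(event_type, incident_key, description):
    """Post a trigger or resolve event; the shared incident_key ties them together."""
    payload = {
        "service_key": SERVICE_KEY,
        "event_type": event_type,      # "trigger" or "resolve"
        "incident_key": incident_key,  # de-duplication key for the incident
        "description": description,
    }
    req = urllib.request.Request(
        EVENTS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

# The monitoring check fires when something breaks...
send_event("trigger", "disk-usage/db01", "Disk usage above 90% on db01")
# ...and sends the "everything is OK" message once it recovers, which
# resolves the same incident because the incident_key matches.
send_event("resolve", "disk-usage/db01", "Disk usage back below 90% on db01")
```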

3. Use timeouts carefully

When you configure a service's settings, you can set two timeouts: the incident ack timeout and the auto-resolve timeout. Both can have an impact on your MTTA and MTTR metrics, so it's important to understand how they are configured.

An incident ack timeout provides a safety net if an alert wakes you up in the middle of the night and you fall back asleep after acknowledging it. Once the timeout is reached, the incident re-opens and notifies you again. If falling asleep after acking an incident is a real problem for your team, keep the incident ack timeout in effect, but be aware that each re-opened incident can make your MTTA metrics harder to interpret. The incident ack timeout can be configured independently for each service, and the default setting is 30 minutes.

If you're not in the habit of resolving incidents when the work is done, auto-resolve timeouts will close incidents that have been forgotten. This timeout is also configurable in the service settings, and the default is 4 hours. If you're using this timeout, make sure it is longer than the time it takes to resolve most of your incidents (you can use our System or Team Reports to see your incident resolution times). To make sure you don't forget about open incidents, PagerDuty will also send you an email every 24 hours if you have incidents that have been open for longer than a day.
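One quick sanity check is to compare the auto-resolve timeout against how long your incidents actually take to close. The sketch below uses made-up resolution durations; in practice you would pull the real numbers from the System or Team Report.

```python
from datetime import timedelta

# Hypothetical resolution times (created -> resolved) pulled from a report export.
resolution_times = [
    timedelta(minutes=12), timedelta(minutes=35), timedelta(minutes=48),
    timedelta(hours=1, minutes=10), timedelta(hours=2, minutes=5),
    timedelta(hours=3, minutes=40),
]

AUTO_RESOLVE_TIMEOUT = timedelta(hours=4)  # the default setting

# 95th percentile (nearest-rank) of resolution time.
ordered = sorted(resolution_times)
p95 = ordered[max(0, int(round(0.95 * len(ordered))) - 1)]

print("95th percentile resolution time:", p95)
if p95 >= AUTO_RESOLVE_TIMEOUT:
    print("Warning: the auto-resolve timeout would close incidents that are still being worked.")
else:
    print("The auto-resolve timeout comfortably exceeds typical resolution time.")
```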

4. Treat flapping alerts

A flapping alert is one that triggers, then resolves quickly afterward. Flapping typically happens when the metric being monitored hovers around a threshold. Flapping alerts can clutter your MTTA and MTTR metrics: on the Team Report, you may see a high number of alerts with a very low resolution time, or a resolve time lower than the ack time (auto-resolved incidents never get acked). It's a good idea to investigate flapping alerts since they contribute to alert fatigue (not to mention annoyance); many times they can be cured by adjusting the threshold. For more resources on flapping alerts, check out these New Relic and Nagios articles.
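Beyond simply moving the threshold, a common cure is hysteresis: trigger above one threshold and only clear below a lower one, so a metric hovering near a single line can't flip the alert back and forth. The sketch below is a generic illustration with made-up thresholds and samples; it isn't tied to any particular monitoring tool.

```python
# Hysteresis: separate trigger and clear thresholds so a metric hovering
# around a single value does not flap the alert on and off.
TRIGGER_AT = 90.0  # fire when the metric rises above this
CLEAR_AT = 80.0    # only resolve once it drops back below this

def update_alert_state(alerting, value):
    """Return the new alert state for one sample of the metric."""
    if not alerting and value > TRIGGER_AT:
        return True    # this is where you would send a "trigger" event
    if alerting and value < CLEAR_AT:
        return False   # this is where you would send a "resolve" event
    return alerting    # otherwise, hold the current state

# A metric hovering around 90 with a single threshold would flap;
# with hysteresis it triggers once and stays triggered until it truly recovers.
samples = [88, 91, 89, 92, 90, 85, 79, 78]
state = False
for value in samples:
    state = update_alert_state(state, value)
    print(value, "->", "ALERTING" if state else "ok")
```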