
Let's talk about Alert Fatigue

by Julie Arsenault | September 3, 2014 | 5 min read

This is the first post in our series on how you can use data to improve your IT operations. The second post covers best practices for making your metrics meaningful in PagerDuty.

Alert fatigue is a problem that’s not easy to solve, but there are things you can start doing today to make it better. Using data about your alerts, you can seriously invest in cleaning up your monitoring systems and preventing non-actionable alerts.

To help, we’ve compiled a 7-step process for combatting alert fatigue.

Reducing Alert Fatigue in 7 Steps

1. Commit to action

Cleaning up your monitoring systems is hard, and it’s easy to become desensitized to high alert levels. But the first step is to decide to do something about it. Take a quick look at your data. How many alerts are you getting off-hours, and what’s the impact of those on the team?
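If you can export your incident data, a quick script can answer those questions. Here’s a rough sketch that counts off-hours alerts, assuming a CSV export with an ISO-8601 created_at column (a hypothetical field name) and 9-to-6 weekday business hours; adjust both to match your own data:

    # Rough count of off-hours alerts from an incident export.
    # Assumes a CSV with an ISO-8601 "created_at" column (hypothetical field
    # name); adjust the column name and business hours to match your data.
    import csv
    from datetime import datetime

    BUSINESS_START, BUSINESS_END = 9, 18  # local business hours

    def count_off_hours(path):
        total = off_hours = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                created = datetime.fromisoformat(row["created_at"].replace("Z", "+00:00"))
                total += 1
                # Weekend, or outside business hours on a weekday
                if created.weekday() >= 5 or not (BUSINESS_START <= created.hour < BUSINESS_END):
                    off_hours += 1
        return total, off_hours

    total, off_hours = count_off_hours("incidents.csv")
    print(f"{off_hours} of {total} alerts arrived outside business hours")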

Then, as a team, commit time to cleaning up your alerting workflows. Etsy designated a “hack week” to tackle their big monitoring hygiene problem, but setting aside a few hours a week or one day each month could also work.


2. Cut alerts that aren’t actionable & adjust thresholds

Start by reviewing your most common alerts (Hint: you can drill into incidents in PagerDuty’s new Advanced Reports). Gather the people who were on call recently, and for each alert, determine whether it was actionable.

Once you find non-actionable alerts, cut them.

It’s common to monitor and alert on CPU and memory usage because these are indicators that something is wrong. However, the metrics by themselves are NOT actionable because they don’t give specific information about what’s wrong. Etsy stopped monitoring these metrics, and focused instead on checks that gave more specific, actionable information.

You may also need to adjust the thresholds on your checks. Dan Slimmon from Exosite shared a great talk “Smoke Alarms and Car Alarms”, which details how two concepts from medical testing can help you alert only when there is a problem. The concepts are sensitivity and specificity, and together they give you a positive predictive value (PPV) – the likelihood that something is actually wrong when an alert goes off. The talk also shares strategies for improving your PPV using hysteresis (looking at historical values in addition to current values), as well as other techniques.
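To make the hysteresis idea concrete, here’s a small illustrative sketch (not code from the talk) of a check that only fires after a metric has stayed above its trigger threshold for several consecutive samples, and only clears after it has stayed below a lower threshold for just as long:

    # Illustrative hysteresis-style check: alert only after a sustained breach,
    # clear only after a sustained recovery. Thresholds and window size are
    # made-up examples, not recommendations.
    from collections import deque

    class HysteresisCheck:
        def __init__(self, trigger_at=90.0, clear_at=75.0, samples=5):
            self.trigger_at = trigger_at
            self.clear_at = clear_at
            self.history = deque(maxlen=samples)
            self.alerting = False

        def observe(self, value):
            self.history.append(value)
            if len(self.history) < self.history.maxlen:
                return self.alerting  # not enough data yet
            if not self.alerting and all(v >= self.trigger_at for v in self.history):
                self.alerting = True   # sustained breach, not a momentary spike
            elif self.alerting and all(v <= self.clear_at for v in self.history):
                self.alerting = False  # sustained recovery before standing down
            return self.alerting

    check = HysteresisCheck()
    for value in [85, 92, 95, 91, 93, 94]:
        print(value, check.observe(value))  # only the last sample trips the alert

Requiring the breach to persist filters out the momentary spikes that drag down your PPV.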

3. Save non-severe incidents for the morning

While all alerts are important, some may not be urgent. These non-urgent issues shouldn’t be waking you or your team up in the middle of the night. Consider creating separate workflows for non-severe incidents so they don’t interrupt your sleep or your workday. In PagerDuty, don’t forget to disable “Incident Ack Timeout” and “Incident Auto-Resolution” on low-severity services.
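If you send events into PagerDuty through the generic Events API, one way to set this up is to point low-urgency checks at a separate, non-paging service. Here’s a minimal sketch; the endpoint follows the generic events integration, and the two service keys are placeholders for integration keys from your own services:

    # Route events to a paging or non-paging service based on urgency.
    # The service keys below are placeholders for your own integration keys.
    import json
    import urllib.request

    EVENTS_URL = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
    HIGH_URGENCY_KEY = "YOUR_HIGH_URGENCY_SERVICE_KEY"  # pages on-call right away
    LOW_URGENCY_KEY = "YOUR_LOW_URGENCY_SERVICE_KEY"    # reviewed in the morning

    def send_event(description, severe):
        payload = {
            "service_key": HIGH_URGENCY_KEY if severe else LOW_URGENCY_KEY,
            "event_type": "trigger",
            "description": description,
        }
        request = urllib.request.Request(
            EVENTS_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    # A replica lag warning can wait until morning; a primary outage cannot.
    # send_event("Read replica lag above 5 minutes", severe=False)
    # send_event("Primary database unreachable", severe=True)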

4. Consolidate related alerts

When something goes wrong, you may get several alerts related to the same problem. Take advantage of monitoring dependencies if you can set them, and leverage our best practices for alert consolidation in PagerDuty:

  • Use an incident key to tell PagerDuty that certain events are related (see the sketch after this list). For example, if you have multiple servers that go down, each individual one may generate a notification to PagerDuty. However, if those notifications all have the same incident key, we’ll consolidate the notifications into one alert that tells you 30 servers are down.
  • During an alert storm, PagerDuty will also bundle alerts that are triggered after the first incident. For example, if 10 incidents are triggered within the space of 1 minute, after your first alert, you’ll receive a single, aggregated alert.
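Here’s roughly what the incident key approach from the first bullet looks like against the generic Events API; the service key, incident key, and host names below are placeholders for your own setup:

    # The same incident key across many per-host events rolls them up into a
    # single PagerDuty incident. The service key is a placeholder.
    import json
    import urllib.request

    EVENTS_URL = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
    SERVICE_KEY = "YOUR_SERVICE_KEY"

    def trigger(description, incident_key):
        payload = {
            "service_key": SERVICE_KEY,
            "event_type": "trigger",
            "incident_key": incident_key,  # same key, so events roll up together
            "description": description,
        }
        request = urllib.request.Request(
            EVENTS_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    # Thirty hosts failing the same check can report under one incident key, so
    # on-call sees one incident saying 30 servers are down instead of 30 pages:
    # for host in ["web01", "web02", "web03"]:  # ...and so on
    #     trigger(f"{host}: health check failed", incident_key="web-fleet-healthcheck")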

5. Give alerts relevant names & descriptions

Nothing sucks more than getting an alert that something is broken with no information to help you gauge the severity of the issue or figure out what to do next.

  • Give your alerts descriptive names. If you’re reporting a metric (e.g., disk space used), make sure there’s enough context around the number to let someone put it in perspective. Is disk space 80% full, or 99%?
  • Include relevant troubleshooting information in the alert description, like a link to existing documentation or runbooks that will help the team dig deeper. In PagerDuty, you can add a client_url to the incident, or include a runbook link in the service description.
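Putting both tips together, a more useful trigger payload might look something like this; the field names follow the generic Events API, while the host, numbers, and runbook URL are made up for illustration:

    # Roughly what a more descriptive trigger payload could look like. The
    # host, numbers, and runbook URL are invented for illustration.
    event = {
        "service_key": "YOUR_SERVICE_KEY",
        "event_type": "trigger",
        "incident_key": "db01-disk-space",
        # The description names the host, the check, and the value with context.
        "description": "db01: / is 94% full (threshold 90%), ~6 GB free",
        "client": "disk-space-monitor",
        "client_url": "https://wiki.example.com/runbooks/disk-space",  # runbook link
        "details": {
            "host": "db01",
            "mount": "/",
            "percent_used": 94,
            "growth_last_24h_gb": 12,
        },
    }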

6. Make sure the right people are getting alerts

When teams first start monitoring, we commonly see them sending all of their alerts to everyone. No one wants to receive alerts that aren’t meaningful, so if you have different teams responsible for certain parts of your infrastructure, use Escalation Policies in PagerDuty to direct alerts appropriately.

7. Keep it up to date with regular reviews

Don’t let your clean-up effort go to waste. Create a weekly process to review alerts. Etsy created a cool weekly review process they call “Opsweekly” (available on GitHub), but we’ve heard of other companies that simply use a spreadsheet during weekly reviews.

To prevent alert fatigue from becoming the new norm, set quantifiable metrics for the on-call experience. If you hit these ceilings, it’s time to take action, whether that’s monitoring clean-up or a little time off. At PagerDuty, we look at the number of alerts we get on a weekly basis, and if that number is more than 15 for an on-call team, we’ll do a debrief to review the alerts.
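That kind of ceiling is easy to check automatically. Here’s a small sketch that tallies each team’s alerts for the past week and flags anyone over the limit for a debrief; the limit of 15 mirrors the number above, and the incident data shape is hypothetical:

    # Tally each team's alerts for the past week and flag anyone over the limit.
    # The limit of 15 mirrors the number above; the incident shape is hypothetical.
    from collections import Counter

    WEEKLY_LIMIT = 15

    def teams_needing_debrief(incidents):
        """incidents: an iterable of dicts with a 'team' field, for one week."""
        counts = Counter(incident["team"] for incident in incidents)
        return {team: count for team, count in counts.items() if count > WEEKLY_LIMIT}

    last_week = [{"team": "web"}] * 22 + [{"team": "data"}] * 6
    print(teams_needing_debrief(last_week))  # {'web': 22}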

Most importantly, take ownership of monitoring hygiene as a team: if you get an alert that isn’t actionable, even once, make it your responsibility to ensure no one ever gets woken up for that alert again.
