
Cutting Alert Fatigue in Modern Ops

This is a guest post by Ilan Rabinovitch, Director of Product Management at Datadog.


The convergence of rapid feature development, automation, continuous delivery, and the shifting makeup of modern tech stacks has pushed monitoring requirements to a potentially overwhelming scale. But while the systems you need to monitor are complex, your monitoring strategy doesn’t have to be.

At Datadog, we see the demand for monitoring at scale as a product of four changes:

  1. Increasing number of infrastructure components (microservices, instances, containers)
  2. Increasing frequency of code and configuration changes
  3. Growing number of people and roles interacting with infrastructure
  4. Proliferation of platforms, tools, and services (from a few vendor packages to many hosted services and open source tools)

The scale and pace of change involved in ops today dictate a carefully crafted monitoring and incident response strategy. Keeping the strategy simple will take some of the pain out of monitoring.

Monitor all the things

Our unifying theme for monitoring is:

Collecting data is cheap, but not having it when you need it can be expensive, so you should instrument everything, and collect all the useful data you reasonably can.

When you are monitoring so many things simultaneously, automated alerts and an effective incident response strategy are indispensable to help you avoid or minimize service disruptions.

Clearly, an effective incident response strategy must separate issues that require immediate attention from issues that can wait. If you don’t strike the right balance, you risk alert fatigue, which can cause real problems to be missed.

Our overarching approach to alert management is:

  • Collect alerts liberally; notify judiciously (especially via phone/SMS)
  • Page on symptoms, not causes
  • Prevent alert fatigue by separating the signal from the noise in your notifications

Alert types

While we recommend collecting alerts liberally, not all alerts are handled in the same way. You can organize them into two broad types: records, which are preserved in your monitoring system for future reference, and notifications, whose urgency should match the severity of the issue (e.g., email or another non-interrupting channel for a low-urgency alert, and a phone call or SMS for a high-urgency alert).

You can determine the appropriate alert type by answering three questions:

Question 1: Is the issue real?

No – No alert required. Example: Metrics in a test environment

Yes – Proceed to Question 2.

Question 2: Does the issue require attention?

No – Since no intervention is required, the alert is simply recorded for context in case a more serious problem emerges.

Yes – Proceed to Question 3.

Question 3: Is the issue urgent?

No – (Low urgency): Since intervention is not immediately required, the alert can be sent automatically via a non-interrupting channel like email, chat, or a ticketing system.

Yes – (High urgency): These issues require immediate intervention no matter the time of day, for example an outage or an SLA violation. Responders should be notified in real time via phone call, SMS, or another channel that will get their full attention.
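
To make this triage concrete, here is a minimal sketch of the same three-question flow in Python. The Alert fields and channel names are purely illustrative assumptions, not a prescription for any particular monitoring tool.

from dataclasses import dataclass

@dataclass
class Alert:
    is_real: bool          # Question 1: is the issue real (not a test artifact)?
    needs_attention: bool  # Question 2: does it require intervention?
    is_urgent: bool        # Question 3: does it need a response right now?

def route(alert: Alert) -> str:
    if not alert.is_real:
        return "discard"             # e.g. noise from a test environment
    if not alert.needs_attention:
        return "record"              # keep for context, notify no one
    if not alert.is_urgent:
        return "notify-low-urgency"  # email, chat, or a ticket
    return "page"                    # phone call or SMS, right away

# A real, actionable, but non-urgent issue goes to a low-urgency channel.
print(route(Alert(is_real=True, needs_attention=True, is_urgent=False)))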

Symptoms, not causes

When an issue is severe enough to page someone, that page should, in most cases, be tied to symptoms rather than causes.

A system that stops doing useful work is a symptom that could have a variety of causes. For example, a website responding very slowly for three minutes is a symptom; possible causes include database latency, failed application servers, high load, and so on.
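
As a rough illustration, a symptom-based check for that example might page only when user-facing latency stays high across the whole window. The threshold, window, and function below are assumptions made for this sketch, not recommended values.

WINDOW_MINUTES = 3
THRESHOLD_MS = 2000  # hypothetical "very slow" latency threshold

def should_page(latency_samples_ms: list[float]) -> bool:
    """Page only if every sample in the window breaches the threshold."""
    recent = latency_samples_ms[-WINDOW_MINUTES:]  # one sample per minute
    return len(recent) == WINDOW_MINUTES and all(s > THRESHOLD_MS for s in recent)

# The check fires on the user-visible symptom, regardless of whether the cause
# is database latency, a failed application server, or high load.
print(should_page([2400.0, 2150.0, 2600.0]))  # True: sustained slowness
print(should_page([2400.0, 300.0, 2600.0]))   # False: a transient blip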

Paging for symptoms focuses attention on real problems with potential user-facing impact. Symptoms typically point to real issues instead of potential or internal problems that might not be critical, might not affect users, or might revert to normal levels without intervention. Ideally, related alerts can all be automatically grouped together so that when responders get paged, they have all the context required to diagnose what is going on and coordinate a response.

In addition to pointing to real problems, symptom-triggered alerts tend to be more durable because they fire whenever a system stops working the way that it should. In other words, you don’t have to update your alert definitions every time your underlying system architectures change. In an environment with dynamic infrastructure and lots of moving parts, durable alerts eliminate extra work and reduce the potential for introducing blind spots.

One exception to the symptoms rule is when an issue is highly likely to turn into a serious problem, even though the system is performing adequately. A good example is disk space running low. In this case, a cause is a legitimate reason to send out a page, even before symptoms manifest.
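
As a sketch of that exception, you could page when a simple extrapolation of recent growth predicts the disk will fill soon. The 24-hour horizon and the sample data below are illustrative assumptions, not a prescribed policy.

def hours_until_full(usage_history: list[float], interval_hours: float = 1.0) -> float:
    """Estimate hours until a disk fills, from evenly spaced fraction-used samples."""
    growth_per_hour = (usage_history[-1] - usage_history[0]) / (
        (len(usage_history) - 1) * interval_hours
    )
    if growth_per_hour <= 0:
        return float("inf")  # usage flat or shrinking: no page needed
    return (1.0 - usage_history[-1]) / growth_per_hour

# Page if the disk is projected to fill within a day at the current growth rate.
samples = [0.80, 0.83, 0.86, 0.89]  # fraction of disk used, sampled hourly
if hours_until_full(samples) < 24:
    print("page: disk projected to fill within 24 hours")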

More alerting strategies

Adopting a sensible framework for monitoring, alerting, and paging helps your teams effectively address issues in production without being overwhelmed by false alarms or flapping alerts. For more monitoring strategies, check out our Monitoring 101 series. Or you can drop by the Datadog booth at PagerDuty Summit 2017. We’d be happy to show you some of these principles in action and discuss how you can adapt your monitoring strategy to make modern applications more observable. We hope to see you there.