
Outage Post-Mortem

by Andrew Miklas, August 10, 2011 | 6 min read

As you may already know, PagerDuty suffered an outage of 30 minutes yesterday, followed by a period of increased alert delivery times.  We’re taking the downtime very seriously, especially considering that it overlapped with downtime many of our customers were facing.

Please understand that we aren’t trying to shift the blame to any other party; rather, part of our process is to understand any serious downtime and to address it openly.

What Happened: The Outage

PagerDuty is presently hosted on Amazon Web Services (AWS) Elastic Compute Cloud (EC2).  One of AWS’s most attractive features is “Availability Zones”.  These are “distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region”.

Like many high availability applications, PagerDuty uses multiple Availability Zones to protect our application from data center level failures.  AWS’s high speed inter-AZ networks allow us to synchronously replicate every event, notification, and incident we process to at least two physically separate locations.  Under normal circumstances, in the event of an AZ-wide (i.e. data center) failure, we are able to redirect all traffic to one of the surviving AZs within 60 seconds with no loss of incoming events.
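For illustration, here is a minimal sketch of how an automated AZ failover of this kind might be structured: a health-check loop that watches the active zone and redirects traffic to a surviving zone after a few consecutive failures.  The hostnames, port, and thresholds are hypothetical, and this is not our actual failover code.

```python
import socket
import time

# Hypothetical endpoints for stacks in two Availability Zones.  These hostnames
# and thresholds are placeholders for illustration only.
AZ_ENDPOINTS = {
    "us-east-1a": "events-1a.example.com",
    "us-east-1b": "events-1b.example.com",
}
CHECK_INTERVAL = 10    # seconds between health checks
FAILURE_THRESHOLD = 3  # consecutive failures before failing over (~30s)


def is_reachable(host, port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def monitor(active="us-east-1a"):
    """Watch the active AZ and redirect traffic to a survivor if it goes dark."""
    failures = 0
    while True:
        if is_reachable(AZ_ENDPOINTS[active]):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                survivors = [az for az, host in AZ_ENDPOINTS.items()
                             if az != active and is_reachable(host)]
                if survivors:
                    active = survivors[0]
                    print("Redirecting traffic to", active)
                    failures = 0
        time.sleep(CHECK_INTERVAL)
```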

Unfortunately, yesterday the system did not perform as designed. While we’re looking forward to reading AWS’s official post-mortem, our own investigation indicates that at least three nominally independent AZs in US-East-1 all simultaneously dropped from the Internet for 30 minutes.  This left us with no hardware to accept incoming events or to dispatch notifications for events we’d already received.

What Happened: The Aftermath

The region wide failure of EC2 impacted a large fraction of our customers. Once connectivity was restored, we received an extremely high load of incoming events and emails, and our (only semi-recovered) infrastructure was not able to process the backlog quickly enough.  The load also exposed some performance-related issues within our notification dispatch system.  In the future, our load testing framework will test a scenario where we are hit with a similar level and distribution of traffic.

Prevention: Immediate Plans

We strive to ensure that PagerDuty delivers every alert within 3 minutes of its scheduled delivery time.  A system-wide outage of 30 minutes is obviously unacceptable. We’ve already taken the following steps to ensure that a similar region-wide event won’t cause an extended outage:

  1. We’ve deployed a replica of our entire stack on an additional hosting provider.  In the event of a similar AWS failure, we will flip our DNS entries to the alternate stack and continue processing from where we left off.  While the flip process won’t be as fast or transparent as is possible with AWS’s Elastic IP functionality, this alternate stack provides us with a level of redundancy we cannot achieve using AWS alone.  A rough sketch of what this DNS flip might look like follows this list.
  2. We’ve doubled our front-end capacity.  Unfortunately, PagerDuty has an inherently high-variability load.  When a major hosting provider experiences an outage, a large fraction of our customer base will need to be simultaneously alerted.  We can therefore go from a system that is well under capacity to one that is severely under-provisioned in a matter of moments.  We have some thoughts on how to better address this issue in the future, but for now we have added more idle capacity to our system to handle the extra potential load.
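To make step 1 concrete, below is a minimal sketch of a DNS flip to the standby stack.  It assumes a hypothetical DnsClient wrapper standing in for whatever API the DNS provider actually exposes, with placeholder hostnames and IP addresses; a real cut-over would also need verification and a procedure for pointing back once AWS recovers.

```python
# Minimal sketch of the DNS flip described in step 1 above.  DnsClient is a
# stand-in for the DNS provider's API; hostnames and IPs are placeholders.

PRIMARY_IP = "198.51.100.10"   # AWS-hosted stack (placeholder address)
STANDBY_IP = "203.0.113.20"    # alternate-provider stack (placeholder address)

HOSTNAMES = ["events.example.com", "app.example.com"]  # hypothetical records


class DnsClient:
    """Stand-in for a real DNS provider API client."""

    def set_a_record(self, name, ip, ttl):
        # A real client would call the provider's API here.
        print(f"{name} -> {ip} (ttl={ttl}s)")


def flip_to_standby(dns):
    # Low TTLs bound how long resolvers keep handing out the dead address,
    # which is what makes a DNS-based flip workable in the first place.
    for name in HOSTNAMES:
        dns.set_a_record(name, STANDBY_IP, ttl=60)


if __name__ == "__main__":
    flip_to_standby(DnsClient())
```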

Prevention: Future Plans

Over the coming months, we plan to make a number of additional improvements to our infrastructure.  These changes will further decrease the chance of a system-wide outage.

  1. We intend to switch off AWS EC2 entirely.  A very high fraction of our customer base hosts their services on AWS.  This creates a dangerous correlated failure condition: the periods during which our load is likely to be abnormally high are also likely to be times when we are experiencing operational issues ourselves.  Even when a failure is limited to just a single AZ, this creates issues, as it causes us to lose redundant capacity when we most need it.
  2. We will host PagerDuty across multiple providers.  Using a single hosting provider with multiple data centers is tempting: it’s much easier to write a distributed app with tools like virtual IPs that can be migrated from one data center to another.  But, in our opinion, the failure coupling that such features introduce is not worth the risk.  With multiple hosting providers, the potential for single points of failure is much lower.
  3. We will provision and test with the assumption that, at any moment, we could need to alert 33% of our customers within 5 minutes.  This will allow us to cope with “perfect storm” scenarios where a provider-level outage triggers failures across a very large proportion of our customer base.  In the past, our load testing has focused on large bursts of traffic from a smaller number of customers.  We will also take steps to ensure that our event queuing and notification dispatch systems degrade gracefully in overload scenarios.  A back-of-the-envelope sketch of this capacity target follows this list.
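As a rough illustration of what provisioning for that target implies, here is a back-of-the-envelope calculation.  The customer count and per-customer notification figures are made-up placeholders; only the 33% fraction and the 5-minute window come from the target above.

```python
# Back-of-the-envelope check of the "33% of customers within 5 minutes" target.
# Customer count and notifications-per-customer are placeholder values.

customers = 2000                  # hypothetical total customer count
burst_fraction = 0.33             # fraction of customers alerting at once
notifications_per_customer = 3    # e.g. SMS + phone + email per on-call person
window_seconds = 5 * 60           # the 5-minute delivery window

total_notifications = customers * burst_fraction * notifications_per_customer
required_rate = total_notifications / window_seconds

print(f"{total_notifications:.0f} notifications in {window_seconds}s "
      f"=> sustain {required_rate:.1f} notifications/second")
```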

Emergency Response

Another problem uncovered by yesterday’s outage was that we had no effective way to alert our customers that there was a gap in PagerDuty’s coverage.  While we obviously intend to prevent such a gap from ever occurring again, we believe it’s important to plan for all eventualities.

To that end, we’ve created a Twitter account where we will announce only PagerDuty downtime.  By subscribing your cell phone to this Twitter feed, you’ll be alerted any time there is a gap in your PagerDuty coverage.  To learn more about how to set this up, please see our previous blog post.

In addition, we intend to create a custom facility where users can subscribe to receive phone alerts if PagerDuty experiences another system-wide outage.  Naturally, it is our intention to never need to use this system.  However, we want to ensure that we have a way to rapidly notify interested customers of any gap in their PagerDuty coverage.  Obviously, we’ll ensure that this emergency system shares no dependencies with our main notification dispatch service.

Conclusions

Needless to say, we’re sorry for letting you all down. We’ve already taken several steps to ensure this won’t happen again, and we will be taking several more in the upcoming weeks.

Please don’t hesitate to contact us if you have any questions or concerns. We look forward to earning back your trust.