On March 25th, PagerDuty suffered intermittent service degradation over a three-hour span, which affected our customers in a variety of ways. During the service degradation, PagerDuty was unable to accept 2.5% of attempts to send events to our integrations endpoints, and 11.0% of notifications experienced a delay in delivery – instead of arriving within five minutes of the triggering event, they were sent up to twenty-five minutes after the triggering event.
We take reliability seriously; an outage of this magnitude, as well as the impact it causes to our customers, is unacceptable. We apologize to all customers that were affected and are working to ensure the underlying causes never affect PagerDuty customers again.
Much of the PagerDuty notifications pipeline is built around Cassandra, a distributed NoSQL datastore. We use Cassandra for its excellent durability and availability characteristics, and it works extremely well for us. As we have moved more of the notifications pipeline over to Cassandra, the workload applied to our Cassandra cluster has increased, including both steady-state load and a variety of bursty batch-style scheduled jobs.
On March 25, the Cassandra cluster was subjected to a higher than typical workload from several separate back-end services, but was still within capacity. However, some scheduled jobs then applied significant bursty workloads against the Cassandra cluster, which put multiple Cassandra cluster nodes into an overload state. The overloaded Cassandra nodes reacted by canceling various queued up requests, resulting in internal clients experiencing processing failures.
Request failures are not unexpected; many of our internal clients have retry-upon-fail logic to power through transient failures. However, these retries were counterproductive in the face of Cassandra overload: many of the canceled requests were immediately retried, extending the overload period well beyond the original burst as the retries slowly subsided.
In summary, significant fluctuations in our Cassandra workload surpassed the cluster’s processing capacity, and failures occurred as a result. In addition, client retry logic resulted in the workload taking much longer to dissipate, extending the service interruption period.
Even with excellent monitoring and alerting in place, bursty workloads are dangerous: by the time their impact can be measured, the damage may already be done. Instead, an overall workload that has low variability should be the goal. With that in mind, we have re-balanced our scheduled jobs so that they are temporally distributed to minimize their overlap. In addition, we are flattening the intensity of our scheduled jobs so that each has a much more consistent and predictable load, albeit applied over a longer period of time.
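To illustrate the flattening idea, here is a minimal sketch of pacing a batch job at a fixed rate instead of firing all its requests at once. The `process_record` callable and `max_per_second` parameter are hypothetical stand-ins for whatever work a scheduled job does against the datastore, not our actual job code:

```python
import time

def run_batch(records, max_per_second, process_record):
    """Process a batch at a flat, bounded rate instead of all at once.

    Spacing the work out trades a longer total runtime for a
    predictable, low-variability load on the backing datastore.
    """
    interval = 1.0 / max_per_second
    for record in records:
        start = time.monotonic()
        process_record(record)
        # Sleep off whatever time remains in this record's time slot so
        # the steady-state rate never exceeds max_per_second.
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
```

A job throttled this way presents the cluster with a near-constant request rate, which is far easier to model and capacity-plan for than a short, intense burst.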
Also, although our datasets are logically separated already, having a single shared Cassandra cluster for the entire notifications pipeline is still problematic. In addition to the combined workload from multiple systems being hard to model and accurately predict, it also means that when overload occurs it can impact multiple systems. To reduce this overload ripple effect, we will be isolating related systems to use separate Cassandra clusters, eliminating the ability for systems to interfere with each other via Cassandra.
Our failure detection and retry policies also need rebalancing, so that they better take into account overload scenarios and permit back-off and load dissipation.
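As a sketch of the kind of retry policy we mean, the following wraps a client call in capped exponential backoff with full jitter. The `Overloaded` exception and `call_with_backoff` helper are illustrative names, not our internal client API:

```python
import random
import time

class Overloaded(Exception):
    """Raised when the datastore sheds load under pressure."""

def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry an operation with capped exponential backoff and full jitter.

    Unlike immediate retries, backing off gives an overloaded datastore
    time to drain its queues instead of prolonging the overload; the
    jitter keeps many clients from retrying in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Overloaded:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random duration up to the capped
            # exponential bound, spreading retries out in time.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Giving up after a bounded number of attempts also matters: a client that retries forever keeps contributing load for as long as the overload lasts.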
Finally, we need to extend our failure testing regime to include overload scenarios, both within our Failure Friday sessions and beyond.
We take every customer-affecting outage seriously, and will be taking the above steps (and several more) to make PagerDuty even more reliable. If you have any questions or comments, please let us know.