by George Miranda
June 20, 2019
We hosted our first user group last week at PagerDuty HQ! Not only did we gather our awesome customers and enjoy the taco bar and cervezas, but we got to learn a lot from them, share our roadmap – and our customers learned from each other, too. We really value user feedback as part of how and why we build our product, so we wanted to share some key takeaways from our sessions during the event.
What People Are Talking About
We had representatives from a wide range of customers, from DevOps models with both developers and operations engineers participating in incident resolution to a more traditional approach of having a first-tier response team, NOC, or service desk. Even with varying operational models, everyone expressed a common need: mustering the correct team of people as soon as possible to resolve the issue.
Alert fatigue is something we’ve talked about before, and it’s something anyone who’s been on-call is probably familiar with. Not only did we chat about the pains, but we also discussed effective strategies for reviewing and improving alert quality. Etsy does a great job of formalizing and measuring their on-call experience with their Ops Weekly tool, built on the PagerDuty API.
Speaking of building with the PagerDuty API, we had Jeff from Weebly join us – and he’s built a pretty sweet two-way integration between PagerDuty and Nagios. Check out our integration guide here if you haven’t yet. You can also read about other cool things you can do with our API.
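If you’re curious what building on the PagerDuty API looks like in practice, here’s a minimal sketch of triggering an incident via the Events API v2 – roughly the kind of call a monitoring tool like Nagios would make on your behalf. The integration key, summary text, and source name below are placeholders, not values from any real integration:

```python
import json
from urllib import request

# PagerDuty Events API v2 endpoint for enqueueing events
PD_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"


def build_trigger_event(routing_key, summary, source, severity="critical"):
    """Build a 'trigger' event payload for the Events API v2.

    routing_key is the integration key from a PagerDuty service
    (the value below is a placeholder).
    """
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {
            "summary": summary,
            "source": source,
            "severity": severity,
        },
    }


def send_event(event):
    """POST the event to PagerDuty (requires a real integration key)."""
    req = request.Request(
        PD_EVENTS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical example: alert on a host check failing
    event = build_trigger_event(
        routing_key="YOUR_INTEGRATION_KEY",
        summary="CPU load critical on web-01",
        source="nagios-web-01",
    )
    # send_event(event)  # uncomment once a real integration key is set
```

A two-way integration like Jeff’s would add the reverse path as well, e.g. listening for PagerDuty webhooks to acknowledge or resolve the corresponding Nagios alert.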
Future User Groups
That’s a wrap for our first user group – thanks again to our awesome customers who attended! And that’s not all: if you’d like to see a PagerDuty user group in your area, feel free to reach out to firstname.lastname@example.org.