A Developer’s Perspective

“Walking over to the Ops room – I don’t feel like I ever need to do that anymore.”

In the run-up to our latest release of capabilities for developers, I sat down with David Yang, a senior engineer here at PagerDuty who has seen our internal architecture evolve from a single monolithic codebase to dozens of microservices. He’s the technical lead for our Incident Management – People team, which owns the services that deliver alert notifications to all 8,000+ PagerDuty customers. We talked about life after the switch to teams owning the operations of their services. Here are some of David’s observations about the benefits and drawbacks we’ve seen:

On life now that teams own their services:

Since moving to a model where developers own their services, there’s a lot more developer independence. A side effect is that we’ve minimized the difficulties of provisioning and managing infrastructure: the team now wants to optimize for the fewest possible obstacles and roadblocks, and the supporting infrastructure teams are geared toward providing better self-service tools that minimize the need for human intervention.

The shift to having developers own their code reduces the cycle time from when someone says, “this is a problem,” to when they can actually fix it, which has been invaluable.

On cultural change:

By having people own more of the code, and take on more responsibility in general for the systems they operate, you essentially push for a culture that’s driven toward getting roadblocks out of the way: each team optimizes for “how can I make sure I’m never blocked again?” It’s also a lot more apparent when we are blocked. Before, I had to ask Ops every time we wanted to provision hosts, and I just accepted it. Now my team can see its own roadblocks more clearly, because they aren’t hidden behind other teams’ roadblocks.

We have teams that are focused a lot more on owning the whole process of delivering customer value from end to end, which is invaluable.

On how this can help with the incident response process:

There are clearer boundaries of service ownership, so it’s easier to figure out which specific teams are impacted when there’s an operability issue. And knowing there’s an exact, objective procedure to follow (“this is the checklist”) is great. It lets me focus 100% on solving the problem rather than on the communication around the incident.

On what didn’t work so well:

That’s not to say owning a service doesn’t come with its own set of problems. It requires dedicated time to tend to the operational maintenance of our services, which ultimately takes up more of the team’s time; that’s especially an issue with legacy services where there may be knowledge gaps. In the beginning, we didn’t put strong enough guardrails in place to protect operability work in our sprints. We’re improving that by leveraging KPIs [such as specific scaling goals and operational load levels] that let us make objective decisions.

On the future:

[On balancing operations-related work against feature development work] Teams are asking: “How do I leverage all of this day-to-day? How do I make even more objective decisions?” And they’re driving toward those objective decisions with metrics.

Everything in our product development is defined in terms of “what is the customer value?” and “what are the success criteria?” I think framing operational work in the same terms makes it easier to prioritize effectively. We’re all on the same team, aligned to the same goal of delivering value to our customers, and you have to resolve the competing priorities at some point.

Enacting change around operations within an organization requires a lot of collaboration. It also takes figuring out what the right metrics are and having a discussion about those metrics.


Image: “Magnifying glass” is copyright (c) 2013 Todd Chandler