Engineering Blog
Lessons Learned While Building PagerDuty’s MCP Server
The Model Context Protocol (MCP) is quickly becoming the de facto standard for connecting tools to AI agents. Often described as the “USB-C for AI,” MCP is redefining how intelligent systems interact with external services.
We have released PagerDuty’s Official MCP server
MCP adoption has exploded in recent months, with a rapidly growing ecosystem of tools supporting a wide range of use cases, from developer operations to data pipelines, automation, and beyond. New MCP servers are appearing every day, and at PagerDuty, we’re so excited about the potential of MCP that we built our own.
That’s right: you can now interact directly with your PagerDuty account from your AI client of choice. With over 20 tools available out of the box, you can manage teams, update on-call schedules, create and resolve incidents, and much more, all without ever leaving your agent interface.
Want to give it a try? Check out PagerDuty/pagerduty-mcp-server and follow the README to spin it up locally.
Lessons learned from building an MCP server
On July 17th, we hosted a live stream session where Manuel Reis and I discussed MCP and, in particular, shared some of the lessons we learned while building our own MCP server. Let’s go over some of them!
The Challenge of MCP
Designing for MCP presents unique challenges: how do you build a set of tools for an unknown AI agent, whose internal reasoning is a complete black box? On top of that, the number of tools and instructions you can expose is limited by the agent’s ability to follow directions and handle large contexts. In other words, more tools don’t always mean better performance.
With those challenges in mind, here are six key lessons we learned while building our MCP server.
Lesson 1: APIs Aren’t Built for AI
When building an API, we target humans: we expect people to read the docs and work out how to use each endpoint. That assumption doesn’t hold for AI agents. An MCP server needs to be self-explanatory: it should carry enough information that, when plugged into a client, it’s immediately clear what each tool does, what inputs it needs, and what outputs it returns.
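To make this concrete, here is a minimal sketch of a self-describing tool definition. The tool name, description text, and schema fields are illustrative assumptions, not PagerDuty’s actual tool definitions; the point is that purpose, inputs, and outputs all live in the tool’s own metadata, so an agent never needs to consult external docs.

```python
# Hypothetical MCP-style tool definition (names and schema are illustrative).
# Everything an agent needs -- what the tool does, when to use it, what each
# input means -- is embedded in the definition itself.
resolve_incident_tool = {
    "name": "resolve_incident",
    "description": (
        "Resolve an open PagerDuty incident. Use this when the user asks to "
        "close, resolve, or mark an incident as fixed. Returns the updated "
        "incident with its new status."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "incident_id": {
                "type": "string",
                "description": "The PagerDuty incident ID, e.g. 'PABC123'.",
            },
            "resolution_note": {
                "type": "string",
                "description": "Optional note explaining the resolution.",
            },
        },
        "required": ["incident_id"],
    },
}
```

Notice that the description says not only *what* the tool does but *when* to reach for it; that extra sentence is often what steers an agent to the right tool.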
Lesson 2: Limit Tool Count
We established in the previous lesson that APIs are built for humans; agents are not humans. There is a limit to how many tools an agent can use effectively before the model gets overwhelmed.
We have found that most MCP servers have between 1 and 30 tools, with 20-25 being the sweet spot.
Lesson 3: Design for User Journeys
When working with your APIs, the natural instinct might be to turn every endpoint into its own tool. But doing so can quickly overwhelm the AI agent and degrade performance. Instead of mapping endpoints one-to-one, start by thinking about what users will actually want to accomplish with your MCP server.
By identifying common user journeys, you can group related endpoints together and design tools that are focused, meaningful, and easier for agents to use effectively.
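A sketch of the difference, using hypothetical endpoint paths and tool names: the naive approach mints one tool per REST endpoint, while the journey-based approach groups several endpoints behind a single tool named after what the user is trying to do.

```python
# Illustrative only -- endpoint paths and tool names are assumptions.
# Naive 1:1 mapping: every endpoint becomes its own tool.
ENDPOINT_PER_TOOL = [
    "GET /users/me", "GET /teams", "GET /teams/{id}/members",
    "GET /oncalls", "GET /schedules", "GET /schedules/{id}",
]

# Journey-based mapping: fewer, richer tools, each named after a user goal
# and backed by the endpoints needed to fulfil it.
JOURNEY_TOOLS = {
    "get_my_on_call_context": [   # "Who is on call for my teams right now?"
        "GET /users/me", "GET /teams", "GET /oncalls",
    ],
    "inspect_schedule": [         # "Show me next week's rotation"
        "GET /schedules", "GET /schedules/{id}",
    ],
}
```

The same surface area is covered with a third of the tools, and each tool name now tells the agent which user goal it serves.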
Lesson 4: Optimize for Model Ergonomics
AI agents struggle with raw data and complex calculations. We found that adding helpful metadata, like list sizes or summaries, makes tools easier for models to use. When the model starts chaining multiple tool calls, it’s often better to combine those actions into a single, smarter tool. For example, instead of three separate calls for user info, team info, and incidents, we built one tool that can answer, “Show me all incidents for my teams” in a single step.
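The “three calls become one” idea can be sketched as follows. The fetcher functions below are stubs standing in for separate API calls (their shapes are assumptions, not PagerDuty’s API); the combined tool answers the question in one step and includes a `total` count so the model doesn’t have to tally the raw list itself.

```python
def fetch_current_user():
    """Stub standing in for a 'get current user' API call."""
    return {"id": "U1", "team_ids": ["T1", "T2"]}

def fetch_incidents(team_ids):
    """Stub standing in for a 'list incidents' API call."""
    incidents = [
        {"id": "I1", "team_id": "T1", "status": "triggered"},
        {"id": "I2", "team_id": "T2", "status": "acknowledged"},
        {"id": "I3", "team_id": "T9", "status": "triggered"},
    ]
    return [i for i in incidents if i["team_id"] in team_ids]

def list_my_team_incidents():
    """One tool call instead of a user -> teams -> incidents chain.

    Answers "Show me all incidents for my teams" in a single step.
    """
    user = fetch_current_user()
    incidents = fetch_incidents(user["team_ids"])
    # Metadata like 'total' is cheap for us to compute and saves the model
    # from counting raw list items itself.
    return {"total": len(incidents), "incidents": incidents}
```

The agent now makes one call, gets a pre-counted result, and never has to reason about how to join users, teams, and incidents.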
Lesson 5: Test for Agent Behaviour
User journeys are key for testing. When testing MCP servers, focus on whether agents use the right tools for each scenario, not just the final response. For each test question, check if different models follow the expected tool path. And just like traditional testing, mocking remains essential to ensure reliable results.
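One way to sketch this style of test, with a hypothetical recorder and mocked tools: record which tools get invoked for a scenario and assert on the tool path, not just the final answer.

```python
# Illustrative test harness -- the recorder and tool names are assumptions.
class ToolRecorder:
    """Wraps a set of mocked tools and records the order they are called in."""

    def __init__(self, tools):
        self.tools = tools
        self.calls = []

    def call(self, name, **kwargs):
        self.calls.append(name)
        return self.tools[name](**kwargs)

def test_resolve_incident_journey():
    recorder = ToolRecorder({
        # Mocked tools keep the test deterministic, just like traditional tests.
        "find_incident": lambda query: {"id": "I1"},
        "resolve_incident": lambda incident_id: {"status": "resolved"},
    })
    # Simulate the expected agent behaviour for "resolve the database incident":
    incident = recorder.call("find_incident", query="database")
    recorder.call("resolve_incident", incident_id=incident["id"])
    # Assert the tool path, not just the final text response.
    assert recorder.calls == ["find_incident", "resolve_incident"]

test_resolve_incident_journey()
```

In practice the “simulate” step is a real model run per scenario, repeated across the models you care about; the assertion on the recorded path stays the same.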
Lesson 6: Iterate and Learn from the Community
The MCP ecosystem is new, but it’s evolving rapidly. We’re constantly looking at how other open source MCP servers are built, and we believe in sharing our own lessons and best practices along the way.
Our goal is to develop our MCP server out in the open, collaborating with the community to keep improving and deliver the best possible experience for everyone.
Wrapping up
As we have seen, building an MCP server is less about exposing every single endpoint and function, and more about designing a set of ergonomic, clear, LLM-adapted tools that solve user problems. In a sense, it is less about human developer experience and more about LLM developer experience.
We hope these lessons help you on your own MCP journey! Check out our MCP server at PagerDuty/pagerduty-mcp-server, give it a try, and let us know what you think. Open an issue, contribute, or just share your feedback—we’d love to hear from you!