Today’s infrastructure is not your grandparents’ IT infrastructure, nor is it the infrastructure from a generation ago. The days of punch cards, vacuum tubes, ferrite core memory, floppies, and dial-up Internet are over.
Today’s infrastructure is also not the IT infrastructure that it was five years ago, or even a year ago, for that matter. Modern infrastructure is changing constantly, and all we can do is provide a snapshot of it at the moment, along with a general picture of where it’s going.
If you are going to monitor infrastructure effectively, you need to understand what infrastructure looks like today, how it is changing, and what it will include tomorrow.
Let’s start by making a basic distinction: Hardware infrastructure is relatively stable (with a strong emphasis on the word “relatively”), and it has been semi-stable for a few years now. While any speculation about Moore’s Law reaching the end of the line is premature, the Moore’s Law curve appears to have at least partially leveled off for the moment, particularly with regard to processor speed and RAM capacity (mass storage may be another story).
This leveling off means that the most substantial and important changes in IT infrastructure have been on the software side. This shouldn’t be surprising, since to a considerable degree, modern infrastructure is software. Software-defined networking, virtual machines, containers and the like mean that the line between hardware and software today is effectively quite blurry.
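To make the point concrete, here is a minimal sketch of infrastructure as software, written in Python against the Docker SDK (installed with pip install docker, and assuming a local Docker Engine is running). The network and container names are purely illustrative; the point is simply that a network and a web server can be brought into existence entirely in code.

```python
# A minimal sketch of "infrastructure as software": the runtime
# environment for a service is declared and created in code, with no
# direct reference to the physical machine underneath.
import docker

client = docker.from_env()

# Create an isolated virtual network (software-defined networking
# at its simplest).
network = client.networks.create("demo-net", driver="bridge")

# Launch a containerized web server attached to that network. The
# "hardware" this service sees is whatever the container runtime
# chooses to expose.
container = client.containers.run(
    "nginx:latest",
    name="demo-web",
    network="demo-net",
    ports={"80/tcp": 8080},
    detach=True,
)

print(f"started {container.name} ({container.short_id}) on {network.name}")
```

Tearing the environment down is just as much a software operation (container.remove(force=True), then network.remove()), which is exactly the blurring of the hardware/software line described above.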
The fact that IT infrastructure can be seen largely as software is itself a key element of modern computing, and it should come as no surprise. Hardware, after all, is basically a framework, a structure designed to make things possible. What one does with those possibilities can make all the difference in the world.
The shift to software-based infrastructure has implications that go far beyond a typical change of platform. For one thing, hardware itself imposes a serious lag on the rate of change. It is expensive and time-consuming to replace or upgrade physical servers, networks, and peripherals, so many organizations have traditionally waited until it is obviously necessary (or even later) before making such changes. This lag may only be a matter of a few years, but it has typically affected the software level, as well as the infrastructure hardware itself, by imposing the need to accommodate both legacy hardware and the legacy software that it requires.
In modern software-based infrastructure, however, both application software and the elements that make up the infrastructure are insulated from most (if not all) of the underlying hardware elements, often by several layers of abstraction. As long as the hardware can support the requirements of the abstraction layer, the infrastructure itself is now largely free of hardware-imposed lag.
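A trivial sketch makes the insulation visible. The same Python program runs unmodified on bare metal, in a VM, or in a container, because it only ever talks to the abstraction layer; only the answers change from one environment to the next.

```python
# The program asks the platform abstraction (the OS/runtime) about its
# environment rather than touching hardware directly. On bare metal,
# in a VM, or in a container, the code is identical.
import os
import platform

print("machine:", platform.machine())                   # reported architecture
print("system:", platform.system(), platform.release())
print("cpus visible:", os.cpu_count())                  # whatever the layer exposes
```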
As a result, the rate of change in both infrastructure and application software is now governed by other factors, such as organizational culture and the practical limits on the speed of software design and development. These factors are generally “soft,” and the kind of lag that they tend to impose is both much shorter and much more dependent on conditions prevailing within a specific organization.
This means that any understanding of how we compute today can only be a snapshot that captures the state of modern IT infrastructure at the current moment. And what would such a snapshot contain? The key elements might look something like this:
We compute largely in an environment that is virtualized and insulated from the hardware level by multiple layers of abstraction. Our development and deployment pipeline is continuous, and it is managed by event-driven automation. In many respects, the modern IT environment is a virtual world, insulated from the traditional hardware-based IT world to the point where many of the concerns that dominated IT just a few years ago have become irrelevant.
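As a rough illustration of what “managed by event-driven automation” can mean in practice, the sketch below reacts to pipeline events by triggering deploys and rollbacks. Everything here is a hypothetical stand-in (the Event shape, the trigger_deploy hook, the payload fields) rather than any particular product’s API.

```python
# A minimal sketch of event-driven deployment automation. The event
# kinds and the deploy hook are illustrative stand-ins for whatever a
# real pipeline uses.
import queue
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. "ci" or "monitoring"
    kind: str     # e.g. "build.passed" or "healthcheck.failed"
    payload: dict

def trigger_deploy(service: str, version: str) -> None:
    # Stand-in for a real deploy step (a container rollout, an API call...).
    print(f"deploying {service}@{version}")

def handle(event: Event) -> None:
    # The pipeline advances automatically when the right events arrive;
    # no human pushes the release button.
    if event.kind == "build.passed":
        trigger_deploy(event.payload["service"], event.payload["version"])
    elif event.kind == "healthcheck.failed":
        # React to a failure by rolling back to the last known-good version.
        trigger_deploy(event.payload["service"], event.payload["last_good"])

events: queue.Queue = queue.Queue()
events.put(Event("ci", "build.passed", {"service": "web", "version": "1.4.2"}))
events.put(Event("monitoring", "healthcheck.failed",
                 {"service": "web", "last_good": "1.4.1"}))

while not events.empty():
    handle(events.get())
```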
If that’s a snapshot of today, what will the picture look like tomorrow, or in five or ten years? There’s no real way to know, of course. Any prediction made today is likely to look increasingly foolish as time goes on.
But here are a few predictions anyway. It is likely that we have only begun to see the effects of freeing the virtual-computing environment from the constraints imposed by hardware. It is also likely that the distinctions between virtualized computing, virtual reality, and the traditional world of physical experience will break down even further. In many respects, the rate of change in computing today is limited by our ability to assimilate changes as they occur and to make full use of new capabilities as they develop. But automation and intelligence capabilities will likely disrupt nearly every function, vertical, and domain, unleashing new potential for efficiency and dramatically altering the focus of people’s work.
Perhaps the virtualization of both computing and everyday experiences will increase the rate at which we can assimilate future change. If this is the case, future computing and future life in general might be completely unrecognizable to us if we were to catch a glimpse of it now, even though we are likely to be both the creators of and participants in that future. As we change the world, we change ourselves.