
How We Compute Today: What Modern Infrastructure Looks Like

by Michael Churchman | July 6, 2017 | 7 min read

Today’s infrastructure is not your grandparents’ IT infrastructure, nor is it the infrastructure from a generation ago. The days of punch cards, vacuum tubes, ferrite core memory, floppies, and dial-up Internet are over.

Today’s infrastructure is also not the IT infrastructure that it was five years ago, or even a year ago for that matter. Modern infrastructure is changing constantly, and all we can do is provide a snapshot of it at the moment, along with a general picture of where it’s going.

If you are going to monitor infrastructure effectively, you need to understand what infrastructure looks like today, how it is changing, and what it will include tomorrow.

Hardware: Less of Moore’s

Let’s start by making a basic distinction: Hardware infrastructure is relatively stable (with a strong emphasis on the word “relatively”), and has been in a state of semi-stability for a few years. While any speculation about Moore’s Law reaching the end of the line is premature, the Moore’s Law curve appears to have at least partially leveled off for the moment, at least with regard to processor speed and RAM capacity (mass storage may be another story).

Software: Change is Natural

This leveling off means that the most substantial and important changes in IT infrastructure have been on the software side. This shouldn’t be surprising, since to a considerable degree, modern infrastructure is software. Software-defined networking, virtual machines, containers and the like mean that the line between hardware and software today is effectively quite blurry.

The fact that IT infrastructure can be seen largely as software is itself a key element of modern computing, and it should come as no surprise. Hardware, after all, is basically a framework, a structure designed to make things possible. What one does with those possibilities can make all the difference in the world.

Free of the Hardware Lag

The shift to software-based infrastructure has implications that go far beyond a typical change of platform. For one thing, hardware itself imposes a serious lag on the rate of change. It is expensive and time-consuming to replace or upgrade physical servers, networks, and peripherals, so many organizations have traditionally waited until it is obviously necessary (or even later) before making such changes. This lag may only be a matter of a few years, but it has typically affected the software level, as well as the infrastructure hardware itself, by imposing the need to accommodate both legacy hardware and the legacy software that it requires.

In modern software-based infrastructure, however, both application software and the elements that make up the infrastructure are insulated from most (if not all) of the underlying hardware elements, often by several layers of abstraction. As long as the hardware can support the requirements of the abstraction layer, the infrastructure itself is now largely free of hardware-imposed lag.

“Soft” Factors

As a result, the rate of change in both infrastructure and application software is now governed by other factors, such as organizational culture and the practical limits on the speed of software design and development. These factors are generally “soft,” and the kind of lag that they tend to impose is both much shorter and much more dependent on conditions prevailing within a specific organization.

Infrastructure Today

This means that any understanding of how we compute today can only be a snapshot that captures the state of modern IT infrastructure at the current moment. And what would such a snapshot contain? The key elements might look something like this:

  • The cloud. If infrastructure is software sitting on top of multiple layers of abstraction, there is no reason for it to be tied to any particular set of servers or networks. The cloud (which is basically a high-level abstraction layer) becomes the most fundamental level of infrastructure with which developers interact. The infrastructure which devs create and manage is in effect fully virtualized, whether it consists of apps running on VMs, or containers running on a virtualized host system.
  • Virtualization. Virtualization has, then, become a given, and we are only beginning to understand the implications of this fact. Existing operating systems were originally designed around the constraints imposed by hardware; we have not yet seen systems designed completely without reference to hardware-imposed limits.
    Even given the limits of current operating systems, however, the level of virtualization that has become standard means that not only applications but also the environments in which they exist can be created, managed, and destroyed as easily as a simple process running in a traditional operating system (the container sketch after this list illustrates the idea).
  • Automation across the pipeline. If infrastructure is software, it makes sense to manage it in the same way that you would manage other kinds of software—through automation. It also makes sense to extend this automation across the entire software delivery pipeline, whether it is in the form of a single system for managing all processes in the pipeline, or a set of scripts that hand tasks off to each other as required.
    Traditionally, automation has often been schedule-driven; in modern infrastructure, however, it is typically event-driven: each step of the pipeline fires as soon as the event it depends on occurs, which allows greater flexibility and eliminates unnecessary delays (the event-handler sketch after this list shows the pattern).
  • Continuous delivery. Such flexible, response-driven automation quite naturally leads to continuous delivery; if there is no need for manual intervention or for delays in the delivery process, then there is no reason why it shouldn’t be continuous.
    And in fact, the reasons for non-continuous delivery typically turn out to be artifacts of non-virtualized infrastructure and non-automated delivery pipelines. Eliminating the need to accommodate the constraints of hardware-based infrastructure, combined with the ability to fully automate a virtualized, software-based infrastructure, has made continuous delivery not only possible, but inevitable.
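To make the "environments as easily as processes" point concrete, here is a minimal sketch (in Python) that launches a disposable Linux environment, runs one command inside it, and lets it vanish on exit. It assumes a local Docker daemon is installed and running and that an Alpine image can be pulled; the image tag, function name, and command are illustrative only, not a prescribed setup.

    # Minimal sketch: a throwaway environment that lives and dies like a process.
    # Assumes a local Docker daemon; the image and command are illustrative.
    import subprocess

    def run_in_throwaway_env(command, image="alpine:3.18"):
        """Run a command inside a disposable container and return its output.

        The --rm flag tells Docker to delete the container as soon as the
        command exits, so the environment is created and destroyed as
        casually as spawning a process.
        """
        result = subprocess.run(
            ["docker", "run", "--rm", image] + list(command),
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(run_in_throwaway_env(["echo", "hello from a disposable environment"]))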
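And here is a self-contained sketch of what event-driven (rather than schedule-driven) pipeline automation looks like: each step registers for the event it cares about and runs the moment that event occurs, handing off to the next step with no scheduled wait in between. The event names and handlers are hypothetical, chosen only to illustrate the pattern.

    # Self-contained sketch of event-driven pipeline automation.
    # Event names and handlers are hypothetical, for illustration only.
    from collections import defaultdict

    _handlers = defaultdict(list)

    def on(event):
        """Register a function as a handler for an event."""
        def register(fn):
            _handlers[event].append(fn)
            return fn
        return register

    def emit(event, payload):
        """Fire an event; every registered handler runs immediately."""
        for handler in _handlers[event]:
            handler(payload)

    @on("commit_pushed")
    def build(payload):
        print(f"building {payload['sha']}...")
        emit("build_succeeded", payload)   # hand off to the next step

    @on("build_succeeded")
    def test(payload):
        print(f"testing {payload['sha']}...")
        emit("tests_passed", payload)

    @on("tests_passed")
    def deploy(payload):
        print(f"deploying {payload['sha']} to production")

    if __name__ == "__main__":
        # A single push triggers the whole chain, with no nightly window to wait for.
        emit("commit_pushed", {"sha": "abc123"})

A real pipeline would replace these print statements with actual build, test, and deploy tooling, but the hand-off structure is the same, and it is what makes continuous delivery a natural consequence of event-driven automation rather than an extra step.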

So, how do we compute today? We compute largely in an environment that is virtualized, and insulated from the hardware level by multiple layers of abstraction. Our development and deployment pipeline is continuous and managed by event-driven automation. In many respects, the modern IT environment is a virtual world, insulated from the traditional hardware-based IT world to the point where many of the concerns that dominated IT just a few years ago have become irrelevant.

A Virtual Tomorrow?

If that’s a snapshot of today, what will the picture look like tomorrow, or in five or ten years? There’s no real way to know, of course. Any prediction made today is likely to look increasingly foolish as time goes on.

But here are a few predictions anyway. It is likely that we have only begun to see the effects of freeing the virtual-computing environment from the constraints imposed by hardware. It is also likely that the distinctions between virtualized computing, virtual reality, and the traditional world of physical experience will break down even further. In many respects, the rate of change in computing today is limited by our ability to assimilate changes as they occur, and to make full use of new capabilities as they develop. But automation and intelligence capabilities will likely disrupt nearly every function, vertical, and domain, unleashing new potential for efficiency and dramatically altering the focus of people’s work.

Perhaps the virtualization of both computing and everyday experiences will increase the rate at which we can assimilate future change. If this is the case, future computing and future life in general might be completely unrecognizable to us if we were to catch a glimpse of it now, even though we are likely to be both the creators of and participants in that future. As we change the world, we change ourselves.