The Top Causes of Downtime
According to a roundup by Gartner, the average cost of downtime for an enterprise is $5,600 per minute. While that data was collected from very large enterprises, the cost of downtime for even a small startup is no laughing matter.
Let’s assume, for the sake of simplicity, that your core product is a web app that relies solely on organic sales, totaling $1 million in revenue a year. This amounts to about $2 in lost revenue per minute. That doesn’t sound like much in the grand scheme of things, but revenue is only a small part of your downtime costs. We must also consider wasted operating costs.
Employees’ time and productivity, too, are wasted during downtime. If, for example, you pay $500,000 a year in employee costs, that’s an additional $1 in wasted cost per minute. If you’re keeping track, we’re now at $3 in cost per minute.
That’s $180 an hour. $4,320 a day.
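The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The revenue and payroll figures are the illustrative assumptions from the example, not industry benchmarks, and the article rounds the per-minute total up to $3:

```python
# Back-of-the-envelope downtime cost per minute, using the example figures
# from the text. These numbers are illustrative assumptions.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

annual_revenue = 1_000_000  # $1M/year in organic sales
annual_payroll = 500_000    # $500K/year in employee costs

revenue_per_minute = annual_revenue / MINUTES_PER_YEAR  # ~$1.90, rounded to $2
payroll_per_minute = annual_payroll / MINUTES_PER_YEAR  # ~$0.95, rounded to $1

cost_per_minute = revenue_per_minute + payroll_per_minute
print(f"~${cost_per_minute:.2f}/min, ~${cost_per_minute * 60:.0f}/hour, "
      f"~${cost_per_minute * 60 * 24:.0f}/day")
```

Every extra expense you fold in (servers, third-party services, incident-response time) only pushes that per-minute figure higher.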
Adds up quickly, doesn’t it? We’ve now accounted for employee costs and lost revenue, but what about other wasted expenses? Every idle piece of your architecture results in additional losses during downtime. Servers and third-party services sit unused while your team works on a fix, and the fix itself may require additional (and costly) resources.
Depending on how critical your product is to your customers’ businesses, downtime could not only cost you money, but also your customers’ trust. It’s difficult to justify the cost of paying an unreliable vendor, so while one outage is easily survivable, the loss of faith in your product is compounded with every subsequent occurrence.
Causes + Solutions
Ultimately, by understanding the causes of outages, you can maximize your chances of preventing them. The causes can be boiled down to a few categories — human error, third-party service outage, or a highly unpredictable “black swan” occurrence.
One of the most common causes of downtime that I’ve personally seen is human error. Whether a developer commits broken code or an administrator updates an untested package, when procedure isn’t followed or an obscure system bug isn’t accounted for, product uptime will suffer. Establishing a system of checks and balances within an organization is the best solution to this problem. Code reviews, unit tests, quality assurance, proper planning, and clear communication all go a long way in preventing entirely avoidable downtime.
Sometimes, however, downtime isn’t caused internally. From time to time, even cloud providers like AWS go down. There is little an organization can do when this happens, at least not without a proper plan in place. To combat this, I’m a fan of Netflix’s Chaos Monkey system. For the uninitiated, Chaos Monkey is a system whose sole job is to kill off random services within a product’s architecture. This forces the system to be self-repairing, and trains the team to handle outages effectively when they really matter. PagerDuty conducts its own Failure Fridays as well!
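To make the idea concrete, here is a toy sketch of the Chaos Monkey pattern: on each round, pick one running service at random and terminate it. The service names and the `terminate` function are placeholders for illustration, not Netflix’s actual implementation (the real tool integrates with your orchestrator or cloud provider):

```python
import random

# Hypothetical service inventory -- in practice this would be discovered
# from your orchestrator (Kubernetes, ECS, etc.).
SERVICES = ["web-frontend", "auth", "billing", "search", "recommendations"]

def pick_victim(services, rng=random):
    """Choose one service at random to kill this round."""
    return rng.choice(services)

def terminate(service):
    # Placeholder: a real chaos tool would call your orchestrator here,
    # e.g. delete a pod or send an instance-termination request.
    print(f"Chaos: terminating {service!r}")

victim = pick_victim(SERVICES)
terminate(victim)
```

The point isn’t the killing itself; it’s that running this continuously in a controlled environment forces your architecture (and your team) to treat failure as routine rather than exceptional.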
While occasional downtime is completely unavoidable (even Facebook goes down from time to time), how you handle and prepare for it will determine just how much of an impact it has on your organization. Because every minute of downtime means additional costs, establishing workflows to prevent or reduce the length of an outage is crucial. Solutions like PagerDuty accelerate real-time incident resolution by notifying and getting everyone on the same page as soon as possible, and by providing a platform for surfacing context to fix the issue. By aggregating all your event data and optimizing communication, it becomes far easier to identify the root cause of an outage and resolve issues efficiently and accurately.
It’s important to remember that improving communication externally is just as important as improving it internally. Communicating information about an outage to your customers early and clearly goes a long way toward maintaining trust and credibility with them. Through the use of tools like StatusPage and StatusCast, as well as PagerDuty’s Stakeholder Engagement, organizations can better orchestrate the real-time business and external-facing response, and use status pages to provide valuable transparency into the health of a product. Personally, nothing erodes my trust faster than an organization that stays silent through a crisis. The silence feels like an attempt to hide something.
All of these solutions are great, but it’s important to understand that an indispensable part of managing unexpected downtime is making sure there are always people on hand to fix the issue. This can be easily accomplished by establishing an on-call rotation among your engineers. An effective on-call rotation is a minimal investment that improves product reliability, accountability, service delivery, and work-life balance for your team. Without an on-call rotation, every outage turns into an “all hands” event, which is disruptive to the personal lives of every employee. With a clearly defined on-call schedule and escalation policies, on the other hand, workloads are balanced, and there is always a dedicated subject matter expert ready to fix an issue or drive collaboration toward resolution.
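The scheduling logic behind a basic rotation is simple. Here is a minimal sketch of a weekly rotation, assuming a made-up team and start date; real schedulers layer escalation policies, overrides, and time zones on top of this:

```python
from datetime import date

# Hypothetical team and rotation start date, for illustration only.
ENGINEERS = ["alice", "bob", "carol", "dave"]
ROTATION_START = date(2024, 1, 1)  # a Monday

def on_call(day, engineers=ENGINEERS, start=ROTATION_START):
    """Return the engineer on call for `day`, rotating weekly in list order."""
    weeks_elapsed = (day - start).days // 7
    return engineers[weeks_elapsed % len(engineers)]

print(on_call(date(2024, 1, 10)))  # second week of the rotation -> "bob"
```

Even a sketch this small makes the fairness property visible: each engineer carries the pager for exactly one week out of every four, and everyone can see who is next.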
In the end, the best way to plan for (and mitigate) downtime is to invest in your resources and your team. Not every solution mentioned here is right for every organization, but the cost of doing nothing is significantly higher than the cost of doing something. When you have an established process for handling outages, it won’t matter if it was caused by a hacker or a power outage. You and your team will be prepared to handle it.