This is the first in a series of posts on increasing overall availability of your service or system.
This post is meant as a quick introduction to some concepts of system availability, so that subsequent posts in this series make sense. I’ll go over concepts like availability, SLA, mean time between failure, mean time to recovery, etc. If you’re already very familiar with these, feel free to skip over this post.
The availability of a system or service is the total percentage of time that the given system is up and functional. For instance, a system that is down for a total of 5 hours per year would result in about 99.94% availability. This measure is often stated in terms of “nines”: for example, a telephone service provider with “four nines” availability is 99.99% available, or has about 53 minutes of total downtime a year.
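The arithmetic behind those figures is simple enough to sketch in a few lines. Here’s a quick illustration (the function and variable names are my own, just for this example) that converts downtime to an availability percentage and converts a number of nines back into an allowed downtime budget:

```python
# Availability math over one year, assuming a non-leap 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def availability_pct(downtime_minutes, period_minutes=MINUTES_PER_YEAR):
    """Percentage of the period the system was up."""
    return 100 * (1 - downtime_minutes / period_minutes)

def downtime_budget_minutes(nines, period_minutes=MINUTES_PER_YEAR):
    """Allowed downtime per period for a given number of nines
    (e.g. nines=4 means 99.99% availability)."""
    unavailability = 10 ** (2 - nines) / 100  # 4 nines -> 0.0001
    return period_minutes * unavailability

print(availability_pct(5 * 60))      # 5 hours/year -> ~99.94%
print(downtime_budget_minutes(4))    # "four nines" -> ~52.6 minutes/year
```

Running this reproduces the numbers above: five hours of annual downtime is about 99.94% availability, and four nines leaves roughly 53 minutes of downtime a year.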
Downtime is a vague term, but it usually covers both the time when a service is completely inaccessible and the time when it is accessible but throwing so many errors, or responding so slowly, that it is effectively unusable. Some service providers try to omit scheduled downtime from their availability calculations, but this is bogus. You are not available when you are down, whether or not you foresaw the problem and “scheduled” the downtime. The nearly oxymoronic concept of scheduled downtime is becoming an anachronism in modern web and SaaS businesses, but it is far from dead. That rant could be a blog post in and of itself, so I will skip it for now.
Paid services often have service-level agreements (SLAs) in place with their customers that define the minimum level of availability the service must maintain before financial reparations kick in: in other words, before the provider refunds some or all of their customers’ money when things break. Some services, like Amazon S3, have very explicitly defined SLAs, whereas others, like Netflix, won’t spell out their policy explicitly but will proactively refund customers for periods where they were getting crummy service. Although these SLA refunds can add up to a lot of money across a service’s entire customer base during a big outage, they amount to very little for individual customers.
Refunds for SLA breaches, mind you, are only a small part of the financial damages that an outage can cost: some services, especially cloud services, live and die by their availability. Large outages of a consumer-facing service can impact customer mindshare and consumer confidence. Outages (of any size) of a business-facing service can severely damage customer trust, especially if these customers depend on the service to provide some part of their business-critical functionality. Nobody wants to be known as that service that always goes down.
Finally, there are the concepts of ‘mean time between failure’ and ‘mean time to recovery’, which are generally more practical than an availability percentage. Mean time between failure (MTBF) measures how long, on average, your service stays up between periods of downtime, and mean time to recovery (MTTR) measures how quickly you can get things back to a workable state when things start crumbling.
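The two metrics tie back to availability through the standard steady-state formula: availability = MTBF / (MTBF + MTTR). A minimal sketch, with illustrative numbers of my own choosing:

```python
def availability_from_mtbf_mttr(mtbf_hours, mttr_hours):
    """Steady-state availability (%) from mean time between failure
    and mean time to recovery: uptime / (uptime + downtime)."""
    return 100 * mtbf_hours / (mtbf_hours + mttr_hours)

# A service that fails on average every 30 days and takes
# one hour to recover each time:
print(availability_from_mtbf_mttr(30 * 24, 1))  # ~99.86%
```

The formula makes the trade-off in the next paragraph concrete: you can raise availability either by making failures rarer (larger MTBF) or by recovering faster (smaller MTTR).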
We always want to increase MTBF and decrease MTTR. There are a lot of techniques for doing both, and we’ll be following up in the next few availability posts with strategies for doing so. That being said, increasing MTBF can be quite hard, and involves designing systems from the ground up that are very robust and resistant to failure. Decreasing MTTR, on the other hand, can be easy, as there are a lot of things you can do and ways to prepare so that your team is ready when the s*it hits the fan. In the next post, we’ll start discussing ways to reduce MTTR. Stay tuned!
Improve mean time to resolution with PagerDuty.