Having one person on-call isn’t enough. What happens if your on-call engineer sleeps through their alert? What happens if their phone’s battery dies without them knowing, or if they get paged at a really inconvenient time, like while stuck on a bus or in traffic? It will happen. We present best practices for backup: one or more people waiting in the wings, ready to spring into action if your primary on-call is unable to perform their duties at any given time.
Hiring software engineers is hard. We all know this. Even if you get past the problem of sourcing and landing good candidates (which is hard in itself), the question of “is this person I’m talking to ‘good enough’ to actually work here?” is a very difficult nut to crack. Again, we all know this. There […]
This is the fourth in a series of posts on increasing the overall availability of your service or system. Have you ever gotten paged and known right away that this problem isn’t like the last 15 operations issues you’ve dealt with this week? That this problem is special, and really, really bad? You know, that […]
This is the third in a series of posts on increasing the overall availability of your service or system. In the first post of this series, we defined and introduced some concepts of system availability, including mean time between failure – MTBF – and mean time to recovery – MTTR. In our second post, we went on to […]
Like pretty much everything else in Rails, optimistic locking is nice and easy to set up: you simply add a “lock_version” column to your ActiveRecord model’s table and you’re all set. If a given Rails process is trying to update some record, and some other process sneakily manages to update that same record while the first process wasn’t […]
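For reference, here’s a minimal sketch of that setup, using a hypothetical Account model (the model name and sample values are mine, not from the post); the rescue block is where the losing process decides whether to reload and retry:

```ruby
# Migration: lock_version is the column name ActiveRecord looks for by default.
class AddLockVersionToAccounts < ActiveRecord::Migration
  def change
    add_column :accounts, :lock_version, :integer, default: 0, null: false
  end
end

# Two processes (or requests) load the same row...
first  = Account.find(1)
second = Account.find(1)

# ...the first one saves, bumping lock_version from 0 to 1...
first.name = "updated by the first process"
first.save!

# ...so the second save no longer matches the row's lock_version and fails.
second.name = "updated by the second process"
begin
  second.save!
rescue ActiveRecord::StaleObjectError
  # The record changed underneath us: reload and retry, or surface a conflict.
  second.reload
end
```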
This is the second in a series of posts on increasing the overall availability of your service or system. In the first post of this series, we defined and introduced some concepts of system availability, including mean time between failure – MTBF – and mean time to recovery – MTTR. Both increasing MTBF and reducing MTTR […]
Tired of getting a flood of PagerDuty incidents whenever a problem occurs with one of your systems? Do many of the incidents seem identical? Do you spend valuable time fending off the seemingly never-ending PagerDuty phone calls and SMS messages when you should be fixing the actual problem? Then you, my friend, might […]
Have you ever said to yourself: “PagerDuty is great, but I wish I could better integrate it into the custom tools I already use.” Or maybe: “Why can’t I see more reports on the number of incidents each of my team members has worked on, bucketed by MTTR, split out by seniority of the person […]
Standing on the shoulders of giants and stumbling with them – the Amazon AWS outage’s “pain” statistics
Today, at around 1am Pacific Time, Amazon began having major problems with some of their cloud infrastructure: specifically with their EC2, EBS, and RDS offerings. We’d like to share some statistics on the alerts we sent out – via phone or SMS – during the outage.
This post is meant as a quick introduction to some concepts of system availability, so that subsequent posts in this series make sense. I’ll go over availability, SLAs, mean time between failure (MTBF), mean time to recovery (MTTR), and so on.
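As a rough preview of how those terms fit together (the concrete numbers below are illustrative, not taken from the post), availability is commonly expressed as

\[ \text{Availability} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}} \]

so a system with an MTBF of 720 hours and an MTTR of 1 hour is available roughly 720 / 721 ≈ 99.86% of the time.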