While I’ve had an interest in computers for almost as long as I can remember, it wasn’t until I was a freshman in college that I got my first computer-related job, as a systems administrator for the Center for Integrated Plasma Studies. It was, like all jobs at the university, a great learning opportunity for under-market pay with an emphasis on self-direction. My duties at the time ranged from true systems administration tasks to basic help desk services for the staff. I spent the next four years working as a sysadmin for various departments at the university, and another year afterwards as a broadcast engineer (a story for another time). I’d like to think it was the work I did during those years that gave me the respect I have for just how difficult proper incident management is.
Stop me if this sounds familiar: a user reports a mission-critical bug in the system. You immediately identify it as an issue for the development team to fix, so you forward it along to the appropriate person and move on with your day. You might hear back about the issue from the dev team, or you might not. Depending on the complexity of the issue, if you do receive a response, it might be a cold one from a frustrated developer who just had a grenade thrown into their workload.
Let me say this: It’s not entirely your fault.
As developers, we have a tendency to get frustrated when a problem is thrown our way. It’s not the problem itself that is upsetting — problem-solving is fun! The issue is the list of unanswered questions that accompany it. When did it happen? Which users are affected? Where are the logs? Do we have logs? How do I reproduce it? Has it happened before? Who’s responsible for that feature? Are they in the office today? How am I going to balance this with my current workload?
The questions just snowball, and frankly, it’s terrifying.
It’s not all about us, though. In many instances, we understand that we receive tickets with only the information that was available at the time. As incidents move up the chain, the most important thing you can do to help the next person in line is to document everything: steps taken, notes, logs. The more information, the better. It’s not any one person’s job to know everything, but if you can investigate something (whether that means just asking clarifying questions or pulling logs), you should make the effort to build a record of information so the problem at hand can be solved effectively.
Let’s revisit the scenario from before. A user reports a mission-critical bug in the system. While it is obviously an issue for the development team to solve, before you send it up the chain, you can ask the user clarifying questions and attempt to reproduce it yourself. Reproducible bugs are fixable bugs. If you can reproduce it, include everything: steps, screenshots, error messages, and the exact paths taken. Now, after compiling all of this information, you are ready to forward it to the appropriate dev team member, who can pick up where you left off.
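To make that concrete, here is a minimal sketch of what "include everything" might look like as a structured report. This is purely illustrative — the field names and layout are my own invention, not any particular ticketing system's format:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """A hypothetical bug-report record; field names are illustrative."""
    title: str
    reporter: str
    steps_to_reproduce: list[str]  # exact steps and paths taken
    expected: str                  # what the user expected to happen
    actual: str                    # what actually happened
    error_messages: list[str] = field(default_factory=list)
    screenshots: list[str] = field(default_factory=list)  # file paths or links
    logs: list[str] = field(default_factory=list)

    def is_reproducible(self) -> bool:
        # A report with concrete steps is one a developer can act on.
        return len(self.steps_to_reproduce) > 0

    def summary(self) -> str:
        # Render the report as the kind of ticket text you'd forward upstream.
        lines = [f"BUG: {self.title} (reported by {self.reporter})",
                 "Steps to reproduce:"]
        lines += [f"  {i}. {step}"
                  for i, step in enumerate(self.steps_to_reproduce, 1)]
        lines.append(f"Expected: {self.expected}")
        lines.append(f"Actual:   {self.actual}")
        lines += [f"Error: {err}" for err in self.error_messages]
        return "\n".join(lines)
```

The point isn't the code itself but the checklist it encodes: if every field is filled in before the ticket leaves the front line, the developer on the other end starts with answers instead of questions.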
A good bug report can be the difference between a developer having dinner with their family and pulling an all-nighter. It doesn’t stop there, though. When the incident hits the dev team, the cycle of investigation and documentation should persist throughout the entire process. Not only is this an important step in understanding what happened, but it is also crucial to reducing the likelihood of something like it happening again in the future. Effective incident management is a team effort. From the front line all the way to the subject matter experts, everybody plays a crucial role in solving problems quickly and efficiently.