In our always-on, IoT-enabled, cloud-connected, big data age, we face a major paradox: it’s now easier than ever to collect large amounts of data — yet the more data we collect, the harder it becomes to monitor situations effectively.
This problem is similar to what psychologists call “information overload” — the phenomenon in which someone fails to make decisions effectively because they have too much information to contend with.
In some contexts information overload is unavoidable. If you get hundreds of emails each day, there may not be much you can do about feeling overwhelmed by them, as you don’t necessarily have a lot of control over who sends you an email. Yet, when it comes to data center infrastructure, information overload is not inevitable. It’s entirely up to you to decide how much and what types of data to collect. If you find that you have too much data to parse feasibly, it means you need to rethink your monitoring practices and alert filtering.
Of course, as we’ve already noted, many admins may find themselves fighting an uphill battle when it comes to preventing information overload in the data center. That’s because the explosion of the cloud and the advent of IoT — and all of the inexpensive data that comes alongside those trends — have made it easier than ever to collect all manner of information about your servers and applications.
What’s Critical, What’s Not
That’s why it’s now more important than ever to decide which types of monitoring you actually need, what to set up notifications on, and what you can do without. Just because adding more monitoring to your infrastructure is easy and inexpensive doesn’t mean you should necessarily do it.
If you add monitoring blindly, you’re shooting yourself in the foot by collecting more data than you can ever process or act on effectively. The result is alert fatigue for your on-call staff, time wasted on low-priority issues, and critical incidents lost in the noise.
Successful alert management depends on your particular needs, of course. There’s no one-size-fits-all approach. In general, it’s a good idea to restrict yourself to deploying sensors that center on the following types of information:
- Security incidents: You’ll want to be alerted to things like repeated failed login attempts or port scans so you can stay ahead of threats.
- Host failure: If a physical or virtual server fails to start, or crashes suddenly, that’s an important event to know about.
- Resource exhaustion: You don’t want to wait until you run out of data storage or network bandwidth to find out that you should be adding more. Use sensors to warn you when usage starts to approach the maximum available and stays at that level for more than a short time.
Again, your mileage may well vary. But the above list covers the core types of events you should be notified about.
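The resource-exhaustion rule above — alert only when usage approaches the maximum and stays there — can be sketched as a simple sustained-threshold check. This is an illustrative Python snippet, not any particular monitoring product’s API; the threshold and window values are assumptions you would tune to your environment.

```python
from collections import deque


class SustainedThresholdAlert:
    """Fires only when usage stays above a threshold for a full window.

    Brief spikes that drop back below the threshold never trigger an alert.
    """

    def __init__(self, threshold_pct: float, window: int):
        self.threshold_pct = threshold_pct   # e.g. 90 (% of capacity) -- assumed value
        self.samples = deque(maxlen=window)  # most recent readings only

    def record(self, usage_pct: float) -> bool:
        """Record one reading; return True if an alert should fire."""
        self.samples.append(usage_pct)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet to call it "sustained"
        return all(s >= self.threshold_pct for s in self.samples)


# Example: disk usage sampled once a minute, alert after 5 sustained minutes
alert = SustainedThresholdAlert(threshold_pct=90, window=5)
for reading in [85, 92, 95, 91, 93, 94]:
    fired = alert.record(reading)
```

The same pattern applies to storage, bandwidth, or any other finite resource: the alarm condition is “high *and* sustained,” not merely “high.”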
Monitoring vs. Alarms
There are other types of data that are good to monitor but may not require an alarm. Those include things like:
- CPU usage: This can vary widely throughout the day due to a number of factors. You want to know about general trends, but you don’t need an alarm to tell you each time CPU usage has jumped.
- Network load: This is in the same category as CPU usage. Network load varies naturally. You should know your data center’s trends so you can plan for long-term expansion. But there’s no need to set off alarms just because a lot of devices happen to be on the network in a given moment — unless, of course, the situation is extreme and sustained.
- Environmental conditions: You should track things like data center temperature. But this is the type of incident that can usually be handled in an automated fashion. Instead of having sensors send you an alert when temperatures climb, have software that turns up the cooling units for you. You only need an alert if temperatures approach critical levels and stay there.
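The temperature example — remediate automatically, page a human only if the condition is critical and sustained — can be sketched as follows. The function name and threshold values here are hypothetical; in a real deployment the remediation step would call your building-management or DCIM system.

```python
WARN_TEMP_C = 27.0      # upper end of the acceptable range (assumed value)
CRITICAL_TEMP_C = 32.0  # point at which a human should be paged (assumed value)


def handle_temperature(temp_c: float, sustained_critical: bool) -> str:
    """Automated first response to rising data center temperature.

    Returns the action taken, so callers can log it.
    """
    if temp_c >= CRITICAL_TEMP_C and sustained_critical:
        return "page-on-call"      # only a sustained critical reading alerts a human
    if temp_c >= WARN_TEMP_C:
        return "increase-cooling"  # automated remediation, no alert sent
    return "no-action"
```

Note that even a critical reading triggers only remediation until it has persisted: the human is the last resort, not the first responder.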
In many cases, a metric like processor queue length is already covered indirectly by a more relevant data point, such as processor utilization — making a separate alarm on it redundant.
The Right Data for the Right People
Another way to make sure you’re getting optimal results from your sensors is to make sure the right incident notifications are going to the right people.
Platforms like PagerDuty let you specify a chain of command for handling different types of events. Rather than blanketing your whole team with incident notifications, make sure only the people who actually need to handle an issue get woken up. This minimizes unplanned work and reduces alert fatigue.
You can also configure PagerDuty to send notifications to a larger group if the initial recipients don’t respond in a certain amount of time.
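The escalation behavior described above can be modeled generically: notify a first tier of responders, and widen the audience each time a timeout passes without an acknowledgment. This is an illustrative sketch of the concept, not PagerDuty’s actual API; the tier names and timeout are assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EscalationPolicy:
    """Escalate an unacknowledged incident through successive responder tiers."""

    tiers: List[List[str]]        # e.g. [["alice"], ["oncall-team"], ["all-admins"]]
    timeout_minutes: int = 15     # time each tier has to acknowledge (assumed value)
    current: int = 0              # index of the tier currently being notified

    def notify(self) -> List[str]:
        """Return the responders to notify at the current tier."""
        return self.tiers[self.current]

    def escalate(self) -> List[str]:
        """Advance to the next tier after the timeout expires without an ack."""
        if self.current < len(self.tiers) - 1:
            self.current += 1
        return self.notify()


policy = EscalationPolicy(tiers=[["alice"], ["oncall-team"], ["all-admins"]])
first = policy.notify()       # only "alice" is woken up initially
wider = policy.escalate()     # no ack within the timeout: the on-call team is paged
```

The key design point is that the broad notification is the fallback, not the default — most incidents should be resolved by the first tier without anyone else ever being disturbed.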
Get More Out of Logs
Last but not least, keep in mind that there are lots of different ways to deal with information. One way is to generate alerts. But another is to use log analytics tools to identify trends that stretch across a large amount of data collected by various monitoring tools.
By boiling your log results down to the essentials, you can figure out what you should be paying attention to without having to handle a huge number of events on an individual basis.
That’s why PagerDuty offers features like integrations with Splunk and other analysis tools. These are ideal for providing a way to derive value from monitoring data without suffering information overload.
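“Boiling down” log results can be as simple as aggregating events by severity and message and looking at the top offenders, rather than reading lines one by one. A minimal sketch, assuming plain-text logs with a `SEVERITY: message` format (the log format and field layout are assumptions for illustration):

```python
from collections import Counter


def summarize_log(lines):
    """Collapse individual log lines into counts per (severity, message) pair."""
    counts = Counter()
    for line in lines:
        severity, _, message = line.partition(": ")
        counts[(severity, message)] += 1
    return counts.most_common()  # highest-volume events first


log = [
    "ERROR: disk latency high",
    "WARN: login failed",
    "ERROR: disk latency high",
    "ERROR: disk latency high",
]
summary = summarize_log(log)
```

Here three individual error lines collapse into a single entry with a count of 3 — one trend to investigate instead of three events to triage. Dedicated tools like Splunk do this at far greater scale and sophistication, but the principle is the same.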