Authored by Rod Lewis

I originally wrote a version of this a few years ago for my friends at Auvik. I’ve updated it as some things have changed, but the core message still holds and, frankly, it’s a concept many organizations miss. Whether you’re trying to troubleshoot a problem, defend against cyber security attacks, or simply optimize your environment, event logs are your best source of information.

Moreover, not logging, or ignoring your logs, is like not checking your blind spot when you’re changing lanes: sooner or later you’re going to seriously regret it. The regret sets in when you have an issue (an attack or otherwise) and you simply don’t know “what good looks like.” You’ll be searching through logs, and everything will require investigation, because without a baseline you have no clue what normal is.

Centralized logging, where all your network and server elements send their log data to a central server, is far more advantageous than logging on each system locally. With central logs, you have one complete view of your environment. Here are five ways a centralized view makes you better, faster, and more efficient at your job.
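To make “sending data to a central server” concrete, here’s a minimal Python sketch using the standard library’s SysLogHandler to forward an application’s events to a central collector. The hostname logs.example.com and the app name are placeholders, not a prescribed setup:

```python
import logging
import logging.handlers

# Forward this host's application logs to a central syslog collector.
# "logs.example.com" is a hypothetical hostname; 514/UDP is the
# traditional syslog default.
handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Application started")
logger.warning("Disk usage at %d%% on /var", 85)
```

Point every system at the same collector and you get the single, correlated view described below.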

1. Centralized logs are indispensable to troubleshooting

Logs are indispensable when it comes to pinpointing problems and determining their causes. They’ll let you identify issues based on hard data, not guesswork.

During or after an incident, a logging tool such as the ELK stack (Elasticsearch, Logstash, Kibana), Graylog, or Splunk, with real-time graphing, filtering, comparison, and alerting, can give you access to data correlated from multiple sources across your systems, helping you narrow down the incident’s cause.

You get a complete before-and-after picture of what happened, showing the effects on all the systems in your environment from one interface. In a locally logged environment, you’d need to go from system to system, open multiple windows, and attempt to piece things together. That’s time-consuming and you might miss critical correlations.
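As a hedged illustration of that single-interface correlation, the sketch below uses the official Elasticsearch Python client to pull error-level events from every source in the fifteen minutes before an incident. The cluster URL, the logs-* index pattern, and the field names (level, @timestamp, host, message) are assumptions; substitute your own schema:

```python
from datetime import datetime, timedelta
from elasticsearch import Elasticsearch  # official Elasticsearch client

# Hypothetical central cluster and index pattern.
es = Elasticsearch("http://logs.example.com:9200")

incident = datetime(2024, 5, 1, 14, 30)  # when the incident was noticed

# Error-level events from ALL sources in the 15 minutes before the incident.
resp = es.search(
    index="logs-*",
    query={
        "bool": {
            "must": [{"match": {"level": "error"}}],
            "filter": [{"range": {"@timestamp": {
                "gte": (incident - timedelta(minutes=15)).isoformat(),
                "lte": incident.isoformat(),
            }}}],
        }
    },
    sort=[{"@timestamp": "asc"}],
    size=100,
)

# One chronological view across every system, instead of one window per host.
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["@timestamp"], doc.get("host"), doc.get("message"))
```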

2. Centralized logs help you proactively manage your network

Once you’re collecting data, log review and analysis should become part of your daily or weekly regimen, depending on the size of your environment.

Constant analysis means you can be proactive instead of reactive, nipping problems in the bud before they cause an outage. For example, if you see memory or disk usage creeping up, or a device throwing errors, you can address the issue before it causes a failure.
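As a small illustration, a script run against exported log events can turn “disk usage creeping up” into a rough forecast. The log format, hostname, and readings below are invented for the example:

```python
import re

# Invented sample of centralized events reporting disk usage over time.
events = [
    "2024-04-01 web01 monitor: disk /var usage 71%",
    "2024-04-08 web01 monitor: disk /var usage 78%",
    "2024-04-15 web01 monitor: disk /var usage 86%",
]

USAGE = re.compile(r"(\d{4}-\d{2}-\d{2}) (\S+) monitor: disk (\S+) usage (\d+)%")

# Pull out the weekly usage readings for the volume.
readings = [int(m.group(4)) for e in events if (m := USAGE.match(e))]

# Naive linear trend: average growth per reading, then weeks until 100%.
growth = (readings[-1] - readings[0]) / (len(readings) - 1)
weeks_left = (100 - readings[-1]) / growth if growth > 0 else float("inf")
print(f"/var on web01 at {readings[-1]}%, growing ~{growth:.1f}%/week; "
      f"full in roughly {weeks_left:.0f} weeks")
```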

There’s a huge difference between scheduled and unscheduled downtime when it comes to maintaining user trust and keeping repair costs low, so any proactive maintenance you can do is a big win.

3. Centralized logs help you deliver greater value

Once you’ve accumulated enough data, you can perform many different types of analysis on it to better understand your network and your users. For example, you might complete a comparative analysis using daily, monthly, or yearly data to identify changes that have occurred on your network, for better or worse.

Trend analysis, whether day over day or year over year, can be used to quickly find anomalies, such as a sudden spike in log frequency; with that history you have a baseline and know “what good looks like.” Once you’ve identified the change, you can dig into why it happened and address it.
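As a simple sketch, a few lines of Python can flag the day whose log volume breaks from the baseline. The daily counts here are invented; in practice they’d come from your logging platform’s aggregations:

```python
import statistics

# Invented daily event totals, as a log platform's aggregation might export.
daily_counts = {
    "2024-04-01": 48210,
    "2024-04-02": 47955,
    "2024-04-03": 49102,
    "2024-04-04": 48660,
    "2024-04-05": 121340,  # the sudden spike worth digging into
}

# The median is a robust baseline: one abnormal day barely moves it.
baseline = statistics.median(daily_counts.values())

# Flag any day whose volume is more than double the baseline.
for day, count in daily_counts.items():
    if count > 2 * baseline:
        print(f"{day}: {count} events vs. baseline ~{baseline:.0f} -- investigate")
```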

The business intelligence you extract from your analysis can be used to find efficiencies, improve network/server design, and provide an overall improved experience for the business.

4. Centralized logs reduce the risk of losing data

A centralized logging system removes the individual server from the equation. If the server you’re trying to troubleshoot is down, local log files won’t be accessible, rendering you blind. Centralized logging (with proper system backups) ensures you always have a place to view the logs and diagnose the issue.

5. Centralized logs improve your network security

By centrally logging user activity, you can analyze activity trends and notice any unusual behavior.

When a system is compromised, you can no longer trust its local logs: an attacker with control of the host can alter or erase them. Centralized logs give you the forensic ability to determine what happened right before the compromise, including any user activity. This data is instrumental in preventing a recurrence.

If a system is under a brute-force attack, you’ll quickly be able to see this in the logs. Even if the attack is spread across multiple systems and requires more subtle correlation, you can still see it in the centralized logs and respond. By comparison, detecting a multi-system attack by looking at local logs would be extremely difficult.
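Here’s a minimal sketch of that cross-system correlation: counting failed logins per source IP over auth events gathered from several hosts. The log lines are fabricated samples in a common sshd style, and the threshold is arbitrary:

```python
import re
from collections import Counter

# Fabricated auth events collected centrally from several hosts.
log_lines = [
    "web01 sshd[411]: Failed password for root from 203.0.113.7 port 52211",
    "web02 sshd[902]: Failed password for admin from 203.0.113.7 port 40312",
    "db01 sshd[133]: Failed password for root from 203.0.113.7 port 60133",
    "web01 sshd[415]: Accepted password for deploy from 198.51.100.4 port 51515",
]

FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

# Count failures per source IP across ALL hosts at once -- the correlation
# that scattered, per-host logs make hard to see.
failures = Counter(
    m.group(1) for line in log_lines if (m := FAILED.search(line))
)

THRESHOLD = 3  # tune to your environment
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"{ip}: {count} failed logins across hosts -- possible brute force")
```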

Whether log events point to hardware, application, capacity, or security issues, they contain the data you need to quickly find and solve problems that have a direct impact on business operations. This ability to zero in on issues, be proactive, and react intelligently is invaluable. Even discovering obscure edge cases that occur periodically is often only possible by analyzing centralized log data. The other invaluable result of having centralized logs with history is baselining: it’s next to impossible to troubleshoot an issue or hunt for an attacker if you don’t know what normal is, i.e., “what good looks like.”

In other words, a properly used centralized logging system is both necessary and beneficial for any operation.

A little about the author: Rod Lewis, P.Eng., CISSP, is a business, technology, operations, and security leader and advisor. To find out more, visit theoknetwork or LinkedIn. This article and more can be found at the recently launched cyberthreat.info blog.

