In part two of our month-long series on the benefits of monitoring network and system events (audit logs), we discuss how greater insight helps companies detect a breach more quickly. A common misconception is that if logs are aggregated and people are watching them, the bad guys will be found. False! Depending on the logs collected, the capabilities of the analyst, and the signatures in the SIEM, results may vary.


Log Sources

Organizations often stick with collecting the default logs for the Windows operating system and maybe firewall logs. At first glance these events sound fantastic, and as a customer we are happy that the MSSP is on our side; however, much of the detail needed for incident analysis is missing. In our experience, these few logs will not tell us about the processes run on a workstation, the content of network connections, whether those connections succeeded, or many other items needed for proper detection. As with most things, a good rule of thumb is quality over quantity. We want to know what is happening on both the network and the client systems so that the chain of events can be followed. Otherwise, if there is a break in the chain, the analyst will have to guess what happened next. Think of a cyber breach investigation like the old movies where bloodhounds track an escaped prisoner: the prisoner runs down a body of water for a distance so the hounds lose the scent. Like the dog handler, the analyst has to guess where the attacker went in order to pick up the trail again. Having the right audit logs aids speedier investigations and ultimately gets the intruder out of the environment much faster.
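
To make the "chain of events" idea concrete, here is a minimal sketch of merging per-host events from multiple log sources into one timeline. The records, field names, and hostnames are hypothetical placeholders (real SIEM schemas vary by vendor); the point is that each missing source is a gap in the timeline the analyst must fill with guesswork.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified records; real process and network log
# schemas differ by product, but typically key on host and time.
process_logs = [
    {"host": "WKS-042", "time": datetime(2024, 1, 9, 10, 0),
     "event": "process_start", "image": "winword.exe"},
    {"host": "WKS-042", "time": datetime(2024, 1, 9, 10, 1),
     "event": "process_start", "image": "powershell.exe"},
]
network_logs = [
    {"host": "WKS-042", "time": datetime(2024, 1, 9, 10, 2),
     "event": "connection", "dest": "203.0.113.7", "outcome": "allowed"},
]

def chain_of_events(host, start, window_minutes=10):
    """Merge events for one host, from every available source,
    into a single time-ordered timeline. If a source is missing
    (say, no process logs), the chain simply has a hole in it."""
    window = timedelta(minutes=window_minutes)
    events = [e for e in process_logs + network_logs
              if e["host"] == host and start <= e["time"] <= start + window]
    return sorted(events, key=lambda e: e["time"])

timeline = chain_of_events("WKS-042", datetime(2024, 1, 9, 10, 0))
for e in timeline:
    print(e["time"], e["event"], e.get("image") or e.get("dest"))
```

With all three sources present, the timeline shows a Word document spawning PowerShell and then making an outbound connection; drop the process logs and all that remains is an unexplained connection.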

Skills & Experience

Not every analyst is made the same. If the analyst is an in-house hire, you can look at their background and ask probing questions to determine their experience. For managed security service providers, capability is harder to judge. As a customer, you typically don't have access to each analyst's resume, and you don't get to pick who is on your account. Judging the provider by its training, its methodology, and conversations with some of the staff is your best bet. If the company is cagey with that information, keep shopping around. A company's monitoring capability is only as strong as those watching; if the analyst is asleep at the wheel, the criminals can walk right in. With an experienced team, on the other hand, attackers will be removed from the environment in a timely manner with minimal impact on the bottom line.


Threat Signatures

A final piece of the monitoring puzzle is threat signatures (also known as SIEM content). In our opinion, one of the best features of a SIEM is the ability to correlate events from multiple sources. This capability is especially useful in avoiding an issue plaguing the cyber security community: alert fatigue. Fatigue sets in when an overwhelming number of alerts trigger every day, of which only a limited set are properly investigated. Most businesses contend with a false positive rate above 50% (in our experience, closer to 80%). By correlating multiple log sources, alerts are limited to events that have been validated by more than one tool. For instance, a malware signature may only care about samples that actually executed: first check the antivirus logs to see whether the code was caught before running, then verify with the process logs whether it ever ran. Yes, sometimes logs are wrong; trust but verify.
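
The antivirus-plus-process-log correlation described above can be sketched as a simple rule. This is an illustrative example, not any vendor's correlation syntax; the field names (`host`, `sha256`, `action`) are assumptions for the sketch.

```python
def correlate(av_events, process_events):
    """Alert only on malware that the antivirus saw but did NOT block,
    and that the process logs confirm actually executed. Detections
    that were blocked before running are suppressed, which is exactly
    the multi-source validation that cuts down alert fatigue."""
    executed = {(e["host"], e["sha256"]) for e in process_events}
    return [av for av in av_events
            if av["action"] != "blocked"
            and (av["host"], av["sha256"]) in executed]

# Sample data: one sample was blocked before running, one was merely
# detected and then shows up in the process logs as having executed.
av_events = [
    {"host": "WKS-042", "sha256": "abc123", "action": "blocked"},
    {"host": "WKS-042", "sha256": "def456", "action": "detected"},
]
process_events = [
    {"host": "WKS-042", "sha256": "def456"},
]

alerts = correlate(av_events, process_events)
print(alerts)  # only the unblocked sample that actually ran survives
```

Of the two antivirus detections, only one alert reaches the analyst: the blocked sample is filtered out, while the one the process logs confirm as executed is escalated.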


For our encore, we will walk through a typical infection scenario and show how the topics we have covered thus far would save time.

On Tuesday, a user named Susie, who works in finance, received an email with a suspicious document titled “Urgent Funds Transfer Request” from an address bearing the CEO's name but sent from a Gmail account. Susie doesn't want to get fired, so she downloads the document, opens it, and follows the instructions to “enable macros for decryption”. Afterward nothing happens, and she suspects the document is a fake. Susie deletes the file and the email “just in case”. Back at the SIEM, three days later on Friday (thanks to a backlog of alerts from the high false positive rate), Joe the analyst sees that a malware event triggered from the antivirus product on Susie's system. From the alert, Joe can't tell whether the file ran, so he rings up Susie to ask. Susie says she downloaded the document but deleted it because she thought it was suspicious. Because Joe is a nice guy, he believes her, and since he sees no other logs showing suspicious activity, he closes the alert. Little do Susie and Joe know, the malware has begun to spread across the network, compromising other hosts that Joe discovers on Monday. We could play this story out into a companywide compromise, but our point is that with more log sources Joe could have done his job properly. With process and network logs, he would have seen that the malware ran and spread to other hosts, and he would have immediately enacted the company's incident response plan. Even better, if Joe were properly trained on the technology, he could have tuned the signatures, decreasing the number of alerts, and detected the malware on Tuesday instead of Friday.

The End

The moral of the story: monitoring is a great tool for a cyber security program when conducted properly. You must have the right logs, the right people, and a well-tuned SIEM. We look forward to sharing more of our experiences with you.

Please comment below with your stories about how using a SIEM has reduced the detection time of incidents in your environment.
