
Network Alerting
Visibility Series Part 3



Written by: Sarah Lantz, Security Specialist

One of the primary things information technology (IT) and security teams continue to contend with is alerts. Every system can generate an alert. Is it a security alert? An uptime alert? Perhaps an alert indicating that too many alerts have been generated in too short a time? Whatever the case, the sheer volume of alerts likely means headaches for everyone.

In Part 1 of this series, Tackling the Visibility Monolith, we discussed the categories of visibility and the objectives for a successful visibility project. 

In Part 2 of this series, Flashlight on Shadow IT, we dove into the importance of understanding and identifying shadow IT.

In Part 3 of the Visibility Series, Network Alerting, we’ll talk about alerts – the history behind them and the problems they can cause.

Alerts – A Brief History

What started with good intentions has become a major source of stress for many. As I said in Flashlight on Shadow IT: Visibility Series Part 2, "you cannot secure what you do not know." In the same way, if you do not know what is occurring, you cannot respond to it. Many organizations started with their firewalls. What did the firewall tell the security and network team about what was traversing in and out of the organization? This started off well: if an external address appeared in a firewall alert, organizations just had to make sure that security signatures caused the firewall to block the malicious traffic. If the offender was especially egregious, they simply blocked all traffic from that external address.

Then came endpoint security. If random people on the Internet were sending malicious things to your websites, you could bet they were sending the same "gifts" in email format as well. Organizations then needed to watch the alerts from their endpoint solutions to see whether someone had received a nasty virus or clicked a phishing link: a bit more of an ask, but still possible to address.

This created a concern about traffic that wasn't being observed. If endpoint protections were disabled, an external or internal bad actor could plunder the company's data unseen. Now network data needed to be examined. Network SPANs and TAPs became common, splitting off copies of the traffic to let an intrusion detection/prevention system (ID/PS) determine if anything was suspicious. In this way, the volume of data, and therefore the volume of alerts, continued to escalate.

What Came Next

With all these disparate systems producing their own alerts on their own severity scales, organizations struggled to have teams watch every dashboard for events and to respond to their sheer scale. This struggle led to the emergence of the first security information and event management (SIEM) systems.

In terms of functionality, first-generation SIEMs were only distant cousins of what we are accustomed to today, more akin to "the family members that come around for gatherings once in a decade." These SIEMs had incredible difficulty with scale, extremely limited historical data, incomprehensible alerts and reports, and barely any data enrichment. The best you could do was funnel multiple alert sources into one window and watch.

Next-generation SIEMs partially addressed the scaling issue and added far more capable reporting and data enrichment. However, centralizing your alerts did just that: it centralized your alerts. Organizations gained some relief from the ability to refine alarm rules so that not every minor alert needed to be addressed. Still, with a fire hose of data from all these systems flowing into the SIEM, normal operations would get flagged as alerts. To some older systems, someone's backup script trying to log in and take backups every week looked suspiciously like a script kiddie trying to hack into the network.
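As an illustration, here is a minimal Python sketch of the kind of alarm refinement described above: suppressing the known weekly backup job's logins so they stop being flagged as brute-force attempts. The account name, host, and event fields are hypothetical examples, not the rule syntax of any particular SIEM.

```python
# Hypothetical tuning rule: the backup service account logging in from its
# own server on schedule is expected behavior, not an attack.
KNOWN_SERVICE_LOGINS = {("svc_backup", "backup01.internal")}

def should_alert(event: dict) -> bool:
    """Return True only if a repeated-login event still deserves an alarm."""
    key = (event.get("account"), event.get("source_host"))
    if key in KNOWN_SERVICE_LOGINS:
        return False  # suppress the known backup job
    return True       # everything else that tripped the rule still alerts

print(should_alert({"account": "svc_backup", "source_host": "backup01.internal"}))  # False
print(should_alert({"account": "jdoe", "source_host": "203.0.113.50"}))             # True
```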

Compounding Problems of Alerts

In time, organizations sorting through these normalized alerts found they could tread water. With enough refinement and tweaks, IT departments felt pretty good about their SIEM. More systems were brought into the SIEM on the theory that more data would lead to better correlation. In practice, however, this did not work. Those with the knowledge to write creative alarm rules found they spent more time responding to alerts than keeping those rules refined.

In came the need for automation. Many of the alerts required the same information, so it made sense to speed up the process by acting on simple logic. If refining alarms could reduce their volume, the same sort of "panning for gold" approach might work for responding to them. The lines between SIEM and security orchestration, automation, and response (SOAR) systems began to blur. Some SIEM systems natively had some SOAR functions, while some SOAR platforms could also work as a SIEM.
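To show what that simple logic might look like, here is a small Python sketch of a SOAR-style triage step: enrich an alert, auto-close low-severity noise, and escalate the rest to an analyst. The function names, fields, and reputation lookup are placeholders rather than any real product's API.

```python
def lookup_reputation(ip: str) -> str:
    """Placeholder enrichment; a real playbook would query a threat intelligence feed."""
    known_bad = {"203.0.113.7"}
    return "malicious" if ip in known_bad else "unknown"

def triage(alert: dict) -> str:
    # Enrich the alert so the analyst (or the next rule) has context.
    alert["reputation"] = lookup_reputation(alert.get("source_ip", ""))

    # Low-severity alerts with no bad reputation are closed automatically.
    if alert["severity"] == "low" and alert["reputation"] != "malicious":
        return "auto-closed"

    # Anything else goes to a human, with the enrichment already attached.
    return "escalated to analyst"

print(triage({"severity": "low", "source_ip": "198.51.100.23"}))   # auto-closed
print(triage({"severity": "medium", "source_ip": "203.0.113.7"}))  # escalated to analyst
```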

With low-level alerts filtered by the SIEM and medium-low-level alerts handled by SOAR, the organization hoped to no longer just tread water but to swim with grace. This aspiration was thwarted by another wave of data brought on by the further adoption of cloud technologies, an increasingly remote workforce, and everything becoming some variation of as-a-Service (aaS).

Response at Scale

As I said previously, SIEMs scale better now than those first market entries, but they still may not scale as easily or as painlessly as you want. There are two options here: (1) relying on an aaS offering to make the SIEM scale, or (2) splitting up the functions of the SIEM. If choosing option 2, it's important to consider whether all network traffic should be evaluated by the same system that evaluates Structured Query Language (SQL) database activity, firewall alerts, and endpoint alerts. For some organizations, this would work. However, when processing at scale becomes a problem, these SIEMs start getting split into disparate systems that ingest alarms from each other.
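As a rough sketch of option 2, the Python snippet below routes each alert source to its own processing tier, with the tiers forwarding only their summarized alarms to a central console. The tier names and routing table are illustrative assumptions, not a prescribed architecture.

```python
# Hypothetical routing of alert sources to specialized processing tiers.
ROUTES = {
    "network_tap": "network-analytics-tier",
    "firewall":    "perimeter-tier",
    "endpoint":    "endpoint-tier",
    "database":    "application-tier",   # e.g., SQL audit logs
}

def route(alert: dict) -> str:
    """Send each alert to the tier that owns its data source."""
    return ROUTES.get(alert.get("source_type", ""), "central-console")

# The specialized tiers only forward what they judge to be an alarm, so the
# central console ingests summaries rather than the raw event volume.
print(route({"source_type": "firewall", "msg": "blocked inbound scan"}))  # perimeter-tier
```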

Extended detection and response (XDR) is helpful here. These systems tend to have fewer issues with processing at scale, relying instead on cloud-provided data storage and machine learning-driven data models to find correlations. With the ability to scale in terms of data volume, and with built-in response capability, organizations can once more look toward better managing their alert data.

Nexum Visibility Solutions

Will there be even more scaling issues in the future, necessitating yet another evolution in how organizations respond to alerts? History would say, "Absolutely." Yet before your organization adopts even more data sources, there are ways to get on top of this problem.

Nexum has helped many organizations figure out where they need to start and where to go. Between architectural reviews for network SPAN/TAP and ID/PS placement, deployment of new centralized systems, tuning, and refinement of those systems, Nexum is well-positioned to find the solution that best fits your needs. Talk with an expert at Nexum.

