
Flashlight on Shadow IT
Visibility Series Part 2



Written by: Sarah Lantz, Security Specialist

“You cannot secure what you do not know.” I frequently see iterations of this phrase in discussions of cybersecurity and visibility. When a security team looks to gain insight into what is being used on their network, they cannot secure what they can’t see or aren’t aware of. It’s here that the buzzword Shadow Information Technology (IT) is commonly thrown around. You can think of Shadow IT as everything on the network that you do not know about, and therefore cannot secure. 

But what exactly is Shadow IT? Is it truly as menacing as it sounds? And why do you need to be familiar with it? 

In Part 2 of our Visibility series, I’ll answer these questions and further the discussion. 

Here’s a link to our Visibility Series Part 1: Tackling the Visibility Monolith, where I discussed the categories of visibility and the objectives of a visibility project. 

 

What’s Behind Shadow IT? 

At its root, Shadow IT is often not as nefarious as it sounds. Shadow IT refers to systems or applications used without the knowledge or consent of an organization’s IT department. This happens for several different reasons (particularly in larger organizations), but here are three of the most common I have come across: 

  1. Corporate structure is too convoluted to formally adopt a new technology. Perhaps this grew over time: previous management decisions locked teams into certain frames of mind, or no one ever brought new technology forward. 
  2. An ‘if we build it, they will come’ mentality. The goal is to prove that a given technology provides a benefit; by the time it’s discovered, it will be so integral that the company is forced to adopt it formally.  
  3. Permission was requested and rejected. But we’re going to use it anyway.  
 

In reasons #1 and #2, no formal adoption process was pursued. Before looking at technology to limit unapproved traffic, it may be beneficial to look at how the corporate structure and internal processes can be changed to encourage bringing new technology into the fold.  

While each of the reasons above can expose sensitive information, reason #3 is the most troubling: a tool is used despite an explicit rejection, knowingly creating risk for the organization. And the security-minded folks within the IT department are left asking how to identify and secure such efforts. 

Alongside this issue, organizations must weigh how restrictive they can be without causing friction. Consider the arms race around spear phishing: better tools for screening out basic scams have pushed attackers toward more sophisticated phishing emails, and as attackers find ways around those tools, we lean on users to spot the attacks themselves. In much the same way, as you increase the complexity of the controls that prevent the use of unapproved software, users will find more complex ways around those restrictions. 

This escalation continues until the measures meant to block unapproved applications either interfere with approved traffic or consume so much time that other operations suffer. 

  

Security With a Flashlight 

There are a few ways to approach this issue and several tools that can be used to see what is on the network. Depending on where the organization is in its security journey, these tools may already exist on the network; the question then becomes how best to leverage them rather than implementing brand-new tools. 

 

Scanning 

At their most basic, vulnerability management tools will only help you discover live hosts at a given IP and which ports are open. However, I have seen many organizations run discovery scans and vulnerability scans concurrently. A discovery scan attempts to ping a given IP, and perhaps probes a couple of well-known ports, to get an idea of what sort of device is there. A vulnerability scan, meanwhile, looks for what application version is listening on a given port; that version information is then used to determine which vulnerabilities the system could be susceptible to. Between an organization’s firewalls, bandwidth bottlenecks, and resource provisioning, this combination often evokes the early sci-fi trope of the slow-moving scanning robot.   
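
To make that distinction concrete, here is a minimal sketch of a discovery sweep kept separate from any version probing. It uses TCP connection attempts as a liveness check (ICMP ping typically requires elevated privileges), and the port list and subnet are illustrative assumptions rather than a recommendation from any particular scanner.

```python
# Minimal sketch of a discovery sweep, separate from vulnerability scanning.
# Ports and subnet are illustrative; a real deployment would use the
# discovery features of your vulnerability management platform.
import socket
from ipaddress import ip_network

COMMON_PORTS = [22, 80, 443, 445, 3389]  # a few well-known ports to probe

def host_is_live(ip: str, timeout: float = 0.5) -> bool:
    """Cheap liveness check: does anything answer on a well-known port?"""
    for port in COMMON_PORTS:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

def discovery_sweep(cidr: str) -> list[str]:
    """Return the hosts in a subnet that respond, with no version probing."""
    return [str(ip) for ip in ip_network(cidr).hosts() if host_is_live(str(ip))]

if __name__ == "__main__":
    print(discovery_sweep("192.0.2.0/28"))  # TEST-NET range, illustrative only
```

Because this sweep sends only a handful of short connection attempts per address, it is far lighter on the network than a full version scan of every IP in the range.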

Another approach I have seen is what I would call the “known subnets” approach: the network operations (NetOps) team knows which subnets are alive, so only those are scanned. But how certain is that knowledge? Will the list of known subnets be maintained as part of discovery scans? When was the last time the network team talked with security to confirm that every subnet in use is being scanned? This is where separating discovery scans from vulnerability scans comes into play. In a given /24 subnet, I would say 70% of addresses are in use at the upper end. How much traffic could you save with a basic discovery sweep daily, a vulnerability scan for any newly discovered hosts, and your normal vulnerability scan on its regular schedule? 
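
As a rough illustration of that diff-and-queue idea, the sketch below compares today’s discovery results against a stored inventory and queues vulnerability scans only for hosts that have never been seen before. The inventory file format and the queue_vuln_scan helper are hypothetical stand-ins for your scanner’s actual API.

```python
# Sketch of the "scan only what's new" scheduling idea described above.
# The inventory file and queue_vuln_scan() are hypothetical placeholders.
import json
from pathlib import Path

INVENTORY = Path("known_hosts.json")

def load_inventory() -> set[str]:
    """Read the set of hosts we have already seen, if any."""
    return set(json.loads(INVENTORY.read_text())) if INVENTORY.exists() else set()

def save_inventory(hosts: set[str]) -> None:
    INVENTORY.write_text(json.dumps(sorted(hosts)))

def queue_vuln_scan(host: str) -> None:
    """Stand-in for a call to your scanner's API."""
    print(f"queued vulnerability scan for {host}")

def schedule_scans(discovered_today: set[str]) -> None:
    known = load_inventory()
    new_hosts = discovered_today - known          # hosts never seen before
    for host in sorted(new_hosts):
        queue_vuln_scan(host)                     # only the new hosts get a scan
    save_inventory(known | discovered_today)      # keep the inventory current

if __name__ == "__main__":
    schedule_scans({"192.0.2.5", "192.0.2.9"})
```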

Network scanning may fail to find Shadow IT for any number of reasons (an unknown subnet, containers running on known machines, etc.). So while scanning is useful, it is not the end-all, be-all when searching for Shadow IT. 

 

Application-Centric Inspection 

So now what? Perhaps your organization has already begun adopting segmentation and an East-West firewall with application identification capabilities. Such measures may give insight into what is in the corporate data center, but let’s not forget the users who now work remotely. Depending on the virtual private network (VPN) used for working from home, some solutions tunnel all traffic, while others tunnel only organizational traffic and send web traffic directly out.   

Leveraging some variant of a next-generation firewall with application awareness has been the go-to approach for years. In the past, this solution, coupled with network aggregation points, could identify most traffic within the network and give insight into what was being used. When all traffic stays within the corporate office, you can rely on the firewall to see it (presuming it’s not being obscured or tunneled as something else). When traffic does not traverse a firewall or inspection point, two categories of tools can achieve visibility.   
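
As a hedged sketch of how that application awareness might be consumed, the snippet below reads an exported traffic log and flags applications that are not on an approved list. The CSV layout with an "app" column and the approved-list contents are assumptions for illustration; real firewall exports are vendor-specific and far richer.

```python
# Sketch: surface applications seen in firewall logs that aren't approved.
# The log format (CSV with an "app" column) is an assumption for this example.
import csv
from collections import Counter

APPROVED_APPS = {"ssl", "dns", "web-browsing", "ms-office365"}  # illustrative

def unapproved_apps(log_path: str) -> Counter:
    """Count sessions per application that is not on the approved list."""
    seen = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            app = row.get("app", "").lower()
            if app and app not in APPROVED_APPS:
                seen[app] += 1  # candidate Shadow IT traffic
    return seen

if __name__ == "__main__":
    for app, hits in unapproved_apps("traffic_log.csv").most_common(10):
        print(f"{app}: {hits} sessions")
```

Reviewing a ranked list like this regularly is often more practical than alerting on every unknown application, which tends to drown the team in noise.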

 

CASB or SASE Solution 

Providing inspection for traffic that never crosses the former perimeter is where cloud access security broker (CASB) and secure access service edge (SASE) tools come into their own. A CASB solution, for example, may alter the login portal for known applications and use agents to monitor applications that could be adopted. A SASE tool, meanwhile, folds the CASB approach into several additional tools to ensure that traffic can be inspected regardless of where it is going. Both are at their most effective when you can reliably say there is an agent on the device watching what it accesses, which makes these tools useful for finding some instances of Shadow IT.   
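
To illustrate the agent-reporting idea in miniature, here is a sketch that compares the domains a device contacted against a sanctioned SaaS list and surfaces the rest for review. The sanctioned list, the report format, and the crude domain-rollup heuristic are all assumptions for this example, not how any particular CASB works.

```python
# Sketch of CASB-style review: flag contacted domains outside a sanctioned
# SaaS list. List contents and report format are illustrative assumptions.
SANCTIONED = {"office365.com", "salesforce.com", "slack.com"}

def review_agent_report(domains_seen: list[str]) -> list[str]:
    """Return domains that may indicate unsanctioned application use."""
    findings = []
    for domain in domains_seen:
        # Crude roll-up to the last two labels; real tools handle
        # multi-part suffixes like .co.uk properly.
        root = ".".join(domain.rsplit(".", 2)[-2:])
        if root not in SANCTIONED:
            findings.append(domain)
    return findings

if __name__ == "__main__":
    print(review_agent_report(["mail.office365.com", "files.pastebin.com"]))
    # -> ['files.pastebin.com']
```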

Where they can fall short is with home-grown applications that a SASE solution may not account for. Much depends on how the tool is used, how often application reports are generated and reviewed, and how many agents the organization is willing to mandate and maintain. If a user can spin up an application and hide it from the organization’s tools, would they not also know how to evade those agents? Content creators aplenty take advertisement deals with personal VPN providers, bringing that idea to the average user. 

 

Bringing It Together 

Ultimately, this is why it’s best to foster a culture of formal application adoption. There is an argument for a parallel with the movie WarGames, in which it was stated, “The only winning move is not to play.” However, there are steps an organization can take to evaluate what level of risk it is willing to accept, and tools exist to help an organization get on top of what is already out there.  

It can be helpful to bring in an outside perspective to review the tools available and determine how they could be used to promote a culture that doesn’t turn to Shadow IT. Nexum has helped many organizations leverage inspection points with the tools they already have and identify areas where new tools might further enhance visibility. Talk to an expert for more details.
