Environmental Drift Yields Cybersecurity Ineffectiveness
Author: Brian Contos, CISO and VP of Technology Innovation at Verodin
Date Published: 26 February 2019

Your cybersecurity tools are working, optimized, and providing real, measurable business value. They are successfully blocking attacks, detecting nefarious activity, and alerting the security team.

Then it happens. Somewhere, a change is made by someone outside of the security department, and that change isn’t communicated to the security team. Suddenly, your cybersecurity tools become ineffective and, worse, financial, brand, and operational risk is introduced to the organization. Your cybersecurity effectiveness has drifted from a known good state. You are experiencing environmental drift.

Environmental drift
There are countless causes of environmental drift. More often than not, environmental drift is the result of someone in IT or a related group making a change without any malicious intent. However, the change might not be communicated to the security team or it might have unintended consequences that degrade cybersecurity effectiveness.

Here are some examples of ways that environmental drift can be introduced and the impacts that can result:

  • A proxy is installed that is inadvertently dropping syslog traffic between cybersecurity tools and their management consoles. This can result in a lack of visibility on the management console, and if a SIEM is involved, events relevant for correlation and alerting are simply not seen.
  • A tap or span is modified to only send unidirectional traffic to a cybersecurity tool. This can result in that cybersecurity tool becoming totally ineffective because many tools require access to bidirectional traffic to operate correctly.
  • A firewall rule configuration change is made to open various ports for testing, but the configuration is never returned to the prior state. This can result in a wide range of issues such as data exfiltration, cleartext protocols being allowed, successful beaconing, and active C2 (a minimal detection sketch follows this list).
  • An update is made to an endpoint cybersecurity tool before being fully tested, breaking some existing capabilities. This can result in endpoints such as laptops and servers being left vulnerable to credential theft, data theft, and sabotage.
  • A configuration modification made in the cloud alters network segmentation. This can result in webservers, databases, and other assets not being protected by cybersecurity tools such as firewalls or WAFs because from a networking perspective those assets are now on the internet side of those cybersecurity tools. This type of mistake is pretty easy to make in the cloud while less likely in a data center, where you are physically connecting cables.
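
To make the firewall example above concrete, here is a minimal, hypothetical Python sketch. It probes a handful of ports that a baseline says should be blocked from the segment where the script runs and flags any that have quietly drifted open. The host name, port list, and the blocked-port baseline itself are illustrative assumptions, not details from any real environment.

```python
# Hypothetical sketch: probe ports that a blocked-port baseline says should be
# unreachable from this network segment and flag any that have drifted open.
# The host name and port list are illustrative assumptions.
import socket

TARGET_HOST = "internal-test-host.example.com"   # hypothetical test target
BASELINE_BLOCKED_PORTS = {21, 23, 3389, 5900}    # ports assumed blocked by policy


def port_is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0


drifted = sorted(p for p in BASELINE_BLOCKED_PORTS if port_is_reachable(TARGET_HOST, p))
if drifted:
    print(f"Firewall drift: ports reachable that should be blocked: {drifted}")
else:
    print("No drift detected against the blocked-port baseline.")
```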

These are just a few simple examples where a known good cybersecurity effectiveness baseline can drift because of environmental changes of which the security team may not even be aware. Environmental drift happens all the time, everywhere, regardless of company size, processes, and tools. It greatly reduces the value the cybersecurity tools and teams provide and puts organizations at risk.

Detecting and mitigating the drift
Because environmental drift can happen at any time, impacting any cybersecurity tool, it’s essential to use an automated approach to detect when you have drifted from a known good cybersecurity effectiveness state. In other words, you need to know when something that was working has stopped working. For example, my WAF was stopping XSS attacks against my cloud-based webserver, my DLP was preventing PII from going out to the Internet over ICMP regardless of compression type, and my SIEM was correlating and alerting on lateral movement based on cybersecurity tool and operating system logs. But now something has stopped.
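
As a minimal, hypothetical illustration of one such check, the Python sketch below sends a benign XSS-style payload to a WAF-protected endpoint and treats an HTTP 403 response as evidence that the WAF is still blocking. The URL, parameter name, and the assumption that a block returns 403 are illustrative only; a real validation platform would test this far more rigorously.

```python
# Hypothetical sketch: send a benign XSS-style payload to a WAF-protected
# endpoint and check whether the WAF blocks it. The URL, parameter name, and
# the assumption that a block returns HTTP 403 are illustrative only.
import requests

TEST_URL = "https://app.example.com/search"       # hypothetical WAF-protected endpoint
XSS_PAYLOAD = "<script>alert('drift-check')</script>"


def waf_blocks_xss() -> bool:
    response = requests.get(TEST_URL, params={"q": XSS_PAYLOAD}, timeout=5)
    return response.status_code == 403            # assumed block behavior


print("WAF is blocking XSS" if waf_blocks_xss() else "WAF is no longer blocking XSS")
```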

So, what can be done?

Create a baseline of known good cybersecurity effectiveness. Understand how your cybersecurity tools react to various tests such as data exfiltration, the installation and execution of malware, beaconing, and thousands of other measures across endpoint, email, network, and cloud cybersecurity tools. In most cases, you’ll need to tune those cybersecurity tools to operate the way you want, because the process of validating their effectiveness will often reveal a number of shortcomings.
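
As a rough illustration of what capturing such a baseline could look like, here is a minimal, hypothetical Python sketch. The test names, the placeholder test functions, and the effectiveness_baseline.json file name are all assumptions for the sake of the example; a real validation platform would run far richer tests.

```python
# Hypothetical sketch: run a set of validation tests and store the outcomes as
# a known good effectiveness baseline. The test names and placeholder functions
# are assumptions; each test should return True when the control works as intended.
import json
from datetime import datetime, timezone


def waf_blocks_xss() -> bool:
    return True   # placeholder: replace with a real validation probe

def dlp_blocks_pii_over_icmp() -> bool:
    return True   # placeholder

def siem_alerts_on_lateral_movement() -> bool:
    return True   # placeholder


TESTS = {
    "waf_blocks_xss": waf_blocks_xss,
    "dlp_blocks_pii_over_icmp": dlp_blocks_pii_over_icmp,
    "siem_alerts_on_lateral_movement": siem_alerts_on_lateral_movement,
}

baseline = {
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "results": {name: test() for name, test in TESTS.items()},
}

with open("effectiveness_baseline.json", "w") as f:
    json.dump(baseline, f, indent=2)
```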

Use automation to detect drift from that known good baseline. You still may have an imperfect environment, but as you continue to improve, you’ll know at each stage how your cybersecurity tools should be preventing, detecting, alerting, and so on. Any deviation from this known good baseline detected through automation is an anomaly. Responding to the anomaly means you’ll be managing by exception, making your responses more precise, such as knowing that the WAF in the cloud that was preventing XSS stopped preventing XSS 15 minutes ago.
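
Continuing the same hypothetical example, the sketch below re-runs the same tests, compares the outcomes to the stored baseline, and reports only the deviations, which is the managing-by-exception idea described above. It assumes the effectiveness_baseline.json file produced by the previous sketch.

```python
# Hypothetical sketch: re-run the same validation tests, compare the results to
# the stored baseline, and report only the deviations (managing by exception).
# Assumes the effectiveness_baseline.json file from the previous sketch.
import json


def run_current_tests() -> dict:
    # Placeholder: in practice this re-runs the same probes used for the baseline.
    return {
        "waf_blocks_xss": False,                  # e.g., the WAF stopped blocking XSS
        "dlp_blocks_pii_over_icmp": True,
        "siem_alerts_on_lateral_movement": True,
    }


with open("effectiveness_baseline.json") as f:
    baseline = json.load(f)["results"]

current = run_current_tests()
drift = {name: result for name, result in current.items() if baseline.get(name) != result}

if drift:
    for name, result in drift.items():
        print(f"DRIFT: {name} changed from {baseline.get(name)} to {result}")
else:
    print("No drift from the known good baseline.")
```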

Use these drifts as scenarios to dissect in postmortems with a goal of improving the processes and communication between the cybersecurity team and others.

The ongoing effort to mitigate environmental drift will often result in highlighting where investments need to be made to further improve cybersecurity effectiveness as well as where legacy or redundant solutions can be removed, thus allowing those dollars to be reinvested in more critical areas. Continue to expand the reach of your known good baseline and the types of tests you are using to validate your cybersecurity effectiveness.

Environmental drift will not go away. There are simply too many variables and too much complexity to fully remediate it. However, using automation, environmental drift can be detected and mitigated, and the process of doing so will have broader ramifications for cybersecurity effectiveness as a whole – leading to improved change management and communication, greater value and precision from cybersecurity investments, and ultimately, reduced financial, brand, and operational risk from cyber threats.