“Prevention of bad things” is not an idea unique to the information security world – and not even a new one for us. For decades, the information security market has been dominated by so-called prevention solutions. These often promise immunity from whatever the latest specter of bad things™ happens to be in any given year. The prevalence of viruses led to antivirus, worms led to firewalls, and the ever-popular “advanced persistent threat” led to … well, whatever a company wants to sell that supposedly stops the “advanced” attacker before they have a chance to do “advanced” bad things to your data.
Despite immeasurable spending in this space, breaches continue. They are more common (though perhaps only more widely discussed) and grow more severe in scope. It’s time to stop believing the hype and face facts: a security posture built primarily around the idea of prevention will fail. Period. It troubles me that we put so much trust in preventive solutions despite continuous evidence that they don’t prevent much of anything. The reason is simple: our attackers are humans. Preventive technology only works against known, well-defined threats. Until artificial intelligence becomes an affordable component of the average information security solution, a human attacker only needs to succeed once against one victim to gain access to the typically unprotected core of that victim’s network.
Perhaps the misguided reliance on prevention technology has created a false sense of security, which would help explain why roughly 70% of breaches are discovered by parties other than the victim. Or maybe the cause is that organizations don’t read the instructions for their technology purchases and fail to perform the maintenance the tech requires to stay relevant against ever-changing attack surfaces and adversarial capabilities. Maybe they soon realize that the most effective prevention technology is also incredibly frustrating for the average user, resulting in workarounds or complete abandonment.
No matter the reason, the state of affairs across our industry is clearly disappointing: we are not doing a good job of keeping attackers out. So here’s a novel concept – let’s not put all of our efforts into the fabled basket of “prevention”, and instead direct funding, training, and attention toward quick and efficient detection of compromise. If we add a dose of continuous monitoring and ongoing evidentiary collection (proactively building visibility into the network architecture), I’m certain we’ll identify breaches sooner and respond more quickly and decisively than ever before.
This doesn’t mean that preventive technology should be killed off entirely. It certainly has a place in a comprehensive security posture. However, it should be localized to the most critical resources, applied aggressively in a manner that impacts the fewest users possible. Let me explain.
Imagine a company whose very existence relies on the protection of intellectual property (IP) that cost millions or billions of dollars to create. Think of a research and development laboratory, a pharmaceutical developer, or perhaps a defense contractor tasked with creating next-generation technology. Loss of such an organization’s IP would be catastrophic to the business, its shareholders, or even national security. However, deploying a useful preventive solution such as aggressive quarantining antivirus or allow-by-exception web proxying for the entire user base would create an excessive maintenance burden for the company.
Between fielding users’ “unblock” requests, maintaining current and accurate domain whitelists, and ensuring full operation of the technology that supports such measures, the team running such a solution could quickly grow to a dozen employees and a sizable technology budget, depending on the organization’s size. And this assumes users won’t get frustrated and find ways around technology they perceive as hindering their ability to do work.
The end result? A big budget, users finding ways to “get the job done” (i.e., compromising organizational integrity), and attackers who will soon gain access to the environment.
In this realistic scenario, it may make sense to deploy such aggressive “preventive” solutions only in the areas of the business that are most critical. The R&D section, clinical trial unit, or defense technology division could be locked down with strong technology that affects only a small subset of employees. Policy dictates that systems on the more controlled part of the environment have different allowable uses, avoiding the “Why can’t I get to my Gmail?” variety of problems. All in all, the aggressive “prevention” technology is limited to a small portion of the overall user base, focused on the environment’s most critical crown jewels, minimizing cost, complexity, and user frustration.
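The allow-by-exception proxying described above boils down to a default-deny policy: every request is blocked unless its destination is explicitly approved. A minimal sketch of that decision logic follows; the domain names and the allowlist contents are hypothetical examples, not any particular product’s configuration.

```python
# Default-deny ("allow-by-exception") web proxy policy check.
# ALLOWLIST entries and domains below are illustrative assumptions.

ALLOWLIST = {
    "internal.example.com",
    "vendor-portal.example.net",
}

def is_allowed(domain: str) -> bool:
    """Permit a request only if the domain, or one of its parent
    domains, is explicitly on the allowlist; deny everything else."""
    domain = domain.lower().rstrip(".")
    parts = domain.split(".")
    # Check the exact domain and each parent suffix,
    # e.g. sub.internal.example.com -> internal.example.com -> ...
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in ALLOWLIST:
            return True
    return False
```

Matching parent suffixes (rather than exact strings only) is one reason these whitelists are painful to maintain at scale: every approved domain implicitly approves its subdomains, and every new business need means another exception request.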
So what of the rest of the environment? Experience clearly shows that every user is a potential phishing target, not just those in the high-interest areas of the company. If an attacker can gain a foothold anywhere at all in their victim’s environment, they can generally spread laterally to other, more valuable target systems. But without the false hope of “prevention” in the rest of the environment, what can a forward-leaning organization do to improve its overall posture? The key phrase here is continuous visibility. By proactively installing passive data collectors as far and wide as possible, the evidentiary collection will inform fast and decisive incident response actions wherever they may be required. Every forensicator knows that you don’t need logs until you REALLY need logs, and this approach is an attempt to address that reality. Pre-collecting evidence that will aid and inform the incident response team ensures long-term visibility into actions that have occurred across the environment. These collections may include aggregated logs from individual systems, of course, but also NetFlow, firewall, DNS, and IDS logs; full packet captures; web proxy logs; and, increasingly, endpoint data records.
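The “pre-collect now, query later” idea above can be sketched in a few lines: passively record events as they happen, then search the store by indicator when an incident demands it. This is a minimal illustration only; the event fields, the in-memory store, and the DNS-log focus are assumptions for the example, not a description of any specific collection product.

```python
# Sketch of passive evidence pre-collection and later IR lookup.
# Field names and the in-memory "store" are illustrative assumptions.

from datetime import datetime, timezone

def record_dns_query(store: list, client_ip: str, domain: str) -> None:
    """Passively append one DNS query event to the evidence store."""
    store.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "client": client_ip,
        "domain": domain.lower(),
    })

def clients_that_resolved(store: list, indicator: str) -> set:
    """During incident response: which endpoints ever looked up a
    known-bad domain? Answerable only because we collected in advance."""
    return {e["client"] for e in store if e["domain"] == indicator.lower()}
```

The point is not the code but the asymmetry it illustrates: recording is cheap and continuous, while the question (“who touched this indicator, and when?”) arrives later and unpredictably. Without the pre-collected store, that question is simply unanswerable.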
Endpoint data has long been out of reach because of scalability limitations. It simply wasn’t feasible to collect and examine tens of millions of data points consisting of module loads, network socket activity, registry and filesystem modifications, or other common activities that occur all the time on each endpoint. However, the advent of proper endpoint technology from Red Canary partner Carbon Black means that proactive, long-term visibility at scale is a reality. Red Canary makes those endpoint data collections even more valuable by continuously monitoring for conditions of exploitation, enriching with the most relevant threat intelligence sources, then validating each event with human review to eliminate false positives – within hours of occurrence instead of days or months. When an event requires a full incident response, the client organization has full access to the Carbon Black collection. In turn, the immediate availability of this rich data set drives the timeline and cost of IR down.
To return to the idea of “prevention” being a fallacy for information security: I contend that our reliance on such dreams has not made any significant impact on the number or severity of data breaches. To the contrary, it has arguably given us a false sense of security that allowed more severe breaches while our collective heads were in the sand. Such preventive technology should not be abandoned entirely, but localized to protect the most important information while impacting the fewest people possible. Then, deploy a passive data collection regimen broadly across the environment. This will enable proactive, fast, and decisive detection while driving down the cost of incident response by minimizing the time required to conduct those critical activities.