When looking through stock photo sites, spotting a ‘bad’ hacker is pretty easy – they wear hoodies, have no visible facial features, sit in dark rooms, and neon green/blue lines of code typically make an appearance. If only it were that easy. In reality, the smiling face in the expensive suit is just as likely a threat to your business.
And while most people know this, everyone still has a lot of questions about the “how” and “what” specific to defending against insider threats. I recently sat on a panel that discussed insider threats, and based on the questions asked, I thought that it would be valuable to share Red Canary’s perspective on the topic.
Defining Insider Threats
Our panel moderator set the stage by breaking insider threats into three categories – and I will do the same for the purpose of this article.
- Type 1: External attackers using compromised credentials to move about an organization just like an employee (55% of breaches).
- Type 2: A truly malicious employee who is intentionally acting in a way that could harm the organization – traditional corporate espionage (15% of breaches).
- Type 3: An internal employee who inadvertently exposes their organization to risk (25% of breaches).
*Figures from Gemalto's 2014 Breach Level Index.
There are many security products on the market today that attempt to detect and classify insider threats into one of the three buckets used above.
Red Canary’s View on Insider Threats
We find that insider threats of any type exhibit many of the same behaviors and leave traces very similar to those of other threats. Type 1, for example, is only an insider threat insofar as the external attacker has achieved their first objective: obtaining legitimate credentials and, with them, access to authorized services (e.g., the employee VPN or a file share). This is a common post-exploitation activity in both opportunistic and targeted attacks, the latter leading to what we think of as a “breach.”
Type 2 is closely linked with eventual Type 1 activity. In this case an insider takes some action that gives an attacker a foothold within the organization. Type 2 may be as simple as an insider providing credentials or information about systems and infrastructure to an attacker, though it could also include an employee exfiltrating sensitive information directly.
Type 3 addresses accidental loss or disclosure of information. Hard or impossible to prevent through technical means alone, Type 3 may be detectable with technology but is best addressed through training and non-punitive reporting policies.
Red Canary is successful at detecting insider threats using the same scalable, fundamentals-based approach that we use to detect everything else: By looking for a very broad set of both process and user behaviors, providing correlation and context via our platform, and then leveraging our analysts to carefully tune out false positives and alert customers to confirmed threats.
In all cases, we prioritize identification and correlation of threats and affected assets over attribution to any named threat actor, insider threat type, or malware family. Timely receipt of these facts allows the organization to determine its response strategy. Once the dust settles and the breach is contained, the “who” and “why” often come to light, and in cases where they are not apparent then it is a great time to run these questions to ground.
5 Steps to Improve Your Defenses
If you are currently assessing your organization’s ability to detect and respond to insider threats, I recommend the following steps:
Understand your priorities
While the four steps below are things that every organization should do, many organizations are still struggling with the basics. Insider threats (like their fear-cousin, the Internet of Things!) are a flashy topic, but even Verizon's 2015 Data Breach Investigations Report (DBIR) pegs insiders, in aggregate, as the source of less than 10% of breaches. That figure may not cover all of the types I outlined, but it does help scope the subset of breaches that are classified primarily as insider-derived and thus likely to be addressed by insider-focused solutions.
These breaches are not to be ignored, but aren’t likely to warrant highly specialized solutions when many organizations are still struggling with detection of threats in general.
Increase your visibility
You simply can’t detect threats of any kind without a fundamental ability to account for access and changes to data. Red Canary uses Carbon Black’s Enterprise Response sensor to provide visibility into Operating System-level activity across our customers’ endpoints. This unparalleled level of visibility is essential for monitoring what is happening on your endpoints and detecting insider threats. Other tools and approaches exist, and it is wise to start with collection and analysis of system logs, moving up the stack as team and budget allow.
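As a concrete starting point for "collection and analysis of system logs," the sketch below pulls failed-login events out of syslog-style `sshd` lines. The log format here is the common OpenSSH pattern, but the exact fields your systems emit may differ; treat the regex and sample lines as illustrative assumptions, not a specific product's output.

```python
import re

# Matches the common OpenSSH "Failed password" syslog line (an assumption
# about your log format -- adjust the pattern to what your systems emit).
FAILED_LOGIN = re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)")

def failed_logins(lines):
    """Yield (user, source_ip) for each failed-login line in an iterable of log lines."""
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            yield m.group("user"), m.group("src")

sample = [
    "May  4 10:01:22 host sshd[311]: Failed password for alice from 10.0.0.5 port 55110 ssh2",
    "May  4 10:01:30 host sshd[311]: Accepted password for alice from 10.0.0.5 port 55111 ssh2",
]
print(list(failed_logins(sample)))  # [('alice', '10.0.0.5')]
```

Even this trivial level of analysis (who is failing to log in, from where, how often) begins to build the visibility that more capable endpoint sensors extend up the stack.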
Identify your assets
Knowing what you have to protect is critical and should include endpoints, users, and data at a minimum. We frequently talk with security teams that don’t know how many endpoints they have, let alone where they are, whether they are up to date, and who’s using them. Answers like “somewhere around 2,400, give or take a few hundred” are common.
Understanding the data you’re protecting is also important. Know what data you have, where it resides, and who’s authorized to view, change or share it. This item in particular seems daunting, but most organizations would benefit tremendously from classifying things as simply as “internal” or “public.”
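To show how little is needed to get started with a two-tier classification, here is a minimal sketch of a data inventory. The records, field names, and helper function are invented for illustration; a spreadsheet would serve the same purpose.

```python
# Hypothetical minimal inventory: each record notes what the data is,
# where it lives, who owns it, and a simple "internal"/"public" tier.
inventory = [
    {"asset": "payroll.xlsx",  "location": "fileserver01", "owner": "hr",        "tier": "internal"},
    {"asset": "press-kit.zip", "location": "www",          "owner": "marketing", "tier": "public"},
]

def assets_in_tier(inventory, tier):
    """Return the names of assets classified at the given tier."""
    return [record["asset"] for record in inventory if record["tier"] == tier]

print(assets_in_tier(inventory, "internal"))  # ['payroll.xlsx']
```

The value is not in the code but in the discipline: once every data store has an owner and a tier, questions like "who should be touching this?" become answerable.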
Baseline your data and users
Now you need to build a profile of “normal” data and user activity. Again, this sounds intimidating, but there are an increasing number of products that can be leveraged to start collecting, reporting, and even visualizing this data. Generating high-confidence alerting based on this data can be non-trivial, but collecting and reviewing it is the first step.
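The baselining idea above can be sketched very simply: compute a per-user mean and standard deviation over historical daily activity counts, then flag days that deviate sharply. The 3-sigma threshold and the sample history are arbitrary starting assumptions, not a tuned detection rule.

```python
from statistics import mean, stdev

def anomalous_days(daily_counts, threshold=3.0):
    """Return (day_index, count) pairs more than `threshold` standard
    deviations from the mean of the user's daily activity counts."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    return [
        (day, count)
        for day, count in enumerate(daily_counts)
        if sigma and abs(count - mu) / sigma > threshold
    ]

# Hypothetical history: 30 quiet days of file access, then one heavy day.
history = [12, 10, 11, 13, 9, 12, 11, 10, 12, 11] * 3 + [240]
print(anomalous_days(history))  # [(30, 240)]
```

Real products use far richer models, but even this crude profile demonstrates the principle: you cannot flag "abnormal" until you have collected enough data to define "normal."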
Detect and respond to anomalies
The specific steps here will vary depending on the tools you put in place and how you have profiled and baselined your environment. Some vendors recommend the classic “Risk = Likelihood x Impact” equation, where the ultimate risk associated with a given behavior is calculated from the likelihood it will occur and the business impact or asset loss should the event occur.
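As a toy illustration of the "Risk = Likelihood x Impact" prioritization, the sketch below scores a few invented anomalies on 1–5 scales and triages them highest-risk first. The events, scales, and scores are all assumptions made up for the example.

```python
# Hypothetical anomalies with likelihood and impact each rated 1-5.
events = [
    {"event": "odd-hours VPN login",           "likelihood": 4, "impact": 2},
    {"event": "bulk download from file share", "likelihood": 2, "impact": 5},
    {"event": "USB copy on finance laptop",    "likelihood": 3, "impact": 4},
]

# Risk = Likelihood x Impact, then triage highest-risk first.
for e in events:
    e["risk"] = e["likelihood"] * e["impact"]

for e in sorted(events, key=lambda e: e["risk"], reverse=True):
    print(f'{e["event"]}: risk {e["risk"]}')
```

The arithmetic is trivial; the hard part is assigning honest likelihood and impact values, which is exactly where the asset identification and baselining work above pays off.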
Regardless of how you organize the anomalies, you will likely spend a good deal of time iterating between the baselining and detection steps as your knowledge of your organization grows. As you work through this process, you will establish new baselines and identify new anomalies almost constantly.
There are No Silver Bullets
Detecting insider threats is difficult and I have never found the proverbial “silver bullet product” that does it all with any level of competence. It takes time to build a security process, and any organization that is not willing to commit ongoing resources toward the problem will struggle during both the development and operational phases of the program.
If you are concerned about your organization’s ability to identify insider threats, I recommend reading the abundance of publications related to breaches – but keep to professional articles and papers rather than mass media reports, which are generally too high-level to contain actionable details. Always consider how your organization would fare in a similar breach scenario. For example: how quickly would you be able to detect that activity, and how comprehensively could you respond?
The key theme is that insider threats of any variety can be most efficiently addressed by understanding and detecting their commonalities. Do not get hung up on external attacks vs. corporate espionage vs. inadvertent insider incidents. From the perspectives of detection and response, they are more alike than they are different. Maximize your organization’s time and budget by building a process or looking for solutions that will improve your detection coverage, time to respond, and confidence in remediation.
Images: AndreyPopov / Flynt / Gajus / Lomachevsky / Elenathewise / Zetwe / Bigstock.com