While I’ve always been passionate about working in InfoSec, I can’t help but feel jaded about the way our industry approaches some things. We run around pointing fingers at each other with slander marketing, we use Twitter as an intel sharing platform, and we cry out that the sky is falling every time a researcher posts a new post-exploitation attack technique. It leaves me wondering: are we focusing on the right things? What are we doing to improve ourselves, our customers, and the people we protect?
This is a call to arms for all security practitioners, vendors, researchers, and red/blue/purple/sparkly rainbow teams. It’s time to stop and rethink how we approach information security. By this, I don’t mean we should focus more on one technique or tool over another, but on how we as a community holistically develop, share, and work together.
To frame this, I’m going to run through four key areas where I think we can improve. Each of these areas has plenty of room for expansion. My goal here is simply to get the conversation started.
4 Things to Stop Doing Right Now
1: Applying Skewed Testing Methodologies
The idea that the red team always wins is a fallacy, and its acceptance is a symptom of poor testing methodologies. As pentesting has become more commoditized, some organizations have developed cookie-cutter tests where the only goal is to gain domain admin at all costs. Many of these tests use unrealistic methods or are not appropriately scoped to help identify actionable gaps and develop better controls. These types of tests are often driven by a skewed priority on compliance requirements, where the only intent is to ensure a “test” was completed.
What we should do instead:
We should assume compromise, but we should also test with methods that actually focus on simulating an adversary or testing a specific set of controls. This can be achieved by proper scoping and pre-planning with the team executing the test. Taking the time to discuss and determine the effective goals and boundaries of a test can help to ensure there are usable takeaways.
The other side of proper test planning and execution should include expectations from the defense side. If this is a blind test to see how a blue team reacts to an actual determined attacker, do not provide them with upfront details of the test. Ensure the test is a surprise and actions are taken just as if it were the real deal. However, if you are testing something specific (such as a black box test of a web service, where the tester is actively trying to identify possible gaps in the code and implementation), this should be done with the blue team and system owners involved so they can learn from the findings along the way.
Red team versus blue team should be seen as a friendly competition between groups working toward the same end goal: improving the security posture of the organization. If the blue team successfully stops the red team, that’s a good thing; it challenges the red team to develop new techniques and tradecraft. The same applies if the red team sneaks right in without setting off any alarms or anyone being the wiser. Both teams should sit down afterward and debrief to determine how defenses can be improved to better identify this activity in the future.
As a lifelong defender, I also think it is important for me to call out some of my favorite teams to go against: Black Hills Information Security, SpecterOps, Mandiant, and Hacker on Retainer. All these groups have solid tradecraft and are just plain fun to duke it out with as a defender.
2: Chicken Little Syndrome
No matter how good the researcher, let’s get one thing straight: the sky is not falling, the internet is not going to end, and all your controls have not been made obsolete (I hope) every time someone tweets a new post-exploitation technique. Much of the security community and media get all fired up about every new tactic, technique, malware family, or vulnerability. If we as defenders spent more time patching systems, implementing better IT hygiene, and understanding the landscape of our environments, there would be no worry about things like EternalBlue, because the vulnerability would have been patched when it was first identified.
Some people reading this may point out the dead equine that is currently being clubbed, but we have not changed the conversation. If we change the conversation to talk about better operations and understanding of our environments, the tone and context will change every time the internet becomes abuzz with a new tactic with a funny name. For example, when the hype began around Dynamic Data Exchange (DDE) as a mechanism for maldoc abuse, there was a ton of FUD and conversation about the effects. My team did not worry because we knew we had the appropriate visibility and coverage. The basic behavior is no different than execution and delivery of a maldoc leveraging macros or embedded attachments.
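The DDE example above comes down to one observable behavior: an Office application spawning a shell or script host, regardless of whether the delivery mechanism was a macro, DDE, or an embedded object. A minimal sketch of that behavioral check, assuming you collect process-creation telemetry with parent/child process names (the event schema and process lists here are hypothetical, not from any specific product):

```python
# Hedged sketch: flag process-creation events where an Office app spawns
# a shell or script host. The behavior is the same whether delivery was
# a macro, DDE, or an embedded attachment, so one detection covers all.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe",
                       "cscript.exe", "mshta.exe"}

def is_suspicious_spawn(event: dict) -> bool:
    """Return True if an Office parent spawned a shell/script host.

    `event` is assumed to carry 'parent_image' and 'image' keys holding
    lowercase process names (an invented schema for illustration).
    """
    return (event.get("parent_image") in OFFICE_PARENTS
            and event.get("image") in SUSPICIOUS_CHILDREN)

# A DDE-launched PowerShell and a macro-launched cmd both match;
# ordinary activity does not.
events = [
    {"parent_image": "winword.exe", "image": "powershell.exe"},  # DDE abuse
    {"parent_image": "winword.exe", "image": "cmd.exe"},         # macro abuse
    {"parent_image": "explorer.exe", "image": "notepad.exe"},    # benign
]
hits = [e for e in events if is_suspicious_spawn(e)]
```

Because the logic keys on behavior rather than the delivery technique’s name, the next funny-named variant of maldoc abuse lands in the same bucket without a new rule.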
What we should do instead:
I want to have more conversations where I can discuss strategies to set the stage. Going back to the idea of assumption of compromise, we know users will open and click bad things. As defenders, we can set the field and navigate the attacker to the area where we have the best visibility. We can take time to understand how attackers operate, where they come in, what activities they complete when they land, and how they move around.
As the system owners and defenders, we know where everything lies, but the attacker has to figure it out. We know the most likely paths and where there are easier ways to move; this is where we create choke points and focus our visibility. This allows us to react faster when the attacker initially gets in. Most advanced attackers still use the same opportunistic techniques to gain an initial compromise. Exploiting the user through some mechanism of social engineering has the highest success rate, so that’s where their focus remains. That should also drive our focus as defenders.
Use tools and controls to block certain activity, then focus your monitoring on things you cannot prevent. This means you should not be spending cycles constantly monitoring your firewall logs to see what is being dropped at the boundary; instead, spend those human cycles to monitor what a user is executing. Implementing tools like high enforcement application whitelisting makes it that much harder to drop and execute untrusted code, so we can focus on collecting information about malicious use of native tools. This approach further reduces the risk of opportunistic attacks so we can focus on more advanced tactics.
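The triage principle above, archive what a control already handled, surface what only a human can judge, can be sketched in a few lines. The event types, field names, and native-tool list below are invented for illustration; substitute whatever your telemetry pipeline actually emits:

```python
# Hedged sketch: spend human cycles on what prevention can't cover.
# Firewall denies mean the control already did its job; executions of
# abusable native tools ("living off the land") deserve analyst review.
NATIVE_TOOLS = {"certutil.exe", "regsvr32.exe", "rundll32.exe", "bitsadmin.exe"}

def triage(events):
    """Split events into 'review' (human attention) and 'archive'."""
    review, archive = [], []
    for e in events:
        if e.get("type") == "firewall_deny":
            archive.append(e)   # blocked at the boundary; no action needed
        elif (e.get("type") == "process_exec"
              and e.get("image") in NATIVE_TOOLS):
            review.append(e)    # possible malicious use of a native tool
        else:
            archive.append(e)   # keep for context, don't page anyone
    return review, archive

review, archive = triage([
    {"type": "firewall_deny", "src": "203.0.113.9"},
    {"type": "process_exec", "image": "certutil.exe"},
    {"type": "process_exec", "image": "notepad.exe"},
])
```

With high-enforcement application whitelisting in place, the `review` queue shrinks to exactly the category the paragraph describes: native tools being used in ways the whitelist cannot distinguish from legitimate administration.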
3: Vendor Flame Wars
First and foremost: hello, my name is Joe and, yes, I do work for a vendor.
Now that we’ve gotten that out of the way, KNOCK OFF THE SILLY FLAME WARS. This path to marketing does nothing but cause confusion. I understand we (vendors) are trying to sell something, but let’s talk about things that actually matter. For example, what are some actionable ways an organization can improve its defenses? Maybe my tool/service/widget will help and maybe it won’t, but let’s try engaging in meaningful conversations.
What we should do instead:
As providers and vendors, we are often experts in our field, and others look to us for meaningful advice to help improve their security posture. So instead of infighting, let’s be honest. We can and should talk about the merits of our solution; if we didn’t think it was a good idea, we would not have built it. But we also need to accept that no product is a silver bullet or “one size fits all” solution. What if we stopped the slander marketing and gave some honest feedback on where the competitor might have a leg up on us? If the product or solution we are selling is worth anything, then an educated consumer should be able to identify how it can help them.
Let’s educate the consumer. Help them to understand good security practices, where there is real risk, and the different ways they can address those matters. Be honest. If your product is not a fit, do not try to force it. Furthermore, do not get upset if someone finds a hole, gap, or vulnerability in your tool. Instead, say thank you, offer them a bug bounty, and work on getting it fixed. Which takes us to my last point…
4: Sharing the Wrong Things, in the Wrong Places
Information sharing is one of the key methods for the security industry as a whole to stay ahead and achieve our mission. This is a challenging area to address because, done improperly, we can actually cause more damage than good. At the same time, I believe our focus on what we collect and how we use it is skewed. For example, imagine a threat researcher who has been tracking the infrastructure of a malware family or an attacker for an extended period of time. Later, that researcher complains because someone unknowingly burned that infrastructure on Twitter. This situation is not uncommon, and it highlights two areas I think we could seriously improve.
What we should do instead:
First, I am not saying that actor or malware tracking has no value or purpose, but I do believe we put too much emphasis on this kind of attribution and intel. As such, we hold much of the information related to these efforts extremely close to the vest. I believe this type of research should focus more on behaviors, which brings us full circle to focusing on how an attacker moves and acts. For the vast majority of teams, knowing if this malware or attack was Hancitor or Kovter or Color Animal APT is not really that important. It matters (or at least, it should) what the attack did, how far it progressed, and how long it took to identify the activity. Yes, having common terminology for malware families and actors is helpful for communications (again with the deceased equine), but for the vast majority of defensive teams, behaviors are what’s important, not attribution.
The other part of this point is to stop using Twitter as an intel sharing platform. I am a luddite when it comes to social media. We need to establish a better platform for sharing behavioral information and research. There is a vast wealth of data and actionable intelligence flying around 140 characters at a time. I am a fan of the openness of this type of sharing, but the delivery has room for improvement. We have tools and platforms purpose-built for intelligence sharing, a bit more focused on atomic data, but we as a collective need to establish a standard for sharing our findings in a more constructive manner. Despite how people try, Twitter is not a good medium for intelligent discussion of these topics, how to use the information, and how we can defend or leverage these techniques. So the takeaway here is: share information intelligently, but take it somewhere other than Twitter.
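To make the contrast with a 140-character tweet concrete, here is a sketch of what a behavior-first sharing record might look like. The schema is entirely invented for illustration; real sharing platforms and standards (STIX/TAXII, MISP) define their own formats, and the point is only that structure carries detection guidance where a tweet cannot:

```python
import json

# Hedged sketch: a behavior-first intel record, loosely inspired by
# structured sharing formats. The field names are invented; note that
# attribution is deliberately optional -- behaviors come first.
record = {
    "type": "behavior-report",
    "summary": "Office process spawning script host after maldoc delivery",
    "observed_behaviors": [
        "winword.exe spawns powershell.exe with an encoded command",
        "outbound HTTP beacon within 60s of spawn",
    ],
    "detection_guidance": "Alert on Office parents spawning shell/script hosts.",
    "attribution": None,  # useful when known, but not the point
}

# Serialize for exchange and parse it back, as a sharing platform would.
payload = json.dumps(record, indent=2)
shared = json.loads(payload)
```

A record like this can be ingested, queried, and turned into detections automatically, none of which is practical when the same finding arrives as a screenshot in a tweet thread.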
Take these thoughts for what they are: my opinions. My real hope is that this post leads to some good, thoughtful conversations. If something I said strikes a chord in you, good. We should discuss. I want the security industry to question itself more, bring constructive criticism to the status quo, and use those conversations to educate and improve how things are done.