Does Next-Generation Anti-Virus Solve the Fatal Flaws of Anti-Virus?

Bob Gourley


Editor's Note: This post originally appeared on CTOvision.com

The chorus of voices declaring the end of anti-virus has reached a deafening pitch. Many of us in the community, myself included, have long said that anti-virus is dead and even a senior VP at Symantec has now admitted such. When the biggest anti-virus vendors, folks who want to perpetuate anti-virus myths for financial reasons, admit anti-virus is dead, it is time to realize anti-virus is a failed approach. It does not stop bad guys and their code.

We are now seeing a rise of new vendors claiming they are “next-generation” anti-virus. Yet before anointing “next-generation anti-virus” as the successor to anti-virus, we should ensure we are not falling into the same traps that sunk legacy anti-virus. As in other endeavors, those who don’t learn from history are condemned to repeat it.

A Critical Gap in Coverage

One of the most significant challenges for next-generation anti-virus is the file-centric paradigm in which it operates. Anti-virus – next-gen or legacy – operates on files. Any time a new file hits the system or a file is executed, the anti-virus agent on the host is called to analyze that file. A binary decision is then made on the file – allow it or block it.

This paradigm made sense over the last 25 years, but is now obsolete. New file-less intrusions are increasingly common in enterprise environments. A file-less intrusion is one where intruders leverage programs already on the host (e.g., browsers, Office applications, and Windows utilities) to achieve their objectives.

This phenomenon is now widely recognized across the industry.

The file-centric paradigm of anti-virus has a significant coverage gap when it comes to file-less intrusions. Moreover, even in the context of attacks that are file-based, malware is increasingly encrypted or packed using obfuscation engines, and then unpacked or decrypted in memory when executed. This renders the static analysis approaches in next-gen anti-virus ineffective, creating yet another hole in protection.

In addition, the fact that next-gen anti-virus technology must be updated regularly – just like traditional anti-virus – demonstrates it is still chasing threats, not getting in front of them. Any AV successor will need to fundamentally break the chasing-from-behind model of anti-virus. Regardless of the algorithm used – heuristics, signatures, reputation lists, machine learning – any anti-virus engine must make a real-time decision on whether to allow or block a file, and it can only do that using the detection logic it is running at the moment.

Where legacy anti-virus failed to produce threat intelligence, any next-generation endpoint solution will need to produce and consume threat intelligence to stay in front of the adversary and empower other security controls to learn from attacks.

Put simply, next-gen anti-virus will need to analyze program behaviors (to detect file-less intrusions) and capture attack artifacts, as well as any command and control servers. Code is just one aspect of understanding adversaries and is captured too little too late by anti-virus. The following sections explain the fundamental challenges faced by anti-virus solutions in making decisions on files.

The Inescapable Trade-offs of Anti-Virus

The most obvious criticism of traditional anti-virus is that it uses a signature-based technique to detect threats. Without a particular attack signature, an anti-virus solution generally won’t detect the attack. As a result, defeating signature-based solutions is as simple as changing the code slightly, without changing its functionality.

While this is too limited a model to address today’s attacks, the fact that customers still use signature-based AV is a reflection of market requirements: Customers have near-zero tolerance for false positives in a blocking technology. Anti-virus is unable to achieve higher detection accuracy without generating significant false positives, which is a function of the science of detection. Any product that wants to claim the mantle of next generation anti-virus will need to wrestle with this science.

The Science of Detection

The trade-off of accepting lower accuracy to minimize false positives is best represented by a Receiver Operating Characteristic (ROC) curve, such as shown in Figure 1.


Figure 1: Receiver Operating Characteristic curves for malware detection

The ROC curve is the scientifically accepted method for measuring the performance of any detection system and illustrates the trade-off between false positives and true positives. The true positive rate, indicating what fraction of actual malware is detected, is shown on the Y-axis. The false positive rate, indicating what fraction of benign files is wrongly flagged as malicious, is shown on the X-axis.
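The two axes can be computed directly from a detector's confusion matrix. The sketch below uses hypothetical counts purely for illustration; the function name and numbers are my own, not from any vendor's product:

```python
def roc_point(true_pos, false_neg, false_pos, true_neg):
    """Return (false_positive_rate, true_positive_rate) for one
    detector operating point -- a single point on a ROC curve."""
    tpr = true_pos / (true_pos + false_neg)   # detected malware / all malware
    fpr = false_pos / (false_pos + true_neg)  # flagged benign / all benign files
    return fpr, tpr

# Hypothetical example: 900 of 1,000 malware samples caught,
# 50 of 99,000 benign files wrongly flagged.
fpr, tpr = roc_point(true_pos=900, false_neg=100,
                     false_pos=50, true_neg=98_950)
```

Sweeping the detector's sensitivity threshold and plotting each resulting (FPR, TPR) pair traces out the full curve shown in Figure 1.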

Discussing the performance of a detection engine without addressing both true positives (real detections) and false positives (benign events wrongly labeled as malicious) is either careless marketing or, worse, deliberate deception. For instance, if the false positive rate goes unmentioned, one could claim 100% detection of all threats simply by labeling every event malicious. The deception is obvious: this highly "accurate" system would be untenable in practice because it flags every event, benign or not.
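The degenerate "label everything malicious" detector described above takes only a few lines to sketch. The event names and labels here are made up for illustration:

```python
# A toy version of the degenerate detector: flag every event as malicious.
def flag_everything(event):
    return True  # every event is labeled malicious, no analysis at all

# Hypothetical ground truth: True = actually malicious.
labels = {"report.doc": False, "dropper.exe": True, "invoice.xls": False}

flagged = [e for e in labels if flag_everything(e)]

# "Perfect" detection rate -- but only because everything benign
# is flagged too, so the false positive rate is also 100%.
tpr = sum(labels[e] for e in flagged) / sum(labels.values())
fpr = sum(not labels[e] for e in flagged) / sum(not v for v in labels.values())
```

Both rates come out to 1.0: a 100% true positive rate bought with a 100% false positive rate, which is exactly why quoting detection numbers without the false positive rate is meaningless.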

The left chart in Figure 1 illustrates how to measure the performance of a detection engine (a "receiver"). The line cutting diagonally through the square represents the performance curve of flipping a coin to decide whether an event is malicious: a 50/50 shot at getting each detection right, i.e., a detector with no predictive power. Region C is the next worst region and Region A is the best. As you accept more false positives (moving right on the X-axis), your detection rate (Y-axis) increases. In other words, if you are willing to accept more false positives, you can detect more malicious events.

ROC curves illustrate the fundamental trade-off between false positives and true positives, which allows companies to tune (or select) a detection engine to achieve an acceptable level of false positives. In the case of anti-virus, defined as a solution that automatically detects and quarantines malware in real-time before damage can occur, the acceptable false positive rate is extremely low.

A false positive rate notably above zero runs the risk of “bricking” systems. For instance, if a system driver is detected to be malicious when it is not – a false positive – the system might be rendered inoperable and potentially unbootable when anti-virus detects and quarantines it. For an anti-virus system to reach an acceptable false positive rate – close to zero – its true positive rate will be necessarily low – something in the red zone of the right chart in Figure 1 (zoomed in on the false positive axis). Better performance is represented in the blue zone, but even there false positives are a problem.

Even a very small false positive rate can translate into a high number of false positives. If you process 1 million events per day in your enterprise (the sum of all events analyzed on endpoints in a given day), a false positive rate of just 0.0002 will result in 200 false positives or potentially 200 bricked machines daily. The number of events your detection engine processes daily is as important as the false positive percentage in determining your actual false positive volume.
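The arithmetic in the paragraph above is worth making explicit; the numbers below are the hypothetical ones from the example, not measurements of any real product:

```python
# Back-of-envelope: daily false positive volume depends on event
# volume as much as on the false positive rate itself.
events_per_day = 1_000_000        # events analyzed across all endpoints
false_positive_rate = 0.0002      # 0.02% of events wrongly flagged

daily_false_positives = round(events_per_day * false_positive_rate)  # 200
```

Two hundred false positives a day – each one a potentially bricked machine – from a rate that sounds negligible on paper.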

Three things should now be clear: (1) if you want to provide a blocking technology (like anti-virus), you must have very close to zero false positives, (2) there is an inherent trade-off between true positive detection and false positives, and (3) any anti-virus engine (traditional or next-gen) presents operational challenges for the enterprise customer even at extremely low false positive rates.

Next-Generation Anti-Virus will Bear the Anti-Virus Cross

Regardless of how next-generation anti-virus is marketed, these solutions still need to live with the true positive/false positive trade-off and the burden of near-zero tolerance for false positives. In the anti-virus model, the only solution to an unacceptable false positive volume is to tune down the false positive rate, resulting in a lower true positive rate and thus missing malicious activity. The other alternative is to not use the solution in inline blocking mode, in which case it’s not really anti-virus – it’s an alerting system.

The key question for products claiming to be next-generation anti-virus is whether they are a true substitute for anti-virus – real-time detection and inline blocking – or a companion to traditional anti-virus – alerting on suspicious activity in order to increase efficacy. If you claim to have a blocking solution that stops malware missed by traditional anti-virus, you lack credibility without demonstrating your false positive rate.

The Next-Generation Security Endpoint

Industry consensus is building around the view that anti-virus (new or traditional) is fundamentally the wrong paradigm for next-generation endpoint security. The gaps left by file-based scanning leave too much unprotected attack surface. In addition, the ROC curve highlights an untenable position, regardless of what algorithm you use.

To date, next-generation AV has failed to change the model, leaving it hamstrung by the same challenges as traditional AV and facing the same future. As Einstein reportedly observed, "Insanity is doing the same thing over and over and expecting different results." Businesses considering their future endpoint strategy are encouraged to explore a more comprehensive approach.

The Take-Aways On Next-Generation Endpoint:

  • You can’t stop what you can’t see. File-centric solutions will never stop file-less attacks. Static analysis solutions are stymied by encrypted code.
  • Boasting of accurate detection without addressing one’s false positive rate is like a quarterback bragging of touchdowns without mentioning interceptions.
  • The science of detection is inescapable. Next-generation anti-virus will bear the sins of traditional anti-virus.
  • Enterprises need a comprehensive approach to endpoint security. Nothing else will work.




