ISACA Journal
Volume 5, 2017


Does Fully Disclosed Mean Fully Exposed? Nondisclosure 

Pete Lindstrom 

Every day, between 10 and 20 vulnerabilities are publicly disclosed, a few more than that are discovered, and an estimated 10 to 100 times that number are created by software developers around the world. It’s like an eighth-grade word problem gone horribly awry: “Hackers disclose 20 vulnerabilities every day. If developers create 200 vulnerabilities, when will the hackers find all the vulnerabilities?” Infinity beckons.
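The word problem’s arithmetic can be made concrete. A minimal sketch using the article’s rough, illustrative rates (these are not measured figures):

```python
# Back-of-the-envelope model: if vulnerabilities are created faster than
# they are disclosed, the undiscovered backlog grows without bound.
# Rates below are the article's illustrative numbers, not measured data.

DISCLOSED_PER_DAY = 20   # publicly disclosed each day
CREATED_PER_DAY = 200    # estimated newly created each day (10x disclosure)

def backlog_growth(days, created=CREATED_PER_DAY, disclosed=DISCLOSED_PER_DAY):
    """Net undisclosed vulnerabilities accumulated over a period."""
    return (created - disclosed) * days

# After one year at these rates, the backlog grows by 180 * 365 = 65,700;
# "finding them all" never happens.
print(backlog_growth(365))  # 65700
```

The specific numbers do not matter; any creation rate above the disclosure rate yields the same conclusion.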

Meanwhile, although nobody disputes that these disclosure rates and well-known events such as Microsoft’s Patch Tuesday will continue, we willfully ignore the fact that the vulnerabilities to be announced over the next three to five years are already present in production systems. No matter how hard we try to find all of these vulnerabilities before the bad guys do, it is a mathematically impossible exercise, especially when factoring in individual vulnerability instances in the enterprise.

These latent, undiscovered vulnerabilities would be a much bigger problem if it were not for two things: First, software vulnerabilities are not even the most common way that attackers get into systems; second, so many vulnerabilities are already available that attackers have no need to find their own 0-days. In fact, fewer than 10 percent of the vulnerabilities that are found are ever exploited, and even when they are, the chance that they will be exploited in your IT environment is vanishingly small.

Taken together, these two facts should lead the discerning security professional to conclude that patch management is highly inefficient and ineffective. It is inefficient because so many of the patches applied will never protect against an attack; it is ineffective because so many attacks against enterprises, such as 0-days and phishing, will still succeed.

Consider, for example, recent statistics from Microsoft’s Security Intelligence Report. The data show that of the 415 patched vulnerabilities disclosed in 2015, 391 (94 percent) were never exploited, and of the 24 that were exploited, 12 (50 percent) were actually 0-days exploited before a patch was available.1
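The quoted percentages follow directly from the report’s raw counts; a quick arithmetic check:

```python
# Sanity-check the percentages cited from the Microsoft Security
# Intelligence Report (2015 figures quoted in the article).

patched_disclosed = 415
never_exploited = 391
exploited = patched_disclosed - never_exploited   # 24
zero_days = 12                                    # exploited before a patch existed

pct_never_exploited = round(100 * never_exploited / patched_disclosed)  # 94
pct_zero_day = round(100 * zero_days / exploited)                       # 50

print(pct_never_exploited, pct_zero_day)  # 94 50
```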

What should we make of the fact that attackers never consummate their relationship with so many vulnerabilities? With an abundance of vulnerabilities, all attention must turn to attackers and their own cost-benefit model as the deciding factor in whether systems will be compromised. That is, the higher the benefits and lower the costs, the more likely it is that an attacker will act.

As a profession, we make it trivially cheap for attackers to exploit vulnerabilities through the publication of proof-of-concept exploits, penetration testing tools and the like. Our smartest technical engineers argue about disclosure while we stroke their egos, despite abundant evidence that they are increasing risk simply to assuage their cognitive biases about whether developers are negligent. They compete for recognition as if their one snowflake (or handful) out of tens of thousands of vulnerabilities matters most. It is, of course, even more reprehensible when very large companies endorse the practice against their competitors.

At the very least, these hackers and their media groupies should be required to define security-level thresholds for the software that they alternately label “secure” or “insecure” based on the disclosure and patching of a single vulnerability. Since nobody believes any nontrivial software can be vulnerability-free, we should be seeking more appropriate means of evaluation.

For any economics-minded, risk-oriented security professional, the alternative to the disclosure process is obvious: minimize the availability of information and seek other means of protection that do not require participation in this ridiculous sideshow.

With patching so expensive and insufficient, perhaps we should pursue solutions that provide patch independence, that is, solutions that protect regardless of whether the vulnerability is known or the patch applied.

Some of these solutions are simple, such as removing local administrative rights from users, enabling User Account Control on Windows systems and employing application control and whitelisting solutions. Often, it is reasonable to incorporate intrusion detection rules that identify specific attacks against vulnerabilities or to apply more sophisticated solutions that leverage machine learning techniques to identify compromised systems. Extreme isolation and separation techniques can also be used to minimize the impact of a compromise.
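Of these controls, application whitelisting illustrates patch independence most directly: only binaries whose hashes appear on an approved list may run, so a vulnerability in an unapproved tool never executes, patched or not. A minimal sketch (the allowlist contents here are hypothetical; a real deployment would centrally manage and sign the list):

```python
import hashlib

# Hypothetical approved-hash list. Real products (e.g., OS-level
# application control) manage this centrally; the core logic is this simple.
ALLOWLIST = {
    # SHA-256 of the bytes b"foo", standing in for an approved binary
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def sha256_hex(data: bytes) -> str:
    """Hash the candidate binary's contents."""
    return hashlib.sha256(data).hexdigest()

def may_execute(binary_bytes: bytes) -> bool:
    """Permit execution only if the binary's hash is pre-approved."""
    return sha256_hex(binary_bytes) in ALLOWLIST

print(may_execute(b"foo"))      # True  (approved)
print(may_execute(b"malware"))  # False (denied, whether patched or not)
```

Note the design point: the control never asks whether the binary is vulnerable or patched, only whether it is approved, which is precisely what makes it patch independent.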

Consider also a future that leverages deception techniques to identify attackers attempting to exploit systems. Or one that encrypts data separately from the application and protects it from system-level attacks.

Perhaps the most nefarious element of this entire scenario is that people get stuck in the patch rut: it becomes a regulatory requirement and completely saps any interest in doing something better. It is remarkable how many security professionals fall into the trap of believing that patching is the only way to protect their environments. The opportunity cost simply adds insult to injury by tying up scarce resources.

For years, security practitioners have espoused “patch hygiene,” the idea that newly disclosed vulnerabilities should be closed as rapidly as possible. But as any physician will tell you, hygiene is preventive; once the patient is already sick, the answer is not better hygiene but more active treatment: ferreting out the root cause of the disease, bolstering the immune system, removing the source of the infection. “Patch independence,” then, more than simple “patch hygiene,” must be the mantra of the future as digital transformation proceeds, the Internet of Things takes hold and software revolutionizes our lives. Attempting to deploy an already-failed, cannot-scale solution to new environments binds us to the preexisting conditions of epic failures.


1 Microsoft, Microsoft Security Intelligence Report, Volume 21, USA, 2016

Pete Lindstrom
Is vice president of security strategies with IDC’s IT Executive Program (IEP). He has extensive and broad expertise across the cyber security landscape, but is best known as an authority on cyber security economic issues such as strategic security metrics, estimating risk and return, and measuring security program efficacy. He has also focused on applying core risk management principles to new technologies, architectures and systems such as virtualization, cloud security and big data. He has developed the Four Disciplines of Security Management (a security operations model), and the Five Immutable Laws of Virtualization Security, which was integrated into guidance from the PCI Council.



Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and from opinions endorsed by authors’ employers or the editors of the Journal. The ISACA Journal does not attest to the originality of authors’ content.