ISACA Now Blog


Increased Cyber Awareness Must Lead to Equivalent Action

Matt Loeb, CGEIT, CAE, FASAE, Chief Executive Officer, ISACA
Posted at 3:08 PM by ISACA News | Category: Security

Recent and widely publicized cyber attacks must be the impetus for a renewed, more concerted and coordinated global commitment to strengthening cyber security capabilities.

In May, the WannaCry ransomware attacks struck, underscoring the potentially disastrous consequences for health care facilities and their patients when medical records and medical devices are compromised. June brought yet another major attack in Petya, originally characterized as another widespread ransomware attack, but later revealed to draw upon a form of malware that does not steal data but, in fact, destroys it.

These types of attacks, and those that will follow, accentuate growing concerns about the continued escalation of the global cyber security crisis. It is no longer just about stealing money and data; the crisis now places human lives at risk. While health care has been a primary target this time around, greater threats loom in the potential for breaches or compromised access to industrial control systems, which could mean penetration of critical infrastructure such as electric utilities, oil and gas facilities, or nuclear energy plants. This shines a spotlight on the need for a unified global response now.

Amidst the challenges of the current threat landscape, there are promising signs that an increasing number of enterprise leaders and boards of directors are making the defense of their organization against ransomware and other cyber threats a top priority. ISACA’s State of Cyber Security 2017 research showed the percentage of organizations with Chief Information Security Officers (CISOs) is up to 65 percent, a 15-point rise over the year before. And in a micro-poll of the ISACA professional community in the immediate aftermath of the Petya incident, half of respondents indicated they took action after WannaCry to bolster their defenses – in case something like Petya showed up.

Additionally, half of the post-Petya poll respondents indicated their organizations provide ransomware awareness training to their staff, and more than half of organizations are applying software patches within the first week that they are available. That’s a good start. Promoting cyber security awareness and adhering to basic cyber security fundamentals need to be as common in the global digital economy as seatbelts are in cars. We have a long way to go to make this a reality.

The past several months have created an aura of inevitability around major attacks; more than 4 in 5 respondents to our micro-poll expect ransomware attacks to be even more prevalent in the second half of 2017. We cannot accept this level of havoc as a ‘new normal.’ Putting in place a viable incident response plan is critical, but what’s worthy of further investment is protection before an attack happens. Every organization should proactively employ cyber security awareness for all staff, performance-based cyber security skills training, timely hardware and software updates, and the hiring of the most highly skilled staff to ensure preparedness for the next attack, ransomware or otherwise. Start with the assumption that your organization will be the next target of a cyber attack.

Governments need to exhibit bold leadership and do more, too. This includes a commitment from G20 nations to expand cyber security research and training, and standardize some of the measures that individual nations are putting in place. G20 nations also should consider providing cyber security resources and support to nations that are not equipped to invest in themselves, as the connectivity of the global digital economy means all of us are in this together. This can help amplify the reach of encouraging efforts that are unfolding at national levels, such as the UK’s National Cyber Security Strategy and the recent executive order on cyber security in the US. Expanding public-private cyber security partnerships, while leveraging the resources of industry associations and academia, also should be part of the solution.

As a global community, we remain vulnerable to the cyber threats that are already here today, as well as the ones that will surface tomorrow. We cannot fall victim to cyber attack ‘fatigue’; attacks like the WannaCrys and Petyas cannot become “business as usual.” Cyber security is everybody’s business. Cyber crime is about more than pickpocketing; it is a matter of public safety. Awareness must translate into resolve, not resignation. Only then will we make even greater leaps toward a safer and more secure future.

Editor’s note: This blog post by ISACA CEO Matt Loeb originally appeared in CSO.

Comments

Patch Queue Theory: 88% less to cry about.

The really sad part of the WannaCry attacks is that they were completely preventable. The core vulnerability they exploited, along with Microsoft’s patch, was publicly announced on 16 March 2017 (https://nvd.nist.gov/vuln/detail/CVE-2017-0144), and the finding was flagged as a critical technical risk by every vulnerability scanner tool available. By 12 May 2017, WannaCry ransomware attacks had affected more than 150 countries and perhaps more than 300,000 systems. What would have happened if all these firms had simply patched a critical vulnerability within 10 days, that is, by 26 March 2017?
The idea is so simple and obvious that it draws a mix of honest and defensive objections. What about the inventory of Windows XP systems still online that had no patch until 12 May 2017? That simply recasts the question: why was a known-vulnerable, unsupported computer still online?
Oddly enough, hardware refresh cycles are a slow-speed form of patch queue. The new replacement system goes into production with all of its current security patches applied, replacing a production system carrying an inventory of known, unpatched vulnerabilities. The cycle time is long, roughly three or more years, but hardware swap-outs do remove known vulnerabilities from a firm’s network.
Patching known critical vulnerabilities at a faster rate, perhaps once a year, does reduce the inventory of online but vulnerable computers by adding resources to the resolution team. The catch is that the patching process may still remain too slow to prevent loss. It is a classic queueing-theory trade-off between the cost of raising the rate of vulnerability resolution above the rate of vulnerability discovery and the cost of leaving discovered vulnerabilities unresolved.
Vague terms like “best practice” do not evaluate the cost of leaving known-vulnerable systems online as targets of opportunity for villains. Delving into the idea of best practice, we find standards such as PCI DSS clearly indicating that critical vulnerabilities should be resolved within 30 days of a patch becoming available. Best practice would therefore have allowed a firm to wait until 15 April 2017 to patch this vulnerability. While that date still carried risk of attack, it at least had a chance of reducing the number of attacks that occurred by 12 May 2017.
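As a rough check of that window, here is a minimal Python sketch. The 16 March 2017 disclosure date, the 30-day PCI DSS remediation period, and the 12 May 2017 outbreak date are the figures stated above; everything else is illustrative.

    from datetime import date, timedelta

    disclosure = date(2017, 3, 16)          # CVE-2017-0144 patch publicly announced
    pci_deadline = disclosure + timedelta(days=30)
    attack_date = date(2017, 5, 12)         # WannaCry outbreak

    print(pci_deadline)                     # 2017-04-15, the 30-day "best practice" deadline
    print((attack_date - disclosure).days)  # 57 days of exposure before the outbreak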
Systems are not all patched at once. Some are laptops taken out of the office or machines left switched off during the first patching window. With a simple M/M/1 queueing model, it is possible to estimate that only about 63 percent of vulnerable systems would be patched by the intended 30-day target, and that it would take another 60 days for 95 percent of systems to be fully patched. To be 95 percent confident that all systems are patched within 30 days, the same model would require a firm to sustain an average patching rate capable of covering all of its systems in 10 days, with the remaining 20 days absorbing naturally occurring noise in the patching process, such as roaming laptops.
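A minimal sketch of that arithmetic, assuming patch-completion times are exponentially distributed (the assumption behind an M/M/1-style model); the 30-day and 10-day mean completion times are the figures from the paragraph above:

    import math

    def patched_fraction(days, mean_days):
        # Fraction of systems patched within `days`, assuming exponentially
        # distributed completion times with the given mean.
        return 1 - math.exp(-days / mean_days)

    print(round(patched_fraction(30, 30), 2))  # 0.63 -- patched by the 30-day target
    print(round(patched_fraction(90, 30), 2))  # 0.95 -- another 60 days for near-full coverage
    print(round(patched_fraction(30, 10), 2))  # 0.95 -- 30-day coverage needs a 10-day mean rate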
Often, villains are creatures of habit just like everyone else. They would read about the vulnerability at the same time everyone else could and then begin development. Assuming it took about three days to develop the attack and functionally test it, they would begin releasing it against a target set of firms in small lots at first, just as any other fraud effort might: first to see whether deployment works, and then to see whether the attacks and ransom messages can easily be traced back to them. After that, widespread distribution would begin.
For the purposes of discussion, assume villains can perfect a working attack in three days and then begin deployment at a rate that reaches 50 percent of available targets within 12 days of release. A villain team with this level of capability should be able to reach about 33 percent of its intended targets within 10 days. As the attack ramps up, a patching team is simultaneously reducing the number of systems still vulnerable. In this example, if teams had been patching about 63 percent of systems within 10 days, a maximum of about 12 percent of systems would have remained available to be hacked. Perhaps only 36,000 rather than 300,000 systems might have been harmed.
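A back-of-the-envelope check of those figures, as a sketch only; the 33 percent attack reach, 63 percent 10-day patch rate and 300,000 affected systems are the assumptions stated above.

    attack_reach  = 0.33      # share of targets the attacker reaches within 10 days
    patched       = 0.63      # share of systems patched within the same 10 days
    total_systems = 300_000   # reported scale of the WannaCry outbreak

    exposed_share = attack_reach * (1 - patched)
    print(round(exposed_share, 2))               # 0.12 -- share of systems still exposed
    print(round(exposed_share * total_systems))  # 36630 -- roughly the 36,000 cited above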
Don Turnblade at 8/7/2017 6:37 PM