If you work in technology and have a working Internet connection, chances are good that you heard of (best case) or experienced firsthand (worst case) the ransomware variant making the rounds yesterday that most are referring to as a new Petya variant. It is fast, it is sophisticated and it has left a trail of global chaos in its wake as it impacted everything from national electrical grids to banks to shipping and logistics.
While this attack would be noteworthy on its own, it is particularly so coming as it does on the heels of the WannaCry attack just a few weeks ago. What makes that timing relevant is that the new variant leverages one of the same transmission vectors that WannaCry did: specifically, the EternalBlue Server Message Block (SMB) exploit (CVE-2017-0144), an SMB issue addressed by MS17-010.
It bears noting, of course, that the situation is a bit more nuanced in this instance compared to WannaCry. WannaCry was fairly simple in its operation: Exploit the EternalBlue SMB issue, establish control, execute the payload, rinse and repeat. In the case of Petya, researchers are confirming that other transmission vectors are used beyond the relatively straightforward methodology employed by WannaCry. For example, IBM X-Force is reporting that this new malware also leverages the administrative remote execution tools Windows Management Instrumentation Command-line (WMIC) and PsExec as mechanisms to move laterally inside a network once it has established a foothold. Point being, it is not just a straightforward SMB worm anymore.
That said, the link with EternalBlue is significant for us as security practitioners because it highlights something to which we should pay attention. Specifically, this issue is fixable. The SMB issue can be patched. One might also reasonably question the rationale behind allowing unfiltered SMB traffic into an environment (particularly a relatively protected one such as a clinical or industrial control network) in the first place. But the fact that this ransomware is spreading as rapidly as it is, and causing the incredible damage that it has, highlights the fact that in many organizations, actions are not (or cannot be) taken to address these issues. This, even after WannaCry caused a first round of serious chaos on the back of the same bug.
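Since unfiltered SMB was the common thread between WannaCry and this outbreak, one quick way to gauge exposure is to check which hosts accept inbound connections on the SMB port (TCP 445). A minimal sketch of such a check in Python follows; the function name and the port constant are illustrative choices, not anything from this article, and the check says nothing about patch status:

```python
import socket

# TCP 445 is the port modern SMB listens on (SMBv1 over NetBIOS also uses 139).
SMB_PORT = 445

def smb_port_open(host: str, port: int = SMB_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout.

    This is a reachability check only; it does NOT verify whether the host
    has the MS17-010 patch applied.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, filtered, or timed out.
        return False
```

Sweeping a host inventory with a function like this and flagging anything that reports the port open from outside a protected segment is a simple way to surface candidates for the filtering question raised above.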
Is Your Risk Management Approach Working?
I apologize if that smacks of a “blame the victim” mentality. I am not calling it out to establish blame, but to highlight an important point. Specifically, some situations are so dire, and carry such a high level of potential risk, that they should cause us to at least question the status quo processes guiding our organizations’ operations; in those situations, the risk should lead us to approach things differently than we otherwise might. I think EternalBlue is one of those times.
There are often very good reasons why organizations cannot immediately apply every patch that comes along the exact second that they learn about it. Installing a patch is often a risky proposition from a critical business application uptime standpoint. Some applications are “temperamental,” requiring extensive testing before a patch can be applied. Other times, patching can require applying pressure to external vendors, for example, in situations when those vendors support a critical system directly due to a contractual relationship or existing business arrangement. In fact, even when employing technologies such as virtualization or patch management tools that can assist in minimizing the downtime risk associated with applying a patch, there are some systems and applications that are so critical that even a “ghost of a chance” of downtime is too great a risk to take.
And that is just the patching side of things. An organization could also have valid business reasons requiring it to allow SMB traffic directly into the production environment (even a protected or cloistered one). It might require this to support legacy applications, for example, that need to transfer files between environments—maybe those applications are both critical to business operations and challenging to replace. All these are potentially excellent reasons.
On the other hand, there is EternalBlue. There are two points I would make about this. The first is that the ability to evaluate the potential impact of a given issue, and adjust processes in a manner commensurate with that risk, is the quintessential touchstone of risk management. There are times when, even if your organization has special needs with respect to applying patches or filtering inbound traffic such as SMB, the risk associated with a given issue may cause you to adjust your approach. The point? An issue such as this one is a useful occasion to examine whether those risk management approaches are working as you want and expect. I am not suggesting you do this now if you are still fighting fires and dealing with the cleanup. But maybe make a note to address it once the cleanup is complete.
The second point I would make is that gaining foreknowledge about issues of this magnitude, and enabling preplanning as a result, is exactly the point of intelligence-driven security approaches. Getting a warning about an issue like this (e.g., its relative severity and the ubiquity of the vulnerability) is exactly why threat intelligence is valuable in the first place. So, your foreknowledge of both Petya and WannaCry (regardless of whether you were in a position to do anything about it in advance) is a useful barometer of the value that threat intelligence is providing to you. If you subscribe to a threat intelligence feed, have an internal threat intelligence capability or otherwise have some mechanism for advance warning and this was not on your radar, now is a good time to examine why not. Was this due to a breakdown in the process? Did the right people not hear about the issue? What data were available, and did the right people have them in time to make the right decision?
The point is, situations like these—dire as they may appear in the heat of the moment—can be a great opportunity to improve how we do things. Whether we act on them or not is what separates effective organizations from less effective ones.
Ed Moyle is director of thought leadership and research at ISACA. Prior to joining ISACA, Moyle was senior security strategist with Savvis and a founding partner of the analyst firm Security Curve. In his nearly 20 years in information security, he has held numerous positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers and senior security analyst with Trintech. Moyle is coauthor of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as an author, public speaker and analyst.