Yesterday, we provided some background information on Meltdown and Spectre, the two issues that are taking the security world (and in fact users of technology in general) by storm. By now, most practitioners are probably up to speed (or getting there) on what the issues are, what caused them, and how to address them in the short term. Looking down the road, though, it is already clear that even after the initial cleanup is taken care of, these issues will be with us for a long, long time to come.
This means that there are some important considerations for savvy practitioners to address beyond just sliding back into “business as usual” once initial remediation is complete. These issues represent both an area of opportunity and a potential wakeup call for how we approach our security programs and how we model threats in the environment. Handled well, we can leverage these issues to bolster our organizational impact and help meet security and assurance goals both now and into the future; handled poorly, though, we ensure the status quo at best, or even a regression in security efforts and our overall posture.
Messaging and communication
The first area where this is true is messaging and communication around these issues. In a situation like this one, communication is obviously vital; what may be less apparent is the degree to which good communication is a success factor for security efforts, while poor communication is a stumbling block. To the extent that we can establish communication that is accurate, ongoing and 360-degree in nature (i.e., targeting not only upward communication to the board and senior decision-makers but also peer organizations and staff), we create an avenue to build organizational credibility and internal social capital. It can also help cement a reputation as a reliable, go-to source of information about topics like these going forward. Conversely, poor (or, worse yet, no) communication is a recipe for panic and pandemonium.
There are a few reasons why this is the case, and they go hand-in-hand with why communication is so important in the first place. First, it is a given that personnel (including executives and board members) will read stories in the press or hear information second-hand that will only sometimes be complete and accurate. Second, even when the information they receive is accurate, they may draw conclusions that are off the mark. Obviously, neither of those outcomes is desirable.
The antidote to this is communication. The ability of a practitioner to socialize actionable, complete, and risk-informed information to those other areas is an opportunity to build credibility and to help realize the response outcomes for the organization. As an example, consider statements an executive might hear in the press; for example, that “almost every modern computer is vulnerable” and that “attackers can use the information to steal secrets.” Now, as we covered yesterday, both of those statements are true. However, it is also true that, as of now, these issues enable information disclosure only – and that we have not yet seen attacks leveraging them in the wild. It would of course be foolhardy to assume that these things will always be the case. That said, it is also important to temper panic-inducing statements that folks might hear in the press with a systematic and workmanlike understanding of the risks, and communication about those risks to the organization at large.
How does a practitioner establish that solid and grounded communication? It starts with understanding the issues and becoming educated on how your vendors are approaching them, how they operate, what their specific impact is on your organization, and what your response posture is. Once you have this information in hand, socialize it. Establish reliable and consistent communication channels (again, both upwards and laterally) to inform others, and be open and receptive to answering questions that others might have. In fact, it can be helpful to establish a response dashboard, frequently asked questions, or other information source in a frequently used repository that your organization might have, such as a corporate intranet or information portal. Even just making the effort to put that information out there alongside a few practical, non-FUD statements (with links to authoritative technical details for the curious) can go a long way.
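Part of understanding your own response posture is simply checking what the machines themselves report. As one illustrative starting point (a sketch, assuming Linux hosts), recent Linux kernels expose their Meltdown/Spectre mitigation status through a sysfs interface, which can be read directly:

```python
import glob
import os

# Read the kernel's own report of CPU vulnerability mitigation status.
# This sysfs interface is Linux-specific and appears in kernels 4.15
# and later; on other platforms or older kernels the directory is
# simply absent.
paths = sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*"))

if not paths:
    print("No sysfs vulnerability report (non-Linux host or kernel older than 4.15)")

for p in paths:
    with open(p) as f:
        # Each file is named for an issue (e.g. meltdown, spectre_v2)
        # and contains a one-line status such as "Mitigation: PTI".
        print(f"{os.path.basename(p)}: {f.read().strip()}")
```

Rolled up across a fleet, output like this is exactly the kind of concrete, non-FUD data point that belongs on a response dashboard.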
A learning moment
Additionally, events like this can serve as an important reminder and lesson for security and assurance teams. In this case, the issues allow information to leak across boundaries that threat models typically treat as solid. When evaluating cloud workloads or containerized applications, for example, threat models often assume that attacks against segmentation simply will not occur.
And yet they do. Attacks like these illustrate how things we take for granted (in this case, the segmentation barrier between processes, applications, and in fact the operating system itself) can be more porous or weaker than we assume. It’s important to keep this in mind because there are situations where it can impact how we plan the security architecture and how we implement controls. Last year, for example, we saw issues like Flip Feng Shui, Rowhammer, and hypervisor and container engine vulnerabilities that potentially undermine the segmentation model in cloud and container environments.
This means that, for a truly future-proofed threat model, it can be advantageous to build in mechanisms that defend against segmentation attacks in high-risk situations. For example, we might choose to keep cryptographic keys in dedicated hardware in a cloud deployment to help mitigate a possible segmentation attack. The point isn’t to get hung up on the specifics (for example, I’m not advocating that we store every single key we use in an HSM), but to keep an open mind about segmentation attacks as we build security architectures, assess threat models and implement controls.
Likewise, if we haven’t taken the time to adopt systematic approaches (e.g., formalized threat modeling) but instead are “winging it,” issues like this can remind us why a more workmanlike approach is beneficial. Would a formalized threat model have told us ahead of time that an issue like this one would happen? Probably not. But it would have given us the opportunity to at least consider an attack that undermines segmentation in a cloud environment, and to build architectures that preserve confidentiality to the extent possible.
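To make the idea concrete, here is a minimal sketch of what recording segmentation failure as an explicit, considered threat might look like in a formalized model. The class and field names are purely illustrative (not drawn from any standard or tool); the point is that "isolation holds" becomes a stated assumption that can fail, with compensating controls attached:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in an illustrative threat catalog."""
    name: str
    assumption_violated: str
    mitigations: list = field(default_factory=list)

# A toy threat catalog for a cloud workload. Segmentation between
# tenants and processes is modeled as an assumption that can break,
# rather than a guarantee.
cloud_threats = [
    Threat(
        name="Cross-boundary memory disclosure (Meltdown/Spectre class)",
        assumption_violated="OS/hardware segmentation between processes",
        mitigations=["apply vendor patches", "keep high-value keys in hardware"],
    ),
    Threat(
        name="Hypervisor escape",
        assumption_violated="VM isolation boundary",
        mitigations=["dedicated hosts for high-risk workloads"],
    ),
]

# An architecture review can then enumerate every threat that
# undermines segmentation and weigh compensating controls for each.
segmentation_threats = [
    t for t in cloud_threats
    if "segmentation" in t.assumption_violated
    or "isolation" in t.assumption_violated
]

for t in segmentation_threats:
    print(f"{t.name} -> {t.mitigations}")
```

Even a lightweight catalog like this forces the conversation the paragraph above describes: the model won’t predict the next Meltdown, but it ensures segmentation failure gets considered rather than assumed away.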