Let’s face it: trust professionals (i.e., practitioners in trust-impacting disciplines such as assurance, risk, and cybersecurity) have a reputation for being risk averse. This is particularly true when it comes to new and potentially disruptive change.
There are good reasons for this. First, organizational partners sometimes loop us in too late, meaning we often discover new initiatives well into the process (leaving little or no time to react). Second, evaluating risk requires more time and data than evaluating usage or organizational value. To support that second point, consider a thought experiment: compare what is required to learn to drive (to gather sufficient skill and information to operate a car) to what you would need to know and do to evaluate a car’s operational safety (i.e., its likelihood of crashing, failing, etc.). Which is more difficult? Which requires more time and information?
The point is that sometimes change—particularly disruptive change and technological innovation—can and does impact risk in our organizations. We, as the professionals chartered with understanding, evaluating, and mitigating risk, sometimes find ourselves fearing change, seeking to avoid it, or pushing back on it.
Pushing Back Is a Trap
While all this is a logical result of human nature, I firmly believe pushing back to be a trap. It is a trap because pushing back on what the enterprise wants to do incentivizes the business to go around us. It is a trap because failing to learn about and embrace new things means we miss opportunities to leverage the risk-reducing facets of emerging technologies (which in some cases can be significant). And it is a trap because it limits our own ability to grow as professionals.
It can be easier to understand this dynamic by looking backward at technology shifts that have already occurred. Hindsight lets us see a technology’s full trajectory, from inception to ubiquitous (or nearly ubiquitous) adoption.
Nowadays, it is inarguable that most practitioners need an understanding of cloud to be successful, that it is a technology paradigm foundational to our organizations’ risk posture, and that it is a cornerstone of most modern technology stacks. When cloud was first being adopted, though, it was still new. Consequently, and subject to the dynamics described above, it was a source of potential anxiety for those working in trust disciplines at the time.
At that time, I worked closely with several customers who were understandably reluctant to embrace cloud. For example, one customer had an explicit “no cloud” policy. I recall a situation where that firm’s CISO received a request from organizational peers about a SaaS tool the organization very much wanted to bring in. The CISO’s answer? A hard no. No discussion, no negotiation. Just “no.”
What I knew that he did not (and that, for obvious customer privacy reasons, I could not tell him) was that the tool in question was hosted in the same datacenter as his production environment—literally within 10 feet of his production stack. The physical security controls were the same, many of the underlying operational and technical controls were the same, the backup and resumption controls were the same, and so on.
He, logically and understandably, had reservations about using SaaS. And because of this, he burned quite a bit of “political capital” to temporarily block his organization’s use of this particular SaaS tool. But ask yourself: was this a productive risk control strategy? I would argue it was not.
Outcome-wise, it was only a year or so before the organization started using that same SaaS tool anyway. And because organizational peers felt they had no chance to advocate for their interests, they were loath to include security in future efforts. The net effect of his resistance, then, was to make his own job harder over the long term while having a de minimis impact on risk.
The Case for “Situational Awareness”
If pushing back on change is not the best long-term strategy, what is? One strategy that can help us “future-proof” is situational awareness. Situational awareness, as a general term, simply means understanding the situation you are in and being able to respond to it. In this case, I interpret it in two ways. The first is understanding our own organization: its technology environments, its organizational context, and the plans of our organizational and technology peers. The second is understanding the emerging technologies themselves, building our understanding with the same rapidity (or nearly so) as our organizational and technology peers build theirs.
Looking at the first item, we need to understand the existing landscape from both an inside-out and an outside-in perspective. This includes the organization and how it functions, the technology used to enable it, and the potential threats to both the organization and that technology. Note that there is nuance here. Not only do we need to understand today’s landscape, but because we are trying to future-proof, we need to understand the environment as it evolves as well. For this reason, we need to cultivate a “trusted advisor” relationship with organizational and technology peers, one that keeps us plugged into the new things they may want to do. This is trite, but also accurate. It is also much more difficult to do in practice than it seems on the surface. At a minimum, it means that we need to find ways to secure things that we might prefer they avoid entirely.
Second, building situational awareness means understanding the new technologies themselves. We need to understand how they function (to give us perspective on the new risk they might introduce) and how they can be used to reduce existing risk. Very seldom is a new technology all upside or all downside; usually, it is a combination of both new risk and potential risk reduction.
But again, there is nuance. In the past, we learned about cloud and virtualization to understand their risk and risk-mitigation potential. Today we might examine Infrastructure as Code, generative AI, or application containerization. Tomorrow we will need to examine whatever technology comes next. We need to be just as engaged with these new and emerging platforms as our technology and organizational peers, if not more so.
ED MOYLE | CISSP
Is currently chief information security officer for Drake Software. In his years in information security, Moyle has held numerous positions including director of thought leadership and research for ISACA®, application security principal for Adaptive Biotechnologies, senior security strategist with Savvis, senior manager with CTG, and vice president and information security officer for Merrill Lynch Investment Managers. Moyle is co-author of Cryptographic Libraries for Developers and Practical Cybersecurity Architecture and a frequent contributor to the information security industry as an author, public speaker, and analyst.