Artificial intelligence is rapidly reshaping how organizations operate, innovate, and even imagine their futures. Yet amid extraordinary promise, AI brings with it a familiar challenge: how to govern a technology that feels both transformative and opaque. As we prepare for ISACA Conference North America 2026, one theme continues to surface in nearly every boardroom, audit committee and risk function discussion I participate in: How do we manage AI risk without slowing innovation?
My upcoming conference session explores this question through a lens not often applied to AI: quantitative governance. Rather than treat AI as a singular, unprecedented threat, we can ground it in established disciplines already proven to manage uncertainty at scale. And when we do, a clearer and more actionable framework emerges.
Despite the aura of novelty surrounding AI, the risk behaviors it introduces fall neatly into the operational risk categories the Basel II framework defined roughly two decades ago. Data poisoning, hallucinated outputs and deepfakes all become precursors to familiar loss types: fraud, business disruption, regulatory exposure and reputational damage.
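To make that mapping concrete, here is a minimal sketch in Python of how AI-specific triggers might be tagged with Basel II Level 1 event types. The pairings and example scenarios are my own illustrative assumptions for this post, not an authoritative classification:

```python
# Illustrative mapping of AI-specific risk triggers to Basel II
# Level 1 operational risk event types. The pairings below are
# assumptions for illustration, not a definitive taxonomy.
BASEL_II_EVENT_TYPES = {
    "external_fraud":      "External Fraud",
    "business_disruption": "Business Disruption and System Failures",
    "clients_products":    "Clients, Products & Business Practices",
    "execution_delivery":  "Execution, Delivery & Process Management",
}

AI_TRIGGER_MAP = {
    # trigger            -> (event type key,      hypothetical loss scenario)
    "deepfake":            ("external_fraud",      "voice-cloned payment authorization"),
    "data_poisoning":      ("execution_delivery",  "corrupted training data skews decisions"),
    "hallucinated_output": ("clients_products",    "chatbot states false product terms"),
    "model_drift":         ("business_disruption", "degraded model silently mis-routes work"),
}

def classify(trigger: str) -> str:
    """Return the Basel II event type label for an AI trigger."""
    key, scenario = AI_TRIGGER_MAP[trigger]
    return f"{BASEL_II_EVENT_TYPES[key]} (e.g., {scenario})"

if __name__ == "__main__":
    for t in AI_TRIGGER_MAP:
        print(f"{t:>20} -> {classify(t)}")
```

The point of even a toy structure like this is that once a trigger resolves to a familiar event type, it inherits the controls, reporting lines and loss-data conventions that event type already has.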
This continuity matters. It means we don’t have to invent new governance doctrines. We can extend existing ones.
While the loss categories are familiar, the causal pathways are new. In my session, we will walk through a taxonomy of 13 AI-specific triggers, from hallucinations and anthropomorphism to model drift, explainability gaps and robustness failures. Understanding these root causes is what allows cybersecurity, audit and enterprise risk leaders to move from high-level anxiety to measurable, decomposed scenarios that can be prioritized, tested and governed.
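What does a "measurable, decomposed scenario" actually look like? Here is a hedged Monte Carlo sketch in Python for a single hypothetical trigger: hallucinated chatbot outputs that lead to customer remediation losses. Every parameter below is a placeholder assumption I chose for illustration, not session data or a calibrated estimate:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One decomposed scenario: hallucinated chatbot output leads to a
# customer remediation loss. All parameters are illustrative assumptions.
LAMBDA = 4.0          # assumed loss events per year (Poisson frequency)
SEV_MEDIAN = 25_000   # assumed median loss per event, USD
SEV_SIGMA = 1.2       # assumed lognormal shape (severity dispersion)

N_YEARS = 100_000     # number of simulated years

# Frequency: how many loss events occur in each simulated year.
event_counts = rng.poisson(LAMBDA, size=N_YEARS)

# Severity: lognormal with the given median, so mu = ln(median).
mu = np.log(SEV_MEDIAN)
annual_losses = np.array([
    rng.lognormal(mu, SEV_SIGMA, size=n).sum() if n else 0.0
    for n in event_counts
])

print(f"Expected annual loss: ${annual_losses.mean():,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_losses, 95):,.0f}")
```

Once a scenario is expressed this way, prioritization stops being rhetorical: two triggers can be compared on expected loss and tail exposure, and a control's value can be tested by re-running the simulation with a lower assumed frequency or severity.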
AI is accelerating faster than most corporate governance structures can adapt. But governance lag is not inevitable. When we blend the discipline of operational risk, the rigor of model validation, and the clarity of quantitative analysis, we create a governance system that can scale with the technology it is meant to oversee.
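As one concrete instance of that model-validation rigor, consider drift monitoring. The sketch below computes a Population Stability Index (PSI), a widely used drift metric, between a baseline score distribution and a current one; the synthetic data and the 0.10/0.25 alert thresholds are conventional rules of thumb I am assuming here, not regulatory requirements:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current
    distribution of model scores: PSI = sum((a% - e%) * ln(a% / e%))."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) / division by zero in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical example: baseline scores vs. a drifted current population.
rng = np.random.default_rng(7)
baseline = rng.normal(0.0, 1.0, 10_000)
current = rng.normal(0.3, 1.1, 10_000)   # assumed drift

value = psi(baseline, current)
# Rule-of-thumb thresholds: < 0.10 stable, 0.10-0.25 watch,
# > 0.25 significant shift warranting a validation review.
status = "stable" if value < 0.10 else "watch" if value < 0.25 else "review"
print(f"PSI = {value:.3f} -> {status}")
```

A check like this turns "model drift" from an abstract worry into a scheduled control with a numeric threshold, an owner and an audit trail, which is exactly the kind of governance that scales with the technology.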
AI doesn’t replace the need for risk governance. It reinforces it.