“When AI systems fail, they fail in big ways.”
This stark warning from AI governance expert and longtime ISACA member Meghan Maneval opened her session at the recent ISACA Conference North America and set the tone for a candid discussion about the next phase of AI governance. In a world where AI can make decisions at scale and speed, a single misstep can lead to outsized consequences: legal violations, reputational damage, and lasting harm to customers and communities.
In her session, “AI Governance 2.0: Navigating the Next Phase of Ethical and Trustworthy AI,” Maneval challenged organizations to move beyond checkbox compliance. Today’s AI systems are dynamic, complex and deeply embedded in business operations. Managing them requires more than traditional controls. It demands a governance model that is strategic, adaptive and deeply cross-functional.
Maneval highlighted several approaches that can help organizations build trust and collaboration in their AI usage:
- Align AI initiatives with company goals: Every AI initiative should support clear business outcomes and stakeholder values. If it doesn’t serve a purpose, it probably shouldn’t exist.
- Demonstrate AI risks in a business-friendly way: Effective governance depends on storytelling. Reframe AI risk in terms executives care about: brand reputation, cost exposure and customer trust.
- Use regulatory compliance as a value driver: Regulatory alignment with frameworks like the EU AI Act, the NIST AI RMF, or ISO 42001 is not just about avoiding fines; it’s a proof point of responsibility.
- Create an AI ethics board: Include diverse perspectives and clearly define roles for oversight across the AI lifecycle, from ideation to deployment and retirement.
- Conduct AI risk scenario planning: Think beyond technical failure. Simulate incidents like biased outputs, misuse by third parties or unexpected regulatory scrutiny.
- Use cross-functional teams to evaluate AI decisions: Embed AI governance into existing systems: software development, third-party assessments, procurement and even HR training.
Maneval also emphasized that companies’ existing incident response playbooks and business continuity plans will need to be updated to account for the AI risk landscape.
“We have to come into it with an understanding that something’s going to go wrong and we have to be prepared,” Maneval said.
Business processes should address how an AI system makes decisions, the potential risks of using it, fair and ethical use of AI, and ongoing monitoring and reporting.
One telltale indicator of whether an organization has sound governance in place is how clearly it can explain how and where it is using AI.
“If a company can’t tell you what their AI is doing, run away,” Maneval said.
Maneval also underscored the importance of embedding ethics and security throughout an AI system’s lifecycle, starting before any code is written. She added that it is important to ensure AI models are trained on diverse, representative data.
To gain executive buy-in for investing in AI governance and ethics, Maneval said sharing high-profile examples of mistakes made by other companies can be effective.
“Leverage an existing incident as a catalyst for change in your own organization,” Maneval said.
Furthering the theme of learning from common AI missteps, Maneval provided guidance on mistakes to avoid in AI governance. Those include treating governance as a point-in-time exercise, focusing only on technical risks, relying on IT or compliance teams alone rather than taking a cross-functional approach, failing to monitor AI over time, and underprioritizing AI governance training.
“AI governance isn’t a one-time fix,” Maneval said, adding that it requires integration into business processes and regular stakeholder engagement to serve the organization well and turn AI governance into a competitive advantage.
Maneval offered guidance on developing a sustainable AI governance program:
- Start AI risk assessments at ideation: Don’t wait until you’re already building.
- Automate monitoring and evidence collection: Log how systems make decisions, who accesses them and whether results align with expectations (see the sketch after this list).
- Train humans, not just models: Incorporate AI-specific risks into security awareness, ethics and compliance training.
- Document everything: Governance logs can provide clarity and accountability if your AI’s decisions are ever questioned.
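Maneval did not prescribe specific tooling for that logging step, but as a minimal sketch of what automated evidence collection might look like, the snippet below wraps a model call so each decision is recorded as a structured audit event. The model name, field names and predict function are hypothetical placeholders, not anything recommended in the session.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal, hypothetical illustration of automated evidence collection:
# every AI decision is written out as a structured audit record.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(model_name, user_id, inputs, predict):
    """Run a model call and emit an audit record for governance review."""
    output = predict(inputs)
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "requested_by": user_id,   # who accessed the system
        "inputs": inputs,          # what the decision was based on
        "output": output,          # what the system decided
    }
    logger.info(json.dumps(event))  # in practice, ship this to your evidence store
    return output

# Example usage with a stand-in model
result = log_ai_decision(
    model_name="credit-scoring-v2",                      # hypothetical model
    user_id="analyst-042",
    inputs={"income": 52000, "tenure_months": 18},
    predict=lambda x: {"approved": x["income"] > 40000},
)
```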
Maneval’s final takeaway was simple but powerful: strong AI governance is more than a framework; it’s a mindset. It reflects how your organization balances innovation with responsibility, speed with scrutiny, and automation with accountability.
“Ethics needs to be considered at every stage of the process, in every piece of the AI,” she said.
In an era where AI is embedded in every decision, that commitment to transparency, fairness, and trust may be the strongest differentiator of all.