Editor’s note: As we begin 2026, the ISACA Now blog is diving into the questions that will shape digital trust disciplines in the new year. In today’s final installment of our weeklong series, we surface questions that should be on the radar of governance professionals in the months ahead. Find more governance resources from ISACA here.
As we enter 2026, governance professionals are operating in an environment defined by acceleration, convergence and heightened (and often unclear) accountability. Emerging technologies, particularly artificial intelligence, are no longer peripheral innovations; they are embedded into core business processes, decision-making and customer experiences. At the same time, regulatory expectations continue to expand across cybersecurity, privacy, AI, ESG, and operational resilience, increasing both compliance complexity and personal accountability for business leaders.
In this landscape, governance is evolving beyond traditional oversight and control. It is increasingly about enabling responsible innovation, sustaining digital trust and ensuring that organizations can adapt at speed without losing alignment with ethical, legal and societal expectations. The following five questions will help define the governance agenda for 2026 and distinguish organizations that are prepared from those that are reacting.
1. Who Is Accountable for Governing AI and Can That Accountability Be Defended?
AI will be operationally embedded across most organizations, influencing everything from credit decisions and fraud detection to workforce management and cybersecurity response. As AI systems become more autonomous and interconnected, governance professionals will face increasing scrutiny over accountability: when AI outcomes cause harm, bias or regulatory noncompliance, who is responsible?
Many organizations still rely on diffused accountability models, where responsibility for AI is shared across IT, data, legal, risk and business teams. While collaboration is essential, it often results in unclear decision rights and weak escalation paths. Regulators and boards, however, will expect clearly defined ownership and demonstrable oversight.
Governance professionals will need to help organizations establish explicit AI accountability structures: defining owners, decision authorities and governance bodies with clear charters. This includes accountability across the AI lifecycle: data sourcing, model development, deployment, monitoring and retirement. Frameworks such as COBIT, ISO/IEC 42001 (AI Management Systems) and the NIST AI Risk Management Framework (AI RMF) provide a foundation, but accountability must be operationalized and not merely documented.
Example: A financial institution designates an “AI System Owner” for each high-risk AI use case, accountable for performance, compliance and ethical outcomes. This role reports regularly to a cross-functional AI governance committee and escalates material issues to enterprise risk management and the board.
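To make such an accountability structure concrete, a governance team might maintain a machine-readable registry of AI use cases and their named owners. The sketch below is illustrative only; the field names, risk-tier scheme and 90-day review rule are assumptions, not a prescribed ISACA or COBIT schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative registry entry for one AI use case. Field names and the
# risk-tier labels are hypothetical, not a standard framework schema.
@dataclass
class AIUseCase:
    name: str
    risk_tier: str              # e.g., "high", "medium", "low"
    system_owner: str | None    # named individual accountable for outcomes
    governance_committee: str   # oversight body the owner reports to
    last_review: date           # most recent governance review

def accountability_gaps(registry: list[AIUseCase], max_age_days: int = 90) -> list[str]:
    """Flag high-risk use cases with no named owner or an overdue review."""
    gaps = []
    for uc in registry:
        if uc.risk_tier != "high":
            continue
        if uc.system_owner is None:
            gaps.append(f"{uc.name}: no accountable AI System Owner designated")
        elif (date.today() - uc.last_review).days > max_age_days:
            gaps.append(f"{uc.name}: governance review overdue for escalation")
    return gaps

registry = [
    AIUseCase("credit-decisioning", "high", "J. Rivera", "AI Governance Committee", date(2025, 11, 3)),
    AIUseCase("fraud-detection", "high", None, "AI Governance Committee", date(2025, 9, 14)),
]
for gap in accountability_gaps(registry):
    print(gap)  # items here would escalate to ERM and, if material, the board
```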
2. How Do We Harness AI’s Value Without Eroding Digital Trust?
AI’s value proposition of speed, scale and predictive insight comes with new trust challenges. As AI-driven decisions increasingly affect customers, employees and citizens, governance professionals will be expected to ensure that innovation does not come at the expense of transparency, fairness and privacy.
One of the defining issues will be explainability. As AI models grow more complex, organizations may struggle to explain how decisions are made, even internally. Yet explainability and transparency are rapidly becoming expectations, particularly in regulated and high-impact contexts.
Governance professionals will need to embed trust principles directly into AI governance. This includes defining acceptable-use policies, minimum transparency thresholds, human oversight requirements and ethical review processes. ISACA’s Digital Trust Ecosystem Framework (DTEF) reinforces that trust is built through consistent, measurable behaviors across governance, technology and culture. In 2026, organizations that cannot demonstrate trustworthy AI practices may face resistance from regulators, customers and their own workforce.
Example: An organization requires explainability assessments for AI systems used in customer-facing decisions, ensuring that outputs can be understood, challenged and corrected by humans when needed.
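One way to operationalize such a requirement is a simple pre-deployment gate that blocks customer-facing AI systems lacking explainability artifacts. This is a minimal, hypothetical sketch; the artifact names and pass criteria are assumptions rather than a standard checklist.

```python
# Hypothetical pre-deployment gate: customer-facing AI systems must carry
# explainability artifacts before release. Artifact names are illustrative.
REQUIRED_ARTIFACTS = {
    "decision_factors",      # human-readable summary of key model inputs
    "challenge_procedure",   # documented path for a person to contest an outcome
    "override_mechanism",    # defined human override for individual decisions
}

def explainability_gate(system: dict) -> tuple[bool, set[str]]:
    """Return (approved, missing artifacts) for a proposed deployment."""
    if not system.get("customer_facing", False):
        return True, set()   # gate applies only to customer-facing decisions
    missing = REQUIRED_ARTIFACTS - set(system.get("artifacts", []))
    return not missing, missing

approved, missing = explainability_gate({
    "name": "loan-pre-approval",
    "customer_facing": True,
    "artifacts": ["decision_factors", "challenge_procedure"],
})
print(approved, missing)  # False, {'override_mechanism'} -> deployment blocked
```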
3. Can Our Governance Approach Keep Pace with Growing Compliance Complexity?
The compliance burden is expanding in both scope and depth. Organizations will be navigating overlapping requirements related to cybersecurity resilience, privacy, AI regulation, ESG reporting and operational continuity, often across multiple jurisdictions. Governance professionals will be challenged to maintain compliance without creating excessive friction or governance fatigue.
Traditional, siloed compliance models will increasingly prove ineffective. Governance professionals will need to promote integrated, risk-based governance approaches that align policies, controls and reporting across regulatory domains. COBIT’s focus on end-to-end governance, combined with enterprise risk management practices, provides a structure for prioritizing material risk rather than treating all requirements equally.
Technology will be a critical enabler: automated controls monitoring, policy management tools and integrated GRC platforms will help organizations scale governance. However, governance professionals must ensure that automation supports informed decision-making rather than creating a false sense of assurance.
Example: A multinational organization aligns AI governance, privacy controls and cybersecurity requirements into a single control framework, reducing duplication while improving regulatory reporting consistency.
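A common implementation pattern behind such a framework is a many-to-many mapping between shared controls and the regulatory domains they satisfy, so each control is tested once and reported everywhere it applies. The control IDs and domain labels below are invented for illustration.

```python
# Hypothetical mapping of shared controls to the regulatory domains they
# satisfy; identifiers and domain labels are illustrative only.
CONTROL_MAP = {
    "CTRL-ACCESS-01": {"privacy", "cybersecurity"},
    "CTRL-MODEL-LOG-02": {"ai", "cybersecurity"},
    "CTRL-RETENTION-03": {"privacy", "esg_reporting"},
}

def controls_for(domain: str) -> list[str]:
    """List every control whose single test result supports this domain's reporting."""
    return sorted(c for c, domains in CONTROL_MAP.items() if domain in domains)

def duplication_savings() -> int:
    """Domain-specific assessments avoided by testing each shared control once."""
    total_mappings = sum(len(d) for d in CONTROL_MAP.values())
    return total_mappings - len(CONTROL_MAP)

print(controls_for("privacy"))   # ['CTRL-ACCESS-01', 'CTRL-RETENTION-03']
print(duplication_savings())     # 3 assessments avoided across domains
```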
4. How Do We Govern Digital Supply Chain Risk in an Interconnected World?
Modern organizations operate within complex digital supply chains that include cloud providers, AI vendors, data platforms and technology partners. Governance professionals will need to address a fundamental shift: risk is no longer confined within organizational boundaries, yet accountability remains firmly internal.
Traditional third-party assessments conducted annually or during onboarding are insufficient for managing dynamic, technology-driven dependencies. AI-enabled services, continuous data exchange and shared platforms mean that failures or weaknesses in one part of the digital supply chain can rapidly cascade.
Governance professionals will need to advocate for continuous digital supply chain governance—integrating real-time monitoring, stronger contractual accountability and closer alignment between procurement, technology, risk and compliance functions. Digital trust is transitive; weaknesses in a supplier’s controls can directly affect regulatory exposure and organizational reputation.
Example: An organization continuously monitors the security posture and AI practices of key digital suppliers, linking results to risk appetite thresholds and escalation protocols.
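In practice, this often reduces to comparing a continuously refreshed supplier risk score against predefined appetite thresholds and triggering the matching escalation step. The 0-100 scoring scale, threshold values and actions below are assumptions for illustration.

```python
# Hypothetical escalation ladder keyed to risk-appetite thresholds.
# Scores, thresholds and actions are illustrative, not a standard scale.
THRESHOLDS = [
    (80, "critical: invoke contractual remedies and brief the board"),
    (60, "high: escalate to enterprise risk management"),
    (40, "elevated: require supplier remediation plan"),
]

def escalation_for(score: int) -> str:
    """Map a supplier's current risk score (0-100) to an escalation action."""
    for threshold, action in THRESHOLDS:
        if score >= threshold:
            return action
    return "within appetite: continue routine monitoring"

# Continuously refreshed scores, e.g., from posture-monitoring feeds.
suppliers = {"cloud-provider-a": 35, "ai-vendor-b": 72}
for name, score in suppliers.items():
    print(f"{name}: {escalation_for(score)}")
```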
5. Are Boards Equipped to Govern Technology-Driven Risk and Opportunity?
At a recent board meeting, the chair of the strategy committee asked me: “Mark, when did technology start driving my business strategy?” As technology becomes inseparable from strategy, boards are under growing pressure to meaningfully govern technology-driven risk and opportunity. Boards will be expected to understand not only what technologies are in use, but how they affect resilience, trust and long-term value creation.
Governance professionals will play a critical role in enabling this shift. Boards need clear, decision-relevant insights, rather than raw technical detail, on topics such as AI risk, cyber resilience, regulatory exposure and digital trust. This requires rethinking how information is framed, how risk is communicated and how governance bodies are structured.
Frameworks such as COBIT, supported by board-level performance and oversight practices, can help translate complex technology risks into strategic governance discussions. In 2026, effective boards will move from passive oversight to active stewardship of digital capabilities.
Example: A board receives quarterly briefings on AI risk aligned to enterprise risk appetite, including emerging regulatory trends and scenarios that could materially affect strategy.
Build Trusted Governance Systems
The governance challenges of 2026 are already here. Accountability for AI, trust in automated decision-making, compliance complexity, digital supply chain exposure and board readiness will define how organizations succeed or struggle in the years ahead.
For governance professionals, the task is no longer simply to implement frameworks or meet regulatory requirements. It is to ask the right questions early and to build governance systems that are adaptive, defensible and trusted. Those who do will be well-positioned to help their organizations innovate responsibly while maintaining confidence in an increasingly digital world.
Five Practical Actions Governance Professionals Should Take Now
As governance expectations continue to evolve, these actions can help professionals move from awareness to execution as 2026 approaches:
- Clarify accountability before scaling technology. Ensure every high-impact digital and AI-enabled capability has a clearly named owner with defined decision rights and escalation authority. Collective oversight mechanisms are important, but they do not replace individual accountability for outcomes.
- Govern outcomes, not just compliance artifacts. Move beyond verifying the existence of policies and controls. Use risk, trust and performance indicators to assess whether governance is influencing real-world behavior, decision quality and resilience.
- Build trust into design instead of after deployment. Embed transparency, explainability, privacy and ethical considerations early in system and AI lifecycle decisions. Retrofitting trust after implementation is costly, disruptive and often ineffective.
- Apply continuous governance across digital supply chains. Shift from periodic assessments to ongoing, risk-based oversight of critical digital suppliers, platforms, data providers and AI service partners. Digital trust increasingly depends on ecosystems, not just internal controls.
- Invest in people, skills and governance capability. Future-ready governance depends on professionals who understand emerging technologies, risk and enterprise decision-making. Exploring ISACA’s training and certification portfolio – including COBIT®, CRISC™, CGEIT®, CDPSE™ and AI-focused credentials such as the Advanced in AI Audit (AAIA™), Advanced in AI Security Management (AAISM™) and Advanced in AI Risk (AAIR™) – can help organizations build the skills and competencies needed to govern enterprise information and technology with confidence.
About the author: Mark is an internationally known Governance, Risk, and Compliance expert specializing in information assurance, IT risk, IT strategy, service management, cybersecurity and digital trust. Mark has a wide array of industry experience, including government, health care, finance/banking, manufacturing and technology services. He has held roles ranging from CIO to IT consultant and is considered a thought leader in frameworks such as COBIT, DTEF, NIST, ITIL and multiple ISO standards. Mark is a two-time recipient of the ISACA John Kuyers award for best conference contributor/speaker and was inducted into the ISACA Hall of Fame in 2024. He is also an APMG product knowledge assessor for the CGEIT, CRISC and CDPSE certifications.