Editor’s note: As we begin 2026, the ISACA Now blog is diving into the questions that will shape digital trust disciplines in the new year. In today’s installment of our weeklong series, we surface questions that should be on the radar of risk professionals in the months ahead. Find more risk resources from ISACA here.
Planning for 2026 does not involve preparing for a single dominant threat. Instead, leaders are watching several pressures at once. Digital disruptions are routine. Supply chains depend on policy as much as logistics. Artificial intelligence (AI) influences decisions in ways that are not always visible. Geopolitics impacts energy and technology access. Budgets are tighter, but expectations are higher.
None of these issues are new. What has changed is the need for readiness. The organizations that will navigate 2026 most effectively are not those that predict perfectly, but those that can respond quickly, protect value and maintain trust. For ISACA governance, risk and assurance professionals, this means clear roles, practiced capability and confidence in decisions.
With that context, here are five risk questions that will influence how effectively organizations navigate the year ahead:
1. Are we resilient enough when disruption is ordinary?
Cyber incidents are now part of daily operations. IBM's 2025 Cost of a Data Breach Report places the average global breach cost at US $4.44 million, yet many organizations report that the most significant losses come from interrupted services rather than the breach itself.
Disruption often begins with routine activity: a configuration change, a cloud region issue or a software update. The 2024 CrowdStrike incident showed how a routine update can have global impact when many organizations share the same dependency. Technology and business functions were restored, but the disruption was felt for days across customer service, operations and customer confidence.
For ISACA professionals, resilience is a practiced capability. Recovery must be coordinated, roles understood and communication fast and clear. Leadership now asks:
- Who is accountable when disruption occurs?
- How will decisions be made when information is incomplete?
- What practice has taken place to ensure the response will work?
Takeaway: Organizations that run tabletop exercises and simulations respond faster and with greater coordination when disruption occurs.
2. How should risk be managed when geopolitics impacts technology supply chains?
Technology supply chains are now political. They span cloud regions, undersea cables, software libraries, chip manufacturing and model hosting. A problem in one area can affect many others. Sanctions, export controls and national policies now shape not only the movement of goods, but also compute capacity, data location and access to advanced chips.
The effects extend beyond borders. Local hosting rules for AI models and training data may require redesigning architectures or moving workloads. As countries develop sovereign AI capability, access to compute becomes a strategic risk. The concern is no longer cost, but control: who owns the technology, where it can be used, and under which rules.
Leaders are asking questions that go beyond procurement:
- Which cloud regions and providers are essential?
- How far do dependencies extend beyond tier one?
- Do contracts support alternatives, portability, and continuity?
For ISACA professionals, this is third- and fourth-party risk, and the assurance and visibility they provide support confident decisions when conditions shift.
Takeaway: Visibility into supply chains, resilience expectations and continuity arrangements is now strategic.
3. Can AI be governed as a system rather than a tool?
AI influences decisions across the organization. Productivity tools suggest wording. Customer service platforms generate responses. Developers accept AI-generated code. These actions are small, but they shape outcomes.
According to the AI Incident Tracker, many reported incidents were caused not by model failure, but by decisions made without review. For example, AI-generated contract language referenced a regulation that did not exist and was approved anyway. The risk is unchecked certainty.
Leaders are responding with questions:
- Who approved the decision?
- What evidence shows it was reviewed?
- How do we know the outcome was reasonable?
Regulators are moving in the same direction. The EU AI Act emphasizes accountability, explainability and documentation. It does not require executives to understand models, but it does require ownership of how AI is used and proof that decisions were reviewed.
For ISACA professionals, governing AI as a system means managing how it is used across workflows, not assessing each tool separately. Controls must ensure appropriate use and oversight and produce evidence of review: human-in-the-loop checkpoints, decision logs and audit trails.
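To make "evidence of review" concrete, the sketch below shows one way a human-in-the-loop checkpoint and a decision log could fit together. It is a minimal illustration in Python, not an ISACA-prescribed control or any product's API; the names (AIDecisionRecord, human_in_the_loop_checkpoint) and fields are hypothetical, and a real implementation would sit inside workflow tooling with proper retention and access controls.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One entry in a decision log: what the AI produced and who reviewed it."""
    workflow: str            # hypothetical workflow name, e.g. "contract-drafting"
    ai_output_summary: str   # short description of the generated content
    reviewer: str            # named, accountable human reviewer
    approved: bool           # outcome of the human review
    rationale: str           # why it was approved or rejected
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def human_in_the_loop_checkpoint(record: AIDecisionRecord,
                                 audit_trail: list) -> bool:
    """Block AI output that lacks a named reviewer or a recorded rationale,
    and append every decision to the audit trail as evidence of review."""
    if not record.reviewer or not record.rationale:
        raise ValueError("AI output cannot proceed without a reviewer and rationale")
    audit_trail.append(record)
    return record.approved

# Usage: the fabricated-regulation contract clause mentioned above would be
# rejected and logged, rather than approved without evidence.
trail: list = []
decision = AIDecisionRecord(
    workflow="contract-drafting",
    ai_output_summary="Clause citing a regulation the reviewer could not verify",
    reviewer="j.doe",
    approved=False,
    rationale="Cited regulation does not appear in any official register",
)
print(human_in_the_loop_checkpoint(decision, trail))  # False; record kept in trail
```

The point of the sketch is the audit trail, not the code: every AI-assisted decision leaves a named owner, a rationale and a timestamp that assurance teams can later review.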
Takeaway: AI risk is governance risk. Clarity of ownership matters more than the algorithm.
4. Where is the buffer when digital ambitions continue, but budgets tighten?
Organizations remain committed to digital transformation, but the financial environment offers less margin for error. With slower growth, strategic investments must earn their place.
Leaders increasingly ask:
- What problem is being solved?
- What outcome will be delivered?
- How will success be evidenced?
This is value governance, a core element of COBIT. Some initiatives, such as cybersecurity, cannot be delayed, because postponing them increases risk and long-term cost. Others may need to wait. A customer-facing chatbot might be useful, but if identity controls are outdated, strengthening authentication comes first. The priority is to invest in what keeps the organization safe and running before adding new features.
ISACA professionals connect technology proposals to risk, outcomes and operational impact, helping leadership decide what to advance, what to delay and what must be protected.
Takeaway: Strategic trade-offs matter. Invest in what strengthens the core and defer what does not.
5. How must the risk profession evolve in a digital-first environment?
Automation now performs risk tasks that once required manual effort. Dashboards update automatically, alerts fire when thresholds are crossed and, in some cases, emerging risks are flagged before they materialize. These improvements reduce operational effort, but they also change expectations.
Executives increasingly expect insight rather than information. Instead of heat maps and scorecards, they want to know what matters, why it matters and what action should follow. This pattern has been noted in multiple surveys, including the 2024 PwC Global Risk Survey, where the majority of executives said they expect risk teams to provide guidance, not just reporting.
This shift changes the work of the profession. Risk and assurance teams now need to connect data to context, anticipate how risks interact and explain how choices affect outcomes. Communication, systems thinking and ethical judgment become central skills. The role is moving from describing risk to helping leaders make better decisions when information is incomplete.
Takeaway: The future of risk is decision support, helping leaders make good choices.
Prepare. Adapt. Thrive.
Preparation will make the difference across the 2026 risk landscape. Teams that have practiced their response know who decides, how to communicate and how to keep services running when something goes wrong. Clear ownership, reliable information and visibility into key dependencies give leaders the confidence to act while events are still unfolding. When those conditions are in place, disruption becomes something to manage rather than something that stops the business.