Editor’s note: As we begin 2026, the ISACA Now blog is diving into the questions that will shape digital trust disciplines in the new year. In today’s installment of our weeklong series, we surface questions that should be on the radar of cybersecurity professionals in the months ahead. Find more security resources from ISACA here.
As we look toward 2026, the conversation around cybersecurity is becoming more direct and more consequential. Boards, regulators and executive teams are no longer satisfied with knowing that security investments exist. They want to understand whether those investments are defensible, effective and aligned to real business risk.
The rapid adoption of AI, growing reliance on third parties and the continued exploitation of identity have changed the nature of accountability. Cybersecurity leaders are being judged less on the volume of controls they deploy and more on their ability to explain decisions, trade-offs and outcomes with clarity.
In this environment, credibility will be defined by how well security professionals can answer a small number of difficult questions. The following five are increasingly shaping trust, influence and professional standing as we move into 2026:
1. Where is AI making decisions today, and who is accountable when it goes wrong?
AI is no longer experimental. In many organizations, it is already influencing outcomes that matter, from access decisions and fraud detection to customer interactions and workforce screening. These decisions often carry legal, financial and ethical consequences, yet accountability remains unclear.
Ownership is frequently fragmented across technology, data, legal and business teams, with limited visibility of where AI-driven decisions are actually occurring. This ambiguity is becoming increasingly difficult to defend. Boards and regulators now expect clear answers about where AI is used, how decisions are governed and who is accountable when harm occurs.
What is already happening is a shift from treating AI as a technical capability to recognizing it as a decision-making system. That shift brings expectations of governance, transparency and escalation pathways. In practice, this means AI inventories, explicit ownership aligned to business accountability and defined mechanisms for human intervention and overrides.
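To make that concrete, the sketch below shows one way an AI decision inventory entry and a human-escalation gate might be expressed. Every name, field and the confidence threshold here is an illustrative assumption, not a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIDecisionPoint:
    """One entry in an AI decision inventory: what decides, and who owns it."""
    system: str            # e.g., "credit-limit-model" (hypothetical name)
    decision: str          # the business decision the model influences
    business_owner: str    # an accountable executive, not just a tech team
    risk_tier: RiskTier
    human_override: bool   # can a person intervene and reverse the outcome?

def requires_human_review(point: AIDecisionPoint, confidence: float) -> bool:
    """Escalate when the decision is high-risk or the model is unsure.

    The 0.9 threshold is an assumption for the sketch; each organization
    would define its own escalation criteria.
    """
    return point.risk_tier is RiskTier.HIGH or confidence < 0.9

entry = AIDecisionPoint(
    system="credit-limit-model",
    decision="automatic credit limit increase",
    business_owner="Head of Retail Lending",
    risk_tier=RiskTier.HIGH,
    human_override=True,
)
print(requires_human_review(entry, confidence=0.97))  # True: high tier always escalates
```

The value of even a simple registry like this is that accountability becomes a recorded attribute rather than an assumption.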
The hard truth is that many organizations believe they have AI governance because they have a policy. Very few can demonstrate operational control over how AI decisions are actually made and challenged in practice.
Security professionals will play a critical role in enabling this clarity – not by owning every AI system, but by helping organizations establish visibility, risk assessment and assurance across increasingly complex decision environments.
2. Can we explain our cyber risk posture in business terms, not technical language?
For years, boards have asked security leaders to communicate risk in a way that supports decision-making. Many boards believe they are receiving those answers. In reality, they are often being shown activity, not exposure.
Dashboards, maturity scores and technical metrics can demonstrate effort, but they rarely help executives understand what actually matters. As economic pressure, regulatory scrutiny and incident costs continue to rise, this gap is becoming harder to sustain. Leaders need to make trade-offs, and they expect security teams to support those decisions with clarity.
Risk appetite statements are often written at a level of abstraction that makes them difficult to apply to real decisions. When everything is described as “low tolerance” or “not acceptable,” teams are left without meaningful guidance on prioritization or trade-offs.
This is driving a move toward scenario-based risk assessment and financial expression of risk. Precision is less important than defensibility. Boards want to understand which scenarios matter most, how much exposure exists and how proposed investments reduce that exposure in practical terms.
A common failure is confusing completeness with insight. Reporting everything often results in leaders understanding nothing.
Security professionals who cannot bridge this gap will struggle to influence priorities. Those who can do it effectively will increasingly shape investment decisions, risk appetite discussions and enterprise strategy. At leading organizations, risk appetite is increasingly being tested against realistic scenarios, forcing explicit conversations about what risk is genuinely tolerable and what is not. Organizations that have adopted cyber risk quantification in a deliberate way are already finding it easier to translate cyber risk into terms executives can act on.
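As a simple illustration of what quantification can look like, the sketch below runs a Monte Carlo simulation of a single loss scenario and reports expected and tail losses. The frequency, loss range and distributions are illustrative assumptions; mature approaches such as FAIR use calibrated estimates and richer distributions (typically lognormal rather than uniform).

```python
import math
import random

def poisson(lam: float) -> int:
    """Draw an annual event count using Knuth's algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_scenario(freq: float, loss_low: float, loss_high: float,
                      trials: int = 50_000) -> tuple[float, float]:
    """Monte Carlo sketch of one cyber loss scenario (illustrative only).

    freq is the expected number of loss events per year; per-event loss
    is drawn uniformly between loss_low and loss_high to keep the sketch
    simple. Returns (mean annual loss, 95th-percentile annual loss).
    """
    annual = []
    for _ in range(trials):
        events = poisson(freq)
        annual.append(sum(random.uniform(loss_low, loss_high)
                          for _ in range(events)))
    annual.sort()
    return sum(annual) / trials, annual[int(0.95 * trials)]

# Hypothetical ransomware scenario: ~0.3 events/year, $0.5M-$4M per event
mean_loss, p95_loss = simulate_scenario(0.3, 500_000, 4_000_000)
print(f"Expected annual loss: ${mean_loss:,.0f}; 95th percentile: ${p95_loss:,.0f}")
```

Output framed this way ("we expect roughly this much per year from this scenario, with a 1-in-20 year looking like that") gives executives something they can weigh directly against the cost of a proposed control.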
3. Which third parties and supply chain dependencies could materially disrupt our organization, and how confident are we in that assessment?
Third-party risk is no longer a peripheral issue. Concentration risk, cloud dependencies, SaaS platforms and offshore service models mean a small number of providers can now create disproportionate operational and reputational impact.
At scale, this risk extends well beyond contractual third parties. Software supply chains, open-source dependencies, managed platforms and automated update mechanisms now represent material exposure, often without clear ownership or visibility.
The uncomfortable reality is that many organizations do not know which supplier or dependency failure would hurt them most until it happens. Annual questionnaires and generic ratings provide coverage but little insight. This approach is increasingly misaligned with board and regulatory expectations.
In many organizations, software supply chain risk is implicitly accepted rather than explicitly assessed, simply because the dependency is widespread or operationally difficult to replace.
The question is no longer whether a vendor was assessed, but whether leaders understand the impact of that dependency failing and the organization’s ability to respond. This requires fewer but deeper assessments, grounded in business impact and realistic scenarios, supported by ongoing monitoring rather than annual reviews.
Many supply chain risk programs optimize for coverage rather than consequence, creating a false sense of assurance at precisely the wrong level.
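A consequence-first view can start small: rank dependencies by the impact of their failure rather than by whether a questionnaire was completed. The fields and scoring in this sketch are assumptions for illustration, not a standard model.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    impact_if_down: int    # 1-5: business impact if this dependency fails
    substitutability: int  # 1-5: 5 = no realistic alternative or exit path
    blast_radius: int      # 1-5: how many critical services it touches

def consequence_score(d: Dependency) -> int:
    """Rank by consequence of failure, not by assessment coverage.

    A simple product keeps the sketch readable; a real program would
    weight and calibrate these factors against actual business services.
    """
    return d.impact_if_down * d.substitutability * d.blast_radius

deps = [
    Dependency("cloud-platform-A", 5, 5, 5),
    Dependency("niche-saas-B", 2, 1, 1),
    Dependency("payments-gateway-C", 5, 4, 3),
]

# Deep assessment and ongoing monitoring go to the top of this list first.
for d in sorted(deps, key=consequence_score, reverse=True):
    print(f"{d.name}: {consequence_score(d)}")
```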
Security professionals must influence procurement, contracting, architecture and exit strategies – not just assurance processes. The ability to clearly articulate supply chain exposure is becoming a defining capability.
4. Are we just securing systems, or are we protecting what truly matters to the business?
Digital transformation has blurred traditional system boundaries. Business services now span cloud platforms, APIs, third parties and legacy environments. Yet many security programs still focus on protecting individual systems rather than the services and processes the business actually depends on.
This disconnect is becoming increasingly visible to boards and executives. They care about service continuity, customer trust and regulatory obligations, not whether a specific system meets a control benchmark. Security investment that does not align to those outcomes is increasingly questioned.
As a result, enterprise security architecture, business process mapping and recovery capability are returning to the center of the conversation. Prevention remains important, but resilience, containment and recoverability are increasingly how success is measured.
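One concrete way to surface that drift is to map each business service to the systems it depends on and test recovery capability against the service's tolerance for outage, as in the sketch below. The service, systems and recovery objectives are assumed values for illustration.

```python
from dataclasses import dataclass

@dataclass
class SystemDependency:
    name: str
    recovery_tested: bool  # has recovery actually been exercised?
    rto_hours: float       # recovery time objective

# Hypothetical mapping of one business service to what it depends on
customer_payments = {
    "service": "customer payments",
    "max_tolerable_outage_hours": 4,
    "depends_on": [
        SystemDependency("core-banking", recovery_tested=True, rto_hours=2),
        SystemDependency("fraud-scoring", recovery_tested=False, rto_hours=8),
        SystemDependency("payments-gateway", recovery_tested=True, rto_hours=1),
    ],
}

# Flag dependencies whose recovery is untested or too slow for the
# service they support; these are resilience gaps, not control gaps.
tolerance = customer_payments["max_tolerable_outage_hours"]
for dep in customer_payments["depends_on"]:
    if not dep.recovery_tested or dep.rto_hours > tolerance:
        print(f"gap: {dep.name} (tested={dep.recovery_tested}, rto={dep.rto_hours}h)")
```

A system can pass every control benchmark and still appear on that gap list, which is exactly the distinction boards are now probing.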
In many organizations, security architecture has quietly drifted away from business reality, leaving teams highly compliant but poorly prepared for disruption.
Security professionals who understand how their organization delivers value, and how to align controls to those realities, are better positioned to support strategic outcomes rather than simply reacting to threats.
5. Do we truly understand who and what can act in our environment, and under what authority?
Identity has become the most exploited and least well-understood attack surface. Most successful intrusions no longer rely on malware alone. They exploit credentials, tokens, sessions and over-privileged accounts. Despite significant investment in identity platforms, many organizations still lack a clear view of who and what can act across their environment.
Most identity programs grew organically, tool by tool and exception by exception. Few were designed for a world where autonomous agents operate at scale.
The rise of agentic AI intensifies this challenge. AI agents will authenticate, access data, initiate workflows and make changes across systems, often without continuous human involvement. This introduces a new class of non-human identities whose authority, behavior and lifecycle must be governed.
In 2026, organizations will be expected to demonstrate visibility across human and non-human identities, strong lifecycle controls, clear ownership of privileged access, and behavior-based detection of anomalous identity activity. Just as importantly, they will need explicit limits on what AI agents are permitted to do and when escalation is required.
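What explicit limits might look like in practice is a policy gate that every agent action passes through, with escalation to a human as a first-class outcome. The action names and spend threshold below are assumptions for the sketch, not a reference implementation.

```python
# Illustrative policy gate for a non-human (AI agent) identity.
ALLOWED_AGENT_ACTIONS = {"read_ticket", "draft_reply", "query_inventory"}
ESCALATION_ACTIONS = {"issue_refund", "change_access", "delete_record"}

def authorize_agent_action(agent_id: str, action: str, amount: float = 0.0) -> str:
    """Decide whether an agent may act, must escalate, or is denied.

    agent_id is carried so decisions can be attributed in audit logs.
    Every outcome is returned explicitly so it can be logged and reviewed:
    agent authority should be bounded, observable and revocable.
    """
    if action in ALLOWED_AGENT_ACTIONS and amount <= 1_000:
        return "allow"
    if action in ESCALATION_ACTIONS or amount > 1_000:
        return "escalate_to_human"
    return "deny"

print(authorize_agent_action("support-agent-7", "draft_reply"))        # allow
print(authorize_agent_action("support-agent-7", "issue_refund", 250))  # escalate_to_human
print(authorize_agent_action("support-agent-7", "drop_database"))      # deny
```

Note that "deny by default" is doing the real work here: anything the organization has not explicitly considered is blocked rather than silently permitted.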
Many organizations believe identity risk is under control because IAM programs are funded. In practice, authority often remains poorly understood and loosely constrained.
For security professionals, identity is no longer a platform discussion. It is a core risk discipline that underpins AI governance, supply chain trust and operational resilience.
Security Professionals Must Embrace Accountability
The questions cybersecurity professionals will be asked in 2026 are not primarily technical. They are about clarity, accountability and judgment.
For the profession, this places renewed emphasis on shared standards, ethical practice and disciplined professional development. Organizations such as ISACA play a critical role in shaping this future by providing globally recognized frameworks, guidance and education that help security, risk and assurance professionals navigate increasing complexity with confidence.
The profession is moving from technical excellence alone to decision accountability. Its future will be defined less by what we deploy and more by how well we understand, explain and stand behind the decisions we make.
About the author: Chirag Joshi is a multi-award-winning cybersecurity executive, board advisor and author, with extensive experience leading and advising cyber, risk and resilience programs across government, financial services and critical infrastructure organizations. He is the Founder of 7 Rules Cyber, a strategic advisory firm that works with boards and executive teams to deliver defensible, business-aligned cybersecurity outcomes. Chirag is President of ISACA Sydney, is recognized as a Fellow of the Australian Information Security Association and serves as the National Ambassador for Critical Infrastructure ISAC Australia. He is a regular speaker at international industry forums and is known for translating complex technical and regulatory challenges into clear executive decision-making.