Editor’s note: As we begin 2026, the ISACA Now blog is diving into the questions that will shape digital trust disciplines in the new year. In today’s installment of our weeklong series, we surface questions that should be on the radar of audit professionals in the months ahead. Find more audit resources from ISACA here.
The room goes quiet before anyone shares a screen. A director turns to you. “Are we actually comfortable with our risk picture for 2026?” They do not care how neat your files are. They care whether you can answer without flinching.
Attacks now run at machine speed. AI writes phishing emails that your own teams struggle to spot. Boards move from “How many audits did you complete?” to “What do you see that we do not?”
In 2026, those questions cluster into five that will largely shape the audit and assurance landscape:
1. How confident are you in our cyber and operational resilience under stress?
Not on paper. Under stress.
I once sat in a crisis room with three problems at once: a cloud outage, a ransomware incident at a supplier and a fake “CEO payment” call. Everyone knew the continuity plan existed. Almost nobody had rehearsed what it felt like when phones, vendors and staff pulled in different directions.
When the board asks about resilience now, they mean that feeling. Customers locked out. Regulators asking sharp questions. Staff trying to improvise.
Your answer cannot just be “maturity level three.” It has to sound like lived experience. You know which services are critical, how long they can be down, and which scenarios were tested with IT, OT, supply chain and finance together. You can say what broke and what stayed broken.
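To make that concrete, here is a minimal sketch of the kind of evidence behind such an answer: a hypothetical register of critical services that compares tolerated downtime against what the last joint exercise actually produced. Every name and number below is illustrative, not a template.

```python
# Illustrative only: a hypothetical register of critical services,
# the downtime each can tolerate, and what the last joint exercise
# (IT, OT, supply chain and finance together) actually showed.
services = [
    # (service, max tolerable downtime in hours, downtime observed in exercise)
    ("customer payments", 2, 5),
    ("order fulfillment", 8, 6),
    ("supplier portal", 24, 30),
]

for name, tolerance_h, observed_h in services:
    verdict = "BREACHED" if observed_h > tolerance_h else "within tolerance"
    print(f"{name}: tolerate {tolerance_h}h, saw {observed_h}h -> {verdict}")
```

The point is not the code. It is that the answer rests on named services, agreed tolerances and a tested result, not on a maturity score.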
2. Can you give us independent assurance over our AI and automation risks?
Half the room sees AI as free staff – faster decisions and leaner support. The other half remembers the last model that treated some customers as second-class citizens.
You do not need to be the best coder. You do need a map. Where AI sits in the business, which decisions it shapes, who owns the outcomes and how drift is spotted when models meet messy reality.
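One way to show drift is actually being spotted, sketched here under loose assumptions rather than as a prescribed method, is a population stability index (PSI) comparison between the scores a model produced at build time and what it produces against live data. The bin count and the 0.25 threshold below are common rules of thumb, not standards.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a higher PSI suggests more drift."""
    # Bin edges come from the expected (build-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the old range
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
build_scores = rng.normal(0.50, 0.10, 5000)  # what the model saw at build time
live_scores = rng.normal(0.58, 0.12, 5000)   # messy reality, slightly shifted
print(f"PSI = {population_stability_index(build_scores, live_scores):.3f}")
# A common rule of thumb: PSI above 0.25 is worth a conversation with the owner.
```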
Attackers now use agents to probe, phish and move through identities at a pace no human red team matches. Your own teams plug large models into risky workflows because “everyone else is experimenting.”
Independent assurance means more than a paragraph in an IT review. It means you have walked a few end-to-end AI journeys, checked data and approvals, tested how outputs are monitored, and asked, “If this misbehaves tomorrow, whose risk is that?”
3. Are we spending audit and risk effort on the right things?
This sounds like a planning query. It is really about courage.
Budgets tighten while tools multiply. Yet ugly stories still start the same way. A forgotten admin account, a vendor with weak access, a server that missed several patch cycles.
Boards see the pattern. When they ask if you are spending effort on the right things, they want to know if your plan mirrors how the organization could be hit where it hurts, not which audits fit into last year’s template.
In 2026, that means a living assurance map. You know who already covers cyber, AI, privacy, conduct and resilience. You can show the gaps that only internal audit sees. Identity sprawl. Third-party concentration. Fragile controls around payments and critical services.
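A living assurance map does not need special tooling to start. Here is a toy sketch, with placeholder risk areas and providers, of the view it should make possible:

```python
# Illustrative only: risk areas mapped to whoever currently provides
# assurance over them. The areas and providers are placeholders.
coverage = {
    "identity sprawl": [],
    "third-party concentration": ["procurement"],
    "payments and critical services": ["internal audit", "external audit"],
    "AI model risk": ["second-line risk"],
    "operational resilience": ["internal audit"],
}

for area, providers in coverage.items():
    if not providers:
        print(f"GAP: nobody covers {area}")
    elif providers == ["internal audit"]:
        print(f"ONLY US: {area} is visible to internal audit alone")
```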
4. How do you know your opinions and our disclosures rest on solid data?
Sooner or later, a director will ask, “How sure are you?” If your first line is, “We sampled 20 items,” you will feel it land.
Fast attacks make small samples look like guesswork.
You do not need a research lab. You do need simple fluency in where the data came from, who maintains it, what it does not show, and which blind spots you accepted.
This question bites hardest in public statements: cyber posture, AI use, resilience and control effectiveness. If you sign off on those, you should be able to describe the evidence trail and its limits.
A good answer might sound like this: “We analyzed a year of access logs for our most critical systems and tested outliers. The pattern supports a moderate view, but we lack deep visibility into two providers.”
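For illustration, “tested outliers” can start as simply as flagging accounts whose access counts sit far from the population. A minimal sketch with made-up numbers and an arbitrary cut-off:

```python
import statistics

# Illustrative only: daily access counts per account on a critical system.
access_counts = {
    "svc_backup": 12, "jdoe": 9, "asmith": 11,
    "old_admin": 240,  # the kind of outlier worth pulling and explaining
    "mlee": 8, "kpatel": 10,
}

counts = list(access_counts.values())
mean = statistics.mean(counts)
stdev = statistics.stdev(counts)

for account, n in access_counts.items():
    z = (n - mean) / stdev
    if abs(z) > 2:  # arbitrary first-pass threshold, not a standard
        print(f"outlier: {account} made {n} accesses (z = {z:.1f})")
```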
5. What are you doing to protect our trust, ethics, independence and talent in an AI-heavy audit function?
Now the light turns on you. A younger auditor told me, “If my work becomes feeding prompts into tools that write reports, I will go do something else.” You cannot claim to protect organizational trust if your own house feels hollow.
As you bring AI into audit work, boards will worry about two things. Can they rely on the outputs? And can you keep the people who know how to ask awkward questions?
You will need guardrails: which data can touch external tools, where human review is non-negotiable, and how AI assistance is recorded in workpapers.
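Recording AI assistance can start with a structure as small as this hypothetical workpaper entry, which names the tool, the purpose, the data exposed and the human reviewer. None of the fields or values below are a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAssistanceRecord:
    """Hypothetical workpaper entry noting where AI helped."""
    workpaper_id: str
    tool: str                  # which model or product was used
    purpose: str               # drafting, summarizing, test generation...
    data_classification: str   # what data was exposed to the tool
    human_reviewer: str        # review is non-negotiable, so always named
    review_date: date

record = AIAssistanceRecord(
    workpaper_id="WP-2026-014",
    tool="internal LLM sandbox",
    purpose="first draft of a findings summary",
    data_classification="internal only, no customer identifiers",
    human_reviewer="a.reviewer",
    review_date=date(2026, 1, 15),
)
print(record)
```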
You will also need to defend thinking time. If your team spends every hour chasing issues and editing generated text, nobody has the energy to join the dots across cyber, AI, resilience, regulation and culture.
A serious board will expect a view on both sides. How you assess talent, skills and well-being for cyber and digital risk, and how you protect ethics and independence inside your function.
Pick the Scariest Question
You do not have to master all five questions by January. But you cannot avoid them and hope 2026 will be kind.
For a practical start, pick the one question that scares you most. Make that your project. Build your knowledge. Reshape your plan. Start an honest conversation.
When the next crisis hits, and the room turns toward you, nobody will ask how many audits you delivered. They will listen to whether you sound genuinely ready.