Most organisations cannot say how quickly they could halt an AI system in a crisis – and many could not explain what went wrong afterwards
London – AI technology is being adopted rapidly across European organisations, but many have deployed it without the governance and safety infrastructure to match, according to new research from ISACA, the leading global professional association for digital trust. The findings, drawn from an advance release of selected questions from ISACA's 2026 AI Pulse Poll based on responses from digital trust professionals in Europe, point to a significant and widening gap between AI adoption and organisational readiness to manage the risks it brings.
The control problem
When asked how quickly their organisation could halt an AI system in the event of a security incident, almost three-fifths (59%) of respondents said they do not know. Only a fifth (21%) said they could do so within half an hour, meaning that in the vast majority of organisations a compromised or malfunctioning AI system could continue to operate unchecked for longer than that, or for an unknown length of time.
The findings raise questions about operational preparedness at a time when AI systems are increasingly embedded in core business processes. The absence of clear response procedures has direct implications for regulatory exposure, reputational risk and the continuity of the processes and services these systems support.
The understanding problem
Beyond the ability to stop an AI system, the research also points to significant gaps in organisations' capacity to understand and account for what went wrong when one fails. Fewer than half (42%) of respondents express confidence in their organisation's ability to investigate and explain a serious AI incident to leadership or regulators, and only 11% are completely confident.
This is particularly significant as regulation begins to come into force. The EU AI Act, now moving into enforcement, places explicit requirements around explainability and accountability. These obligations demand not only technical controls but also governance structures, audit trails and, most importantly, professionals with the skills to interpret and communicate the behaviour of AI systems. Our research suggests those capabilities are not yet in place at scale.
The root cause: governance that hasn't kept up
These findings point to a deeper structural issue. A third of organisations (33%) do not require their employees to disclose when AI has been used in work products, leaving significant gaps in visibility over where and how AI is being used across the business.
A further 20% of respondents do not know who would ultimately be accountable if an AI system caused harm, with only 38% identifying the Board or an Executive. This sits at odds with the direction of travel in regulation, which largely focuses on placing accountability at senior leadership level.
On the surface, the oversight picture offers some reassurance: two-fifths (40%) of respondents say humans approve most AI-generated actions before execution, and a further 26% review decisions after the fact. However, without the broader governance infrastructure to support it, human oversight alone may not be sufficient to identify or address problems before they escalate.
The data suggests that many organisations continue to treat AI risk as a technology issue rather than an enterprise-wide governance challenge. This is not sustainable, especially at a time when AI is increasingly shaping decisions, outputs and customer interactions across every part of the business.
Chris Dimitriadis, Chief Global Strategy Officer at ISACA, said: “What this research reflects is that our thirst to innovate is not matched by our desire to govern change, exposing us to critical risks. The tools to govern AI responsibly already exist. Risk management, prevention controls, detection mechanisms, incident response and recovery strategies are the foundations of good cybersecurity practice, and they need to be applied to AI with the same rigour and urgency.
“The gap between deployment and governance is not closing; it is growing. Organisations need to act quickly. That starts with establishing who is accountable, building incident response capability, and creating visibility of AI use through audit to foster a culture of meaningful oversight.
“But truly closing the gap can’t be done by process changes alone. Rather, it will require professionals with the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle, and translate that into decisions that stand up to board and regulatory scrutiny. The organisations that get this right are those that focus on customer and wider stakeholder trust, and they are the ones that will lead through sustainable innovation.”
Notes to Editors
Figures are based on fieldwork conducted by ISACA between 6th and 22nd February among 681 digital trust professionals in Europe, including IT audit, governance, cybersecurity, privacy and emerging technology roles. The full 2026 AI Pulse Poll will be published in May 2026.
About ISACA
ISACA® (www.isaca.org) is a global community advancing individuals and organizations in their pursuit of digital trust. For more than 50 years, ISACA has equipped individuals and enterprises with the knowledge, credentials, education, training and community to progress their careers, transform their organizations, and build a more trusted and ethical digital world. ISACA is a global professional association and learning organization that leverages the expertise of its 180,000 members who work in digital trust fields such as information security, governance, assurance, risk, privacy and quality. It has a presence in 188 countries, including 225 chapters worldwide. Through the ISACA Foundation, ISACA supports IT education and career pathways for under-resourced and underrepresented populations.
Twitter: www.twitter.com/ISACANews
LinkedIn: www.linkedin.com/company/isaca
Facebook: www.facebook.com/ISACAGlobal
Instagram: www.instagram.com/isacanews
Contact:
firstlight group
Alice Hyne, +44 7758 929141, isacateam@firstlightgroup.io
ISACA
Esther Almendros, +34 692 669 772, ealmendros@isaca.org