The most striking message from ISACA’s 2026 AI Pulse Poll is not that artificial intelligence is coming. It is that AI is already here, already embedded in daily work and already shaping how organizations create content, analyze data and automate routine tasks. When 90 percent of respondents believe employees are using AI in their organization, the conversation can no longer center on future adoption. The urgent question is whether organizations are equally prepared to manage the security consequences of that adoption.
From a cybersecurity perspective, the answer appears to be no. The poll suggests that while AI use has become normal across the enterprise, the supporting control environment remains uneven and, in some cases, untested. That gap should concern security leaders, risk professionals and boards alike, because unmanaged AI use does not stay a productivity issue for long. It quickly becomes a governance, resilience and trust issue.
AI use is now an enterprise reality
The poll shows that 81 percent of respondents say employees are using generative AI specifically. In many organizations, this means AI tools are already influencing how staff write, summarize, search, analyze and make decisions. In practice, this creates a new kind of exposure. Employees do not need a formal enterprise AI program to introduce AI risk. They only need access to public or embedded tools and a reason to use them for speed or convenience.
That is why shadow AI should be viewed as a cybersecurity issue rather than only a policy issue. When staff use AI tools outside approved governance channels, organizations can face unintended disclosure of sensitive information, privacy violations, intellectual property leakage and increased susceptibility to manipulation. The poll’s own risk findings reinforce this concern: respondents identified misinformation and disinformation, privacy violations, social engineering and loss of intellectual property among the most significant AI-related risks.
The real weakness is operational readiness
One of the most important findings in the poll is not about adoption at all. It is about control under pressure. Only 12 percent of respondents say their organization has a documented process for shutting down or overriding AI systems if something goes wrong and that the process is tested regularly. Just as concerning, 56 percent do not know how long it would take to halt an AI system in the event of a security incident.
For cybersecurity professionals, these are not abstract governance gaps. They point to a basic readiness problem. A policy may describe acceptable use, but a real incident requires something more operational: clear ownership, defined escalation paths, decision authority, fallback options and tested procedures. If an organization cannot quickly answer who can halt an AI-enabled process, under what conditions and within what timeframe, then its resilience is weaker than it may appear on paper.
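To make that concrete, the sketch below shows one minimal shape such a control could take: a single, documented kill switch that AI-enabled workflows consult before calling a model, with the halt decision itself recorded for audit. It is an illustration only; the flag-file location, the duty-officer role and the fail-closed behavior are assumptions made for the example, not practices drawn from the poll.

```python
"""Minimal sketch of a documented AI kill switch. All names here
(the flag file location, the duty-officer role) are hypothetical."""

import json
import logging
from datetime import datetime, timezone
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_killswitch")

# One well-known control point. Who may set it, and under what
# conditions, belongs in the incident runbook, not in tribal knowledge.
FLAG_PATH = Path("killswitch.json")  # hypothetical; would be centrally managed


def ai_calls_allowed() -> bool:
    """Return False once the documented shutdown flag is set.

    Fails closed: if the flag state is unreadable, AI calls are blocked,
    because an unknown control state is itself an incident condition.
    """
    try:
        state = json.loads(FLAG_PATH.read_text())
    except FileNotFoundError:
        return True  # no flag file means no halt has been declared
    except (OSError, json.JSONDecodeError):
        log.error("kill-switch state unreadable; failing closed")
        return False
    if state.get("halted", False):
        log.warning("AI calls halted by %s at %s: %s",
                    state.get("set_by", "unknown"),
                    state.get("set_at", "unknown"),
                    state.get("reason", "no reason recorded"))
        return False
    return True


def halt_ai(set_by: str, reason: str) -> None:
    """Record a halt decision with owner, timestamp and reason,
    so the override itself leaves an auditable trail."""
    FLAG_PATH.write_text(json.dumps({
        "halted": True,
        "set_by": set_by,
        "set_at": datetime.now(timezone.utc).isoformat(),
        "reason": reason,
    }))


if __name__ == "__main__":
    halt_ai("security-duty-officer", "suspected prompt-injection incident")
    print("AI calls allowed:", ai_calls_allowed())
```

The point is not the few dozen lines of Python; it is that ownership, conditions and audit trail become explicit and checkable rather than implied by a policy paragraph.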
This is a lesson many security teams have learned in other contexts. The issue is rarely the existence of a control statement alone. The issue is whether that control can actually be executed when time pressure, business dependency and uncertainty all collide. AI should be treated no differently.
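In the same spirit, exercising the control can itself be routine. Continuing the hypothetical sketch above (and assuming it is saved alongside this one as ai_killswitch.py), a scheduled drill can both verify that the halt path works and measure time-to-halt, the very figure 56 percent of respondents could not supply:

```python
"""Hypothetical drill for the kill-switch sketch above; assumes that
snippet is saved alongside this one as ai_killswitch.py."""

import time

from ai_killswitch import ai_calls_allowed, halt_ai


def run_halt_drill() -> float:
    """Exercise the shutdown procedure end to end and return
    the measured time-to-halt in seconds."""
    start = time.monotonic()
    halt_ai("security-duty-officer", "scheduled resilience drill")
    assert not ai_calls_allowed(), "halt flag set, yet AI calls still allowed"
    return time.monotonic() - start


if __name__ == "__main__":
    print(f"measured time to halt: {run_halt_drill():.3f}s")
```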
Policy progress is encouraging, but not enough
There are positive signs in the survey. Thirty-eight percent of respondents now report that their organization has a formal, comprehensive AI policy, up from 28 percent in 2025. That movement matters. It suggests organizations are beginning to recognize that AI use needs structured oversight rather than informal guidance.
Still, the broader picture remains incomplete. If only 38 percent of organizations have a formal policy, then most are operating with limited policy coverage, with no active policy at all, or without certainty that such a policy even exists. That is not a comfortable position for a technology that increasingly shapes business processes, information flows and decision support. The gap between adoption and control remains wide.
The same applies to ethics and leadership awareness. Only 11 percent strongly agree that organizations are giving sufficient attention to ethical standards in AI implementation, and only 38 percent are confident in their board's understanding of, and response to, AI risks. These figures suggest that many organizations still treat AI security as a technical matter when it is increasingly an enterprise governance issue.
What organizations should do next
The next step is not about slowing innovation for its own sake. It is about ensuring that innovation advances hand in hand with strong cyber resilience. Organizations should begin by clearly assigning responsibility for AI risk management and governance. Security, legal, risk, privacy and business teams all have important roles to play, but a single, accountable owner must coordinate those efforts and make key decisions.
Next, organizations should move beyond static policy documents and build practical safeguards. That means defining approved use cases, rules for handling data, expectations for logging and monitoring, clear escalation triggers and tested procedures to shut down or override high-impact AI-driven processes.
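As a rough illustration of what "practical safeguard" can mean, the hypothetical gate below enforces two of those elements in code: an approved-use-case list with a data-classification ceiling, and a log line for every allow-or-block decision. The use cases, classifications and names are invented for the example.

```python
"""Hypothetical policy gate: approved use cases, data-handling ceilings
and decision logging. All names and thresholds are invented examples."""

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_policy_gate")

# Approved use cases mapped to the most sensitive data class each may
# touch. In practice this lives in managed configuration, not source.
APPROVED_USE_CASES = {
    "summarize_public_docs": "public",
    "draft_internal_memo": "internal",
}
DATA_ORDER = ["public", "internal", "confidential", "restricted"]


def check_request(use_case: str, data_class: str) -> bool:
    """Allow a request only when the use case is approved and the data
    classification stays within its ceiling; log every decision so
    monitoring and escalation have a trail to work with."""
    if data_class not in DATA_ORDER:
        log.warning("blocked: unknown data classification %r", data_class)
        return False
    ceiling = APPROVED_USE_CASES.get(use_case)
    if ceiling is None:
        log.warning("blocked: unapproved use case %r", use_case)
        return False
    if DATA_ORDER.index(data_class) > DATA_ORDER.index(ceiling):
        log.warning("blocked: %r data exceeds ceiling %r for %r",
                    data_class, ceiling, use_case)
        return False
    log.info("allowed: %r on %r data", use_case, data_class)
    return True


if __name__ == "__main__":
    check_request("draft_internal_memo", "internal")      # allowed
    check_request("draft_internal_memo", "confidential")  # blocked
    check_request("mystery_tool", "public")               # blocked
```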
Third, AI should not be managed in isolation; it should be woven into existing cybersecurity practices. Incident response, third-party risk management, training and awareness programs, data protection and board reporting all need to account for AI-related concerns. The poll shows that 45 percent of respondents already see AI risks as an urgent priority; now is the time to turn that priority into action.
Finally, organizations should invest in educating people at every level. Employees need clear, practical guidance on how to use AI safely. Managers need to understand where accountability sits. And boards need enough knowledge to ask the right questions about whether AI systems are not only useful, but also controllable, auditable and resilient.
Closing the gap is a must
ISACA’s 2026 AI Pulse Poll makes one point unmistakably clear: AI adoption is accelerating faster than cybersecurity readiness. That does not mean organizations should step back from AI. It means they should close the gap between usage and control before that gap widens further. In the years ahead, the organizations that benefit most from AI are likely to be those that treat security, governance and response capability not as barriers to innovation, but as the conditions that make trustworthy innovation possible.