Enterprise AI has never lacked for ambition or for caution. For decades, enterprises have piloted, prototyped, and debated through cycles of hype and recalibration for good reason. Then generative AI came along in 2022 and accelerated that cycle dramatically. In 2026, the shift is no longer theoretical. AI is moving into production at scale, and the risks now emerging are different in kind, and not just degree, from what enterprises dealt with even a year ago.
Four of those areas deserve immediate attention not because they're emerging but because they're already here. Standards organizations are issuing guidance, research firms are putting numbers on the gaps, and the plaintiffs’ bar is paying attention faster than enterprises expect.
Area 1 - AI Agent Security
The AI systems getting the most attention in 2026 aren’t chatbots but autonomous agents: systems that can plan, make decisions, execute multi-step tasks, and interact with external tools and services with varying (and sometimes minimal) degrees of human involvement at each step. What makes them genuinely useful is exactly what makes them a new class of security risk.
In January 2026, NIST issued a formal Request for Information to industry, academia, and the security community seeking insights specifically focused on AI agent security, noting that agent systems are capable of taking autonomous actions that impact real-world systems and may be susceptible to hijacking, backdoor attacks, and other exploits. Then in February 2026, OWASP released its “Top 10 for Agentic Applications,” developed by more than 100 security researchers and peer-reviewed by NIST and European Commission representatives, to make the scope of the challenge concrete and actionable.
There are multiple security concerns associated with agents. A key vulnerability associated with deployments is goal hijacking, where agents are redirected toward unintended objectives. Insufficient identity management is also a concern because agents need credentials, however IAM frameworks weren’t built with non‑human actors in mind. Tool misuse and memory poisoning round out the picture and neither is theoretical. Eighty-eight percent of enterprises reported confirmed or suspected AI agent security incidents in the past year.
What security researchers make clear, however, is that the challenge runs deeper than operational gaps. Vulnerabilities in agentic architecture are not fixable in any conventional sense. The attack surface is a mathematical artifact: statistically trained systems flatten higher-dimensional representations into lower-dimensional approximations, a compression that permanently encodes uncertainty that constitutes an attack vector. Among the most pervasive agentic attack vectors is prompt injection which can never be fully patched; not because the industry hasn’t tried hard enough, but because the near-infinite adversarial space makes it impossible to find or block all potential attacks.
Tool misuse compounds this in ways the architecture makes structural: agentic tool use vulnerabilities appear not in one place in the OWASP Agentic Top Ten but across at least three separate categories: 1) ASI02 (Tool Misuse), 2) ASI03 (Identity and Privilege Abuse), and 3) ASI05 (Unexpected Code Execution). This means that the failure modes of legitimate access going wrong are distinct from broken authorization; and too frequently, they are harder to detect and contain. Moreover, the component that enables agents to plan and act across sessions, agentic memory, has been demonstrated in empirical research to be susceptible to poisoning attacks, achieving success rates exceeding 80% against real-world agentic systems with minimal impact on baseline task performance.
This isn't an argument for inaction. It's a warning against assuming that diligence alone will close the gap. Enterprises that have made meaningful progress started by taking the architecture's inherent exposure seriously and not minimizing it. That means: including unsanctioned ones (which is harder than it sounds now that AI capabilities are embedded in so many platforms); using the OWASP “Top 10 for Agentic Applications” as a practical threat modeling framework; establishing identity boundaries for AI agents that are distinct from human credentials; and building incident response protocols that address agent-specific failure modes that traditional security playbooks don’t cover.
Area 2 - Organizational Liability and Regulatory Scrutiny
As AI systems take on more consequential decisions, the question of who’s accountable for those decisions, and how enterprises demonstrate that accountability, is drawing significant legal and regulatory attention. Digital trust professionals who wait for this to fully materialize will already be behind.
Gartner projects that by the end of 2026, more than 1,000 legal claims for harm caused by AI agents will be filed against enterprises due to insufficient guardrails and inadequate oversight. The SEC’s 2026 examination priorities have moved AI governance and cybersecurity to the top of its annual list, displacing cryptocurrency as the primary concern. A Palo Alto Networks analysis published in Harvard Business Review anticipates that the first lawsuits holding executives personally liable for rogue AI agent actions, as expected to materialize in 2026, will fundamentally redefine enterprise security’s role.
The Allianz Risk Barometer puts a number on what boardrooms are already sensing in its annual survey of global business risk. AI-related risk moved from tenth to second in a single year. Boards that weren't paying attention are doing so now.
Enterprises that have built strong AI risk management capabilities tend to share a few practices: they document AI risk management controls in forms that board-level audiences can actually engage with; their governance structures specifically address AI-related risks; many have established dedicated AI governance functions or cross-functional committees; and they’ve thought through coverage for AI-related incidents. The organizations that can demonstrate proactive risk management, not just a response after something goes wrong, face significantly less exposure. For digital trust professionals, there’s a clear and immediate role in helping enterprises get to that position.
Area 3 - Data Integrity and AI Supply Chain Security
AI systems are only as trustworthy as the data and components they depend on. As the use of external data sources, open-source models, third-party tooling, and cloud-based AI services continues to grow, what Palo Alto Networks calls a “crisis of trust” is emerging: data integrity is being subtly compromised as it moves through AI pipelines often without detection. The consequences, such as corrupted model outputs, unauthorized decisions and liability that's hard to trace, compound before anyone notices something is wrong.
The NIST RFI explicitly addresses data poisoning, which is the manipulation of training data to shape a model’s behavior in ways that are difficult to detect and trace. This isn’t a future risk. In December 2025, BleepingComputer reported the discovery of 126 malicious Node Package Manager (NPM) server packages. These aren’t isolated edge cases. They are actual compromises embedded in widely used AI tooling, including MCP integration packages that connect AI agents to enterprise systems.
A structural challenge common to many enterprises complicates the response: AI development and security teams often operate with limited integration. Training data provenance, dependency management for AI tooling, and supply chain hygiene for AI components tend to fall into the gap between functions. Enterprises that have navigated this well have typically established cross-functional ownership early, rather than waiting for an incident to clarify who’s responsible.
The starting point is mapping AI system dependencies, such as data sources, third-party tools, model repositories and API integrations, and establishing clear ownership and accountability for each. Provenance tracking for training and fine-tuning data is a foundational step that successful AI adopters have taken. Many of the supply chain security disciplines that already govern software development apply equally to AI tooling, and extending those practices to AI components is an immediate and concrete starting point that reduces exposure even where it cannot eliminate it. Building regular collaboration between AI development and security teams is what keeps incremental gains from evaporating. Incidents will happen. Who responds, and how quickly, depends on whether that collaboration exists before the incident.
Area 4 - The Governance Implementation Gap
Translating principles into operational practice is where most AI governance programs quietly stall. Many enterprises with advancing AI programs have articulated governance principles, like fairness, transparency, accountability, and human oversight; but they struggle to put them into the operational policies, workflows, and controls that govern day-to-day AI deployment. That implementation gap is where risk accumulates.
The scope of that gap is measurable and there are numbers directly connected to it. Only 14.4% of enterprises obtain full security and IT approval before deploying AI agents. Gartner has found that 57% of employees use personal AI accounts for work tasks, creating shadow AI exposure that most enterprises currently can’t measure. Nearly half of survey respondents in Deloitte’s State of AI in the Enterprise report identified turning Responsible AI principles into operational processes as a significant challenge. Additionally, 45.6% of teams still rely on shared API keys for agent authentication, which creates serious accountability gaps.
Palo Alto Networks projects that enterprises in 2026 are contending with an 82:1 ratio of autonomous AI agents to human employees. Principles that exist only in strategy documents can’t scale to govern systems operating at that ratio.
Enterprises making meaningful progress share a recognizable approach: they’ve conducted AI usage assessments to surface shadow AI deployments; translated principles into specific policies and technical controls; established structured approval workflows for AI agent deployment; and built cross-functional AI governance working groups that connect IT, legal, risk, compliance, and business leadership. PwC’s 2026 AI Business Predictions note that enterprises are increasingly expected to demonstrate that governance keeps pace with capability as a present-day operational reality and not just a future aspiration.
The Opportunity in Front of Us
These four areas: 1) AI agent security, 2) organizational liability, 3) data integrity, and 4) the governance implementation gap, are not risks on a far horizon. They’re active and accelerating, with standards organizations, regulatory agencies, major research firms, and the plaintiffs’ bar all paying close attention at the same time.
That convergence is also an opening. Digital trust professionals are trained to bridge technical complexity and organizational accountability, translating risk into governance structures and building the controls that make principled commitments operational. The frameworks from NIST, OWASP, Gartner, Deloitte, and others are being written right now. The enterprises that engage with these frameworks early, before enforcement matures, will be demonstrably better positioned to navigate regulatory scrutiny, respond effectively when incidents occur, and build the kind of stakeholder trust that distinguishes accountable enterprises from reactive ones.
For professionals looking to deepen expertise in AI risk and governance, ISACA’s suite of AI-focused resources, including certifications and professional development programs, gives practitioners a distinct pathway into exactly this kind of work. Those who engage now will find this work genuinely consequential. The profession's moment in AI governance isn't coming. It's already here.