In today’s enterprise world, artificial intelligence (AI) no longer merely answers questions or drafts emails; it acts. From copilots booking travel to intelligent agents updating systems and coordinating with other bots, professionals are stepping into a world where software can reason, plan, and operate with increasing autonomy.
This shift brings immense promise and significant risk. The identity and access management (IAM) infrastructures that organizations rely upon today were built for human beings and fixed service accounts. They were not designed to manage autonomous AI systems that can reason about goals, make independent decisions, and dynamically adapt their actions. Yet that is precisely the kind of management agentic AI demands.
These programs are autonomous, can make decisions, and can even spawn other agents to help complete tasks. They do not operate within fixed roles or sessions. They may exist for only seconds or minutes to carry out a specific task and then disappear once it is complete. They can act on behalf of humans or on behalf of other agents, creating deep, nested delegation chains. This makes them fundamentally different from traditional applications or service accounts.
Organizations must act now to address the complexity of this technology, recognizing that agentic AI presents not only technical and computational challenges but also profound identity and policy dilemmas. As also explored in a recent discussion on compliance-first IAM, many of the principles that underpin regulatory IAM, such as least privilege, segregation of duties, and auditable access reviews, provide a foundational framework that can be extended to manage autonomous AI agents effectively.1
Where Current IAM Falls Short
Existing IAM frameworks, including widely used protocols such as OAuth 2.0, OpenID Connect (OIDC), and Security Assertion Markup Language (SAML), were designed for a more deterministic digital era. They presume predictable application behavior and a single authenticated principal: a human or a static machine identity. Agentic AI violates those assumptions in several ways:
- Coarse-grained and static permissions—Legacy IAM relies on preexisting scopes or roles that are too coarse-grained and static to handle the dynamic operational requirements of AI agents. Agents may require fine-grained, task-specific permissions that change based on context, mission parameters, or real-time data evaluation. Issuing coarse, long-lived tokens is an open invitation to catastrophic abuse.
- Single-entity model versus multi-entity delegations—Current protocols struggle to represent and secure intricate sequences of delegations, in which an agent might spawn subagents or represent multiple principals concurrently. This compromises accountability by making traceability to the initial delegator ambiguous.
- Limited context awareness—Static scopes or roles make limited use of runtime context, agent intent, or risk level. Access is provided at the beginning of a session and continues to exist regardless of altered situations.
- Scalability issues with token/session management—Scaling transient agents numbering in the hundreds or thousands, each talking to several services, can strain traditional IAM infrastructure. Issuance, validation, and, most importantly, revocation of many short-lived tokens can become a significant operational burden.
- Dynamic trust models and interagent authentication—Agents often need to authenticate and authorize each other, sometimes across organizational borders, without the presence of a universal, preexisting trust fabric. OAuth and SAML are based on hierarchical trust model assumptions, which are ill-suited to peer-to-peer trust establishment between independent agents.
- Non-human identity (NHI) proliferation—Each autonomous agent may hold NHIs for numerous application programming interfaces (APIs), databases, and services, multiplying the secrets that must be securely stored, rotated, and managed. This "secret sprawl" dramatically increases the attack surface.
- Global logout/revocation complexity—Traditional IAM systems are built around predictable, human-centric sessions. Agentic AI disrupts this model: autonomous agents can spin up ephemeral sessions, create subagents, and act on behalf of multiple principals across diverse services. Because each system often maintains its own session or token state, revoking access in one place does not automatically cut off access elsewhere. Without centralized, real-time revocation, compromised agents or subagents may continue interacting with resources, leaving a persistent and hard-to-manage security risk.
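The token-lifecycle problems described above can be illustrated with a minimal sketch: short-lived, task-scoped tokens for ephemeral agents, with expiry and a central revocation check applied on every use rather than only at session start. All class and method names here are hypothetical illustrations, not part of any standard or product.

```python
import secrets
import time

class TokenService:
    """Illustrative issuer of short-lived, task-scoped agent tokens."""

    def __init__(self):
        self._tokens = {}    # token -> (agent_id, scope, expires_at)
        self._revoked = set()

    def issue(self, agent_id: str, scope: str, ttl_seconds: int = 60) -> str:
        # A short TTL limits the blast radius of a leaked token.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (agent_id, scope, time.time() + ttl_seconds)
        return token

    def validate(self, token: str, required_scope: str) -> bool:
        entry = self._tokens.get(token)
        if entry is None or token in self._revoked:
            return False
        agent_id, scope, expires_at = entry
        # Expiry, scope, and revocation are checked on *every* use,
        # not just when the session is established.
        return time.time() < expires_at and scope == required_scope

    def revoke_agent(self, agent_id: str) -> None:
        # Central revocation: cut off every token the agent holds at once.
        for token, (owner, _, _) in self._tokens.items():
            if owner == agent_id:
                self._revoked.add(token)

svc = TokenService()
t = svc.issue("travel-agent-7", scope="bookings:write", ttl_seconds=30)
assert svc.validate(t, "bookings:write")
assert not svc.validate(t, "payments:write")   # wrong scope is rejected
svc.revoke_agent("travel-agent-7")
assert not svc.validate(t, "bookings:write")   # revoked centrally
```

The key design point is that validation is evaluated at the moment of use, so revocation takes effect immediately rather than lingering until a long-lived session expires.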
The underlying problem is a basic mismatch: organizations are attempting to safeguard dynamic, independent agents with security techniques optimized for human-operated, single-purpose programs.
Promising IAM Frameworks for Agentic AI
To address the authorization crisis of the AI agent era, a deliberate, intentionally designed architecture is required. Two compelling avenues are being researched to address these challenges:
1. Zero Trust Identity Framework With Decentralized Identifiers and Verifiable Credentials
This approach presumes the need for a new agentic AI IAM system based on high-fidelity, verifiable agent identities. This paradigm leverages decentralized technologies to redefine agent identity and provide fine-grained, dynamic access control. It consists of several components:
Decentralized identifiers (DIDs) and verifiable credentials (VCs)—DIDs allow globally unique, persistent, cryptographically verifiable identifiers in control of an agent or its controller to accommodate self-sovereign identity necessary for decentralized and cross-organizational multiagent systems (MAS). VCs are digitally signed attestations about an agent that allow for granular and dynamic expression of attributes, capabilities, or rights. These technologies are particularly suited to model NHIs.
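The issue-and-verify flow of a verifiable credential can be sketched as follows. This is a deliberately simplified illustration: a real VC uses an asymmetric digital signature and the W3C data model, whereas this sketch substitutes a keyed HMAC as a stand-in for the issuer's signature, and all identifiers and claim names are invented for the example.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-demo-key"  # stand-in for the issuer's private key

def issue_credential(subject_did: str, claims: dict) -> dict:
    """Illustrative verifiable credential: signed attestations about an agent."""
    payload = {"subject": subject_did, "claims": claims}
    body = json.dumps(payload, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {**payload, "proof": proof}

def verify_credential(vc: dict) -> bool:
    # Recompute the proof over the claimed payload; any tampering breaks it.
    body = json.dumps({"subject": vc["subject"], "claims": vc["claims"]},
                      sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(vc["proof"], expected)

vc = issue_credential("did:example:agent-42",
                      {"capability": "invoice-approval", "limit_usd": 5000})
assert verify_credential(vc)
vc["claims"]["limit_usd"] = 5_000_000   # tampering invalidates the credential
assert not verify_credential(vc)
```

The point of the sketch is the property that matters for agentic IAM: the relying party can verify an agent's attributes cryptographically, without calling back to the issuer at decision time.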
Agent naming service (ANS)—The architecture includes an ANS for capability-aware and secure agent discovery. An ANS resolver resolves against an agent registry, which holds registered agent data, including ANS Names, DIDs, public key infrastructure (PKI) certificates, and protocol extensions declaring their capabilities and associated VCs.
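Capability-aware discovery through an ANS resolver might look like the following sketch. The registry schema, names, and capability strings are hypothetical; a real ANS would also verify PKI certificates and associated VCs before returning a result.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    ans_name: str
    did: str
    capabilities: frozenset  # declared capabilities backed by VCs in practice

class AnsResolver:
    """Illustrative capability-aware lookup against an agent registry."""

    def __init__(self):
        self._registry = {}

    def register(self, record: AgentRecord) -> None:
        self._registry[record.ans_name] = record

    def resolve(self, ans_name: str, required_capability: str):
        # Resolution succeeds only if the agent exists *and* declares
        # the capability the caller needs.
        record = self._registry.get(ans_name)
        if record and required_capability in record.capabilities:
            return record.did
        return None

resolver = AnsResolver()
resolver.register(AgentRecord("acme/translator/v2", "did:example:xyz",
                              frozenset({"translate", "summarize"})))
assert resolver.resolve("acme/translator/v2", "translate") == "did:example:xyz"
assert resolver.resolve("acme/translator/v2", "payments") is None
```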
Zero-knowledge proofs (ZKPs)—ZKPs enable privacy-preserving attribute disclosure and policy compliance verifiability. This means an agent can prove it meets specified requirements without disclosing confidential underlying data.
Unified global session management and policy enforcement layer—This layer addresses the issue of uniform security posture management across heterogeneous MAS with agents running over different communication protocols. It is a security and session management backplane that ensures policy decisions or revocations propagate instantly and uniformly to every point of interaction.
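The backplane idea can be sketched as a central bus that fans a single revocation out to every enforcement point at once, regardless of which protocol each service speaks. The class names and wiring below are invented for illustration only.

```python
class EnforcementPoint:
    """An illustrative per-service session store (e.g., a gateway or proxy)."""

    def __init__(self, name: str):
        self.name = name
        self.active = set()

    def open_session(self, agent_id: str) -> None:
        self.active.add(agent_id)

    def drop_sessions(self, agent_id: str) -> None:
        self.active.discard(agent_id)

class SessionBackplane:
    """One revocation call propagates uniformly to every attached point."""

    def __init__(self):
        self._points = []

    def attach(self, point: EnforcementPoint) -> None:
        self._points.append(point)

    def revoke(self, agent_id: str) -> None:
        for point in self._points:
            point.drop_sessions(agent_id)

bus = SessionBackplane()
api_gw, db_proxy = EnforcementPoint("api-gw"), EnforcementPoint("db-proxy")
bus.attach(api_gw)
bus.attach(db_proxy)
api_gw.open_session("agent-9")
db_proxy.open_session("agent-9")
bus.revoke("agent-9")   # one call, uniform effect at every interaction point
assert "agent-9" not in api_gw.active
assert "agent-9" not in db_proxy.active
```

This contrasts directly with the per-system session state described earlier, where revoking access in one service leaves sessions alive everywhere else.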
This multilayered architectural pattern enables high-fidelity dynamic access controls in accordance with zero trust principles by continuously confirming agent trust.2
2. Agent Relationship-Based Identity and Authorization
Agent relationship-based identity and authorization (ARIA) offers an integrated model to safeguard enterprise self-sovereign AI agents by managing delegation relationships as explicit, trackable entities. In this model, every delegation (from a human or service to an AI agent, or from an agent to a subagent) is recorded as a distinct, cryptographically verifiable relationship in a graph. These relationships can be dynamically created, monitored in real time for compliance or anomalous behavior, and revoked immediately when no longer needed or if a compromise occurs. By making delegations first-class, observable objects, ARIA enables fine-grained accountability, traceable chains of authority, and precise enforcement of policies across complex multiagent workflows. This model integrates and extends existing open standards, such as:
OAuth 2.0 Rich Authorization Requests (RAR)—Enables agents to express precisely what they need in business language, with extremely detailed permission requests.
OAuth 2.0 token exchange (on-behalf-of profile)—Cryptographically binds an actor (the agent) to a delegator (service or human granting permissions), preserving the chain of responsibility.
OpenID AuthZEN—Enables evaluation of fine-grained, context-aware policies without abandoning installed base OAuth infrastructure, applying constraints (e.g., geo-fences, budget thresholds) and requirements (e.g., audit trails, notifications).
Model context protocol (MCP)—Enables structured communication in AI-tool chains, allowing agents to learn about and comply with organizational policies during workflow authoring. MCP allows information owners to securely expose information to AI agents in a controlled, structured environment.
Graph-native relationships—ARIA's primary innovation is that it is graph-native. This means that delegation paths are declarative, directly traceable, and surgically revocable. This is necessary to terminate an AI agent's authority without interfering with other activity.
Dual enforcement model—Separates synchronous constraints (prohibiting malicious actions) from asynchronous obligations (compliance checks, audit logging), balancing security with performance requirements.
Agent-to-agent (A2A) communication—This novel open standard promotes secure interagent communication, allowing many agents (from different vendors or platforms) to discover each other, exchange data, and divide up work safely. A2A enables agents to transmit cryptographically signed messages with verifiable presentations establishing authorization for specific communications.
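The graph-native delegation model at the heart of ARIA can be sketched as follows: each delegation is a first-class, revocable edge, the chain of authority is traceable back to its root, and revoking one edge is surgical, leaving sibling delegations untouched. The data model below is a simplified illustration, not the ARIA specification.

```python
class DelegationGraph:
    """Illustrative graph of delegations: delegatee -> (delegator, scope, active)."""

    def __init__(self):
        self._parent = {}

    def delegate(self, delegator: str, delegatee: str, scope: str) -> None:
        # Each delegation is recorded as an explicit, trackable relationship.
        self._parent[delegatee] = [delegator, scope, True]

    def revoke(self, delegatee: str) -> None:
        # Surgical revocation: disable this edge only; siblings are untouched.
        if delegatee in self._parent:
            self._parent[delegatee][2] = False

    def chain(self, agent: str):
        """Trace authority back to the root delegator; None if any link is revoked."""
        path = [agent]
        while agent in self._parent:
            delegator, _, active = self._parent[agent]
            if not active:
                return None   # a revoked link voids the whole chain below it
            path.append(delegator)
            agent = delegator
        return path

g = DelegationGraph()
g.delegate("alice", "assistant", "expenses:approve<=500")
g.delegate("assistant", "sub-agent-1", "expenses:read")
g.delegate("assistant", "sub-agent-2", "expenses:read")
assert g.chain("sub-agent-1") == ["sub-agent-1", "assistant", "alice"]
g.revoke("sub-agent-1")   # terminate one subagent's authority only
assert g.chain("sub-agent-1") is None
assert g.chain("sub-agent-2") == ["sub-agent-2", "assistant", "alice"]
```

Because every edge names its delegator and scope, accountability questions ("on whose authority did this agent act, and for what?") reduce to a graph traversal rather than log forensics.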
A Call for More Work
The road to a comprehensive and internationally accessible agentic AI IAM framework is a daunting one. The rapid pace of AI development demands accelerated IAM security guidance, especially for heavily regulated sectors where compliance with frameworks such as the US Sarbanes-Oxley Act (SOX), the EU General Data Protection Regulation (GDPR), and the US Health Insurance Portability and Accountability Act (HIPAA) introduces additional complexity.3 Continued research, further standards development, and rigorous interoperability testing are required to prevent fragmentation into incompatible identity silos. Professionals must also address ethical issues, such as bias detection and mitigation in credentials, and provide transparency and explainability for IAM decisions. Deploying and governing a potentially global, federated, or decentralized IAM infrastructure is a mammoth undertaking that will require coordinated effort from many stakeholders, potentially involving industry self-regulation, standards development, and government regulation.
The stakes are high. Without a comprehensive plan for managing these agents—one that tracks who they are, what they can access, and when their permissions expire—organizations risk disaster through complexity and compromise.4 Identity remains the foundation of enterprise security, and its scope must expand quickly to shield the autonomous revolution.
Endnotes
1 Gupta, V.; “A Strategic Model for Compliance-First IAM,” ISACA Journal©, vol. 5, 1 September 2025
2 Huang, K.; Vineeth, S. A.; et al.; “A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control,” arXiv, 2025
3 Huang, K.; “Agentic AI Identity Management Approach,” Cloud Security Alliance, 11 March 2025
4 Hoffman, K.; “A New Identity: Agentic AI Boom Risks Busting IAM Norms,” SC Media, 13 June 2025
Vatsal Gupta
Is a cybersecurity leader with 13 years of experience in identity and access management (IAM). He currently works at Apple and has previously held roles at Meta and PricewaterhouseCoopers (PwC), advising Fortune 100 companies on securing complex digital ecosystems. Gupta specializes in building scalable, artificial intelligence (AI)-driven identity solutions. He is an active contributor to IDPro and a senior member of the Institute of Electrical and Electronics Engineers (IEEE), and he also serves on technical committees for leading cybersecurity conferences. His research focuses on AI, large language models (LLMs), and policy-based access control (PBAC) to modernize IAM and enhance threat detection.