

In the modern digital world, identity governance concerns more than employees, contractors, and service accounts that connect to databases and other systems. The new frontier is filled with non-human identities (NHI)—from application programming interface (API) keys and service accounts to cloud roles and robotic process automation (RPA) bots.1 But the landscape continues to evolve rapidly, and a new player is entering the scene: agentic artificial intelligence (AI).
Agentic AI systems are capable of advanced decision making, including initiating actions and adapting as needed based on context. Examples include large language model (LLM)-based agents, autonomous code deployers, and AI security scanners that modify configurations based on threat intelligence.
Agentic AI presents a growing challenge for audit and governance functions, primarily because its decision-making processes often lack clear traceability. This absence of transparency can weaken accountability and complicate efforts to achieve regulatory compliance. Nevertheless, organizations that recognize the risk of agentic AI and take proactive measures are better positioned to harness the advantages of the technology while strengthening trust and enhancing their reputation.
This shift in the technological landscape also redefines the role of audit. It is no longer sufficient to be able to answer, "Who did what?" One must also be able to answer why an action occurred, particularly when the action is the result of decisions made by an AI system rather than direct human input.
What Makes Agentic AI Different?
Agentic AI refers to systems that are capable of making decisions and performing actions without direct human intervention.2 The key difference is that traditional automation and rule-based systems rely on static algorithms, whereas agentic AI can interpret complex objectives, plan solutions, and adapt its actions based on changing contexts. In other words, these systems have agency. This capability has evolved rapidly over the past decade. Early AI systems operated on predefined scripts with limited flexibility. The emergence of LLMs marked a turning point—these models introduced the ability to understand and generate human-like language, reason across diverse tasks, and take context-aware actions.
As agentic AI matured, it moved beyond passive response generation to taking the initiative: solving problems and executing tasks with limited supervision. Traditional non-human identities—such as database service accounts, API keys, or AWS Lambda roles—are relatively predictable and operate within tightly defined scopes. Their behavior is based on fixed permissions and known functions. Agentic AI, however, behaves more like a human employee: it receives tasks or problems and determines how to accomplish or solve them. This shift introduces new complexities for managing, auditing, and governing AI-driven systems.
Agentic AI can:3
- Chain together tools and APIs dynamically
- Generate and deploy code
- Create new identities to execute tasks
- Make access decisions in real time
- Modify infrastructure in real time
Often, agentic AI does not offer human-readable reasoning unless explicitly programmed to log it. This lack of transparency creates serious challenges for both organizations and auditors. For organizations, it becomes difficult to understand or explain why an AI system made a specific decision, which can hinder operational oversight, trust, and accountability. For auditors, the absence of clear, traceable decision paths undermines the ability to assess compliance, detect errors or bias, and ensure that regulatory obligations are met. Without interpretable logs or justifications, both parties are left navigating a black box, increasing the risk of unchecked behavior, legal exposure, and reputational damage.
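What "explicitly programmed to log it" can look like in practice is illustrated by the following minimal sketch. It is not tied to any particular agent framework—the function names, log path, and record fields are illustrative assumptions—but it shows the core idea: an agent's tool call is refused unless a rationale accompanies it, and both the action and the stated reasoning land in an append-only decision log.

```python
import json
import time
from typing import Any, Callable

# Illustrative append-only decision trail; in practice this would be a tamper-evident log store.
AUDIT_LOG = "agent_decisions.jsonl"

def audited_action(agent_id: str, action: Callable[..., Any], rationale: str, **kwargs: Any) -> Any:
    """Execute an agent action only if a rationale is supplied, and log both together."""
    if not rationale.strip():
        raise ValueError("Refusing to act: no rationale supplied for the audit trail")
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action.__name__,
        "parameters": kwargs,
        "rationale": rationale,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return action(**kwargs)

# Hypothetical tool the agent is permitted to call
def raise_api_rate_limit(service: str, new_limit: int) -> str:
    return f"{service} rate limit set to {new_limit}"

audited_action(
    agent_id="perf-optimizer-01",
    action=raise_api_rate_limit,
    rationale="Throughput target missed; raising limit within the approved change window",
    service="billing-api",
    new_limit=500,
)
```

Even a lightweight convention such as this turns an agent's reasoning into reviewable evidence rather than an afterthought.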
Why Agentic AI Breaks Traditional Audit Models
Several scenarios highlight the new challenges auditors face in this AI-augmented world. These scenarios are hypothetical, but they can easily become a reality as the technology gains prominence in organizations.
Scenario 1: The Self-Writing Script That Escalated Privileges
An agentic AI tool is tasked with optimizing system performance. To accomplish the task, the agent decides to rewrite a configuration script and, in doing so, requests higher API rate limits from a backend system. To do that, it elevates its permissions temporarily, using logic it learned from previous successful attempts. When auditors later investigate why a service account had elevated access for 30 minutes, there is no clear ticket or human approval. Instead, there is simply a line in a log: "Permission temporarily elevated to complete task." But who approved it? The answer: the AI system did. This scenario highlights a fundamental problem for auditors: the absence of traceable accountability. In traditional systems, elevated access is typically governed by clear workflows—such as tickets, change approvals, or manager authorizations—that provide a verifiable audit trail. When an AI system autonomously grants temporary access with no human involvement or documented approval process, it disrupts this model.
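One way to close this gap—sketched below using a simple, hypothetical schema rather than any specific IAM product—is to require that every temporary elevation be expressed as a structured request that names the requester, the justification, the approver (human, policy engine, or supervising agent), and an expiry. The auditor's question then has an answer in the record itself.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone
import json

@dataclass
class ElevationRequest:
    """Structured record of a temporary privilege elevation (illustrative fields)."""
    identity: str            # service account or agent requesting elevation
    requested_scope: str     # what the elevation permits
    justification: str       # why the requester says it is needed
    approved_by: str         # human approver, policy engine, or supervising agent
    approval_reference: str  # ticket, policy ID, or agent decision ID
    expires_at: str = ""

    def grant(self, minutes: int) -> str:
        """Stamp an expiry and return the JSON record that lands in the audit trail."""
        self.expires_at = (datetime.now(timezone.utc) + timedelta(minutes=minutes)).isoformat()
        return json.dumps(asdict(self), indent=2)

print(ElevationRequest(
    identity="svc-perf-agent",
    requested_scope="api:rate-limit:write",
    justification="Config script rewrite requires a higher backend rate limit",
    approved_by="policy-engine",        # or a named human approver
    approval_reference="POL-ELEV-017",  # hypothetical policy or ticket reference
).grant(minutes=30))
```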
Scenario 2: The Invisible Botnet Risk
A DevOps team uses an AI agent to auto-scale microservices based on usage patterns. In the process, the agent spawns hundreds of new containers, each with its own identity, connected to internal APIs.
As a result, the organization's identity inventory explodes overnight, with ephemeral identities that:
- Never had a formal review
- Are not tagged to an owner
- Disappear before identity and access governance (IAG) tools can act
During an audit, this results in a massive gap in the identity life cycle trail—there is no record of provisioning, ownership, access rationale, or deprovisioning for most of these identities. This gap undermines governance and compliance, raising a critical question: How can the organization ensure that the AI agent did not leak sensitive data, access unauthorized resources, or violate internal policies?
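A partial mitigation, sketched below with illustrative field names, is to make owner, purpose, and time-to-live mandatory metadata at the moment an ephemeral identity is spawned, and to write that record into an inventory that outlives the container itself.

```python
import uuid
from datetime import datetime, timedelta, timezone

identity_inventory = []  # stand-in for an identity governance inventory or CMDB

def spawn_container_identity(owner: str, purpose: str, ttl_minutes: int) -> dict:
    """Create an ephemeral identity with the metadata governance tooling needs."""
    now = datetime.now(timezone.utc)
    identity = {
        "identity_id": f"svc-{uuid.uuid4().hex[:8]}",
        "owner": owner,        # accountable team or person
        "purpose": purpose,    # access rationale
        "created_at": now.isoformat(),
        "expires_at": (now + timedelta(minutes=ttl_minutes)).isoformat(),
        "deprovisioned": False,
    }
    identity_inventory.append(identity)  # the record survives after the container is gone
    return identity

ident = spawn_container_identity(
    owner="devops-platform-team",
    purpose="Auto-scaled checkout microservice replica",
    ttl_minutes=15,
)
print(ident["identity_id"], "expires", ident["expires_at"])
```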
Scenario 3: Who Gave Access to What—and Why?
An agent managing a knowledge system automatically grants read access to a new AI assistant after analyzing its tasks—there is no formal approval. From an audit standpoint, this violates the concept of separation of duties. Yet, according to the agent, the decision was technically sound. When asked why the permission was granted, the agent replied: "Access was necessary to complete the user’s requested objective." Because the agent’s autonomous decision bypasses established and documented processes, it is often difficult to describe its actions coherently in compliance reports.
The Audit Questions That Emerge
As these example scenarios demonstrate, IT professionals are approaching murky regulatory territory. Auditors are now grappling with:
- Who owns an AI decision? If an agent takes autonomous action, who is accountable?
- How can one prove intent or risk modeling? AI decisions are often probabilistic and opaque.
- How can organizations track the life cycle and usage of dynamic, ephemeral identities when they are created and destroyed in minutes?
- What logs are sufficient? Is a decision tree enough, or is full traceability of an agent’s reasoning required?
- Do agents themselves need to be auditable? Should agents come with digital contracts outlining their scope and constraints?
Addressing these critical questions about AI accountability in identity life cycle management and auditability is essential for navigating the regulatory challenges posed by agentic AI. Doing so enables organizations to ensure compliance, maintain security, and build trust while unlocking the full potential of autonomous systems. Clear frameworks and transparent oversight will transform uncertainty into opportunity, allowing AI to be both innovative and responsibly governed.
Modernizing Governance in the Age of Agentic AI
Auditing non-human identities is complex because these identities often operate with minimal human oversight. The accounts often lack clear ownership or a documented approval workflow. This makes it difficult to accurately track their creation, usage, updates (changes in privileges, attributes, etc.), and deactivation. To stay ahead, organizations must rethink identity governance as they embrace autonomous systems leveraging AI.
To build robust governance for agentic AI systems, organizations must ensure that these systems follow a documented account registration process, just as any human user or service account would. Every action taken by an AI system should be logged in an audit trail that captures who initiated the action—whether a human, an application, or an AI agent—along with the reason for it.
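As a rough sketch of what such a registration record could contain—the field names here are assumptions, not a prescribed standard—each agent would be onboarded with an accountable owner, an approved scope, an approval reference, and a scheduled access review, exactly as a service account would be.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRegistration:
    """Onboarding record for an agentic AI identity (illustrative fields)."""
    agent_id: str
    display_name: str
    business_owner: str      # accountable human or team
    approved_scopes: tuple   # what the agent is permitted to touch
    approval_reference: str  # onboarding ticket or change record
    next_access_review: date # reviewed on the same cadence as other accounts

registry = [
    AgentRegistration(
        agent_id="agent-kb-curator",
        display_name="Knowledge base curation agent",
        business_owner="it-governance@example.com",
        approved_scopes=("kb:read", "kb:tag"),
        approval_reference="CHG-2025-0142",   # hypothetical change record
        next_access_review=date(2026, 1, 15),
    )
]

for agent in registry:
    print(f"{agent.agent_id} owned by {agent.business_owner}; review due {agent.next_access_review}")
```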
To maintain trust and meet compliance demands, governance frameworks must evolve at the same pace as technological innovation. This requires organizations to implement new workflows that accommodate the dynamic and autonomous nature of AI-driven identities and to adopt smarter tools capable of real-time monitoring and contextual analysis. Most important, auditors need to embrace a fundamental shift in mindset when auditing agentic AI systems. These agentic identities are intelligent actors capable of independent decision making and consequential action. Recognizing this means treating them with the same rigor and oversight as any critical organizational stakeholder, ensuring accountability, transparency, and control in an AI-driven environment.
Furthermore, agent logic should be treated as code, maintained under version control, and audited with the same rigor as business logic, ensuring transparency and accountability. Logging systems must also evolve to track not only what happened but also the intent behind each decision, capturing inputs, decision pathways, and even rejected alternatives to provide a full picture of the agent's reasoning. Finally, AI containment policies should be enforced to limit what agents can access or create, with built-in oversight mechanisms that trigger alerts or reviews when boundaries are crossed.
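A containment policy of this kind can be as simple as the sketch below (the scope names and review queue are illustrative assumptions): in-policy actions proceed, the agent's stated intent is captured either way, and any attempt to cross a boundary is denied and routed to human review.

```python
# Illustrative containment policy: allowed scopes per agent, plus a review queue for violations.
ALLOWED_SCOPES = {
    "agent-kb-curator": {"kb:read", "kb:tag"},
}

review_queue = []  # boundary violations routed to human oversight

def authorize(agent_id: str, requested_scope: str, intent: str) -> bool:
    """Permit only in-policy actions; record intent and escalate anything out of bounds."""
    allowed = requested_scope in ALLOWED_SCOPES.get(agent_id, set())
    event = {"agent_id": agent_id, "scope": requested_scope, "intent": intent, "allowed": allowed}
    if not allowed:
        review_queue.append(event)  # would trigger an alert or review in practice
    return allowed

authorize("agent-kb-curator", "kb:tag", intent="Label new onboarding articles")
authorize("agent-kb-curator", "iam:grant", intent="Give a new assistant read access")
print("Pending reviews:", review_queue)
```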
Conclusion
We have entered an era where cloud infrastructure may have been built, modified, and secured by systems that were not directly programmed by their operators, and whose decisions cannot be easily explained. This is a prospect that is both exciting and daunting.
Auditing non-human identities is already complex. Now, with agentic AI, auditors are not just tracing what happened, but why it happened, based on the evolving logic of software agents.
To maintain trust and meet compliance demands, governance must keep pace with innovation. This means new workflows, smarter tools, and perhaps most important, a new mindset. These identities are no longer restricted to people and systems—they are intelligent actors, and they need to be treated as such.
Endnotes
1 Bradley, T.; “The Crucial Role of Non-Human Identity and Secrets Management,” Forbes, 18 June 2024
2 Stryker, C.; “What is Agentic AI,” IBM
3 NiCE, “What is Agentic AI (Agent AI)”
Nirupam Samanta
Is a cybersecurity professional with more than 18 years of experience specializing in identity and access management (IAM), security audit, and governance, risk, and compliance (GRC). He has served in both technical and strategic leadership roles, driving the secure design of systems, ensuring regulatory compliance, and implementing robust governance frameworks. Samanta’s deep domain expertise enables organizations to strengthen their security posture while aligning with industry standards and business objectives.