

Autonomous artificial intelligence (AI) systems, known as agentic AI, are quickly becoming one of the most discussed technologies in the cyber landscape.1 Rather than functioning like traditional chatbots that must be prompted at every turn, AI agents can plan and execute tasks with minimal human intervention; however, some level of human oversight typically remains within the overall system.
While this newfound capability sounds highly promising for enterprise productivity, it raises new security and governance questions. What happens if an attacker hijacks an AI agent? How can enterprises ensure that an agent only has access to the data it truly needs? What if the agent inadvertently shares sensitive information?
Agentic AI represents a breakthrough in technological innovation, but the inner workings of AI agents are not yet widely understood, and their essential components warrant closer examination. Organizations must understand this new technology and implement proactive security practices to harness agentic AI as the significant force multiplier it can be.
Understanding Agentic AI
Agentic AI represents the next step in the evolution of AI. Unlike traditional AI systems that simply respond to prompts, agentic AI actively initiates actions and connects tasks with intent. Picture a digital assistant that can seamlessly execute verbal commands such as, “Create an account for a specific individual in the customer relationship management (CRM) solution, give them global read access, and send out a confirmation email when done.” In short, it functions as a chatbot with access to enterprise tools, enabling it to conduct meaningful, goal-directed work. To understand how agentic AI achieves this level of functionality, it is helpful to review the key components that make up an AI agent.
As seen in figure 1, an AI agent is typically composed of three main components:
- Large language model (LLM)
- Knowledge base2
- External tools and integrations (enterprise tools)
Figure 1—Components of an AI Agent
The LLM is the agent's cognitive engine, providing natural language understanding and reasoning capabilities. It interprets the user’s instructions, plans a sequence of actions, and generates appropriate responses. The LLM's role is pivotal as it determines how the agent will accomplish tasks based on available information and tools.
The knowledge base is typically a search database that maintains organization-specific terminology mappings and contextual associations; it can also take the form of a graph database or even structured documentation.3 Consider the earlier example: “Create an account for a specific individual in the CRM solution, give them global read access, and send out a confirmation email when done.” The agent would first consult its knowledge base to understand that CRM refers to the customer relationship management solution within the organization. It would then craft a prompt for the LLM to retrieve the appropriate CRM application programming interface (API) call. Finally, it would use that information to execute the call to the CRM system and complete the task.
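To make this flow concrete, the following minimal Python sketch traces those three steps: knowledge base lookup, LLM planning, and tool execution. The knowledge base contents, endpoint URL, and function names are illustrative assumptions, and the LLM call is stubbed out rather than wired to a real model:

```python
# Hypothetical in-memory knowledge base mapping enterprise shorthand
# ("CRM") to concrete tool metadata; names and endpoints are placeholders.
KNOWLEDGE_BASE = {
    "CRM": {
        "system": "customer relationship management solution",
        "create_user_endpoint": "https://crm.example.internal/api/v1/users",
    },
}

def plan_with_llm(instruction: str, tool_info: dict) -> list:
    # Stand-in for the LLM call that turns an instruction plus tool
    # metadata into an ordered plan of API actions.
    return [
        f"POST {tool_info['create_user_endpoint']} (create account)",
        "PATCH user role -> global read access",
        "SEND confirmation email",
    ]

def handle_instruction(instruction: str) -> None:
    tool_info = KNOWLEDGE_BASE["CRM"]             # 1. resolve "CRM" via the knowledge base
    plan = plan_with_llm(instruction, tool_info)  # 2. ask the LLM to plan the steps
    for step in plan:                             # 3. execute each step (stubbed)
        print("executing:", step)

handle_instruction(
    "Create an account for a specific individual in the CRM solution, "
    "give them global read access, and send out a confirmation email."
)
```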
External tools and integrations act as the agent's enterprise toolkit, enabling it to interact with enterprise systems and perform tangible tasks. These include API connections to CRM systems, email services, document management platforms, and other enterprise applications. Without these integrations, the agent would be nothing more than an AI-enabled conversational chatbot.
Traditional applications typically have well-defined scopes; if one is compromised, the damage is usually confined to a single dataset or platform. In contrast, an AI agent has access to multiple tools, each of which expands the attack surface, and its privileges extend across various enterprise tools. For example, if a malicious actor steals the agent’s tokens or hijacks its instructions, they could navigate within the CRM, access payroll data, and even reach supply chain systems.
This is why agentic AI requires human oversight and special attention regarding security and governance. These agents act much like digital workers: They need their own identity (which could take multiple forms, such as unique service accounts for tool access, Kubernetes pod identities for containerized deployments, or workload identities in cloud environments); strict rules regarding what they can and cannot do; and a mechanism of control to ensure that they cannot escalate privileges without proper oversight. Humans have identity providers (IdPs) for this very purpose, and access to an enterprise service or resource is usually controlled by multifactor authentication (MFA). AI agents have no equivalent out-of-the-box protections, making their utilization a risky endeavor for enterprises.
Agentic AI Risk
Agentic AI introduces a new dimension to enterprise security and governance. Despite its benefits, it presents several key areas of risk, including the following:
Credential Sprawl
Because AI agents interact with many tools and platforms, they need numerous tokens and API keys. Without careful oversight, organizations can end up with an uncontrolled sprawl of credentials, some forgotten, some never properly revoked, and all with the potential to be misused.
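One practical countermeasure is a credential inventory that records an owner and expiry for every token or key an agent holds. The minimal sketch below, with illustrative agent names and time-to-live values, flags credentials that have outlived their intended lifespan so they can be revoked:

```python
from datetime import datetime, timedelta, timezone

# Illustrative credential inventory; in practice this would live in a
# secrets manager or asset database rather than an in-memory list.
inventory = [
    {"agent": "agent-1", "system": "crm",     "issued": datetime(2025, 1, 10, tzinfo=timezone.utc), "ttl_days": 30},
    {"agent": "agent-1", "system": "email",   "issued": datetime(2024, 6, 2,  tzinfo=timezone.utc), "ttl_days": 30},
    {"agent": "agent-2", "system": "payroll", "issued": datetime(2025, 2, 1,  tzinfo=timezone.utc), "ttl_days": 7},
]

def find_stale(creds, now=None):
    """Return credentials past their time to live so they can be revoked."""
    now = now or datetime.now(timezone.utc)
    return [c for c in creds if now - c["issued"] > timedelta(days=c["ttl_days"])]

for cred in find_stale(inventory):
    print(f"REVOKE: {cred['agent']} -> {cred['system']} (issued {cred['issued']:%Y-%m-%d})")
```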
Lateral Movement
Once an attacker obtains an agent’s token, they can potentially follow the same path as the AI agent, jumping from one system to another. In large organizations, this can mean a swift spread of unauthorized access.
Lack of Traceability
When a human being performs an action, their intent and scope can often be understood. In contrast, the agent, powered by the LLM, is an opaque probabilistic system that generates steps based on its pre-trained weights. Without comprehensive logging that captures the full context of the agent’s decisions, it becomes difficult to determine whether unexpected behaviors stem from malicious intent or the agent’s own quirks and limitations.
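One mitigation is structured audit logging that records who asked, what the agent planned, and what it touched at every step. The following sketch illustrates the idea; the field names are assumptions rather than an established schema:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent.audit")

def log_agent_action(agent_id, user, instruction, step, target):
    """Emit one structured record per agent step so decisions can be
    reconstructed after the fact."""
    audit.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "on_behalf_of": user,
        "instruction": instruction,
        "planned_step": step,
        "target_system": target,
    }))

log_agent_action("agent-1", "jsmith", "Create CRM account for new hire",
                 "POST /api/v1/users", "crm")
```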
Stale Contexts
AI agents can serve many different teams and tasks. However, without clear guardrails, an agent could apply privileges obtained in one context to a completely different scenario, inadvertently exposing data or triggering improper flows. To prevent this, each user must be authenticated every time the agent makes a call to an external system on that user’s behalf, ensuring that access rights are aligned with the user’s current role, permissions, and session state.
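The sketch below illustrates this per-call check; the `sessions` and `permissions` dictionaries stand in for a real identity provider and entitlement store:

```python
# Stand-ins for the enterprise IdP and entitlement store.
sessions = {"jsmith": {"active": True}}
permissions = {"jsmith": {"crm": {"read"}}}

def authorize_call(user: str, system: str, action: str) -> bool:
    """Revalidate the user's session and current permissions before every
    external call, instead of reusing privileges from an earlier context."""
    session = sessions.get(user)
    if not session or not session["active"]:
        return False  # stale or missing session: force reauthentication
    return action in permissions.get(user, {}).get(system, set())

assert authorize_call("jsmith", "crm", "read")
assert not authorize_call("jsmith", "crm", "write")      # never granted
assert not authorize_call("jsmith", "payroll", "read")   # different context
```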
Strategies to Secure Identity Governance for Non-Human Identities
Non-human identities (NHIs) have become a thorny challenge in today's enterprise security landscape. These are digital identities such as service accounts, API keys, and machine identities. They often wield significant system privileges while lacking the authentication safeguards taken for granted with human users. The problem? Traditional identity governance was built for humans, relying on authentication methods such as biometrics or physical tokens that do not apply to digital identities.
This challenge has only intensified with the rise of agentic AI. While organizations have struggled for years to manage conventional NHIs, AI agents add several new layers of complexity: They can dynamically generate credentials, interact with multiple systems simultaneously, and require permissions that evolve based on the tasks they are performing. Despite these challenges, there are several ways organizations can secure non-human identities.
Unique Identities for Each AI Agent
Each agent should have a distinct identity, such as a service account or a certificate. Additionally, each agent should use its own separate service account when interacting with an enterprise service. For example, “Agent-1” and “Agent-2” should not use the same service account to access the CRM. This reduces the blast radius if, for instance, Agent-1’s service account is compromised.
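As a minimal illustration, credentials can be keyed by the (agent, system) pair so that no two agents ever resolve to the same service account. All account names below are hypothetical:

```python
# Credentials are keyed by (agent, system): Agent-1 and Agent-2 reach the
# same CRM through distinct service accounts, limiting the blast radius
# if either credential is stolen. Account names are placeholders.
service_accounts = {
    ("agent-1", "crm"):   "svc-agent1-crm",
    ("agent-2", "crm"):   "svc-agent2-crm",   # distinct account, same system
    ("agent-1", "email"): "svc-agent1-email",
}

def account_for(agent: str, system: str) -> str:
    try:
        return service_accounts[(agent, system)]
    except KeyError:
        raise PermissionError(f"{agent} has no provisioned identity for {system}")

print(account_for("agent-1", "crm"))  # svc-agent1-crm
```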
Granular Scope Assignments
An AI agent's role should be limited to its assigned tasks. For instance, if an agent is responsible for gathering and summarizing CRM data, its permissions should align with that function. The service account associated with the agent’s CRM integration should have read-only access to relevant tables—nothing more. Write access should only be granted when necessary. The goal is to minimize the impact of a misconfiguration or compromise.
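In OAuth 2.0 terms, this translates to requesting tokens that carry only read scopes. The following sketch uses the standard client credentials flow; the token endpoint, client ID, and scope names are placeholders that depend on the vendor's authorization server:

```python
import requests

def get_readonly_crm_token() -> str:
    """Request a token limited to read-only CRM scopes via the OAuth 2.0
    client credentials flow. Endpoint and scope names are illustrative."""
    resp = requests.post(
        "https://auth.example.internal/oauth2/token",    # hypothetical endpoint
        data={
            "grant_type": "client_credentials",
            "client_id": "svc-agent1-crm",
            "client_secret": "<fetched-from-secrets-manager>",  # never hardcoded
            "scope": "crm.contacts.read crm.reports.read",      # read-only scopes
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```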
Periodic Rotation and Revocation
AI agents use a service account for each external tool they access. The service account secrets should be rotated on a periodic schedule to minimize the risk of long-term exposure. Additionally, if an enterprise admin leaves the organization, all associated credentials should be updated or revoked immediately to prevent unauthorized access.
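A minimal sketch of both duties, rotation on a schedule and revocation on departure, might look like the following; the dictionary stands in for a real secrets manager:

```python
import secrets

# Stand-in for a real secrets manager; each entry records its owner so
# credentials can be rotated the moment that owner leaves.
secrets_store = {
    "svc-agent1-crm":   {"owner": "admin-a", "secret": "old-secret-1"},
    "svc-agent1-email": {"owner": "admin-b", "secret": "old-secret-2"},
}

def rotate(account: str) -> None:
    """Replace the secret; anything holding the old value stops working."""
    secrets_store[account]["secret"] = secrets.token_urlsafe(32)

def revoke_by_owner(owner: str) -> None:
    for account, meta in secrets_store.items():
        if meta["owner"] == owner:
            rotate(account)

revoke_by_owner("admin-a")  # admin-a departs: their credentials rotate at once
```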
Zero-Trust Strategies for Agentic AI
Zero trust is often distilled into “never trust, always verify.”4 This principle is even more relevant for agentic AI, whose access patterns are not predetermined. Continuous verification, contextual access controls, and strict identity enforcement are paramount to ensure that agents operate within secure, authorized boundaries. For humans, IdPs offer powerful ways to limit access to enterprise resources. There is no such out-of-the-box solution for agentic AI; however, a combination of several strategies can replicate similar controls.
Micro-Segmentation
Organizations should segment their networks into tighter zones rather than allowing an agent unrestricted access once inside the enterprise network. Each agent should be confined to the specific resources and systems required for its role. For example, if an agent is strictly internal, it should not be allowed to access data on the internet.
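Conceptually, this amounts to a default-deny egress policy per agent, as in the following illustrative sketch (the zone contents are hypothetical):

```python
# Each agent is confined to an allow-list of destinations for its role;
# anything not listed, including the public internet, is denied by default.
ALLOWED_DESTINATIONS = {
    "agent-1": {"crm.example.internal", "mail.example.internal"},
    "agent-2": {"docs.example.internal"},
}

def egress_allowed(agent: str, host: str) -> bool:
    return host in ALLOWED_DESTINATIONS.get(agent, set())

assert egress_allowed("agent-1", "crm.example.internal")
assert not egress_allowed("agent-1", "api.external-site.com")  # internet: denied
```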
Credential Injection Using Service Meshes
Credential injection is an advanced technique that may require some additional setup, but it offers significant security benefits. With this technique, the AI agent does not store any credentials. Instead, it sends requests to a service mesh or an API gateway, which dynamically injects the appropriate credentials into the outgoing request after verifying the agent’s identity and permission levels. Externalizing both authentication and authorization decouples credential access from the agent itself.
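The following sketch shows the gateway side of this pattern. The identity verification step is stubbed, and the credential store and endpoints are assumptions for illustration; in practice, verification would rely on mechanisms such as mutual TLS or signed workload identity documents:

```python
import requests

# Only the gateway holds real credentials; the agent never sees them.
CREDENTIALS = {("agent-1", "crm.example.internal"): "Bearer <crm-token>"}

def verify_agent(agent_id: str, attestation: str) -> bool:
    """Stub for real identity verification (e.g., mTLS or a signed
    workload identity document)."""
    return attestation == "valid-attestation"

def forward_with_credentials(agent_id, attestation, method, url, **kwargs):
    if not verify_agent(agent_id, attestation):
        raise PermissionError("agent identity could not be verified")
    host = url.split("/")[2]
    token = CREDENTIALS.get((agent_id, host))
    if token is None:
        raise PermissionError(f"{agent_id} is not authorized for {host}")
    headers = {**kwargs.pop("headers", {}), "Authorization": token}
    # Inject the credential only after identity and authorization checks.
    return requests.request(method, url, headers=headers, timeout=10, **kwargs)
```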
Continuous Verification
Using short-lived tokens ensures that agents regularly reauthenticate, reducing the risk of token leakage. If an agent attempts to access a new resource, the system should be able to reevaluate its permissions in real time, confirming whether the AI agent has the right to perform the action at a given time. If the agent’s role has changed or its behavior appears anomalous, such as sending out a flood of emails or attempting to download a large volume of files, the request can be flagged or denied.
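A simplified sketch of these checks follows; the five-minute token lifetime and the email-rate threshold are arbitrary illustrative values:

```python
import time

TOKEN_TTL_SECONDS = 300   # short-lived tokens force frequent reauthentication
EMAIL_RATE_LIMIT = 50     # flag an agent that suddenly floods outbound email

def token_valid(issued_at: float) -> bool:
    return time.time() - issued_at < TOKEN_TTL_SECONDS

def check_request(action: str, issued_at: float, recent_email_count: int) -> str:
    """Re-evaluate every request in real time instead of trusting the
    token alone."""
    if not token_valid(issued_at):
        return "DENY: token expired, reauthenticate"
    if action == "send_email" and recent_email_count > EMAIL_RATE_LIMIT:
        return "FLAG: anomalous email volume, hold for review"
    return "ALLOW"

print(check_request("send_email", time.time(), recent_email_count=3))    # ALLOW
print(check_request("send_email", time.time(), recent_email_count=500))  # FLAG
```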
IP-Aware Policies
For enterprise software as a service (SaaS) platforms, agent requests should be tied to known IP subnet ranges, device fingerprints, or workload identities.5 If a token is stolen and used from an unfamiliar location, these IP allow-list policies can immediately block the request. For AI agents deployed on premises, this behavior can be replicated with next-generation firewalls (NGFWs).
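Using Python's standard ipaddress module, such an allow-list check might look like the following (the subnet ranges are placeholders for an organization's real allocations):

```python
import ipaddress

ALLOWED_SUBNETS = [
    ipaddress.ip_network("10.20.0.0/16"),    # on-premises agent hosts
    ipaddress.ip_network("172.31.8.0/24"),   # cloud workload subnet
]

def source_allowed(source_ip: str) -> bool:
    """Deny any request whose source falls outside the known subnets,
    even if it carries a valid token."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SUBNETS)

assert source_allowed("10.20.4.17")
assert not source_allowed("203.0.113.55")  # stolen token, unfamiliar source
```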
By combining zero-trust principles with strong identity governance, organizations can significantly reduce the new security concerns introduced by agentic workflows. This is a rapidly evolving space, so periodic reviews and adjustments may be necessary to maintain an optimal security posture.
Recommendations
There are several ways to streamline the adoption of agentic workflows:
Do:
- Give each AI agent its own unique identity.
- Rotate credentials frequently.
- Log each agentic action.
- Apply least privilege to the service accounts used by agents.
- Use secure secrets management.
- Microsegment whenever possible.
- Implement human review mechanisms for critical agent outputs and decisions.
- Establish clear escalation paths for when agents encounter edge cases or anomalies.
Do not:
- Reuse service accounts or repurpose dummy human accounts as service accounts.
- Hardcode API keys.
- Over-authorize AI agents.
- Ignore “noisy” security information and event management (SIEM) logs corresponding to AI agents.
- Keep stale service accounts or contexts.
Conclusion
Agentic AI is capable of radically streamlining business processes; however, it also introduces new risk that conventional security measures were not designed to handle, and the scale and unpredictability that agentic AI brings are quickly rendering those measures obsolete. Above all, organizations must be aware that agentic AI is an evolving field. What works today may need to be adapted tomorrow as these systems expand their capabilities and become more deeply integrated into enterprises.
Endnotes
1 PricewaterhouseCoopers (PwC), Agentic AI — the New Frontier in GenAI; International Data Corporation (IDC), “Around 70% of Asia/Pacific Organizations Expect Agentic AI to Disrupt Business Models Within the Next 18 Months,” 24 March 2025
2 KMS Lighthouse, “Knowledge Base in AI: What Is It and Why Do You Need One?,” 15 September 2023
3 GeeksforGeeks, “What is Graph Database – Introduction,” 1 July 2024
4 Microsoft, “What is Zero Trust?,” 27 February 2025
5 GeeksforGeeks, “Introduction to Subnetting,” 7 February 2025
Anirudh Murali
Is currently working at a leading cybersecurity company, developing cutting-edge solutions to address new and emerging challenges in AI security, identity governance, and SaaS security. With more than 15 years of experience in the industry, Anirudh has built and delivered networking and security software solutions at scale.