


Editor’s note: The following is a sponsored blog post from QA.
Agentic AI is transforming cybersecurity – not just by amplifying threats, but by empowering defenders with faster, smarter and more autonomous response capabilities. As adversaries evolve, so must the tools and mindsets of those on the front lines.
Cyber defenders are facing a new reality. The rapid emergence of agentic AI – systems capable of autonomous decision-making and action – is reshaping the threat landscape at speed. These synthetic agents can look like real users, slipping past detection and even being approved by security tools and compliance checks that can’t see what they really are.
But while much of the conversation has focused on the risks, there's another side to the story: the potential for defenders to harness agentic AI to outpace adversaries, reduce response times, and strengthen resilience.
This blog post explores how agentic AI can become a powerful ally for security teams, helping them move from reactive to proactive defense.
The ‘AI vs AI’ era
We’re entering an era of AI vs. AI in cybersecurity. Defensive strategies must evolve fast. If your threat model still assumes humans on both sides, you’re already behind.
Attackers could eventually deploy offensive agents to breach our networks; the only answer is to build defenses that are just as capable. Imagine LLM-based agents that do red-teaming for you, running tests continuously and automatically detecting vulnerabilities – levelling the playing field for smaller teams.
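To make that concrete, here is a minimal, deliberately simplified sketch of what a continuous red-teaming loop might look like. Everything in it is illustrative: the staging URL, the seed payloads, the detection heuristic and the generate_payloads helper (a stand-in for an LLM proposing new test cases) are assumptions, not any real product’s API.

```python
import time
import requests  # assumes the target exposes an HTTP API that can be probed safely

def generate_payloads(previous_findings):
    # Stand-in for an LLM call: a real agent would reason over previous
    # findings and propose new, targeted test cases. Here we just return
    # a fixed seed list of classic injection-style probes.
    return ["' OR 1=1 --", "{{7*7}}", "<script>alert(1)</script>"]

def probe(endpoint, payload):
    # Send one candidate payload and flag responses that look like breakage.
    resp = requests.post(endpoint, json={"input": payload}, timeout=10)
    if resp.status_code >= 500 or "traceback" in resp.text.lower():
        return {"endpoint": endpoint, "payload": payload, "status": resp.status_code}
    return None

def red_team_loop(endpoints, interval_seconds=3600):
    findings = []
    while True:  # run continuously, e.g. from a scheduled job against staging
        for endpoint in endpoints:
            for payload in generate_payloads(findings):
                hit = probe(endpoint, payload)
                if hit:
                    findings.append(hit)
                    print("Potential weakness:", hit)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    red_team_loop(["https://staging.example.internal/api/search"])
```

Even a toy loop like this shows the appeal: the tests never sleep, and a small team gets coverage it could never staff manually.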
AIDEFEND is an open-source AI defense framework that sets out best-practice countermeasures for securing and defending AI. It connects known threats with practical mitigations across the AI attack surface, from data poisoning to prompt injection. It gives defenders an interactive, threat-informed way to plan and implement AI security, making it easier to translate risk into action.
Defenders have been looking for a “single pane of glass” for AI security, and now we’re starting to see the right tools to make it a reality. Agentic AI and three new protocols – MCP (Model Context Protocol), A2A (Agent2Agent) and AG-UI (Agent-User Interaction) – are changing how AI works inside the Security Operations Center (SOC):
- MCP lets agents access tools and data (a simplified example of a tool call follows this list), although potential security weaknesses in the protocol demand additional scrutiny.
- A2A allows agents to work together and share tasks.
- AG-UI shows everything to the security analyst in real time.
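As a rough illustration of the first of those, here is the shape of the JSON-RPC 2.0 message an agent sends when it calls a tool over MCP. It is a hand-rolled sketch rather than a working client: real deployments negotiate a session through an MCP SDK and enforce authorisation, and the lookup_indicator tool and its arguments are invented for the example.

```python
import json

# Simplified shape of an MCP tool invocation (JSON-RPC 2.0).
# The tool name and arguments are hypothetical; a real client would
# first initialise a session and discover tools via tools/list.
tool_call = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "lookup_indicator",            # hypothetical SOC enrichment tool
        "arguments": {"ioc": "198.51.100.7"},  # documentation-range IP address
    },
}

print(json.dumps(tool_call, indent=2))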
This isn’t just another dashboard; it’s a live control layer for AI security telemetry where AI handles the heavy lifting and humans guide the outcome. Instead of jumping between tools and alerts, analysts can now see, steer and scale AI workflows from one place.
The future SOC isn’t just unified; it could share a language across a multitude of AI agents, tools and humans.
What to watch out for
There are pitfalls to watch out for as adoption of MCP and other useful AI protocols surges. Frustratingly, security isn’t built in by design, and the protocol’s default settings offer little protection.
If you’re running MCP, now’s the time to lock it down: audit exposure, add access controls and shut off public endpoints. We’re seeing security wrappers emerge, built to guard MCP servers and tools from risky behavior like prompt injection, rogue configuration changes and line-jumping attacks.
The idea is simple: don’t trust every MCP app, but don’t slow everything down either.
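A wrapper along those lines doesn’t have to be heavyweight. The sketch below shows one possible policy check sitting between agents and an MCP server: an explicit tool allowlist plus a crude scan for prompt-injection tells. The tool names, patterns and decision logic are assumptions for illustration, not a recommendation of specific rules.

```python
import re

# Hypothetical policy layer between agents and an MCP server.
ALLOWED_TOOLS = {"lookup_indicator", "search_logs"}           # explicit allowlist
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),  # crude prompt-injection tell
    re.compile(r"exfiltrate|upload .* to http", re.I),
]

def vet_tool_call(tool_name, arguments):
    """Return (allowed, reason) for a proposed MCP tool call."""
    if tool_name not in ALLOWED_TOOLS:
        return False, f"tool '{tool_name}' is not on the allowlist"
    blob = " ".join(str(value) for value in arguments.values())
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(blob):
            return False, f"argument matched suspicious pattern: {pattern.pattern}"
    return True, "ok"

# A rogue call is blocked; a routine enrichment call passes straight through.
print(vet_tool_call("delete_backups", {}))
print(vet_tool_call("lookup_indicator", {"ioc": "198.51.100.7"}))
```

Checks this cheap keep the fast path fast, which is the point: vet the call, not the whole workflow.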
AI agents don’t just work for you; be aware they can be turned against you. The same power that makes them useful makes them a perfect target for attackers, who can hijack them silently, across platforms, and without leaving a trace. Stopping one malicious prompt isn’t enough. You need full, AI-agent-specific security to keep them from becoming your biggest insider threat. I’ll be writing soon about the AI “red team” role that will grow alongside the rise of the AI agent defender.
Another pitfall to be mindful of: you must allow AI to operate as AI, within reason. That may sound strange, but if you force AI to follow traditional human workflows, as many enterprises will default to (e.g., rigid rules, lockstep processes, structured playbooks), you may as well not bother. It will hold the AI back and fail to deliver the return on investment.
Google’s AI-powered cyber agent, Big Sleep, marked a milestone in proactive defense. The agent uncovered a critical vulnerability before it was publicly known, and even before the usual threat actors found it. It’s a first of its kind: an AI agent helping to stop a zero-day in the wild, not by reacting, but by predicting. This type of use case will only improve defensive outcomes. I’m intrigued to watch this space for more revelations.
Combating the risks of Agentic AI
These are exciting times. In this new era of agentic security, the most effective tools aren’t just accurate; they’re explainable, autonomous where needed, and tightly integrated into existing workflows. You do need to know whether their decisions can be trusted, how they handle false positives and whether they actually reduce analyst workload or just shift it. Evaluating AI in the SOC means testing reasoning, not just results, and ensuring the platform can learn, adapt and operate safely within your environment.
When AI works on its own terms, with the ability to learn in real time, it can spot patterns faster, handle the repetitive load, improve efficacy and free security analysts to focus on the tasks that need human judgment. It’s a shift from reactive defense to proactive orchestration. The benefits include reduced alert fatigue, relief from resource constraints and a lower risk of analyst burnout.
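As a final sketch of what that orchestration might look like with a human backstop, the toy triage function below auto-closes repetitive low-severity noise, enriches the middle ground and escalates anything high-impact to an analyst. The alert fields, scoring and thresholds are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int      # 1 (low) .. 10 (critical)
    seen_before: bool  # True if an identical alert was already triaged

def triage(alert: Alert) -> str:
    if alert.seen_before and alert.severity <= 3:
        return "auto-close"            # repetitive, low-risk noise handled by the agent
    if alert.severity >= 8:
        return "escalate-to-analyst"   # humans stay the backstop for high-impact calls
    return "enrich-and-queue"          # agent gathers context before a human looks

for alert in [Alert("edr", 2, True), Alert("siem", 9, False), Alert("email-gateway", 5, False)]:
    print(alert.source, "->", triage(alert))
```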
Remember, it’s not about replacing humans – you are the backstop. It’s about letting AI become sufficiently integrated into the security team to meet the adversary head-on.