Artificial intelligence (AI) automation and ransomware as a service (RaaS) platforms have fundamentally altered the threat landscape by lowering the barrier to entry and enabling nation-state actors to automate up to 90% of intrusions.1 A growing number of threat actors are shifting away from resource-heavy encryption tactics in favor of data-only extortion, slow encryption, and increasingly costly ransom demands.
Competition among the organized criminal groups operating ransomware services often drives innovation as each seeks to entice affiliates to its platform. In 2025, multiple examples of RaaS innovation emerged, such as in-house legal services from the Qilin group and AI chatbots promoted by the now-defunct Global ransomware group.2 In both cases, the groups found new ways to put additional pressure on their victims, increasing the chances that they would pay the ransom, and developed methods to make their attacks more efficient. Given this ongoing competition, the evolution of malicious services is likely to continue, resulting in a complex threat landscape and the need for organizations to adapt accordingly.
Emergent Groups
In 2025, 1,100 emerging threat actors were tracked across all threat vectors, including ransomware, data selling, and hacktivism, representing a 22.9% increase compared to 2024.3 While most new threat actors are individuals selling basic leaked data on dark web forums, a considerable number are significant criminal organizations, such as large RaaS groups.
In 2025, 55% of new threat actors were involved in data breaches, a 6-percentage-point increase from 2024.4 This demonstrates the growing focus on data extortion and its potential for financial leverage, based on the operational, legal, and reputational risk it poses to organizations.
The growth in new ransomware groups has likely been boosted by white-label ransomware services such as DragonForce’s RansomBay, as they make extortion operations accessible to groups that may have the intent but lack the infrastructure needed to extort organizations.5 As more white-label services become available, new threat groups are likely to appear because they no longer need to build their own capabilities in-house.
Evolution of AI Use Across the Attack Life Cycle
Adversaries are increasingly moving beyond using AI for productivity gains and are now deploying AI-enabled malware directly in live operations. Early reporting suggests that modern agentic AI systems can operate autonomously for extended periods, effectively performing the work of entire teams of skilled operators.
One notable example involved a Chinese state-backed threat group that leveraged the agentic capabilities of Anthropic’s large language model (LLM), Claude, to orchestrate attacks, with AI agents carrying out 80–90% of each operation. Human operators intervened at only four to six key decision points per intrusion, demonstrating how complex campaigns now require limited human oversight.6
As organizations adopt autonomous agents across internal workflows, attackers are expected to shift from traditional prompt injection techniques toward abusing these agents directly. With access to code repositories, ticketing systems, and databases, compromised agents could be manipulated into actions such as deleting environments, modifying records, or generating substantial computing costs. These incidents are likely to involve subtle adversarial manipulation that exploits the agent’s broad, legitimate permissions rather than overt misuse.7 To help counter the exploitation of such tools, organizations are advised to treat autonomous agents with the same security scrutiny as any other user. This can be achieved by applying the principle of least privilege and enforcing segmentation of the tools’ access, thereby limiting the potential for unauthorized lateral movement across the network.
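As a minimal sketch of what such least-privilege gating might look like, the following Python example checks an autonomous agent’s tool calls against an explicit allow-list before they are executed. The agent names, tool identifiers, and segment labels are hypothetical and do not correspond to any specific product.

```python
# Minimal sketch (hypothetical names): gate an autonomous agent's tool calls
# behind an explicit allow-list so a manipulated agent cannot exceed the
# least-privilege scope it was granted.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Least-privilege policy for one internal agent (illustrative only)."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)      # e.g., {"ticketing.read"}
    allowed_segments: set[str] = field(default_factory=set)   # network/data segments


class PolicyViolation(Exception):
    pass


def authorize_tool_call(policy: AgentPolicy, tool: str, segment: str) -> None:
    """Deny any tool call outside the agent's documented scope."""
    if tool not in policy.allowed_tools:
        raise PolicyViolation(f"{policy.agent_id} is not permitted to use {tool}")
    if segment not in policy.allowed_segments:
        raise PolicyViolation(f"{policy.agent_id} may not reach segment {segment}")


# Example: a ticketing copilot may read and comment on tickets in the support
# segment, but a prompt-injected request to drop a production database is rejected.
copilot = AgentPolicy(
    agent_id="ticketing-copilot",
    allowed_tools={"ticketing.read", "ticketing.comment"},
    allowed_segments={"support"},
)

authorize_tool_call(copilot, "ticketing.read", "support")          # permitted
try:
    authorize_tool_call(copilot, "database.drop", "production")    # blocked
except PolicyViolation as err:
    print(f"Blocked: {err}")
```

The design intent is that even a fully compromised or manipulated agent can act only within the narrow scope it was documented to need.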
AI-Assisted Malware Development and Code Transformation
New malware families such as PROMPTFLUX, PROMPTSTEAL, and PROMPTLOCK now incorporate LLMs directly into their execution flow. These strains can dynamically generate malicious scripts, obfuscate their code in real time to evade detection, and create required malicious functions on demand rather than embedding them natively. This represents a significant progression toward more autonomous, adaptable malware capable of adjusting its behavior to suit the environment.
Similarly, reported analysis of a Python-based backdoor deployed by RansomHub (now DragonForce) affiliates shows signs of AI-assisted development. The code features well-structured classes, meaningful variable names, and robust error handling, all hallmarks of AI-generated output. The code is highly readable and straightforward to test once deobfuscated.8
AI is poised to make tailored malware significantly cheaper and faster to produce, allowing attackers to generate bespoke code with minimal effort. Organizations should anticipate a shift away from broad, indiscriminate malware campaigns toward highly focused operations aimed at specific systems or individual organizations. This transition reflects the cost and time efficiencies delivered by AI-generated code and marks the beginning of a new era of microtargeted exploitation.
AI-Enhanced Social Engineering
A campaign dubbed “Ghost Call,” initially observed in mid-2023, evolved in 2025 to target macOS users, leveraging AI-powered deception to create highly convincing social engineering scenarios. Attackers initiate contact via social media platforms while impersonating venture capitalists, enticing victims into joining fabricated investment meetings hosted on phishing pages designed to mimic legitimate video conferencing platforms. During these sessions, victims are prompted to install a supposed “update,” which deploys a malicious script as part of a multistage infection chain.9
In an even more sophisticated development of this campaign, the operators have begun replaying videos of previous victims to make interactions appear genuine. This tactic deepens the psychological manipulation involved and enables attackers to recycle data from earlier compromises, effectively using each victim’s likeness as a tool for future operations.10
This increased refinement will likely improve attack success rates by making outreach efforts appear more legitimate and trustworthy to targets.
Strategic and Operational Implications
AI is eroding traditional skill barriers by enabling individuals with minimal technical expertise to conduct attacks. This is accelerating the speed, scale, and overall impact of cyberattacks, significantly lowering the threshold for executing sophisticated operations. Less experienced or poorly resourced groups can now mount larger-scale campaigns.
AI functions as both a high-value target and a powerful force multiplier for cybercriminals. The diminishing distinction between novice and advanced threat actors is creating a volatile and unpredictable threat landscape in which the potential sale of AI as a service may amplify both the capability and intent of groups. Dark web activity involving the sale of stolen intellectual property (IP) and source code is likely to increase, driven by AI adoption within threat actor tooling, both for model training and for AI-enabled analysis.
To help reduce the threats posed by AI, organizations should:
- Increase phishing simulation exercises, focusing on how advances in social engineering are making these attacks more convincing than ever.
- Document internal AI tools, such as copilots, and subject them to the same lateral movement and access restrictions applied to employees (a brief illustrative sketch follows this list). Doing so greatly reduces threat actors’ ability to hijack internal tooling to accelerate a compromise.
- Gain visibility into the organization’s dark web presence so that mitigation steps can be implemented if malicious activity occurs. Without that visibility, security operations will remain reactive rather than proactive.
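The sketch below illustrates, using assumed and simplified data, how documented internal AI tools might be audited against the access they actually hold. The tool names and permission strings are hypothetical and serve only to show the comparison between documented and effective access.

```python
# Illustrative sketch (hypothetical data): audit internal AI tools against the
# access they are documented to need, flagging any tool whose effective
# permissions exceed its documented scope.
from dataclasses import dataclass


@dataclass(frozen=True)
class AiToolRecord:
    name: str
    documented_access: frozenset[str]   # access the tool is documented to need
    effective_access: frozenset[str]    # access observed in the environment


def find_excess_access(inventory: list[AiToolRecord]) -> dict[str, set[str]]:
    """Return the permissions each tool holds beyond its documented baseline."""
    return {
        tool.name: set(tool.effective_access - tool.documented_access)
        for tool in inventory
        if tool.effective_access - tool.documented_access
    }


# Example inventory: the HR copilot has quietly gained access to source code.
inventory = [
    AiToolRecord("hr-copilot",
                 documented_access=frozenset({"hr-wiki:read"}),
                 effective_access=frozenset({"hr-wiki:read", "source-repo:read"})),
    AiToolRecord("helpdesk-copilot",
                 documented_access=frozenset({"tickets:read", "tickets:comment"}),
                 effective_access=frozenset({"tickets:read", "tickets:comment"})),
]

for tool, excess in find_excess_access(inventory).items():
    print(f"{tool} exceeds documented access: {sorted(excess)}")
```

Reviewing such a comparison on a regular cadence helps ensure that internal AI tooling does not silently accumulate permissions an attacker could later abuse.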
Conclusion
AI automation and RaaS platforms are reshaping the threat landscape by lowering barriers to entry and enabling cybercriminals to launch highly automated attacks. Nation-state actors can now automate an ever-increasing share of intrusions, while less skilled individuals can launch sophisticated campaigns using AI-enabled tools. Threat actors are also shifting tactics, moving away from traditional encryption-based ransomware toward data-only extortion, slow encryption, and higher ransom demands. In parallel, competition among ransomware groups is accelerating innovation.
To defend against these threats, organizations should strengthen phishing awareness, adapt to more advanced social engineering, restrict the access granted to internal AI tools, and increase visibility of the organization’s presence on the dark web to detect and mitigate emerging threats early.
Endnotes
1 Quorum Cyber, 2026 Global Cyber Risk Outlook
2 Cluley, G.; “Qilin Ransomware: What You Need To Know,” Tripwire, 20 June 2024; Büyükkaya, A.; “GLOBAL GROUP: Emerging Ransomware-As-A-Service, Supporting AI Driven Negotiation and Mobile Control Panel for Their Affiliates,” EclecticIQ, 15 July 2025
3 Quorum Cyber, 2026 Global Cyber Risk Outlook
4 Quorum Cyber, 2026 Global Cyber Risk Outlook
5 Sophos, “DragonForce Targets Rivals - Behind the Scenes of the Ransomware Turf War,” 31 March 2026
6 Anthropic, “Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign,” 13 November 2025
7 Knowles, C.; “AI-Driven Cyber Threats and Defences To Reshape Security by 2026,” Security Brief, 20 November 2025
8 Nelson, A.; “RansomHub Affiliate Leverages Python-Based Backdoor,” Guidepoint Security, 15 January 2025
9 Kaspersky, “Bluenoroff Targets Executives on Windows and macOS Using AI-Driven Tools,” 25 October 2025
10 Kaspersky, “Bluenoroff Targets Executives”
Jack Alexander
Is a senior threat intelligence consultant at Quorum Cyber. He has more than a decade of intelligence experience in both the private sector and the British Royal Navy, where he held roles including electronic warfare director on both HMS Lancaster and HMS Kent, senior strategic Middle East intelligence analyst, and lead cyber threat intelligence analyst for the naval Cyber Protection Team.