Addressing the Rise of AI-Driven Cyberattacks

Author: Mathura Prasad, CISM, CISSP, ITIL V3, OSCP
Date Published: 8 January 2025
Read Time: 14 minutes

The advent of artificial intelligence (AI) has transformed many industries, increasing productivity, the speed of decision making, and innovation. However, the same technology that drives progress also poses greater risk when weaponized by cybercriminals. The evolution of AI has reshaped the cyberthreat landscape, making sophisticated AI-driven cyberattacks, deepfakes, and phishing techniques more common. These attack methods continue to grow more effective, posing a threat to society at large. Organizations require a deeper understanding of this emerging risk to improve their countermeasures.

Rise in AI-Driven Cyberattacks

AI-powered cyberattacks have risen in recent years as cybercriminals increasingly use machine learning (ML), automation, and predictive analytics to bypass traditional security systems and push attacks beyond conventional methods. AI-driven capabilities are more dynamic, scalable, and unpredictable. AI enables attackers to automate reconnaissance and analysis, identify vulnerabilities more effectively, and launch sophisticated attacks at unprecedented speed. For example, AI algorithms can scan large amounts of data from publicly available sources, such as social media and enterprise websites, to create personalized, highly targeted attacks. AI-driven attacks are especially powerful because they can learn and evolve. Unlike traditional malware, which follows a fixed set of instructions, AI-powered malware can adjust its behavior based on the environment, making detection and mitigation far more difficult.

Recent advancements in AI have significantly influenced the methods employed by cyberattackers, enabling the creation of sophisticated and adaptive threats. For example, AI-enhanced malware has been observed to incorporate sandbox evasion techniques, allowing it to detect whether it is being analyzed in a controlled environment. By mimicking real user behavior or identifying virtual machine characteristics, such malware can delay malicious activity or behave innocuously in a sandbox, only unleashing its full capabilities in a live setting. This method complicates detection and mitigation efforts, leaving fewer clues for cybersecurity professionals to track and counteract the threat.1

Ransomware attackers have also utilized AI to prioritize their operations. By leveraging algorithms to identify high-value files within compromised systems, ransomware can selectively encrypt sensitive data that holds critical importance for the victim. This targeted approach not only maximizes operational disruption but also increases the likelihood of ransom payments, demonstrating how AI can be employed to optimize the efficiency and impact of cyberattacks.

Such examples highlight the dual-edged nature of AI in cybersecurity, showcasing its potential to enhance both defense mechanisms and the sophistication of malicious threats.

AI in Offensive Cyberoperations
AI models trained on historical attack data can predict the most effective exploitation techniques for a given target. For example, AI can be used to create zero-click attacks on mobile devices, which require no user interaction to compromise a system, drastically increasing the attack’s success rate. This approach is especially concerning as it enables the silent deployment of malware without the typical signs of a breach.

Threat actors use AI to automate vulnerability discovery, exploitation, and data exfiltration. AI reduces the breach timeline from months to just a few hours, allowing attackers to move laterally within networks at speeds previously unseen. For example, AI can quickly identify weak points in a network and automatically deploy exploits, enabling a more efficient and coordinated attack. 

Emerging Threats: AI-Driven Autonomous Cyberweapons
The global rise of autonomous cyberweapons, driven by AI models that can make independent decisions, presents a significant threat to international cybersecurity. Research from institutions worldwide has examined the development of self-propagating AI malware that can modify its tactics based on real-time environmental feedback. These autonomous systems, requiring little to no human intervention, pose a unique danger as they can continuously evolve and refine their methods of attack, potentially affecting nations across the globe.

Strategies for Mitigation
The increase in AI-powered attack vectors signifies a global transformation in the cyberthreat landscape, where conventional security frameworks are increasingly inadequate. The incorporation of AI into cyberattack strategies has not only accelerated the speed and scope of these threats, but also introduced an unprecedented level of adaptability that challenges defenses in both developed and developing regions. To effectively counter these evolving threats, organizations worldwide must adopt AI-powered cybersecurity tools capable of real-time detection and response. The ongoing battle between cyberattackers and defenders is rapidly becoming an AI arms race, highlighting the need for global cooperation, technological innovation, and relentless vigilance in cybersecurity efforts across diverse geopolitical contexts.
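
To make the idea of real-time, AI-assisted detection more concrete, the following sketch shows one minimal approach: an unsupervised anomaly detector trained on baseline network activity that flags outlying behavior for analyst review. The feature set, synthetic values, and thresholds are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch: unsupervised anomaly detection over network flow features.
# Assumes flows are already summarized as numeric features (illustrative names);
# a real deployment would use actual telemetry and tuned contamination rates.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "baseline" traffic: bytes_out, duration_s, unique_ports, failed_logins
baseline = rng.normal(loc=[50_000, 30, 3, 0], scale=[10_000, 10, 1, 0.5], size=(5_000, 4))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)

# New observations: one ordinary flow, one resembling automated lateral movement
new_flows = np.array([
    [52_000, 28, 3, 0],        # looks like normal traffic
    [900_000, 2, 60, 25],      # large transfer, many ports, repeated failed logins
])

scores = detector.decision_function(new_flows)   # lower = more anomalous
flags = detector.predict(new_flows)              # -1 = anomaly, 1 = normal

for flow, score, flag in zip(new_flows, scores, flags):
    status = "ANOMALY - route to analyst" if flag == -1 else "normal"
    print(f"flow={flow.tolist()} score={score:.3f} -> {status}")
```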

Case Study
A study by Darktrace, a leading AI cybersecurity company, documented the battle against Emotet, a notorious malware strain that evolved into a highly sophisticated AI-driven threat.2 Emotet leveraged AI to analyze network traffic patterns, adapting its behavior to blend in with normal activity and avoid detection by traditional security solutions. Darktrace’s AI-driven detection tools were critical in identifying subtle anomalies that indicated the presence of Emotet, highlighting the importance of advanced AI in both attack and defense scenarios.

Deepfakes as a Tool for Cybercrime

Deepfakes, or AI-generated synthetic media, have rapidly evolved from a novel technological curiosity into a serious cybersecurity threat. Utilizing deep learning algorithms, these manipulated videos, audio clips, and images convincingly impersonate real individuals, often to devastating effect. Cybercriminals leverage deepfakes to perpetrate identity fraud, execute social engineering attacks, damage reputations, and inflict financial losses on organizations and individuals alike.

Deepfakes exploit AI’s ability to learn and replicate human voices and facial expressions with uncanny accuracy. What began as an entertainment trend quickly became a tool for disinformation and cybercrime. The realistic nature of deepfakes poses significant challenges for detection, especially when used in real-time attacks such as fraudulent phone calls, video conferences, or manipulated recordings.

Academic research globally has underscored the increasing danger posed by deepfakes. Professionals from diverse regions struggle to differentiate deepfake media from genuine recordings, particularly when under stress.3 This highlights a worldwide need for improved detection technologies, such as AI-driven authentication tools, which can detect subtle discrepancies in voice patterns and facial expressions that often escape human scrutiny.

A growing global issue is synthetic identity fraud, where criminals fabricate identities using deepfake technology.4 Deepfakes not only enable direct financial theft but also erode trust in digital communications, resulting in widespread societal consequences that transcend geographic boundaries.

Strategies for Mitigation
The growing threat of deepfake-driven cybercrime demands immediate practical steps to enhance detection capabilities and strengthen organizational defenses. AI-based detection tools, which analyze inconsistencies in video and audio—such as irregular eye movements, unnatural speech patterns, or pixel-level distortions—are currently the most effective means of defense. However, these tools must be continually refined to stay ahead of rapidly evolving deepfake technology. For example, enterprises can deploy AI systems that monitor communications for unusual behavior in real time, flagging suspicious media for closer examination before critical decisions are made.
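
As a minimal illustration of how flagged media can be kept from driving critical decisions, the sketch below gates approval of a request on an authenticity score supplied by whatever deepfake detector an organization deploys. The threshold, field names, and workflow are illustrative assumptions rather than a prescribed control.

```python
# Minimal sketch: gate high-risk requests on a deepfake-detector score.
# The detector itself is out of scope here; authenticity_score is assumed to
# come from whatever video/audio analysis tool the organization uses.
from dataclasses import dataclass

AUTHENTICITY_THRESHOLD = 0.85  # illustrative cutoff, tuned per detector

@dataclass
class MediaCheck:
    source: str
    authenticity_score: float        # 0.0 (likely synthetic) .. 1.0 (likely genuine)
    verified_out_of_band: bool = False

def approve_request(check: MediaCheck) -> bool:
    """Approve only if the media looks genuine or a human verified it independently."""
    if check.authenticity_score >= AUTHENTICITY_THRESHOLD:
        return True
    # Suspicious media: require confirmation through a separate, trusted channel
    return check.verified_out_of_band

suspicious = MediaCheck("ceo-voice-note-2024-06-01.wav", authenticity_score=0.42)
print(approve_request(suspicious))   # False -> hold the transaction, escalate
```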

In addition to technical solutions, organizations must invest in comprehensive training programs that teach employees how to identify deepfake threats in real-world situations. This could include simulated deepfake attacks during training exercises to help staff practice verifying the authenticity of audio or video instructions from senior management. Reinforcing verification protocols for any sensitive requests—especially those involving financial transactions or sensitive data—can prevent costly mistakes, as seen in recent cases of deepfake-enabled fraud where attackers impersonated executives to authorize fraudulent transfers.

The use of deepfakes in cybercrime marks a shift in attack strategies, where criminals weaponize the same AI tools developed to improve security and productivity. As this technology advances, cybersecurity teams need to integrate both AI-driven solutions and human expertise to build a resilient defense. Collaboration between technology providers and cybersecurity professionals to continuously develop countermeasures is critical to staying ahead of these sophisticated threats.

Case Study
As AI-driven technologies, particularly deepfake tools, continue to evolve, their use in cybercrime has grown more sophisticated, enabling attackers to exploit human trust and bypass traditional security protocols. The following case studies illustrate how deepfake technology has been weaponized to target organizations, resulting in significant financial losses and highlighting the growing threat of AI-enhanced fraud:

  • Energy sector attack (2019)—A major energy company was targeted by a deepfake attack where the chief executive officer’s (CEO’s) voice was convincingly mimicked to authorize a fraudulent transaction of more than US$200,000.5 This incident not only resulted in financial loss but also demonstrated how deepfakes can bypass traditional verification processes, exploiting the trust placed in familiar voices. This attack is emblematic of the shift in cybercriminal tactics, as fraudsters now utilize AI to manipulate human trust and decision-making processes.
  • Financial institutions targeted—A study by Sensity AI6 found that the use of deepfakes in cybercrime rose by 43% in 2023, with a significant increase in attacks targeting financial institutions and large corporations. Deepfakes were used to manipulate high-stakes negotiations, impersonate executives in video conferences, and deceive employees into transferring funds or divulging sensitive information. The research also highlighted the emergence of deepfake-as-a-service platforms, where cybercriminals can commission customized deepfakes, further lowering the barriers to entry for these attacks.

Enhanced Phishing Techniques

Phishing, the exploitation of human vulnerabilities to steal sensitive information, has long been a staple of cybercrime. However, the incorporation of AI has transformed phishing from a broad, scattergun approach into a highly targeted and sophisticated weapon. Attackers now utilize techniques such as natural language processing (NLP) and ML algorithms to craft personalized phishing emails that closely mimic legitimate communications, making them more difficult to detect.

AI-driven phishing attacks employ NLP to analyze a target’s communication style, behavior, and preferences. This analysis allows attackers to create messages that are contextually relevant, increasing the likelihood that the recipient will engage. AI can also automate the process of identifying high-value targets by scanning social media profiles, professional networks, and other publicly available information, making spear-phishing attacks more efficient and impactful.

AI can be used to create phishing emails that adapt to the linguistic patterns of the target organization, significantly increasing the chances of success. ML models trained on past email communications can generate phishing emails that closely resemble authentic messages, thereby evading traditional detection methods.
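
On the defensive side, even simple heuristics retain value against style-perfect messages, because many impersonation attempts still arrive from lookalike sender domains. The following sketch flags domains that closely resemble, but do not exactly match, a trusted list using a plain edit-distance ratio; the trusted domains and threshold are illustrative assumptions.

```python
# Minimal sketch: flag sender domains that closely resemble, but do not match,
# a list of trusted domains. The domain list and threshold are illustrative.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"examplevendor.com", "example-corp.com"}  # hypothetical list
SIMILARITY_THRESHOLD = 0.85

def lookalike_domain(sender: str) -> str | None:
    """Return the trusted domain a sender imitates, or None if nothing matches."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return None  # exact match: legitimate, not a lookalike
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= SIMILARITY_THRESHOLD:
            return trusted
    return None

print(lookalike_domain("invoices@examplevend0r.com"))   # examplevendor.com
print(lookalike_domain("billing@examplevendor.com"))    # None
```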

In 2023, phishing played a role in 36% of reported breaches globally, with AI-driven phishing campaigns achieving higher success rates compared to traditional methods.7 Across regions, the use of AI to automate phishing email crafting was evident in 22% of incidents, demonstrating a growing reliance on sophisticated, personalized attacks.

A 2021 study examined the global impact of AI-enhanced phishing on organizational security, revealing that these attacks are not only more targeted but also executed at a much faster pace.8 The study highlighted that AI systems can generate thousands of customized phishing emails in minutes, tailoring each message based on real-time recipient responses, and refining the attacks to improve success rates.

Phishing continues to be the most prevalent form of cybercrime, with Google intercepting approximately 100 million phishing emails daily. Approximately 65% of cybercriminal groups leverage spear phishing, primarily for intelligence gathering. The rising volume of phishing emails has contributed to an increase in successful attacks, with 96% of organizations experiencing at least one phishing attempt in the past year. Notably, 52% of organizations have observed a marked improvement in the sophistication of these threats.9

Deep learning algorithms play an increasing role in phishing globally, enabling attackers to analyze large datasets, identify behavioral patterns, and craft messages that align closely with the target’s environment. For instance, AI-driven phishing emails often include contextual cues, such as references to recent events or the use of familiar language, which heighten the likelihood of the recipient engaging with the malicious content.

Strategies for Mitigation
Scholarly articles have noted the alarming trend of AI being used to automate and scale phishing attacks. Research published in 2023 analyzed the economic impact of AI-enhanced phishing, estimating that these attacks could cost businesses over US$1.2 billion annually due to their increased success rate and lower detection threshold.10 The researchers called for improved AI-based defense mechanisms that can match the sophistication of AI-driven attacks.

Another study explored how AI could be both a tool for attackers and defenders,11 highlighting the need for cybersecurity frameworks that incorporate AI for anomaly detection, threat prediction, and real-time response to phishing attempts. The study underscored the necessity of training AI models on diverse phishing datasets to better recognize and block evolving tactics.

The rise in AI-driven phishing emphasizes the need for continuous employee training, advanced technological defenses, and a proactive cybersecurity strategy. Organizations must invest in AI-driven security solutions capable of analyzing communication patterns and detecting subtle anomalies that may indicate phishing. Moreover, fostering a culture of vigilance among employees, coupled with stringent verification protocols, can significantly reduce the impact of these sophisticated attacks.
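
As a minimal sketch of the communication-pattern analysis described above, the example below scores message text for phishing-like language with a simple TF-IDF and logistic regression pipeline. The tiny inline training set is purely illustrative; a real deployment would train on large labeled corpora and combine text scores with header and behavioral signals.

```python
# Minimal sketch: score message text for phishing-like language with a simple
# TF-IDF + logistic regression pipeline. The inline training data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Urgent: verify your account now or it will be suspended",
    "Invoice attached, please wire payment today to the new account",
    "Reminder: team meeting moved to 3pm in the usual room",
    "Here are the quarterly figures we discussed last week",
]
train_labels = [1, 1, 0, 0]   # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

incoming = "Please process this urgent wire transfer before the account is suspended"
phishing_probability = model.predict_proba([incoming])[0][1]
print(f"phishing probability: {phishing_probability:.2f}")
if phishing_probability > 0.5:   # illustrative threshold
    print("Flag for review and require out-of-band verification")
```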

Case Study
A notable case involved a spear-phishing attack targeting the author’s company’s finance department.12 The attackers used AI tools to replicate the writing style of a known vendor, creating an email that was indistinguishable from genuine communication. The email contained a fraudulent invoice request, designed to appear as a routine transaction. It was only after a thorough investigation and cross-referencing with the actual vendor that the deception was uncovered.

This incident underscores how AI can be leveraged to bypass conventional security measures, particularly those that rely on human judgment. The attackers had not only mimicked the vendor’s email address but also replicated the tone, language, and signature used in previous communications, making the phishing attempt highly convincing.

Conclusion

AI-driven cyberattacks, deepfakes, and enhanced phishing techniques signal a new era of cybersecurity challenges. To effectively mitigate the impact of these evolving threats, specific action steps are essential:

  1. Regularly update security protocols and incident response plans. Cyberthreats evolve rapidly, and so must enterprise defenses. Organizations should establish a regular review and update cycle for security protocols, ensuring that they align with the latest threat intelligence and technological advancements. Incident response plans should also be revised frequently, with simulations conducted to test their effectiveness against new forms of AI-driven attacks.
  2. Invest in AI-driven defense tools. Just as attackers use AI, defenders must also leverage AI to enhance detection and response capabilities. Organizations should invest in AI-powered cybersecurity solutions that can analyze vast amounts of data in real time, identify anomalies, and respond to threats before they escalate. Tools such as AI-driven behavioral analytics, deepfake detection systems, and predictive threat modeling are crucial for staying ahead of attackers.
  3. Foster a culture of cybersecurity awareness. Human error remains a critical vulnerability in most cyberattacks. To combat this, organizations must implement continuous cybersecurity awareness training programs that educate employees on the latest phishing techniques, deepfake risks, and AI-driven threats. Regularly updating training content to reflect emerging threats and conducting phishing simulations will help employees develop a more security-conscious mindset.
  4. Strengthen multifactor authentication (MFA) and verification processes. With AI able to mimic legitimate communications convincingly, organizations should implement multilayered verification processes, especially for high-risk transactions. MFA, along with mandatory verification requirements for sensitive requests (such as financial transfers) through alternative channels, can significantly reduce the likelihood of a successful attack. A minimal policy sketch illustrating this layered verification follows this list.
  5. Collaborate with cybersecurity professionals and researchers. Cybersecurity is a collective effort. Organizations should actively participate in industry forums, share threat intelligence, and collaborate with cybersecurity professionals and researchers. Staying informed of the latest AI-driven attack vectors and innovations in defense mechanisms will be key to adapting to an ever-changing threat landscape.
  6. Commit to ongoing innovation and research. The pace of technological advancement means that defensive measures must continuously evolve. Organizations should allocate resources for research and development in cybersecurity, exploring innovative approaches such as AI-enhanced threat detection, automated response systems, and blockchain for securing sensitive transactions.
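
To illustrate the layered verification called for in step 4, the sketch below models a simple policy in which high-risk requests must clear MFA and an independent second-channel confirmation before execution. The threshold and field names are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: policy gate for high-risk requests (e.g., wire transfers).
# Thresholds and required checks are illustrative; real controls would integrate
# with the organization's identity provider and payment systems.
from dataclasses import dataclass

HIGH_RISK_AMOUNT = 10_000  # illustrative threshold in USD

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    mfa_passed: bool
    confirmed_via_second_channel: bool  # e.g., call-back to a known phone number

def authorize(req: TransferRequest) -> bool:
    """High-risk requests need MFA and independent confirmation; others need MFA."""
    if not req.mfa_passed:
        return False
    if req.amount_usd >= HIGH_RISK_AMOUNT:
        return req.confirmed_via_second_channel
    return True

# An AI-mimicked "CEO" request alone cannot authorize a large transfer:
print(authorize(TransferRequest("ceo@example.com", 250_000, mfa_passed=True,
                                confirmed_via_second_channel=False)))   # False
```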

As threats become more sophisticated, organizations must take a proactive, multifaceted approach to safeguarding their systems and data. By combining advanced technology with a culture of vigilance and collaboration, organizations can better prepare for the next generation of cyberthreats. The future of cybersecurity will depend on how effectively practitioners harness innovation, strengthen defenses, and foster a resilient digital environment in an increasingly AI-driven world.

Endnotes

1 Kolbitsch, C.; “Evasive Malware Tricks: How Malware Evades Detection by Sandboxes,” ISACA® Journal, vol. 6, 2017
2 Tilsiter, Z.; “How Darktrace AI Blocked Emotet Malspam,” Darktrace, 27 April 2022
3 Wei, M.; “The Psychological Effects of AI Clones and Deepfakes,” Psychology Today, 13 February 2024
4 McCann, K.; “AI in Financial Fraud: Deepfake Attacks Soar by Over 2000%,” AI Magazine, 1 June 2024
5 Riley, D.; “Fake AI-Generated Voice of CEO Used to Defraud Energy Company,” SiliconANGLE, 3 September 2019
6 Sensity AI; “The Rise of Deepfake Cybercrime,” Sensity AI Report 2023, 2023
7 Maniar, G.; “The Evolution of Phishing: A 2023-2024 Outlook,” Phisher Safe, 21 December 2023
8 Bose, R.; Leung, A. C. M.; “Understanding the Impact of AI-Driven Phishing on Organizational Security,” International Journal of Information Security and Privacy, vol. 15, iss. 2, p. 103-121, 2021
9 Griffiths, C.; “The Latest 2024 Phishing Statistics (Updated June 2024),” AAG, 1 June 2024
10 Kumar, S.; Brown, P.; et al.; “The Economic Impact of AI-Enhanced Phishing: A Global Analysis,” Cybersecurity and Infrastructure Protection Review, vol. 19, iss. 1, p. 88-102, 2023
11 Abbott, J.; Dinh, T.; “AI in Cybersecurity: Balancing Defense and Attack,” Journal of Cybersecurity Research and Innovation, vol. 14, iss. 3, p. 217-233, 2022
12 Eze, C.; Shamir, L.; “Analysis and Prevention of AI-Based Phishing Email Attacks,” Electronics, vol. 13, 2024

MATHURA PRASAD | CISM, CISSP, ITIL V3, OSCP

Is an experienced governance, risk, and compliance (GRC) professional specializing in application security, penetration testing, and coding. His cybersecurity journey has been marked by a relentless pursuit of innovation, with a focus on leveraging artificial intelligence (AI) to elevate daily work.
