The proliferation of artificial intelligence (AI) will undoubtedly have implications for cybersecurity. Many envision a future of AI vs. AI in which humans are essentially taken out of the cybersecurity equation, with attackers using AI to launch attacks and organizations using AI to defend against them. In this scenario, humans would be responsible only for managing and overseeing the AI systems, not for carrying out the cybersecurity measures themselves.
A glimpse into this future will play out during the Defense Advanced Research Projects Agency (DARPA) AI Cyber Challenge (AIxCC) semifinal event at DEF CON 2024, where autonomous machines will discover vulnerabilities and then apply the necessary fixes to secure the network. Those capabilities have not been fully realized, but one can bet that models are currently being trained to make that envisioned future a reality.1
The AI vs. AI period will likely be Phase 3 in the cybersecurity-related upheaval expected due to AI influences. The phases, as they appear to be playing out now, are:
- Phase 1—AI-powered social engineering through techniques such as deepfakes
- Phase 2—Increased impact from polymorphic and metamorphic malware code developed by AI
- Phase 3—Full-blown AI vs. AI offensive and defensive capabilities
Currently, we are fully immersed in Phase 1, and Phase 2 has already begun, though it remains in its relative infancy (which is somewhat concerning given the impact it has already had). Phase 2, the development of polymorphic and metamorphic malware code by AI, is beyond the scope of this article, which focuses on Phase 1: the use of AI in social engineering cyberattacks.
The US National Institute of Standards and Technology (NIST) defines social engineering as “an attempt to trick someone into revealing information (e.g., a password) that can be used to attack systems or networks.”2 These attacks are successful through the manipulation of human emotions and psychology, such as by fostering a willingness to trust and a desire to help others, or through evoking fear or a sense of obligation.
In recent years, deepfakes have become a potent form of social engineering when used for exploitation and to disseminate misinformation. A deepfake is media content—such as a video, photo, or audio recording—that seems legitimate but has been manipulated in some way with AI.3
The first wave of AI-influenced cyberattacks began years ago. One of the first major events utilizing AI-influenced deepfakes occurred in 2021 when members of the European Parliament were fooled by individuals using deepfake filters to imitate Russian opposition figures during video calls.4
Like water or electrical current, hackers take the path of least resistance, and, right now, that is social engineering. Because deepfake technology has been in use for years, it has matured. Bad actors have become quite proficient in the creation and use of deepfakes. Deepfakes have been expanded to perform malicious acts ranging from extorting money from families led to believe their loved ones were in danger to impersonating the president of the United States to urge citizens not to vote.5 The influence of deepfakes is especially harmful in the enterprise space. In 2019, the first recorded use of deepfakes to defraud an enterprise occurred when a deepfake of a chief executive’s voice resulted in a US$243,000 illegitimate transfer of funds to malicious actors.6 More recently, an entire virtual meeting was faked, with multiple AI deepfake participants talking to each other (and to the target), resulting in an illegitimate transfer of approximately US$25.6 million to the bad actors.7
There are sophisticated threat actor organizations dedicated to using deepfakes for illicit monetary gain, and many now operate under a model of providing deepfakes as a service to anyone willing to pay the fee. Europol has reported that some organizations are employing generative adversarial networks (GANs), which pair a generative model with a discriminative model.8 The generative model creates deepfake content, and the discriminative model evaluates whether that content is likely legitimate or synthetic. The discriminative model’s feedback is then used to train the generative model until the AI can no longer tell whether the content is synthetic or authentic. This enables very fast feedback loops for malicious actors, and if AI cannot tell whether the content is synthetic or authentic, what chance does the average user have?
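To make that feedback loop concrete, the following is a minimal sketch of generator/discriminator training in PyTorch, using toy numeric vectors rather than real media. The network sizes, data, and training schedule are illustrative assumptions, not a representation of any threat actor's actual tooling.

```python
# Minimal GAN feedback loop on toy 1-D "content" vectors (illustrative only).
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generative model: turns random noise into synthetic "content."
generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
# Discriminative model: scores content as authentic (1) or synthetic (0).
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch() -> torch.Tensor:
    # Stand-in for authentic content; a real pipeline would load features of genuine media.
    return torch.randn(BATCH, DATA_DIM) + 2.0

for step in range(500):
    real = real_batch()
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # 1. Train the discriminator to separate authentic from synthetic content.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator; the loop repeats until
    #    the discriminator can no longer tell synthetic from authentic.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```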
Methods to Defend Against Deepfakes
Although the use of AI for social engineering can present significant challenges to enterprises and government agencies, there are steps chief information officers (CIOs) and chief information security officers (CISOs) can take to help protect their organizations.
Identity and Access Management
Rudimentary identity and access management (IAM) principles and practices are a reasonable start for many organizations looking to combat AI-based social engineering attacks. For example, some organizations might benefit from adopting simple measures such as secret passphrases, words of the day, or rotating watermarks. Although these techniques are unsophisticated, they can be effective if an organization is committed to their use. However, they are difficult to scale, so they are really only effective for internal communications.
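As one illustration of how such a low-tech control could work at small scale, the sketch below derives a rotating "word of the day" from a pre-shared secret, so nothing needs to be distributed each morning. The wordlist, secret handling, and derivation scheme are assumptions made for illustration, not a prescribed standard.

```python
# Hedged sketch: derive a daily verification word from a pre-shared secret.
import hashlib
import hmac
from datetime import datetime, timezone

WORDLIST = ["granite", "harbor", "falcon", "lantern", "juniper", "cobalt", "meadow", "orchid"]
SHARED_SECRET = b"rotate-and-store-this-securely"  # assumption: distributed out of band

def word_of_the_day() -> str:
    """Both parties compute the same word locally and compare it verbally on a call."""
    today = datetime.now(timezone.utc).date().isoformat()
    mac = hmac.new(SHARED_SECRET, today.encode(), hashlib.sha256).digest()
    return WORDLIST[int.from_bytes(mac[:4], "big") % len(WORDLIST)]

print(word_of_the_day())
```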
A slightly more sophisticated and potentially much more scalable approach is the use of multifactor authentication (MFA) technologies. A phishing-resistant form of MFA is important, as traditional MFA is also susceptible to social engineering attacks. Malicious actors use techniques such as MFA fatigue, which is designed to flood users with notifications until the user clicks “accept.” Another common method for bypassing traditional MFA involves social engineering tactics that frequently manifest as service desk attacks. In these attacks, the malicious actor pretends to be a member of the organization’s service desk and pressures a user into providing the MFA code from their device, perhaps under the pretext that something is wrong with the account and must be rectified immediately. Even more sophisticated forms of MFA can be susceptible to advanced techniques such as SIM swapping, wherein a malicious actor transfers a user’s phone number to another device, or exploitation of SS7 protocol vulnerabilities in communications infrastructure to obtain MFA codes sent via text message or voice. The most secure type of MFA widely available today is based on FIDO/WebAuthn authentication.9 Organizations should look to adopt products based on the WebAuthn protocol as part of their MFA strategy to guard against the vulnerabilities found in traditional MFA.
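To make the phishing resistance of FIDO/WebAuthn concrete, the simplified sketch below shows a relying party's check of the browser-supplied clientDataJSON, which binds each assertion to a single-use challenge and to the legitimate origin. The helper names and origin value are assumptions for illustration; a real deployment should use a maintained WebAuthn library (e.g., python-fido2) and also verify the authenticator data and signature against the user's registered public key.

```python
# Simplified view of why WebAuthn resists phishing: origin and challenge binding.
import base64
import json
import secrets

EXPECTED_ORIGIN = "https://login.example.com"  # assumption: the relying party's real origin

def new_challenge() -> str:
    """Server side: issue a single-use random challenge for this login attempt."""
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def verify_client_data(client_data_json: bytes, issued_challenge: str) -> bool:
    """Check the type, challenge, and origin the browser embedded in clientDataJSON."""
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == issued_challenge
        and data.get("origin") == EXPECTED_ORIGIN
    )

# A look-alike phishing domain produces an origin mismatch, so the assertion is
# rejected even if the user was fooled into authenticating there.
challenge = new_challenge()
spoofed = json.dumps({"type": "webauthn.get", "challenge": challenge,
                      "origin": "https://login.examp1e.com"}).encode()
print(verify_client_data(spoofed, challenge))  # False
```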
Deepfake Detection
Currently, most measures to combat deepfakes and other synthetic content rely on detection. Detection efforts typically start with evidence of content altering or manipulation, followed by an alert that the content may require further analysis. This alert is often accompanied by a numerical expression of the likelihood that the content has been altered. For example, an AI-based deepfake detection system evaluates a piece of content in real time and produces a liveness score representing the likelihood that the content is legitimate. If the score is above a predetermined threshold, the content is given a “green light” to continue; if it falls below that threshold, the content is either flagged as illegitimate or referred for further inspection.
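A minimal sketch of that threshold-based triage, under the assumption that a detection model supplies the liveness score, might look like the following; the threshold values are illustrative.

```python
# Threshold-based triage of a liveness score (illustrative thresholds).
from enum import Enum

class Verdict(Enum):
    GREEN_LIGHT = "allow"
    NEEDS_REVIEW = "refer for further inspection"
    FLAGGED = "flag as likely illegitimate"

ALLOW_THRESHOLD = 0.90   # at or above this score, content proceeds
REVIEW_THRESHOLD = 0.60  # between the two thresholds, a human or secondary model reviews

def triage(liveness_score: float) -> Verdict:
    """Map a liveness score (likelihood the content is legitimate) to an action."""
    if liveness_score >= ALLOW_THRESHOLD:
        return Verdict.GREEN_LIGHT
    if liveness_score >= REVIEW_THRESHOLD:
        return Verdict.NEEDS_REVIEW
    return Verdict.FLAGGED

print(triage(0.97).value)  # allow
print(triage(0.40).value)  # flag as likely illegitimate
```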
Common detection methods include:
- Metadata analysis for signs of tampering (see the sketch after this list)
- Pattern analysis to determine AI-generated material
- Audio forensics, such as voice identification, and electronic and spectral analysis to determine if editing has occurred
- Assessing and verifying the authenticity and credibility of source content
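As a concrete illustration of the first method above, the sketch below performs very basic metadata analysis with the Pillow library. The heuristics and the editor names it looks for are assumptions made for illustration, and an absence of red flags is not proof of authenticity; this is not a production detector.

```python
# Basic EXIF metadata checks for signs of editing or generation (illustrative only).
from PIL import Image, ExifTags

# Editing tools that, if named in the EXIF "Software" tag, suggest the image
# passed through an editor (a weak signal on its own).
SUSPECT_SOFTWARE = ("photoshop", "gimp", "faceapp")

def analyze_metadata(path: str) -> list[str]:
    """Return human-readable findings about an image's metadata."""
    findings = []
    exif = Image.open(path).getexif()
    if not exif:
        return ["No EXIF metadata present (often stripped by editors or generators)."]
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    software = str(tags.get("Software", "")).lower()
    if any(name in software for name in SUSPECT_SOFTWARE):
        findings.append(f"Software tag indicates editing: {tags['Software']}")
    if "Make" not in tags and "Model" not in tags:
        findings.append("No camera make/model recorded (common for generated or re-encoded images).")
    return findings or ["No obvious metadata red flags."]

if __name__ == "__main__":
    for finding in analyze_metadata("sample.jpg"):
        print(finding)
```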
One major problem with relying on detection is that as soon as new detection capabilities are developed and made available, malicious actors integrate them into their feedback loops (such as GANs) and use AI to figure out ways around them. This limits the window of efficacy that detection tools have against expert actors.
Content Authentication
To combat the broader global problem of misinformation caused by synthetic content, new standards and supporting technologies need to be created, and they will need to preemptively mitigate the threats posed by AI-generated synthetic content to stay ahead of malicious actors. Organizations will need to employ these types of measures as a defense against malicious actors manipulating their content and then presenting it as legitimate. Without these proactive defense measures, synthetic content believed to be legitimate could cause significant reputational or even financial damage.
Progress is being made in this area. For instance, the Content Authenticity Initiative (CAI) is a group of media and tech companies, nongovernmental organizations, academics, and others working to promote the adoption of standards for content authenticity and provenance.10 CAI has created open-source tools that allow users and organizations to integrate secure provenance signals into their content. At the time of media creation, asset hashing provides verifiable, tamper-evident signatures attesting that the media has not been manipulated.
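The following is a conceptual sketch of asset hashing paired with a tamper-evident signature, in the spirit of the provenance signals CAI promotes. It is not the CAI/C2PA toolchain itself; the manifest format and key handling are illustrative assumptions, and it uses the widely available Python cryptography package.

```python
# Conceptual provenance sketch: hash the asset at creation time and sign the hash.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

def sign_asset(asset_bytes: bytes, private_key: Ed25519PrivateKey) -> dict:
    """Create a manifest containing the asset hash and sign it."""
    manifest = {"sha256": hashlib.sha256(asset_bytes).hexdigest()}
    signature = private_key.sign(json.dumps(manifest, sort_keys=True).encode())
    return {"manifest": manifest, "signature": signature.hex()}

def verify_asset(asset_bytes: bytes, record: dict, public_key: Ed25519PublicKey) -> bool:
    """Verify the manifest signature and that the asset still matches its hash."""
    try:
        public_key.verify(bytes.fromhex(record["signature"]),
                          json.dumps(record["manifest"], sort_keys=True).encode())
    except InvalidSignature:
        return False
    return hashlib.sha256(asset_bytes).hexdigest() == record["manifest"]["sha256"]

# Any post-creation manipulation changes the hash, so verification fails.
key = Ed25519PrivateKey.generate()
original = b"...media bytes captured at creation time..."
record = sign_asset(original, key)
print(verify_asset(original, record, key.public_key()))                # True
print(verify_asset(original + b"tampered", record, key.public_key()))  # False
```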
There are other organizations with similar missions, and the importance of verifying authenticity will become even more critical as the creation and spread of deepfakes continue to escalate. Techniques such as watermarks, digital signatures, and blockchain technologies are all being utilized to proactively identify AI-manipulated content. Content authentication mechanisms are perhaps the strongest defense an organization has against synthetic or manipulated content being distributed on its behalf.
Planning and Workforce Development
In addition to the methods outlined, organizations must remain vigilant against AI-enabled social engineering attacks. For starters, response plans need to be developed and rehearsed, just as they would be for other cyberincidents.
In conjunction with technologies to detect manipulated content and authenticate original content, proper cyber and privacy hygiene is essential and must be expanded to consider AI-based social engineering threats. As with any cyber or privacy incident, if an organization experiences an AI-based social engineering attack, it needs to be able to respond effectively. Employee training must be expanded to convey the potential effects of AI-based content manipulation and what employees can do to help protect their organization. Incident information needs to be shared within the proper communities, including federal partners, so that it can be consolidated, cultivated, and distributed to protect other organizations.11
Conclusion
The growth of the threat posed by the intersection of AI and cybersecurity is accelerating. Deepfakes have become substantially more effective and harder to combat in a short period of time. As AI technology continues to advance, so do the capabilities of malicious actors seeking to exploit it for their gain.
Organizations must adapt to this changing landscape and proactively employ measures such as a continued and amplified focus on identity management, detection of manipulated content, use of content authentication tools and techniques, and preparation for AI-based attacks. They should address AI-manipulated content the way they would other incidents, including through updates of their employee awareness training. It is critical for organizations to adopt the right strategies and roadmaps for handling AI-augmented social engineering incidents.
Endnotes
1 Defense Advanced Research Projects Agency, “AI Cyber Challenge,” https://aicyberchallenge.com/
2 National Institute of Standards and Technology, “Social Engineering,” Computer Security Resource Center, USA, https://csrc.nist.gov/glossary/term/social_engineering
3 Government Accountability Office, “Science and Tech Spotlight: Deepfakes,” USA, https://www.gao.gov/assets/gao-20-379sp.pdf
4 Roth, A.; “European MPs Targeted by Deepfake Video Calls Imitating Russian Opposition,” The Guardian, 22 April 2021, https://www.theguardian.com/world/2021/apr/22/european-mps-targeted-by-deepfake-video-calls-imitating-russian-opposition
5 McMillan, R.; Corse, A.; et al.; “New Era of AI Deepfakes Complicates 2024 Elections,” The Wall Street Journal, 15 February 2024, https://www.wsj.com/tech/ai/new-era-of-ai-deepfakes-complicates-2024-elections-aa529b9e
6 Damiani, J.; “A Voice Deepfake Was Used To Scam A CEO Out Of $243,000,” Forbes, 3 September 2019, https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=2c79b9fc2241
7 ET Online, “Hong Kong MNC Suffers $25.6 Million Loss in Deepfake Scam,” Economic Times, 6 February 2024, https://economictimes.indiatimes.com/industry/tech/hong-kong-mnc-suffers-25-6-million-loss-in-deepfake-scam/articleshow/107465111.cms?from=mdr
8 Europol, “Facing Reality? Law Enforcement and the Challenge of Deepfakes,” Europol Innovation Lab, European Union, January 2024, https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf
9 Cybersecurity and Infrastructure Security Agency, Implementing Phishing Resistant MFA, USA, October 2022, https://www.cisa.gov/sites/default/files/2023-01/fact-sheet-implementing-phishing-resistant-mfa-508c.pdf
10 Content Authenticity Initiative, “Authentic Storytelling Through Digital Content Provenance,” https://contentauthenticity.org/
11 National Security Agency; Federal Bureau of Investigation; Cybersecurity and Infrastructure Security Agency; “CSI Deepfake Threats,” USA, 12 September 2023, https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
JOHN EVANS | CISM, CRISC, CISSP
Is chief technology advisor (CTA) at World Wide Technology. Before joining WWT, Evans served as chief information security officer (CISO) and deputy chief technology officer (CTO) for the US State of Maryland. Evans has also served as an adjunct professor of cybersecurity at the University of Maryland (USA) at the graduate school level.