Social media platforms and the internet have introduced new forms of media manipulation, with none more alarming than deepfakes. These artificial intelligence (AI)–generated synthetic media manipulate audio recordings, videos, or photographs to create false sounds or images.1 Concerningly, many deepfakes are likely to go unnoticed even by experts. The technology is becoming markedly more efficient and sophisticated, and its potential negative impacts on individuals and society, including privacy violations, reputational damage, security risks, loss of public trust in the media, and threats to democracy, must be contended with.2
Traditional approaches for detecting deepfakes are proving increasingly impractical and ineffective, and laws and regulations have failed to keep pace with advancements in deepfake technology. These shortcomings can be assessed by systematically analyzing the impact of deepfakes and by critically reviewing current detection and regulation efforts.3
A Brief History of Deepfakes
Deepfake technology has evolved rapidly since its introduction. Initially, limited technical capabilities and scarce training data constrained the quality of deepfake videos. The first deepfakes were not very convincing; it was easy to see that the lip sync did not match, the facial movements were not natural, or the lighting was poor.4 With advancements in machine learning (ML) algorithms and hardware, the quality of deepfakes improved. Present-day deepfakes are so realistic that even senior professionals can be fooled. In one case, scientists created a deepfake of former US president Barack Obama as a warning about the possibility of political manipulation by AI-generated media.5
It is essential to note that deepfake technology develops through an “arms race” dynamic: as deepfake detection improves, the algorithms employed to create deepfakes improve as well. This cat-and-mouse game highlights the need for both constant vigilance and the creation of new mechanisms to combat synthetic media.
Essential Aspects of Deepfake Technology: GANs
Deepfake technology utilizes generative adversarial networks (GANs). A GAN consists of two neural networks that compete in adversarial learning. The first network, the generator, creates synthetic content modeled on its training datasets, which are composed of real audio, videos, or photographs.6 The second network, the discriminator, evaluates whether each piece of content is real or synthetic. As training progresses, the discriminator becomes better at spotting the differences between real and fake versions, which in turn forces the generator to synthesize more detailed media that more closely matches the source material. This process allows GANs to generate high-quality deepfakes with relatively little manual effort.
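The adversarial loop can be illustrated with a minimal training sketch, written here in Python with PyTorch. The network shapes, optimizer settings, and flattened-image representation are illustrative assumptions; production deepfake systems use much larger convolutional architectures.

```python
# Minimal GAN training loop sketch (PyTorch). Sizes are illustrative.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64  # hypothetical dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),      # emits a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # probability input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1. Train the discriminator to separate real media from synthetic.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))),
                 real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key design point is that the two losses pull in opposite directions: the discriminator is rewarded for separating real from fake, while the generator is rewarded for erasing that separation.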
Pre-trained GANs can convincingly substitute one person's face in a video or image for another. Their design can be fine-tuned and optimized for various applications, such as producing natural-looking human faces or natural-sounding language for voice synthesis.7 Other AI methods, such as autoencoders and recurrent neural networks (RNNs), can be used in combination with GANs to sharpen deepfake images and make them even more realistic. A sketch of the autoencoder-based face-swap idea follows.
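For face substitution specifically, a common arrangement pairs a shared encoder with one decoder per identity. The sketch below (layer sizes and choices are assumptions for illustration) shows why swapping works: the encoder captures pose and expression, and the target identity's decoder re-renders them in the other person's likeness.

```python
# Sketch of the classic autoencoder face-swap arrangement: one shared
# encoder learns identity-independent facial structure, and separate
# decoders reconstruct each person's appearance.
import torch.nn as nn

img_dim, code_dim = 64 * 64, 128  # hypothetical dimensions

shared_encoder = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                               nn.Linear(512, code_dim))
decoder_a = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                          nn.Linear(512, img_dim), nn.Tanh())
decoder_b = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                          nn.Linear(512, img_dim), nn.Tanh())

# Training reconstructs person A's faces through decoder_a and person
# B's through decoder_b. To swap, encode a frame of A, then decode it
# with B's decoder: B's appearance with A's pose and expression.
def swap_a_to_b(frame_a):
    return decoder_b(shared_encoder(frame_a))
```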
Other Tools and Platforms
Almost anyone can create deepfakes thanks to the number of platforms available. Although many of these tools were developed for the entertainment and advertising industries, others raise ethical issues because of their potential for misuse in creating harmful or deceptive content:
- DeepFaceLab—This platform can replace the head of one person with the head of another in a video.8 Anyone with the requisite technical skills can master this technology. DeepFaceLab allows manipulation of the output at a granular level, making it a commonly used platform among researchers and content creators.
- FakeApp—This was one of the first deepfake applications. It is easy to use, allowing anyone to create fake videos simply by uploading photos to the app.
- Zao—This application was the center of controversy because its AI algorithm worked in conjunction with deepfake technology to insert users into popular movie scenes.9 The app's success demonstrated the global accessibility of deepfake tools and generated concerns about data privacy and consent.
Possible Consequences of Deepfakes
Deepfakes pose various threats that can directly affect multiple industries. These negative consequences will only worsen as the technology improves and is more widely used.10
Misinformation
Misinformation is the most prevalent risk associated with deepfakes. Although deepfakes can be used for good, they can also be extremely destructive when utilized to disseminate believable hoaxes, false news reports, and deceptive videos.11 Deepfake videos make headlines when they involve politicians, celebrities, and other public figures, demonstrating that this technology can have far-reaching consequences.
It is one thing for the public to be given incorrect information about a certain product or a particular individual. It becomes especially dangerous, however, when entire public perceptions are shaped by misinformation. The problem of misinformation in the media is becoming increasingly topical and difficult to solve. For example, a manipulated video of a politician giving an impassioned speech about a particular policy may lead people to form false ideas about the individual. In 2019, a manipulated video of Nancy Pelosi, Speaker of the US House of Representatives, was posted on social media in which she seemed to be intoxicated, which was not the case in the original footage.12
According to a study by DeepTrace, the number of deepfake videos available online nearly doubled between 2018 and 2019, from 7,964 to 14,698.13 This sharp increase suggests that both amateurs and seasoned cybercriminals are using deepfake technology to execute complex scams.
Social media platforms such as YouTube, Facebook, and X have not been able to contain the growing use of deepfakes. Attempts to remove this content are almost always futile, as new fake videos are posted and go viral before they can be reported and pulled from the platform.14 This is an enormous problem for media web services that must fight misinformation and regulate content.
Privacy Violations and Harassment
Deepfakes enable serious privacy violations, particularly in the case of nonconsensual AI-generated pornography. When such videos are shared on the internet, victims can suffer severe damage.
The Digital Civil Society Lab (DCSL) of Stanford University (California, USA) revealed that 96% of the most popular deepfakes contain pornography and that 77% of the victims are women. This trend presents a major risk to privacy and individual security.15
Victims of this type of crime struggle to remove the content once it has been uploaded online, and the impact on their reputations can be widespread and unmanageable.16 Cybercriminals have also used deepfake pornography for extortion, threatening to publish fake videos if targets do not pay a ransom.
National Security
Deepfake technology can be used as a weapon by cybercriminals in cyberwarfare, making it a subject of interest to governments and defense agencies. Realistic deepfake videos could be misused by state and non-state actors to mislead the public, control the masses, erode confidence in legitimate democratically elected governments, and even disrupt diplomatic relations between countries.17
Perhaps the most sinister exploitation of deepfakes occurs in international relations. For example, a realistic-looking video of a world leader announcing imminent war could trigger real-world actions by some nations. The fallout from a fake video could escalate until the deepfake is revealed to be a fabrication. With political processes dominated by social media and information warfare, deepfakes are a considerable menace to the world order.
Deepfakes can also be applied in psychological operations, using fake images or videos of military commanders or renowned politicians to cause anxiety or weaken an enemy's morale. Likewise, they may be used in diplomatic deception. For instance, a deepfake video might depict a foreign leader declaring certain intentions, and other nations might react based on those statements.
The United States Department of Defense (DoD) added deepfakes to the list of threats to national security in 2020.18 This action prompted more investment in AI directed toward detecting and combating deepfakes. As a result, there is growing awareness of deepfakes’ potential to threaten electoral systems, manipulate voters’ opinions, and erode the democratic structure.19
Combating Threats Posed by Deepfakes
Countering the negative effects of deepfakes requires a blend of technology and policy measures.
Technological Solutions
Technology is the first line of defense against deepfakes. AI-based detection systems must therefore be constantly updated to address new forms of deepfake production.
One promising solution is to implement provenance-based systems that track where videos and images originated.20 A common implementation is digital watermarking, which allows platforms and users to differentiate between genuine and fake content. A minimal sketch of the underlying idea follows.
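As a rough illustration of provenance checking (not any specific standard's API), the sketch below signs a keyed hash of the media at publication time so that any later alteration invalidates the signature. The key and media bytes are hypothetical placeholders, and real systems, such as C2PA-style manifests, embed far richer signed metadata.

```python
# Sketch of hash-based provenance: the publisher signs the media bytes,
# and verifiers detect any post-publication tampering. Key management
# is deliberately simplified for illustration.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key

def sign_media(media_bytes: bytes) -> str:
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_media(media_bytes), signature)

original = b"example raw media bytes"       # stand-in for a video file
tag = sign_media(original)
print(verify_media(original, tag))          # True: untouched copy
print(verify_media(original + b"!", tag))   # False: any edit breaks it
```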
Blockchain technology can also help differentiate between genuine and fake content. As an immutable distributed ledger, it tracks the source of media content, making it harder for tampering to go unnoticed.21 The sketch below shows the core mechanism.
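The following is a minimal sketch of the hash-chaining idea behind such ledgers, not a real blockchain: each entry commits to the media's hash and to the previous entry, so retroactively altering any record breaks every later link. A production system would add digital signatures and distributed consensus.

```python
# Hash-chained ledger sketch for media provenance.
import hashlib
import json
import time

ledger = []

def register_media(media_bytes: bytes, source: str) -> dict:
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "media_hash": hashlib.sha256(media_bytes).hexdigest(),
        "source": source,
        "timestamp": time.time(),
        "prev_hash": prev_hash,       # links this entry to the last one
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def ledger_is_consistent() -> bool:
    # Any retroactive edit changes an entry_hash and breaks the chain.
    return all(ledger[i]["prev_hash"] == ledger[i - 1]["entry_hash"]
               for i in range(1, len(ledger)))
```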
Finally, forensic tools are being developed that analyze deepfake media at the pixel level.22 These tools are capable of identifying glitches in the rendering process that point toward manipulation, including inconsistencies in lighting, shadows, or reflections. A toy example of this kind of analysis follows.
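As a toy illustration of pixel-level analysis (the statistic and threshold are assumptions for demonstration, not a published detector), the sketch below compares high-frequency noise energy between two image regions; spliced or generated regions often carry a noise profile that differs from the rest of the frame.

```python
# Toy pixel-level forensics: natural photos tend to have consistent
# sensor noise across the frame, while inserted or generated regions
# often differ. Real forensic tools use far more robust models.
import numpy as np

def residual_energy(region: np.ndarray) -> float:
    # High-frequency residual: each pixel minus the mean of its
    # four neighbors, over the interior of the region.
    blurred = (region[:-2, 1:-1] + region[2:, 1:-1] +
               region[1:-1, :-2] + region[1:-1, 2:]) / 4.0
    return float(np.var(region[1:-1, 1:-1] - blurred))

def regions_look_consistent(face: np.ndarray, background: np.ndarray,
                            ratio_threshold: float = 3.0) -> bool:
    # Flag the image if the face region's noise profile diverges
    # sharply from the background's (illustrative threshold).
    e_face, e_bg = residual_energy(face), residual_energy(background)
    ratio = max(e_face, e_bg) / max(min(e_face, e_bg), 1e-9)
    return ratio < ratio_threshold
```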
Legal and Policy Interventions
Most countries are now aware of the need for laws addressing the production and dissemination of deepfakes.
Several countries have already passed laws that prohibit the use of deepfakes for unlawful objectives.23 For instance, in the United States, the State of California enacted the Anti-Deepfake Law, California Assembly Bill 730 (AB 730), in 2019. It prohibits the circulation of deepfakes related to politics within 60 days of an election. Similarly, in 2020, China made it compulsory to clearly label synthetic media content.24
Though strides have been made with laws and regulations, questions remain about how these laws can be enforced effectively when deepfake content continues to circulate on various platforms. Solving this problem requires international cooperation. The United Nations and the European Union are two international regulatory structures that have already made strides in addressing deepfakes.
United Nations
The United Nations has, in its broader agenda on countering the spread of misinformation and disinformation, identified deepfakes as a threat. Initiatives such as the UN Strategy and Plan of Action on Hate Speech25 and the UN Global Digital Compact26 stress the need to address the misuse of digital technologies, including deepfakes. While the United Nations does not yet have a specific treaty aimed at deepfakes, it has promoted guidelines for the ethical use of AI to encourage accountability for AI-generated content among member states. In addition, the United Nations engages member states through channels such as the International Telecommunication Union (ITU) in formal discussions on establishing international standards for the responsible use of AI, including the identification and prevention of the dangerous spread of deepfakes.
European Union
The European Union, in contrast, has adopted a more direct and holistic approach to regulating deepfakes. It passed rules to curb the spread of manipulated content within the Digital Services Act (DSA)27 and the Artificial Intelligence Act.28 Under these laws, online platforms are now obligated to indicate to users when certain content was generated or manipulated by AI, which makes it easier for users to identify deepfakes.
The European Code of Practice on Disinformation, created with the cooperation of major tech enterprises, emphasizes transparency and accountability with respect to manipulated media.29
Media Interventions
Another major approach to combating the effects of deepfakes is through awareness campaigns and media literacy. As deepfake technology advances, people must become more vigilant when interacting with online content. The first step in this process involves educating the public about deepfakes and equipping people with techniques to identify them.
Media organizations and social media platforms can contribute by ensuring the accuracy and credibility of the information they provide.30 Currently, Facebook and X are labeling manipulated content. This is a good first step, but more of this content needs to be identified and addressed.
By disseminating accurate information and counteracting misinformation, these organizations ensure that the public’s trust in digital media is not compromised.
Collaborative Efforts
To deal with the deepfake threat, efforts are needed at the national and international levels, requiring the cooperation of governments, IT enterprises, universities, and nongovernmental organizations.31 For instance, collaboration between tech developers and law enforcement agencies can lead to the arrest and prosecution of individuals involved in the development and distribution of deepfake videos.
Conclusion
The emergence of deepfake technology has led to issues of trust, privacy, and data credibility. Deepfakes have become increasingly hard to distinguish from real images and recordings; however, AI-based methods are proving their effectiveness in this regard. Current laws and regulations are insufficient to contain the growth of this technology and address its adverse implications. This highlights the need for additional research into deepfakes and a better understanding of the measures required to counter the problems they cause individuals and society at large.
To address the threats posed by deepfakes, more advanced and thoroughly tested methods of detection using ML and AI are needed. Moreover, policymakers must develop and implement sweeping measures that respond to the spread of deepfakes in order to mitigate their dissemination.
It is up to technology professionals, lawyers, and academic institutions to work together to create new products, tools, and services that prevent the misuse of deepfakes. Further, public safety campaigns should educate people on deepfake detection and improve the public’s media literacy.
By combining robust detection methods, innovative research, and comprehensive public education, security professionals can better safeguard individuals and society from the potential harms of deepfake technology.
Endnotes
1 Anand, V.; "Deepfake Technology Was Always Dangerous—Then AI Came Along," CNBC, 5 October 2024
2 Afchar, D.; Niessner, M.; et al.; "MesoNet: A Compact Facial Video Forgery Detection Network," Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security (WIFS)
3 Buffett, J.; "The Rise of Artificial Intelligence Deepfakes," Journal of National Security Law & Policy, vol. 11, iss. 2, 2020, p. 223-245
4 Helmus, T.C.; "Deepfake Technology and Its Implications for National Security," RAND Corporation, July 2022
5 Nordling, L.; “Scientists Are Falling Victim to Deepfake AI Video Scams—Here’s How to Fight Back,” Nature, 7 August 2024
6 Javed, M.; Zhang, Z.; et al.; "Real-Time Deepfake Video Detection Using Eye Movement Analysis with a Hybrid Deep Learning Approach," Electronics, vol. 13, iss. 15, 2024
7 Westerlund, M.; "The Emergence of Deepfake Technology: A Review," Technology Innovation Management Review, vol. 9, iss. 11, 2019, p. 39-52
8 Jacobson, N.; "Deepfakes and Their Impact on the Future of Misinformation," Media and Communication, vol. 8, iss. 2, 2020, p. 103-114
9 Temir, E.; "Deepfake: New Era in the Age of Disinformation and End of Reliable Journalism," Selçuk İletişim, vol. 13, iss. 2, 2020, p. 1009-1024
10 Gambin, A.; Yazidi, A.; "Deepfakes: Current and Future Trends," Artificial Intelligence Review, 2024
11 Kidwell, T.; "Why Deepfakes Are Set to Be One of 2024's Biggest Cyber Security Dangers," TechRadar, 16 July 2024
12 Grosse, K.; Zuber, S.; "Detecting Deepfake Videos Using Machine Learning Techniques," Journal of Digital Forensics, Security and Law, vol. 16, iss. 2, 2021, p. 67-84
13 Cellan-Jones, R.; "Deepfake Videos 'Double in Nine Months,'" BBC, 7 October 2019
14 Gambin, "Deepfakes: Current and Future"
15 Kaur, A.; Hoshyar, A.N.; et al.; "Deepfake Video Detection: Challenges and Opportunities," Artificial Intelligence Review, vol. 57, 2024
16 Yu, P.; Xia, Z.; et al.; "A Survey on Deepfake Video Detection," IET Biometrics, vol. 10, 2021, p. 607-624
17 Van der Sloot, B.; Wagensveld, Y.; "Deepfakes: Regulatory Challenges for the Synthetic Society," Computer Law & Security Review, vol. 46, 2022
18 National Security Agency/Central Security Service, “Contextualizing Deepfake Threats to Organizations,” USA, 12 September 2023
19 Sareen, M.; "Threats and Challenges by DeepFake Technology," DeepFakes, CRC Press, 2022
20 Zandt, F.; “How Dangerous Are Deepfakes and Other AI-Powered Fraud?,” Statista, 13 March 2024
21 Zandt, “How Dangerous Are Deepfakes”
22 Kinsella, B.; “New Report: Deepfake and Voice Clone Awareness, Sentiment, Concern, and Demographic Data,” Voicebot, 1 November 2023
23 Kinsella, "New Report"
24 Anand, "Deepfake Technology"
25 United Nations, Strategy and Plan of Action on Hate Speech, May 2019
26 United Nations, “Global Digital Compact”
27 European Commission, “The Digital Services Act,” European Union
28 EU Artificial Intelligence Act, “The EU Artificial Intelligence Act”
29 European Commission, “The 2022 Code of Practice on Disinformation,” European Union
30 Angell, M.; “How Real Is the Deepfake Threat?,” American Banker, 2020
31 Wang, T.; "Deepfake Detection: A Comprehensive Survey From the Reliability Perspective," International Journal of Information Management, vol. 57, iss. 3, 2024
AZAD MAMMADOV
Is an IT audit manager with more than eight years of experience in IT, audit, and cybersecurity. Mammadov is passionate about staying at the forefront of IT and cybersecurity by combining a strong educational background with practical experience.