HelpSource Q&A

Author: Sunil Bakshi, CISA, CISM, CRISC, CGEIT, Consultant
Date Published: 1 January 2020

Question  How relevant is the threat posed by deep fakes, and how should that threat be evaluated?


Answer  “Deep fake” is a term derived from “deep learning,” and it refers to modifying or creating fake videos using human image synthesis and artificial intelligence (AI). Although the technique began as academic research, it can be used for hoaxes and personal attacks, such as fake pornography or fake news videos built from politicians’ faces. Digital fakery is not new, but it has evolved from doctored photos to edited videos. Face-swapping techniques are not new either; with machine deep learning, however, these tricks can now be automated, and the tools are accessible to far more people. The big problem is that current forensic tools are unable to detect this fakery...

