

As of August 2025, OpenAI is on track to reach 700 million weekly active users of ChatGPT, up sharply from 500 million in March. While artificial intelligence spans far more than generative capabilities, generative AI, a relatively new branch of the field that rose to prominence with the advent of Generative Adversarial Networks (GANs) in 2014, has taken center stage in public perception.
We’ve entered a new era in which AI is no longer confined to academic labs or expert circles. It has become a mainstream technology, shaping the way we work, communicate, and solve problems. Society now stands at a crossroads, unsure where this rapid development will lead, yet clearly accelerating along the path of digital transformation.
As with every major technological shift, the benefits come with risks, and criminals are quick to exploit these advances for malicious purposes. Among the most concerning developments are new attack vectors such as prompt injection, in which malicious instructions hidden in the content an AI system processes can hijack its behavior, and a disturbing amplification of a classic threat: social engineering.
The Deepfake Threat
Deepfakes—synthetic media that convincingly replicate voice, facial expressions, and gestures—have been around for about five years. But what once required significant resources to produce is now accessible to almost anyone. Social media provides abundant training data, and AI has made the creation of realistic imitations easier than ever. Sixty-six percent of respondents to ISACA’s AI Pulse Poll expect deepfake cyberthreats to become more sophisticated and widespread in the next 12 months, but only 21 percent say their organizations are actively investing in tools to detect and mitigate deepfake threats.
These fabricated voices and faces compromise the non-verbal trust signals humans rely on—tone of voice, facial cues, and gestures—bypassing the logical skepticism our minds would otherwise apply.
What makes this threat even more insidious is the convergence of technologies:
- AI-generated voice clones
- Caller ID spoofing
- Real-time conversation capabilities powered by large language models (LLMs)
This fusion makes it possible to automate and scale personalized voice scams, targeting individuals and organizations alike with alarming precision.
Redefining Trust in the AI Age
This raises a pressing question: How can we still trust that we’re talking to a real person?
While AI can mimic appearance, speech, and even personality traits, there is still one thing it cannot replicate: the unique depth of human character. Subtle intuition, emotional intelligence, and authentic interpersonal connection remain difficult, if not impossible, for machines to truly emulate.
These may become our last line of defense in the race to preserve truth and trust in an AI-driven world.
To learn more, join us at the ISACA 2025 Europe Conference, 15-17 October in London, where I’ll be presenting a session on this topic.