Editor’s note: P.W. Singer, strategist and senior fellow at the New America Foundation, will deliver the closing keynote address at ISACA’s 2018 CSX North America conference, to take place 15-17 October in Las Vegas, Nevada, USA. Singer recently visited with ISACA Now to discuss pressing cybersecurity considerations that governments must grapple with, the multi-faceted impact of artificial intelligence and more. The following is a transcript of the interview, edited for length and clarity:
ISACA Now: What are the primary strategic considerations for governments today when it comes to protecting their people from cyberthreats?
The essential problem is that all the issues we've been dealing with for the last 10 years – cybercrime, IP theft, botnets, etc. – are still with us, but we also now have a series of new challenges to face. Governments, not just national but also state and local governments, have to understand how the internet is changing and, in turn, how the threat landscape is changing with it. We are nearing the 50-year mark of internet history, an amazing moment when you consider the change, but the internet itself is also shifting. Once it was just an internet of people communicating; it is now also one of “things” operating.
This, of course, brings enormous gains and efficiencies, but it also massively grows the attack surface and raises the consequences of attacks, shifting them into the physical realm. In turn, the internet has also become one of Web 2.0 via social media, where we all share information but also now spread and fight disinformation (what I call LikeWar). Add in the rise of issues like ransomware, hybrid threats from states and criminals, and the blight of mega breaches, and it’s a daunting time. So, the key for governments is to ensure they are keeping pace with these shifts in internet use and threats.
ISACA Now: How do you envision malicious uses of AI reshaping the threat landscape in the coming years?
AI – and by that, I mean everything from machine learning to neural networks – will be used by bad actors in everything from developing malware to scoping out vulnerabilities. But one area I think we really are not ready for is “deep fakes” created by AI. These hyper-realistic but fabricated videos will be weaponized against people, companies and governments. We’ve already seen examples tested in labs, from videos of speeches that were never actually given to actresses inserted into adult films they never appeared in. This is just the start, where AI will be used to attack our very perceptions and sense of reality, in a malicious manner.
ISACA Now: Which new or emerging technologies can be most useful to governments in bolstering their security capabilities?
AI! Every technology has both good and bad uses, by good and bad people. For instance, AI is the very means to detect emergent cyberthreats, scope out new anomalies before they can cause harm, and sift through vast amounts of noise. Indeed, the means to detect AI-created deep fakes is other AI that can hunt for their tells. As I explore in an upcoming book, this creates a strange new world where the AIs battle, with us humans in the middle as the target.
ISACA Now: What appealed to you about joining the New America Foundation?
It is an organization that tackles the questions of what happens when technology and policy come crashing together, so people there are always wrestling with fascinating and important questions. At a recent staff meeting, for instance, we had people working on topics as varied as helping the U.S. Army with cybersecurity and aiding the Rhode Island state government on adoption policy reform.