Artificial intelligence and machine learning are advancing at a remarkable pace. The vast benefits of AI, along with its potential for catastrophic harm, require careful deliberation by security professionals like us.
To propel that thought process, a group of distinguished authors from prestigious institutions, including the Future of Humanity Institute, the University of Oxford and the University of Cambridge, have come together to write a report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. Its contents could not be more relevant to the future of cybersecurity.
The report articulates how AI will impact the existing threat landscape, identifying, in particular, the following possible consequences:
- It will expand existing attacks by lowering their cost and increasing their scale;
- New attack forms may arise that are otherwise impractical for humans to perform;
- Attacks will become more effective, more finely targeted and more likely to exploit vulnerabilities in AI systems.
The report has considered three important security domains and illustrates possible changes to threat scenarios within these domains:
Digital security. AI will make labor-intensive attacks far easier to carry out, enable the exploitation of human vulnerabilities (for example, through speech synthesis for impersonation) and allow attackers to exploit vulnerabilities in AI systems themselves.
Physical security. Attacks will increase through drones and other autonomous weapon systems, as will attacks that subvert physical systems, such as causing autonomous vehicles to crash.
Political security. AI will enable rapid analysis of mass-collected data, targeted propaganda and manipulated videos, expanding the threats associated with privacy invasion and social media manipulation. The ability to draw accurate conclusions about human behavior, moods and beliefs from available data will undermine the ability of democracies to sustain truthful public debate.
The report makes well-researched recommendations to prevent and mitigate the risks arising from malicious use of AI, including:
- Policy-makers should work closely with technical researchers to investigate, prevent and mitigate potential malicious uses of AI. Policy interventions should focus on privacy protection, the coordinated use of AI for public-good security, and the monitoring of AI systems and resources.
- Researchers and engineers should consider both the use and abuse cases of their work and proactively reach out to relevant actors when harmful applications are foreseeable. Organizations and researchers carry a significant responsibility to pursue proper education and to follow ethical processes, standards and norms.
- Best practices should be drawn from research areas with more mature methods for addressing use and abuse concerns, such as computer security. These include risk assessment in technical areas of special concern, appropriate openness in research and the promotion of safety and security.
- More stakeholders and domain experts should be involved in discussing these challenges. Practices should include red-teaming, formal verification, responsible disclosure of AI vulnerabilities and the use of related security tools and secure hardware.
Additionally, the report offers a rich bibliography and materials for future research. It is a must-read for every cybersecurity professional.
What could ISACA’s role be in navigating this road map of rapid growth and evolution of AI, and in advocating measures to maximize the benefits to society?
ISACA’s core strength is its expert body of knowledge in the governance of enterprise IT (GEIT), with implications for audit, risk and compliance. Therefore, ISACA should actively participate in setting frameworks and best practices for contending with AI, assisting the industry in arriving at standards for safe and ethical practices and determining methods for the detection and remediation of vulnerabilities in AI systems.
ISACA can also play an important role in connecting industry, academia and regulatory bodies. These efforts will help these communities not only maximize the benefits of AI, but also prevent and mitigate the risks arising from the malicious use of this imminent technological advancement.
Author’s note: The views expressed in this post are the author’s views and do not represent any of the professional bodies with which he is associated.