Although the majority of respondents to ISACA’s new artificial intelligence survey are very or extremely worried about generative AI being exploited by bad actors, the majority also say no AI training is provided at their organizations.
The need for training is evident: only 25% of respondents—themselves digital trust professionals—report a high degree of familiarity with generative AI. This is a significant concern as employees quickly adopt the technology, popularized by platforms such as ChatGPT and Bard. Adding to the potential for confusion, only 10% say a formal, comprehensive generative AI policy is in place at their organization.
If enterprises fail to leverage AI effectively, cybercriminals are poised to fill the vacuum. While AI can be useful both for cybercriminals and cybersecurity professionals, nearly three times as many respondents think cybercriminals use AI more successfully than digital trust professionals (also inclusive of GRC, privacy and audit professionals).
“AI could reshape the world of cybersecurity in unimaginable ways, making our lives easier and more efficient,” said ISACA Now blog author Raef Meeuwisse. “However, it is essential to bear in mind that AI, despite its remarkable abilities, is essentially a tool. It lacks the human touch—our capacity for intuition, empathy and understanding that extends beyond the data. AI will undoubtedly keep improving, but it is on us to guide its evolution in a way that respects our shared humanity and safeguards our values.”
Other key findings from the global survey, which had more than 2,000 respondents, include:
- While only 28% of respondents say their organizations expressly permit the use of generative AI, over 40% say employees are using it regardless—and the percentage is likely much higher given that an additional 35% aren't sure.
- 41% say not enough attention is being paid to ethical standards for AI implementation.
- Respondents identified creating written content (65%), increasing productivity (44%), automating repetitive tasks (32%), customer service (29%) and improving decision-making (27%) as the main ways employees are using generative AI.
- Misinformation/disinformation (77%), privacy violations (68%), social engineering (63%), loss of intellectual property (58%), and job displacement and widening of the skills gap (35% each) were rated the top risks posed by AI.
Despite the many risks, respondents remain optimistic about AI's capabilities: 85% say AI is a tool that extends human productivity, and 82% anticipate AI will have a positive or neutral impact on their careers in the next five years.
For more artificial intelligence resources from ISACA, visit https://www.isaca.org/resources/artificial-intelligence.