


AI has taken the world by storm, and mostly for the better, boosting productivity, creativity and profits faster and more easily than ever before.
If AI is training computers to think like humans, I consider AI governance in its most simplified definition to be “Making sure AI thinks like a good human.”
In 2023, generative AI tools like ChatGPT entered heavy rotation among businesses and for personal use. AI grew so fast that the world was left wondering how to get ahead of a unique set of risks: the traditional C.I.A. triad concerns of confidentiality, integrity and availability, plus new social concerns such as governing bias, protecting human rights and making AI models more responsible and transparent about their actions. AI governance impressively stepped up to fill that gap.
Frameworks and Regulations
Frameworks such as COBIT work well for addressing AI governance. Other frameworks were created explicitly for it: the NIST AI Risk Management Framework (AI RMF) provides sound guidance on what to remediate without getting too granular, while ISO/IEC 42001 is more specific about the controls to put in place and is also certifiable. Both help address requirements of the EU AI Act, which entered into force in 2024.
A Team Effort
While AI governance professionals are the quarterbacks of AI governance, they collaborate with key teams to make the overall effort successful. The aim is to minimize both traditional and AI-specific risks while still delivering innovation. Key players in this collaboration include Privacy, Legal, Ethics, Security and GRC. AI governance personnel should know how to collaborate with these teams, similar to the role of a BISO (Business Information Security Officer), but they may not have the technical acumen to dig into the weeds and present precise requirements to data scientists or software developers for specific AI compliance changes.
The Key Role of AI Auditors
So, who does have the technical acumen to dig into the weeds and present precise requirements? Enter the well-trained AI auditor.
Because AI is an ever-evolving technology, chances are you can't just walk to the next cubicle and grab a technical AI auditor. What's the answer? Create them. With AI changing every day, the time is right to train auditors who are already technical in nature, or who are ready for the challenge of upskilling, in AI-specific auditing.
The relevance of AI in auditing cannot be overstated. Auditors can leverage AI as a tool to complete audits faster and more proficiently, and they can raise their career stock by becoming a unicorn of sorts: an auditor who possesses AI auditing skills, which most auditors will need by default as AI becomes more and more embedded in technical and business solutions.
Become Advanced in AI Audit
ISACA has created a credential that completes the final puzzle piece: Advanced in AI Audit (AAIA). The certification comprises three domains: AI Governance and Risk, AI Operations, and AI Auditing Tools and Techniques.
If you are reading this blog post, you are in the right place at the right time. Studying for this exam will prime your knowledge, confidence and skills for the innovation that is upon us, with AI showing no signs of slowing down. Amid all the talk about AI taking jobs, let's pursue the AAIA credential not out of fear but out of forward thinking and opportunity: the opportunity to be a highly skilled professional who knows how to harness the world's newest, and maybe greatest, technology revolution.
If you are looking for other resources related to AI Audit, ISACA’s Artificial Intelligence Audit Toolkit is another great one to explore.