From customer service chatbots to sophisticated fraud detection systems, AI is reshaping how organizations operate, innovate, and compete. The transformation presents both challenges and opportunities for internal auditors. Providing independent assurance and insight on risk and control has always been our role. As AI integrates deeper into the enterprise, our responsibility now is to look beyond the “black box” and ensure these powerful systems are governed effectively.
To start the journey, internal auditors need a solid, foundational understanding of AI models, the business considerations they entail, and the requirements for their responsible deployment.
Types of Intelligence
The first step for any auditor approaching an AI system is to understand exactly what kind of intelligence it represents.
Start with the AI widely used today: Artificial Narrow Intelligence (ANI), sometimes called weak or narrow AI. This is the specialist AI, programmed to do one specific thing really well – for example, the system that helps a car drive itself. ANI mimics human behavior, but only within a very specific set of rules and situations.
Next up is Artificial General Intelligence (AGI), which is what many people think of as “strong” or “deep” AI. This is the kind of AI that could think, understand, learn, and use its knowledge to solve tricky problems, much like a person would. AGI would operate based on a “theory of mind,” simulating human behavior – understanding the emotions, beliefs, and thoughts of others.
Finally, there is an AI that would go beyond human intelligence, Artificial Superintelligence (ASI). It would perform any task better than humans can. It would understand human feelings and experiences; it could have its own emotions, beliefs, and desires. It would be able to think for itself, solve puzzles, make judgments and decide things on its own.
Some thinkers believe that reaching ASI could trigger a “technological singularity,” a point where machines become so powerful they could change our world in ways we can’t even predict. While we’re not there yet, the possibility of superintelligence is both fascinating and a little unsettling to think about.
Understanding How AI Learns and Auditors’ Role
To really get a handle on auditing AI, we first need to understand how it actually learns. It’s not magic; it generally happens in one of three ways.
The first, Supervised Learning, feeds the AI model a ton of data that has already been labeled with the right answer. For instance, if you want to teach it to recognize cats, you show it thousands of photos, each one tagged, “This is a cat.” After seeing enough examples, the AI gets the hang of it and can start identifying cats in new photos all by itself.
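To make this concrete, here is a minimal sketch in Python using the scikit-learn library. The data is invented – random feature vectors with a made-up labeling rule standing in for cat photos – but the pattern is the real one: fit on labeled examples, then predict on examples the model has never seen.

```python
# A minimal supervised-learning sketch. The "photos" here are stand-in
# feature vectors with an invented labeling rule, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

# Invented data: 200 samples, 4 features each; label 1 = "cat", 0 = "not cat".
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a made-up labeling rule

# Hold back a portion of the labeled data to test on later.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)               # learn from labeled examples

predictions = model.predict(X_test)       # identify "cats" in unseen data
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")
```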
Then we have Unsupervised Learning, where the AI gets thrown into the deep end with a massive pile of unlabeled data and is told to find interesting patterns. Think of an e-commerce site's AI scrutinizing millions of customer transactions. The machine might discover on its own that people who buy bread also tend to buy butter – a hidden connection it found without being taught to look for it.
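The bread-and-butter discovery is the classic example of market-basket analysis. Below is a minimal sketch with invented transactions that computes “lift,” a standard measure of whether two items appear together more often than chance alone would predict.

```python
# A minimal market-basket sketch with invented transactions.
# Lift > 1 means two items co-occur more often than chance alone predicts.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "eggs"},
    {"bread", "butter", "eggs"},
    {"milk"},
]

def support(items):
    """Fraction of transactions containing all of the given items."""
    return sum(items <= t for t in transactions) / len(transactions)

lift = support({"bread", "butter"}) / (support({"bread"}) * support({"butter"}))
print(f"Lift for bread -> butter: {lift:.2f}")  # > 1 suggests a real pattern
```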
Finally, we have Reinforcement Learning, where the machine learns through trial and error. Imagine training a robot vacuum cleaner. You give it a point (a reward) when it successfully cleans up, and it loses a point (a penalty) when it bumps into a wall or gets stuck under the sofa. Over time, the robot figures out the most efficient way to clean the room to score the most points and avoid getting into trouble.
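One common way this works under the hood is tabular Q-learning. Here is a toy sketch, with a five-cell corridor standing in for the room and invented reward values, that shows the core update rule in action.

```python
# A toy Q-learning sketch: an agent in a 5-cell corridor learns to reach
# the rightmost cell (reward +1); every other step costs -0.04, the
# equivalent of bumping around inefficiently.
import random

n_states = 5
actions = [-1, +1]                          # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(500):
    s = 0                                   # start at the left end
    while s != n_states - 1:
        # Explore sometimes; otherwise exploit the best-known action.
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else -0.04
        # Core update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy for each non-terminal cell should be +1 ("move right").
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```

Notice that the agent's behavior is entirely a product of the reward design (+1 and -0.04 here). That is exactly why, as discussed below, the reward system itself is an audit subject.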
Key Assurance Concerns for Auditors
Data and Performance: It all starts with the data. Are the labels on those “cat photos” accurate and consistent? Is the dataset big and varied enough to include different breeds and settings, or did we only show it pictures of tabbies in kitchens? Unlabeled data, too, must be of high quality, or the AI may find misleading patterns.
Once the model is built, how do we make sure it actually works reliably on new, unseen data? Is there a protocol, and clear responsibility, in place to check its performance regularly?
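One way to operationalize that protocol is a recurring check of live accuracy against the accuracy recorded at validation. The sketch below is illustrative only: the function name, the 10% tolerance, and the alert wording are assumptions, and real thresholds belong in a documented model-risk policy.

```python
# Illustrative sketch of a recurring performance check against a baseline.
# The 10% tolerance and the alerting mechanism are assumptions; real
# thresholds and escalation paths belong in a documented policy.
def check_model_performance(current_accuracy: float,
                            baseline_accuracy: float,
                            tolerance: float = 0.10) -> bool:
    """Return True if performance is acceptable; flag for review otherwise."""
    if current_accuracy < baseline_accuracy * (1 - tolerance):
        print(f"ALERT: accuracy {current_accuracy:.2%} fell more than "
              f"{tolerance:.0%} below baseline {baseline_accuracy:.2%}. "
              "Escalate to the model owner for review.")
        return False
    return True

# Example: the model was validated at 92% accuracy; this month it scored 78%.
check_model_performance(current_accuracy=0.78, baseline_accuracy=0.92)
```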
Bias and Fairness: This is a huge one. We have to probe for hidden biases. Does the model struggle to recognize certain types of cats or backgrounds? When it's analyzing customer behavior, are the patterns it finds real insights, or is it just reinforcing existing stereotypes? In the case of the robot, we need to analyze its reward system. Is it promoting helpful behaviors without accidentally encouraging harmful ones?
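A simple, concrete test an auditor can ask for is disaggregated performance: the same accuracy metric, computed separately for each relevant subgroup. The sketch below uses invented cat-breed data, but the same pattern applies to customer segments or demographic groups.

```python
# Illustrative fairness check: compare accuracy across subgroups.
# The groups and outcomes are invented; in practice the subgroup attribute
# would be a legally or ethically relevant characteristic.
from collections import defaultdict

records = [  # (subgroup, predicted_label, true_label)
    ("tabby", 1, 1), ("tabby", 1, 1), ("tabby", 0, 0), ("tabby", 1, 1),
    ("siamese", 0, 1), ("siamese", 1, 1), ("siamese", 0, 1), ("siamese", 0, 0),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    hits[group] += (predicted == actual)
    totals[group] += 1

for group in totals:
    print(f"{group}: accuracy {hits[group] / totals[group]:.0%}")
# A large gap between groups is a red flag worth investigating.
```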
Rules, Security and Documentation: Here, we are the guardians of the rules. Are we following data privacy laws? Is all this sensitive customer and image data locked down? Is there a clear paper trail? Thorough documentation is needed here – documentation that explains how the model was built, what data was used, and why it makes the decisions it does. Without that, accountability is impossible.
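That documentation need not be elaborate to be auditable. Here is an illustrative sketch of a structured record kept alongside a model, loosely inspired by the “model card” idea; every field name and value is invented, and the point is simply that each question an auditor asks should have a documented answer.

```python
# A minimal, illustrative "model card" record. All fields and values are
# invented; the test is whether any answer an auditor needs is missing.
model_card = {
    "name": "cat-classifier-v3",
    "purpose": "Tag incoming photos that contain cats",
    "training_data": "2.1M labeled photos, collected 2023-2024",
    "known_limitations": "Underperforms on low-light images",
    "privacy_review": "Completed 2024-06-12",
    "owner": "Image Platform team",
    "last_validated": "2025-01-15",
}

# An auditor's first test: are any of these answers blank?
missing = [field for field, value in model_card.items() if not value]
print("Documentation gaps:", missing or "none")
```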
Risk and Ethics: This is the big picture we have to look into. For example, a misinterpretation of purchasing patterns could lead to terrible business decisions. A robot learning by trial and error could cause real-world damage. Risks need to be identified, measured and mitigated depending on their magnitude. Furthermore, we have to ask the ethical questions. Do the robot's goals align with user safety and basic ethical principles?
Auditing the Journey: The AI Lifecycle
It’s crucial to remember that AI is not a one-time purchase; it is a living system with a full life cycle. Our role is to provide assurance every step of the way.
The process starts with data acquisition. Is the data clean, relevant, and sourced ethically? The next step of the journey is model development, where we have to be sure that the building and training process strictly adheres to requirements and is well documented. Before release, the model must be validated and tested for performance, fairness, and security.
When we are ready to deploy, we need to ensure the rollout is smooth and secure. Monitoring and Maintenance is the most critical stage. As AI models encounter new data in the real world, their performance can degrade or drift toward unwanted outcomes over time. We have to ensure a strong monitoring system is in place to catch this drift and that there's a clear plan for retraining and updating the model.
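Drift can be made measurable. One widely used indicator is the Population Stability Index (PSI), which compares the distribution of incoming data with the distribution the model was trained on. The sketch below uses invented bin proportions; the rule of thumb that a PSI above roughly 0.2 signals meaningful shift is a common convention, not a regulation.

```python
# Population Stability Index (PSI): compares the distribution of a feature
# at training time ("expected") with its distribution in production ("actual").
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Both inputs are bin proportions that each sum to 1; empty bins are skipped."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# Invented example: customer-age distribution at training time vs. today.
trained_on = [0.10, 0.25, 0.30, 0.25, 0.10]
seen_now   = [0.05, 0.15, 0.25, 0.30, 0.25]

score = psi(trained_on, seen_now)
print(f"PSI = {score:.3f}")  # rule of thumb: > 0.2 suggests significant drift
```

A check like this, run on a schedule with documented thresholds and owners, is the kind of evidence an auditor should expect a monitoring program to produce.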
The 'Why': Making Sure AI Actually Serves the Business
At the end of the day, all this technical auditing serves one purpose: making sure the AI system helps the organization achieve its goals. Our real value as auditors comes from confirming that the big-picture business questions have been answered.
Are we just adopting technology for technology's sake or is there a solid business case and ROI for this system? Does the level of risk involved align with the company's overall risk appetite?
And most importantly, we have to consider the stakeholder impact. How will this system affect employees, customers and the wider community? As the Global Ethics in AI Consortium pointed out in 2025, evaluating the human impact isn't just a nice-to-have; it's the cornerstone of responsible AI.
The Auditor's Path Forward
The rise of AI does not make internal auditors’ core skills obsolete; it makes them more relevant than ever. Our expertise in risk, control, governance and professional skepticism is precisely what is needed to navigate this new terrain. By building our understanding of these foundational concepts, we can move from being observers to being essential partners in the AI journey, ensuring that innovation and control advance hand in hand. The future isn't just about auditing the systems that support the business; it's about providing assurance over the systems that are the business.
About the author: Danephraim Abule Endashaw, CISA, ACCA, MCom, is the Director of Operational Audit at Awash Bank in Ethiopia. He manages a diverse portfolio of operational audits, providing assurance oversight for 13 regional offices and approximately 1,000 branches.