



An AI model is a program that learns from data to make decisions or predictions. It can detect patterns in large datasets and apply those patterns to new inputs. AI systems can directly influence financial, operational, and customer-facing processes, making them highly relevant to auditors.
Some AI use cases relevant to audit scope include:
- Fraud detection in financial statements
- Flagging suspicious transactions
- Identifying anomalies such as privileged account misuse
- Detecting malware behavior that deviates from typical system performance
Potential Risks in the AI Model Change Process
AI models evolve continuously, making decisions based on new data. They are often retrained or updated to fix errors, adapt to trends, or improve performance. If this change process isn’t well controlled, it can create risks such as:
- Data Drift and Auto-Updating Models: New data may change how the model interprets inputs, possibly in unintended ways. Because some models update themselves continuously (e.g., online learning models), the chance of data drift and bias grows over time (see the drift-check sketch after this list).
- Downstream Impact: AI outputs may feed other systems that drive critical business decisions, so a single model error can cascade.
- Opaque Decisions: AI can act like a “black box,” making it hard to understand why it made a specific decision. AI may say “This is an anomaly,” but no one understands why.
- Silent Logic Shifts: A model may behave differently after retraining without clear documentation.
- Human Negligence: Teams may stop reviewing decisions critically, assuming the model is always right, even though AI can and does make mistakes.
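To make the data drift risk concrete, here is a minimal sketch (not any specific vendor’s implementation) of the kind of statistical check a team might run: it compares the distribution of one input feature at training time against recent production inputs using a two-sample Kolmogorov–Smirnov test. The threshold and the simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical drift threshold -- a real deployment would tune this per feature.
DRIFT_P_VALUE = 0.01

def check_feature_drift(training_values: np.ndarray, recent_values: np.ndarray) -> bool:
    """Compare the distribution of one feature at training time vs. in production.

    Uses a two-sample Kolmogorov-Smirnov test: a very small p-value means the
    recent inputs no longer look like the data the model was trained on.
    """
    statistic, p_value = ks_2samp(training_values, recent_values)
    return p_value < DRIFT_P_VALUE  # True = drift detected, escalate for review

# Example: simulated baseline vs. shifted production data
rng = np.random.default_rng(seed=42)
baseline = rng.normal(loc=100.0, scale=15.0, size=5_000)    # feature at training time
production = rng.normal(loc=120.0, scale=15.0, size=1_000)  # same feature, recent inputs

if check_feature_drift(baseline, production):
    print("Data drift detected -- flag model for retraining review")
```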
Recommendations for Reducing Risk
Change management in AI isn’t just about version control; it’s about governance over learning and logic. Here are a few ways to reduce change management-related risks:
- Controlled Model Updates: Only approved retraining or logic changes should go into production. This reduces the risk of data drift and of auto-updating models degrading performance.
- Testing Before Deployment: Updated models should be tested on known data and scenarios to catch regressions, i.e., cases where the new model performs worse than the current one or doesn’t behave as expected (a regression-gate sketch follows this list).
- Human-in-the-Loop Oversight: For models making impactful decisions, a human should review the model’s output before it feeds downstream systems. Manual override mechanisms are also important so staff can intervene when they disagree with AI outputs.
- Explainability Reviews: Ensure every output can be explained to evaluate the quality and fairness of AI logic. This also helps mitigate over-reliance on AI by encouraging humans to have healthy skepticism instead of blind acceptance.
- Traceability: Ensure every model change and prediction is tracked (model version, the input received, and the output generated) to support root-cause analysis. For example, if a model misclassifies a transaction, traceability might reveal that “the output came from model version 4.2, which was trained on data up to September 2024, and here is the exact input it received” (a minimal logging sketch follows this list).
- Monitoring and Alerts: Model performance and outputs should be continuously monitored, with alerts triggered when anomalies appear. This keeps AI from running on autopilot and preserves human oversight.
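To illustrate the testing recommendation, here is a minimal sketch of a pre-deployment regression gate. It assumes a held-out benchmark set with known labels; the metric and thresholds are illustrative assumptions, not prescriptive values.

```python
# Minimal sketch of a pre-deployment regression gate, assuming a held-out
# benchmark set with known labels. Thresholds and names are illustrative.
from sklearn.metrics import f1_score

MIN_F1 = 0.90            # hypothetical minimum acceptable score
MAX_ALLOWED_DROP = 0.02  # candidate may not be worse than current by more than this

def regression_gate(current_model, candidate_model, X_benchmark, y_benchmark) -> bool:
    """Return True only if the candidate model is safe to promote to production."""
    current_f1 = f1_score(y_benchmark, current_model.predict(X_benchmark))
    candidate_f1 = f1_score(y_benchmark, candidate_model.predict(X_benchmark))

    passes_floor = candidate_f1 >= MIN_F1
    no_regression = (current_f1 - candidate_f1) <= MAX_ALLOWED_DROP
    return passes_floor and no_regression
```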
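And to illustrate traceability, here is a minimal per-prediction logging sketch. It assumes records are written to an append-only audit log; the field names and hashing choice are illustrative assumptions, not a specific tool’s schema.

```python
# Minimal sketch of per-prediction traceability. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(model_version: str, training_cutoff: str, features: dict, output) -> dict:
    """Record everything needed to reconstruct a decision later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,            # e.g. "4.2"
        "training_data_cutoff": training_cutoff,   # e.g. "2024-09-30"
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),                             # tamper-evident fingerprint of the input
        "input": features,
        "output": output,
    }
    print(json.dumps(record))  # in practice, write to an append-only audit log
    return record

log_prediction("4.2", "2024-09-30", {"amount": 1250.0, "country": "US"}, "flagged")
```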
Further, asking the right questions is crucial since change management varies by process. For instance, some production models update automatically; in such cases, it is vital to ensure strict human oversight, such as ongoing performance monitoring (sketched below), to prevent issues.
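As a sketch of what such performance monitoring could look like, the following assumes confirmed outcomes eventually arrive to compare against (e.g., confirmed fraud cases); the precision threshold and the escalation step are illustrative assumptions.

```python
# Minimal sketch of ongoing performance monitoring for an auto-updating model.
ALERT_PRECISION_FLOOR = 0.80  # hypothetical minimum acceptable precision

def monitor_batch(predictions: list[bool], confirmed_outcomes: list[bool]) -> None:
    """Compare recent model alerts against confirmed outcomes and escalate drops."""
    true_positives = sum(p and o for p, o in zip(predictions, confirmed_outcomes))
    flagged = sum(predictions)
    precision = true_positives / flagged if flagged else 1.0

    if precision < ALERT_PRECISION_FLOOR:
        # In practice: page the model owner, open a ticket, pause auto-updates.
        print(f"ALERT: precision fell to {precision:.2f}; human review required")

monitor_batch(
    predictions=[True, True, False, True, False],
    confirmed_outcomes=[True, False, False, False, False],
)
```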
Auditors Play a Key Role
AI adoption is growing rapidly, and so are the associated risks. As auditors, we must ensure change management for AI is treated with the same rigor as traditional application change management, if not more. By asking the right questions and focusing on transparency, governance, and monitoring, auditors can play a key role in safeguarding the integrity of AI-driven systems.