



With AI regulations increasingly emerging around the world, it can be difficult to keep up with your organization's compliance obligations. Last week at RSA Conference 2025 in San Francisco, California, ISACA members Mary Carmichael and Dooshima Dabo'Adzuana shared their insights in a session on third-party AI risk. Carmichael and Dabo'Adzuana broke down the many factors at play when choosing an AI vendor and shared steps for identifying risks that may be hiding in a supplier relationship.
Traditional risk management models are no longer sufficient for identifying third-party AI risk, Dabo'Adzuana explained. “It’s still about the same goals, but AI introduces new risks: biases, hallucinations, model drift, and changes in the supply chain,” she said. “When third-party AI tools are introduced, you are extending your risk exposure deep in the supply chain. Risk is no longer only upstream; it cascades downstream as well and can impact clients, regulators, and national infrastructure, depending on your sector.”
On the surface it may look like you have one product and therefore one vendor, but there are typically multiple parties behind the scenes providing capabilities or data.
“One of the misconceptions I hear about risk management is that it is a fixed process,” said Carmichael. She explained that a risk management framework can be tailored based on several factors, such as business criticality, impact on individuals, data sensitivity, AI system complexity, regulatory requirements, and operational maturity. “In order for a tailored AI risk management framework to work, you need to have a risk culture where there is executive leadership, and the ability to ask questions and refine them over time,” she added.
During the session, Carmichael and Dabo'Adzuana shared a six-step checklist for third-party AI risk management, with key questions to ask during each step that you can tailor to your organization.
- AI Strategy and Needs Identification
What business problem are you solving with AI? Should you build in-house or buy from a vendor? What risks are you willing to accept? These are a few of the questions you should be asking yourself as you define your goals and risk appetite.
- Planning and Supplier Identification
What potential AI suppliers fit your requirements? How critical is this supplier to your operations? What is the supplier’s reputation and history with AI? Complete a scan of the market, shortlist potential suppliers, and sort them by level of criticality, from low-impact to mission-critical.
- Due Diligence and Risk Decision
This step can be broken down into three parts:
- Prioritization: What are the big risks you want to focus on for this use case?
- Proportionality: The greater the system’s potential impact, the more extensive your due diligence activities should be.
- Preparation: The higher the risk, the greater the due diligence (such as the questions you ask and the materials you request from the vendor).
- Contract Management and Onboarding
Based on due diligence and information received from the vendor, a risk decision will be made. Risk-based contracting allows businesses to structure agreements based on the allocation of risk and rewards between parties.
At this step, ask yourself: What controls do we need to add to the contract? Are risk and compliance requirements clearly defined in the contract? Do we have audit and monitoring rights?
- Monitoring, Auditing and Awareness
Carmichael noted that organizations sometimes assume that once a chatbot is ready to use, they no longer need to monitor it. In reality, the model must be audited on an ongoing basis to confirm it is performing as expected and to detect model drift.
In this step, ask: Are there areas where we can strengthen the third-party AI solution? Who is responsible for the contract? Are regular audits and reassessments conducted?
- Offboarding
When you are leaving a vendor, you must ensure you are securely terminating the supplier relationship and eliminating residual risks. How will you revoke supplier access to systems and data? Is there a process for secure data return or destruction? Do you need to replace the supplier or transition to a new solution?
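The proportionality idea running through the checklist above, that due diligence should scale with a supplier's criticality, can be sketched in code. This is a minimal, hypothetical illustration; the tier names, scoring thresholds, and activity lists are invented for the example and were not part of the session.

```python
from dataclasses import dataclass

# Hypothetical mapping from criticality tier to due-diligence activities.
# Tier names and activity lists are illustrative only, not from the session.
DUE_DILIGENCE = {
    "low": ["standard security questionnaire"],
    "medium": ["standard security questionnaire",
               "AI model documentation review"],
    "mission-critical": ["standard security questionnaire",
                         "AI model documentation review",
                         "bias and model-drift testing evidence",
                         "audit and monitoring rights in contract"],
}

@dataclass
class Supplier:
    name: str
    data_sensitivity: int  # 1 (public data) .. 5 (highly sensitive data)
    business_impact: int   # 1 (minor) .. 5 (operations stop without it)

def criticality(s: Supplier) -> str:
    """Map a supplier's risk factors to a criticality tier (illustrative thresholds)."""
    score = s.data_sensitivity + s.business_impact
    if score >= 8:
        return "mission-critical"
    if score >= 5:
        return "medium"
    return "low"

def required_due_diligence(s: Supplier) -> list[str]:
    """Proportionality: higher criticality triggers more due-diligence activities."""
    return DUE_DILIGENCE[criticality(s)]

if __name__ == "__main__":
    chatbot_vendor = Supplier("ExampleAI", data_sensitivity=4, business_impact=5)
    print(criticality(chatbot_vendor))  # prints "mission-critical"
    for activity in required_due_diligence(chatbot_vendor):
        print("-", activity)
```

A real implementation would draw its scoring factors and thresholds from the organization's own tailored risk framework, per Carmichael's point that risk management is not a fixed process.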
Practices, Frameworks, and Standards to Help Manage Third-Party Risk
Dabo'Adzuana shared a few resources that may be helpful when conducting AI governance, including COBIT for AI Governance, the NIST AI Risk Management Framework, and ISO 42001. Both speakers added that for those seeking to stay up to date with AI, ISACA also offers a range of courses and will soon release two new credentials: ISACA Advanced in AI Audit (AAIA) and Advanced in AI Security Management (AAISM).
Carmichael noted, “This is an iterative process—once AI solutions are deployed, there will always be changes.” However, she and Dabo'Adzuana emphasized that the cost benefits of adopting AI from external vendors can be worthwhile, as long as professionals ensure their third-party risk management practices also evolve with the constantly changing AI landscape.