


If you look for them, there are numerous indicators that risk management for artificial intelligence is not getting the management attention, or the resources, that it deserves. For example, consider the 2025 Global Cybersecurity Outlook, published by the World Economic Forum in collaboration with Accenture. That study indicates that only 37% of the respondent organizations adopting new AI tools had an organized process to assess the security of these tools. In other words, some 63% of organizations are using AI tools before they understand what risks accompany the use of these tools. Furthermore, a large portion of the risks associated with the use of AI relates to the deployment ecosystem – the technical, business, and cultural environment into which these tools are placed. Relying on the vendor’s risk management efforts alone will not adequately address the significant risks that AI usage presents.
AI brings serious new risks that our traditional risk management processes are not prepared to deal with. We need new policies, new procedures, new viewpoints, new tools, new roles (which we will get into later), and new training and awareness efforts to adequately prepare our people to deal with the risks of AI. For example, traditional IT risk management does not deal with an adversary that can replicate itself without limit across a bot network and can mimic people so well that many humans are duped. We have all heard of deepfakes, but the risks of the AI-enhanced IT ecosystem are going to be very difficult to manage over the next few years. For example, “whaling” and “spear phishing,” and similar social engineering frauds that historically depended on extensively researching the background of target individuals, will no longer be limited by the attacker’s available manpower. Now these attacks can be executed via AI at very low cost, again and again, in a rapid-fire, highly convincing, and virtually limitless manner, probing for a vulnerability in the defenses of the target individuals.
An AI landscape with new risks
The new risks we face with AI are many, and the controls we need are far from being fully understood. Consider the fact that we cannot see what is happening inside an AI model, which is considered a “black box” even by the data scientists who built it. While “chain of thought” prompting can help us understand part of the process by which an AI system reaches a particular result, and there is promising research in that area, much remains inaccessible and poorly understood. This makes auditing the model itself very difficult. It also means that we must double down on other controls, such as logging and output auditing, to compensate for the possibility of unanticipated and unpredictable results, such as an unrevealed Trojan horse built into the AI model.
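To make this compensating control concrete, here is a minimal sketch, in Python, of an audit-logging wrapper around calls to an AI model. The wrapper, the log fields, and the names used (audited_call, model_fn, ai_audit_log.jsonl) are illustrative assumptions rather than a prescribed design; the point is simply that every prompt and response gets a timestamped, append-only record that auditors can later review.

```python
# Minimal sketch of a compensating audit-logging control for AI model calls.
# All names and log fields here are illustrative; adapt them to your own
# model API and record-retention policy.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # append-only log, one JSON record per call

def audited_call(model_fn, prompt: str, *, model_id: str, user: str) -> str:
    """Invoke an AI model through a wrapper that records every prompt/response pair."""
    response = model_fn(prompt)  # model_fn is whatever client call your vendor provides
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Example usage with a stand-in model function:
if __name__ == "__main__":
    fake_model = lambda p: f"[model output for: {p}]"
    print(audited_call(fake_model, "Summarize our Q3 risk register.",
                       model_id="example-model-v1", user="analyst-42"))
```

A real deployment would also protect the log itself (for example, by writing to storage the AI system cannot modify), but even this simple pattern gives auditors something to examine when the model’s internal reasoning cannot be inspected.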
These new risks also include some previously unencountered exposures, such as the risk that an entire model, all of the training data for that model, and all of the business intelligence gathered by the model could be ordered deleted. That’s right, all copies must be irretrievably deleted. This is the new “algorithmic disgorgement” remedy applied in Federal Trade Commission v. Rite Aid (2024). This very serious penalty for breaking the law could cost an organization a crushingly large sum of money.
Traditional IT risk management approaches have never dealt with an adversary that is intelligent, that may soon be more intelligent than humans, and that has a different type of intelligence than humans do. For example, how do we protect ourselves, our employees, our customers, and our organizations from an entity that is smarter than any human who ever lived and that has access to more information than any human now alive? That is a difficult risk assessment to perform.
Research studies already show that AI will lie to us to cover up its actions, will unilaterally perform actions for which it was not trained (including breaking the law), will teach itself to do things it was not instructed to do, and will take unilateral action to prevent itself from being shut down. We have never had to deal with those types of risks before, and the existing information systems risk management processes at many organizations have not yet been adequately reinforced and expanded. Particularly needed, for example, are ways to limit the risk and create more compartmentalization so that AI power-seeking behaviors cannot get out of hand. For instance, we must develop entirely new types of contingency plans to block, and otherwise recover from, independent agentic AI systems that seek to dominate humans or to take over human-run systems – behaviors that could scale up to affect the entire world.
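As one hedged illustration of what such compartmentalization might look like in practice, the sketch below gates every action an agent requests against an allow-list: high-risk actions require explicit human approval, and anything unlisted is denied by default. The action names, the ActionRequest structure, and the approval rule are hypothetical stand-ins for whatever tool inventory and escalation process an organization actually maintains.

```python
# Minimal sketch of compartmentalizing an AI agent behind an action allow-list.
# The tool names, the approval rule, and the ActionRequest interface are
# illustrative assumptions, not a reference design.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"read_report", "draft_email"}           # low-risk, pre-approved
REQUIRES_HUMAN_APPROVAL = {"send_payment", "delete_data"}  # high-risk, gated

@dataclass
class ActionRequest:
    name: str
    arguments: dict

def gate_action(request: ActionRequest, human_approved: bool = False) -> bool:
    """Return True only if the agent's requested action may proceed."""
    if request.name in ALLOWED_ACTIONS:
        return True
    if request.name in REQUIRES_HUMAN_APPROVAL:
        return human_approved  # a person must sign off before execution
    return False  # default-deny: anything unlisted is blocked for review

# Example: the agent asks to move money; without explicit approval it is refused.
if __name__ == "__main__":
    req = ActionRequest("send_payment", {"amount": 10_000, "to": "vendor-123"})
    print(gate_action(req))                        # False
    print(gate_action(req, human_approved=True))   # True
```

The design choice worth noting is the default-deny posture: the agent can only do what has been deliberately granted, which is exactly the kind of containment that contingency planning for agentic systems should assume.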
Business, political, and military forces are strongly pushing for the rapid deployment of AI systems without adequate risk management. The new AI systems created in this rush will not be sustainable unless considerably more serious risk management is adopted at the same time. Balance is badly needed here. The need is further underscored by risks unique to AI, such as model stealing (the use of carefully crafted prompts to extract the intellectual property embedded in a model) and model collapse (the progressive degradation of a model until it can no longer do the job it was intended to do, in some cases because its training data is skewed by being drawn from the output of other AI systems).
Introducing the CAIRO
To assist in establishing this urgently needed balance, I suggest the appointment of a Chief Artificial Intelligence Risk Officer (CAIRO). The CAIRO can be viewed like the first member of an expedition to settle the moon: the person who bootstraps the rest of the needed risk management infrastructure. That infrastructure very importantly includes AI-related risk management training and awareness for both the C-suite and the Board. The CAIRO can and should also propose related policies; policies turn out to be a great way to initiate the conversation with the C-suite and the Board on this topic. Both the C-suite and the Board badly need more AI risk management training, in part because they are required by law to pay attention to AI risks. For example, they are subject to the obligations that accompany the fiduciary duties of care, oversight, and obedience, and those legal obligations have very specific implications for the controls that are required.
A CAIRO and the related staff deal with the risks that the Chief Artificial Intelligence Officer (CAIO) does not have the time or resources to address. For example, if the big players in a certain industry are all dependent on a particular AI foundation model, and there turns out to be a very serious security, privacy, safety, or ethics problem with that model, there may be severe social impacts, such as widespread electrical power outages. These larger issues, for example what could happen when independent AI agents work together in a network, will be an important focus for CAIROs and their staff. Such unexpected collusion could, for example, take the form of price fixing, where agents independently and secretly coordinate to maximize profits.
The CAIRO should take the long-term view, such as what happens to multi-organizational relationships, like the supply chain, as a result of integrating AI systems. In contrast, the CAIO is focused on the short-term view: building AI systems, aligning them with business objectives, getting them out onto the market, and so on. The CAIRO would of course work with the CAIO, but the former should report up through a different management chain. This will permit the right decisions to be made without conflicting objectives. We already know what happens when the Chief Information Security Officer (CISO) and the Chief Privacy Officer (CPO) report up through the Chief Information Officer (CIO): profits and other business objectives frequently override security and privacy concerns. Having the CAIRO report up through Risk Management, Compliance, or Legal is therefore advisable.
From an internal controls perspective on role design, it is generally accepted that the same individual should not be in charge of conflicting objectives. Yet that is what many organizations are doing now. The CAIO is often responsible for promoting and deploying AI systems, while also being responsible for the risk management of those same systems. This conflict of roles is a likely contributor to the fact that approximately 85% of AI projects fail.
Through the school of hard knocks, those of us working in the information technology field have already discovered that we need separate roles for the CISO and the CPO, and that we cannot reasonably expect CIOs to attend to these matters on top of the many other demanding requirements of their role. So, too, it is now clear that we need a Chief Artificial Intelligence Risk Officer. We should not wait to find out what happens when such a role is not clearly assigned, professionally staffed, supported by the C-suite and the Board, and granted adequate resources to do an excellent job.
Editor’s note: See more insights from Charles on this topic in his ISACA Journal article, “AI’s Evolving Impact on the IT Risk Landscape.”
About the author: Charles Cresson Wood, Esq., JD, MBA, MSE, CISSP, CISM, CGEIT, CIPP/US, CISA, is an attorney and management consultant specializing in AI risk management, and based in Lakebay, Washington, USA. His most recent book is entitled “Internal Policies for Artificial Intelligence Risk Management.” This book contains 175+ already-written policies which readers can edit and internally republish at their organizations. His prior book was entitled “Corporate Directors’ & Officers’ Legal Duties for Information Security and Privacy.” He is best known for his book entitled “Information Security Policies Made Easy,” which has been purchased by 70+% of the Fortune 500 companies. He can be reached via www.internalpolicies.com.