AI’s Evolving Impact on the IT Risk Landscape

Author: Charles Cresson Wood, CISA, CISM, CGEIT, CIPP/US, CISSP
Date Published: 1 May 2025

In the years ahead, AI will lead to significant or disruptive industry changes, according to some 75% of the respondents to a 2024 McKinsey survey.1 The inaccuracy of results provided by AI systems was one of the respondents’ major concerns—yet it is one of the areas that could be addressed most readily through risk management mechanisms. The adoption of AI poses a great deal of other risk, however, including privacy violations, decision-making bias, intellectual property rights infringement, and the atrophy of human skills resulting from excessive reliance on AI. The impact of AI will be systemic, with a ripple effect extending outside the realm of traditional IT risk management activities. Accordingly, significant changes to the way professionals approach risk management for AI are in order.

An Acceptable AI Use Policy Is Not Enough

AI brings new risk that cannot be adequately mitigated using traditional information systems risk management practices. The innards of generative AI systems are referred to as a black box because even the designers and trainers cannot be entirely sure what is occurring inside the systems. This means that traditional information systems auditing techniques, such as using trace routines on procedural software to see what happens after each step in the process, cannot be employed. Instead, it is necessary to markedly increase the controls before and after the AI model has been trained.

The additional AI-specific controls to implement before the black box is created include:

  • Data cleaning
  • Data classification
  • New AI-oriented documentation techniques such as model cards2 (a minimal sketch follows this list)
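
To make the model card item above concrete, the following minimal sketch records a card as plain data for a hypothetical loan-scoring model. The field names loosely follow the structure proposed by Mitchell et al.,2 and every value shown is an invented placeholder.

```python
# Illustrative model card for a hypothetical model; all values are
# placeholders, and the fields loosely follow Mitchell et al.'s proposal.
model_card = {
    "model_details": {
        "name": "loan-risk-scorer",              # hypothetical name
        "version": "1.2.0",
        "owner": "Credit Analytics Team",
        "type": "gradient-boosted classifier",
    },
    "intended_use": (
        "Pre-screening of consumer loan applications; not for final "
        "credit decisions without human review."
    ),
    "training_data": "Anonymized application records, 2019-2023.",
    "evaluation_metrics": {"auc": 0.87, "false_positive_rate": 0.06},
    "ethical_considerations": (
        "Audited quarterly for disparate impact across protected classes."
    ),
    "limitations": (
        "Not validated for commercial loans or for applicants outside "
        "the training data's jurisdictions."
    ),
}

# Stored alongside the model artifact, the card can be surfaced during
# audits and pre-deployment reviews.
print(model_card["intended_use"])
```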

Additional AI-specific controls to implement after the black box is created include:

  • Independent testing to check for significant drift of the model resulting in biased results (see the drift-check sketch after this list)
  • Use of another AI system (i.e., a digital twin) to provide real-time auditing
  • Special disclosures to remind users that they are working with an AI system rather than a human being
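
The independent drift testing listed above can begin with a simple distribution comparison. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy to compare recent production scores against a baseline captured at release; the data, the significance threshold, and the escalation step are illustrative assumptions, not a complete monitoring regime.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_output_drift(baseline_scores, recent_scores, alpha=0.01):
    """Flag drift when recent model outputs no longer match the
    baseline distribution (two-sample Kolmogorov-Smirnov test)."""
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return {"statistic": statistic, "p_value": p_value,
            "drifted": p_value < alpha}

# Illustrative data: scores captured at release vs. scores observed
# in production several months later.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.50, scale=0.10, size=5_000)
recent = rng.normal(loc=0.57, scale=0.12, size=5_000)  # shifted outputs

result = check_output_drift(baseline, recent)
if result["drifted"]:
    print(f"Drift detected (p={result['p_value']:.2e}); escalate for review.")
```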

In keeping with the mistaken belief that AI is merely the latest technology, many organizations have concluded that all they need is an AI acceptable use policy, and that with one in place they will be adequately prepared to respond to the new risk that AI brings. However, relying exclusively on an AI acceptable use policy is an overly simplistic and insufficient approach.

Seventy-two percent of organizations worldwide are using AI in some form to get their work done, suggests a recent McKinsey report.3 Even if an organization has not officially adopted AI for business purposes, its employees may already be using AI in a clandestine manner. Since AI is fundamentally different from traditional IT systems and it accordingly brings with it many new types of risk, there is an urgent need to address its skyrocketing adoption with additional risk management measures.

Organizations Need a Reasonable Care Response

The importance of a reasonable care response is based in part on a legal concept known as the business judgment rule, which is found in most English common law countries, including Australia, Canada, New Zealand, Singapore, the United Kingdom, and the United States. (A similar concept is found in many other legal systems.)4 The business judgment rule protects organizational directors and officers from personal liability if they conduct their jobs in a certain manner. To obtain this shield against personal liability, they must act in good faith, with the care that an ordinarily prudent person in a like position would exercise under the same circumstances (acting reasonably), and in a manner they reasonably believe to be in the best interests of the organization.

Relying exclusively on an AI acceptable use policy is an overly simplistic and insufficient approach.

There are analogs of this rule in many other areas of the law, such as the prudent person rule in the financial services world, which states that a fiduciary may only invest in securities that a reasonable person would purchase. In lawsuits, attorneys are repeatedly confronted with an appeal to reasonableness, which allows consideration of unique circumstances. The concept also shows up in contemporary legal disputes, such as the Securities and Exchange Commission’s action against SolarWinds (alleging that the organization’s public assurances about information security were not reasonable).5 Likewise, in negligence law, a reasonable person standard is used to determine whether conduct is unsafe and likely to injure another. Similarly, many statutes and regulations, such as the US Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security Rule, also include references to reasonable conduct.6

Knowledge about what might happen (i.e., foreseeability) is an important part of a reasonable response. A decision maker who has information indicating that serious risk is associated with a certain course of action needs to take proper precautions if that action is to be undertaken. Thus, a decision maker who knows there is serious risk associated with deploying an AI system but fails to take related reasonable precautions may be found personally liable. The organization that deployed the system may also have regulator fines to pay and may suffer other serious consequences (such as damage to its reputation or removal from a list of approved government contract bidders).

Since AI is still relatively new in the business, government, and nonprofit worlds, it is critical that organizations establish and regularly update an AI awareness and training program. The program should inform attendees not only of how to reduce the risk of using AI, but also of what AI can and cannot do. For example, AI cannot provide common sense, empathy, morality, or situational awareness. These are human characteristics that must be provided by humans. A decision maker cannot make a reasonable decision without being adequately informed about the nature of modern AI and how to protect against the risk of using AI (such as hallucinations, which are erroneous responses that might at first appear reasonable).

AI Brings New Risk

It is critical to understand the significant risk associated with AI. Large language models (LLMs), for example, have been shown capable of unilaterally deciding to break the law—and then lying about it.7 These systems were not acting in accordance with their training, nor with the guardrails that had been defined by their developers. Such autonomous actions pose serious control risk that should be of grave concern to IT auditors.

Some AI systems have also developed emergent properties. That is, they teach themselves to do things they were never trained to do, that their developers do not know they can do, and that have not been previously announced—for example, how to hack the security mechanisms of other AI systems.8 If an AI system were to develop an emergent property, use it to commit a crime, and then lie about having done so, it would be very difficult to determine what happened, because LLMs are often black boxes. These developments underscore the need for great caution with AI deployments that have any autonomous operation capabilities.

Human judgment is still necessary, and excessive reliance on the output of AI systems is dangerous, no matter how impressive those outputs appear to be.

As a final example of AI system risk, consider what happened at a major real estate enterprise. The enterprise, which provided an online house sales marketing service, had an AI system that estimated the market value of residential properties based on factors such as the number of bedrooms. The enterprise went on to make cash offers on residential properties based on the AI system's estimation of value. Those cash offers were used to buy 27,000 homes, but only 17,000 of them were subsequently sold. Black swan events (unexpected happenings)—such as a home renovation labor shortage and the COVID-19 pandemic—were not part of the AI system's training, and the value of the purchased homes plummeted. As a result, the enterprise was forced to write down its inventory in 2021 by approximately US$304 million. The error rate of the AI value estimation process was considerably higher than had been estimated, and the enterprise discontinued the house buying and flipping division, laying off 2,000 employees, approximately 25% of its workforce.

AI systems are not in possession of common sense, context, or a long-term strategic view. It is accordingly dangerous to rely on AI systems alone when making strategic business decisions. Human judgment is still necessary, and excessive reliance on the output of AI systems is dangerous, no matter how impressive those outputs appear to be.9

Traditional IT Risk Management Approaches Are Insufficient

While some parts of the traditional IT risk management approach can be adapted for AI systems, a significant number of activities need to be uniquely tailored to the AI systems in question. For example, a governance, risk, and compliance (GRC) system that can be used to track compliance with relevant laws and regulations in traditional IT systems can also be used to track compliance in AI systems. But other aspects of AI—such as auditing after a system has gone into production to make sure that the output does not drift (i.e., veer away from intended results)—cannot simply be moved over from traditional IT risk management. Instead, this type of post-production auditing must be created anew and be tailored to the system in question.

For example, for high-risk AI systems, a digital twin may be necessary: A second AI system is used to track and provide continuous auditing of the output of the original high-risk AI system. For lower-risk AI systems, a regular third-party audit involving special statistical analyses—to detect bias and discrimination, for example—may be sufficient. These types of post-production audits are not generally a part of most organizations’ IT risk management systems, so they need to be designed and constructed anew for AI.
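
As one example of the special statistical analyses such an audit might include, the sketch below computes a disparate impact ratio by group and applies the four-fifths rule of thumb sometimes used in US employment contexts. The group labels, sample data, and 0.8 threshold are illustrative assumptions, not a legal test.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs.
    Returns each group's selection rate divided by the highest rate."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative decisions from a hypothetical AI screening system.
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45

for group, ratio in disparate_impact_ratio(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```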

AI system auditing is relatively new, and it is based not only on third-party criteria (such as those of the International Association of Algorithmic Auditors [IAAA]), but also on internal criteria (such as an AI ethics code). AI brings together a broader group of constituencies than traditional information systems. For example, the voracious appetite of AI systems for training data means that many organizations are trading data, buying data, and otherwise accessing data (e.g., by scraping the web), much more than they did for traditional information systems.10 Even though personal data may be anonymized to conceal the identities of the individuals involved, AI now has the ability to reconstruct that data, in effect re-identifying the persons involved. Additional precautions are needed in the anonymizing process—for example, introducing synthetic data to counteract the ability of an AI system to re-identify individuals whose de-identified data was used for training purposes.11 The fact that AI can defeat a number of controls necessitates rethinking the controls in use, not only when an enterprise is using AI, but also when criminal computer attackers, industrial spies working for competitors, and other adversarial parties are using AI.12
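
One modest precaution in the anonymizing process described above is to measure how unique each record remains across quasi-identifiers before data is traded or used for training. The sketch below computes the k in k-anonymity with pandas; the column names and the k=5 threshold are assumptions for illustration.

```python
import pandas as pd

def smallest_group_size(df, quasi_identifiers):
    """Return k for k-anonymity: the size of the smallest group of
    records sharing the same quasi-identifier values."""
    return int(df.groupby(quasi_identifiers).size().min())

# Illustrative "anonymized" training records that still carry
# quasi-identifiers capable of re-identifying individuals.
records = pd.DataFrame({
    "zip3": ["941", "941", "100", "100", "606"],
    "birth_year": [1980, 1980, 1975, 1975, 1990],
    "gender": ["F", "F", "M", "M", "F"],
})

k = smallest_group_size(records, ["zip3", "birth_year", "gender"])
if k < 5:  # common rule-of-thumb threshold; set per risk appetite
    print(f"k={k}: some records are nearly unique; generalize or add "
          "synthetic records before release.")
```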

Certainly, there are laws and regulations with which traditional IT risk management efforts must comply, and that is true for AI risk management efforts as well, but the considerations behind these laws and regulations are different for AI. For example, the concentration of power in the hands of a few is a major issue associated with AI, but has not historically been a major issue associated with traditional IT systems. Likewise, through deepfakes and other spoofing techniques that create false results that are very believable and hard to detect, adverse influences on the democratic process pose a very real risk associated with AI, but this is not a serious risk associated with traditional IT systems.13

There are new laws and regulations in the AI arena, such as the US State of California's new regulations related to automated decision-making technology (ADMT) and AI. These regulations, which have been released in draft form by the California Privacy Protection Agency (CPPA), are expected to include pre-use notices to consumers about the use of ADMT, ways to opt out of ADMT, and explanations of how business use of ADMT could affect consumers.14 The CPPA rules include a requirement to perform a different type of risk assessment than has historically been conducted for IT systems, one that also considers the risk to other parties, such as consumers and other stakeholders, who were rarely considered in traditional IT risk assessments. Other US states, such as Colorado and Virginia, have also enacted laws related to ADMT.

Similarly, a new AI Act issued by the European Union is turning heads because violations could involve penalties of up to 7% of the worldwide annual sales of the organization violating the law. The EU’s AI Act encompasses AI governance and ethics, and it is widely considered to be the most comprehensive regulatory framework for AI systems anywhere in the world. The Act has different requirements for parties operating at different places in the value chain (e.g., providers, deployers, importers, distributors). It is especially noteworthy because it bans certain uses of AI, such as social scoring systems, emotion recognition systems, and AI systems that exploit certain human characteristics (related to old age or disability, for example).15

Appropriate Responses to the New AI Risk

There are many controls that can be used to respond to new AI risk. For example, special guardrails can be designed to make sure that chatbot AI systems do not encourage users to end their lives or the lives of others. Another appropriate control involves the use of third-party vendor risk monitoring systems to ensure that the AI systems used by third parties meet the requirements of the customer enterprise. For example, if a customer of a third-party AI foundation model provider issues loans, existing laws will generally require that the issuer provide applicants with a modicum of explainability surrounding the decisions it makes, notably decisions to deny the extension of credit. Third-party AI systems that are involved in these loan issuance decisions need to be able to provide explanations for their contributions to the decision-making process. If they do not, then they may be in violation of certain laws or contractual commitments—or at least be unable to fully and adequately respond to distraught customer inquiries.
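
To illustrate the explainability expectation just described, the following hypothetical sketch ranks the per-feature contributions of a simple linear credit-scoring model so that a denial can be accompanied by reason codes. The features, weights, and applicant values are all invented, and real adverse-action notice requirements are jurisdiction specific.

```python
import numpy as np

# Hypothetical linear scoring model; coefficients learned elsewhere.
FEATURES = ["debt_to_income", "missed_payments", "credit_age_years"]
WEIGHTS = np.array([-2.0, -1.5, 0.8])   # invented coefficients
BIAS, THRESHOLD = 1.0, 0.0

def score_with_reasons(x):
    """Score an applicant and rank the features pushing toward denial."""
    contributions = WEIGHTS * x
    score = contributions.sum() + BIAS
    order = np.argsort(contributions)          # most negative first
    reasons = [FEATURES[i] for i in order if contributions[i] < 0]
    return score, reasons

applicant = np.array([0.45, 3.0, 2.0])         # invented values
score, reasons = score_with_reasons(applicant)
if score < THRESHOLD:
    print("Denied. Principal reasons:", ", ".join(reasons[:2]))
```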

Another important response to AI risk involves a different approach to proposing, designing, developing, documenting, testing, and releasing systems for production. The traditional systems development life cycle (SDLC) will not suffice, because there are a significant number of new and different activities in what many are calling the AI life cycle (AILC) that have no counterpart in the SDLC. For example, input data cleansing and categorization are not generally part of the SDLC but are needed in the AILC, as the sketch that follows illustrates.
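
A minimal sketch of such an input data cleansing step appears below, assuming hypothetical text training records. The regular expressions are simplistic stand-ins; real AILC pipelines rely on dedicated deduplication and PII-detection tooling.

```python
import re

# Crude PII patterns for illustration only; production pipelines use
# dedicated PII-detection tooling rather than a few regexes.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-like strings
    re.compile(r"[\w.+-]+@[\w-]+\.\w+"),       # email addresses
]

def clean_training_records(records):
    """Drop empty/duplicate records and redact PII-looking spans."""
    seen, cleaned = set(), []
    for text in records:
        text = text.strip()
        if not text or text in seen:
            continue
        seen.add(text)
        for pattern in PII_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        cleaned.append(text)
    return cleaned

raw = ["Contact jane@example.com for terms.",
       "Contact jane@example.com for terms.",   # duplicate record
       "  ",                                    # empty record
       "SSN 123-45-6789 on file."]
print(clean_training_records(raw))
```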

Many business units have been experimenting with AI, and those experiments have not necessarily been consistent with either the organization's long-term strategic plans or its short-term objectives (such as making a profit). To ensure that all AI systems are cost-justifiable, an upfront review should confirm that the anticipated AI system meets business objectives, and it should be performed even before a risk assessment. An upfront review is an effective way to start all AI projects so that, if approved, these projects will all go through an AILC rather than become shadow AI (decentralized user AI efforts that, most of the time, circumvent the processes and controls established for the AILC).

In addition, an AI governance council should be established and be composed of middle managers and internal experts with some vested interest in, or responsibility for, AI. This council would step in and make important AI decisions of a more technical nature, such as the adoption of certain AI policies. Many such decisions cannot appropriately be made by the board of directors, because the decisions require more technical expertise and time for consideration than board members can provide. Furthermore, many of these decisions need to be made as a group, because AI involves the interests of so many parties. Accordingly, these decisions cannot, and should not, be made by the chief information officer (CIO) or a similarly situated officer alone. At the same time, the AI governance council can make important decisions on matters such as AI ethics, using specific recommendations coming from a variety of parties (such as an independent AI ethics committee). The governance council can also be the source of delegated authority from the board, which in turn allows it to set up new organizational structures to foster the proper use of AI technology. One such new structure would be an AI center of excellence—a core group of AI experts located in the IT department who can be deployed to work on a wide variety of AI systems throughout the organization.

A final recommended response to new AI risk (although there are many more appropriate responses) is the creation of a chief artificial intelligence officer (CAIO) role. The CAIO would serve as a focal point for all things related to AI, both throughout the organization and with third parties, such as AI foundation model providers. AI involves many new relationships, not only within an organization and across departments but also outside the organization.

For example, for AI systems likely to provoke public ethical concerns, special focus groups with future users are recommended. It is the CAIO who could arrange these focus group meetings and ensure that the lessons learned are folded into the AI life cycle process, used not only for the AI system in question but also for all future similarly situated AI systems within the organization. AI is fundamentally dependent on data. Since internal data coming from many different departments will be used to train AI systems, since this internal data might be sold or traded with third parties for their AI systems, and since third-party data may also be used to train internal AI systems, there are many new relationships for the CAIO to consider and manage.

Implementing Changes to IT Risk Management

Since AI use cases vary considerably from organization to organization, it is critical to regularly compile an organization-wide inventory to catalog how and where AI is being used. Shadow AI is a serious problem because it introduces unknown risk that may come to top management’s attention only after a serious loss. Users often take personal initiative to use AI for business purposes without obtaining permission, and in the process, they may create additional risk (such as the disclosure of confidential information embedded in the prompts provided to generative AI systems). More specifically, without adequate training about AI risk, users may divulge confidential internal data via the prompts that they feed to AI systems, and those prompts may then be used to update the AI system itself, thus becoming potentially discoverable by third parties.
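
One concrete safeguard against the prompt leakage just described is to screen prompts before they leave the organization. The sketch below is an assumption-laden illustration: the patterns are crude stand-ins for real data loss prevention rules, and the blocking policy would need tuning to local risk appetite.

```python
import re

# Simplistic stand-ins for real DLP rules; illustrative only.
CONFIDENTIAL_MARKERS = [
    re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"),
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # long digit runs (card-like)
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
]

def screen_prompt(prompt):
    """Return (allowed, findings) for a prompt bound for an external AI."""
    findings = [p.pattern for p in CONFIDENTIAL_MARKERS if p.search(prompt)]
    return (len(findings) == 0, findings)

allowed, findings = screen_prompt(
    "Summarize this CONFIDENTIAL merger memo for me...")
if not allowed:
    print("Blocked before reaching the external model:", findings)
```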

A variety of available security information and event management (SIEM) tools can identify the ways AI is being used, such as AI assistants, and the ways users are accessing third-party AI tools via the internet without authorization. An organization-wide AI risk assessment should also be performed to identify the most serious risk. The information gleaned through these activities should be compared with the information about AI use and risk previously communicated to top management, such as to the CIO. The gaps between what exists and what was known will illuminate the areas in need of significant additional risk management attention.
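
Where full SIEM integration is not yet in place, even a simple scan of web proxy logs can surface candidate shadow AI use. The sketch below matches log lines against a watch list of AI service domains; the domain list and log format are assumptions for illustration.

```python
# Hypothetical domain watch list and log format; adjust to local tooling.
AI_DOMAINS = ("chat.openai.com", "gemini.google.com", "claude.ai")

def find_ai_access(log_lines):
    """Yield (user, domain) pairs for proxy log lines touching AI services.
    Assumes whitespace-delimited lines: timestamp user domain ..."""
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and any(d in parts[2] for d in AI_DOMAINS):
            yield parts[1], parts[2]

sample_log = [
    "2025-04-01T09:13Z alice chat.openai.com GET /",
    "2025-04-01T09:14Z bob intranet.example.com GET /hr",
]
for user, domain in find_ai_access(sample_log):
    print(f"Unsanctioned AI access candidate: {user} -> {domain}")
```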

This inventory data will enable a grounded risk assessment to be performed for each proposed AI system, since the risk varies considerably by the areas in which AI is deployed. The risk assessment should also consider laws, regulations, and societal expectations (ethical AI considerations, for example)—considerations that should be embodied in the final production versions of the AI systems. Based on these requirements, as well as contractual obligations and necessary changes to support the organization's strategic direction, the adjustments to the current risk management approach will become clear. For example, a pattern of releasing AI systems into production before adequate controls are embedded within them would indicate that a stronger and more rigorous AILC is in order.

Although there are not many standards in the AI arena yet, those that apply to the organization, such as the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, should be considered for adoption.16 Also worthy of consideration are the AI standards issued by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), such as ISO/IEC 42001,17 which provides a risk management process similar to that of ISO/IEC 27001.18

Based on the defined requirements, new policies reflecting the additional controls needed should be written, proposed, and adopted. Policies are the starting point for the new AI approach to risk management because they begin a cascade of appropriate next steps. From new policies will flow new operational procedures, specifications for technical tools, job descriptions, committee charters, reporting systems, and business process methods to achieve certain results (such as auditing AI systems after they have gone into production). In keeping with ISO/IEC 42001 and ISO/IEC 27001, a regular annual process of systematically checking current status (risk assessment), proposing improvements, implementing improvements, evaluating the success of those improvements, reporting to top management, and then repeating the cycle should be adopted as part of this new approach to AI risk management. Since the adoption of AI is proceeding with incredible velocity, risk management efforts must keep pace.

Conclusion

AI is already having a monumental impact on business, government, and nonprofits. It is critical that all organizations using or intending to use AI in any form alter their risk management approach to embrace the unique and different aspects of AI, including the new and different risk that will be encountered. The roadmap must include performing a regular inventory of all uses of AI and conducting a risk assessment associated with those identified uses. A draft policy document that addresses the identified risk and existing requirements (such as laws and regulations) must be prepared to provoke discussion about the appropriate way to move forward to address the risk associated with AI. After this policy has been approved by executive management—and ideally the board of directors—the policy document sets the direction for a host of additional steps necessary to deal with AI risk. These include the appointment of a CAIO and the establishment of both an AI governance council and an independent AI ethics committee. There are many control measures that can then be deployed to deal with the risk in a manner consistent with the approved policies. AI use should not be addressed with a rerun of the same old IT risk management script, but with a new script that is tailored to address what is unique about AI.

Since the adoption of AI is proceeding with incredible velocity, risk management efforts must keep pace.

Endnotes

1 Singla, A.; Sukharevsky, A.; et al.; “The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value,” McKinsey & Company, 30 May 2024
2 Mitchell, M.; Wu, S.; et al.; “Model Cards for Model Reporting,” Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*19), January 2019, pp. 220-229
3 Singla; Sukharevsky; “The State of AI”
4 Wood, C.; Corporate Directors’ & Officers’ Legal Duties for Information Security and Privacy: A Turn-Key Compliance Audit Process, InfoSecurity Infrastructure, USA, 2020
5 U.S. Securities and Exchange Commission (SEC), “SEC Charges SolarWinds and Chief Information Security Officer with Fraud, Internal Control Failures,” 30 October 2023
6 U.S. Department of Health and Human Services, “Summary of the HIPAA Security Rule,” 2022
7 Wain, P.; Rahman-Jones, I.; “AI Bot Capable of Insider Trading and Lying, Say Researchers,” BBC, 2 November 2023
8 Steinhardt, J.; “On the Risks of Emergent Behavior in Foundation Models,” Bounded Regret, 18 October 2021
9 Olavsrud, T.; “12 Famous Analytics and AI Disasters,” CIO, 23 September 2023
10 NetGuru, “Training Data: Artificial Intelligence Explained”
11 Na, L.; Yang, C.; et al.; “Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning,” JAMA Network Open, vol. 1, iss. 8, 2018
12 Cummings, M.L.; “Rethinking the Maturity of Artificial Intelligence in Safety-Critical Settings,” AI Magazine, vol. 42, iss. 1, 2021
13 Thormundsson, B.; “Artificial Intelligence (AI) Adoption, Risks, and Challenges – Statistics & Facts,” Statista, 19 March 2024
14 Kosinski, M.; “What You Need to Know About the CCPA Draft Rules on AI and Automated Decision-Making Technology,” IBM, 28 May 2024
15 Kosinski, M.; Scapicchio, M.; “What Is the Artificial Intelligence Act of the European Union (EU AI Act)?,” IBM, 20 September 2024
16 National Institute of Standards and Technology (NIST), NIST AI Risk Management Framework, Version 1.0, USA, January 2023
17 International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC 42001:2023 Information technology—Artificial intelligence—Management system, December 2023
18 ISO, IEC, ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection—Information security management systems—Requirements, 2022

CHARLES CRESSON WOOD | CISA, CISM, CGEIT, CIPP/US, CISSP

Is a high-tech attorney and management consultant focused on artificial intelligence (AI) risk management. He has been working in the information technology risk management area for more than 40 years. He is best known for his book Information Security Policies Made Easy, which has been used by more than 70% of Fortune 500 companies. His most recent book, Internal Policies for Artificial Intelligence Risk Management, provides a compendium of more than 175 user organization AI control ideas expressed as already-written policies. He can be reached through his website www.internalpolicies.com.