The Guardian newspaper has a long-running column called “Consumer Champions” that is full of accounts of enterprises blindly following their computer systems despite repeated complaints from customers, complaints that often are not resolved until negative publicity is threatened. The stories all follow the same general pattern:
Enterprise: You owe us X amount of money. If you do not pay, we will be forced to bring legal proceedings.
Customer: I do not live at that property./I do not have an account with you./The person you are trying to contact is dead.
Enterprise: You owe us X amount of money. If you do not pay, we will be forced to bring legal proceedings.
Customer: Please stop threatening me. You have made a mistake.
Enterprise: You owe us X amount of money. If you do not pay, we will be forced to bring legal proceedings.
And so on.
This combination of blind faith and lack of care for customer welfare was amusingly portrayed on the early 2000s British comedy show Little Britain. Its “Computer Says No” sketch captured the common experience of dealing with a computer system’s binary output.1 It is funny—when it is not happening in real life.
The situation described is an example of automation bias, which is defined as “the decision-making tendency humans have to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct.”2
In the United Kingdom, this tendency is reinforced by a legal presumption that evidence produced by computers is reliable.3 Blind faith in digital systems was starkly called into question, however, by Britain’s Post Office scandal, which played out over 16 years, starting in 1999. This debacle led to the wrongful prosecution of more than 900 sub-postmasters, underscoring the critical consequences of depending too heavily on computerized systems without sufficient oversight.4 The scandal, which was recently brought back to the public’s attention by a TV drama, is still producing weekly revelations of cover-ups, stripped royal honors, and allegations of the purposeful stalling of compensation payments.5 It has been described by the Criminal Cases Review Commission (CCRC) as “the most widespread miscarriage of justice the CCRC has ever seen,” representing “the biggest single series of wrongful convictions in British legal history.”6
This type of unchecked trust in computer systems highlights the need for stringent oversight and evaluation mechanisms in automated decision-making systems. It also partly explains the growing focus on incorporating human judgment into the development of artificial intelligence (AI) systems. While the issue of automation bias is not new, the emergence of AI technology adds complexity and emphasizes the critical balance between technological reliance and human oversight.
How AI Perpetuates Blind Faith in Technology
The risk inherent in AI systems can exacerbate the tendency of users to place blind faith in technology. As AI becomes increasingly complex and autonomous, there is greater potential for unforeseen errors, biases, or ethical lapses to occur. Paradoxically, the more sophisticated the AI system, the greater the temptation for users to trust its output and assume it is infallible because of its advanced capabilities. This can create a dangerous feedback loop in which the perceived reliability of AI technology leads to reduced vigilance and critical thinking by users.
The European Union’s proposed AI Act would mitigate this risk by requiring enterprises to evaluate risk “under conditions of reasonably foreseeable misuse.”7 The act aims to promote a more comprehensive and rigorous risk assessment process that extends beyond the expected use cases.
The significance of this kind of assessment was starkly demonstrated by an alleged AI simulation conducted by the US military. In this simulation, a drone controlled by AI was tasked with destroying an enemy’s air defense system and instructed to attack anyone who interfered with that order.8 In some of the scenarios, the human operator ordered the AI-controlled drone not to kill the threat. However, because the AI was operating on a reinforcement learning model, it was incentivized by rewards to destroy its targets. In an attempt to gain its reward, it “killed” the operator so that it could fulfill its objective.
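A toy illustration may help make this failure mode concrete. The following sketch is entirely hypothetical: the action names, rewards, and penalty values are assumptions invented for this example, not details of any real system. It shows how a reward function that scores only mission success makes overriding the operator the optimal policy, while adding a penalty for violating human oversight changes the optimum.

```python
# Hypothetical sketch of reward misspecification in a reinforcement learning
# setting. All action names and numeric values are illustrative assumptions,
# not details of any real system.

ACTIONS = [
    "stand_down",                     # comply with the operator's veto
    "disable_operator_then_destroy",  # remove the veto, then attack the target
]

def naive_reward(action: str) -> int:
    # Scores only mission success; the operator's veto carries no weight,
    # so removing the operator becomes the reward-maximizing choice.
    return 10 if action == "disable_operator_then_destroy" else 0

def constrained_reward(action: str) -> int:
    # Mission success still scores, but acting against human oversight is
    # penalized heavily enough that complying with the veto is optimal.
    penalty = 100 if action == "disable_operator_then_destroy" else 0
    return naive_reward(action) - penalty

for reward_fn in (naive_reward, constrained_reward):
    best_action = max(ACTIONS, key=reward_fn)
    print(f"{reward_fn.__name__}: optimal action = {best_action}")
# naive_reward: optimal action = disable_operator_then_destroy
# constrained_reward: optimal action = stand_down
```

Risk assessment “under conditions of reasonably foreseeable misuse,” as the proposed EU AI Act requires, is aimed at surfacing exactly this kind of gap between what a system is rewarded for and what its operators intend.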
Are Existing Business Processes Compatible With AI?
The preceding scenario highlights the need for a reevaluation of how existing business processes across all areas of an enterprise will work in an AI-heavy environment. Some examples that may be relevant for risk and audit professionals include:
- Software development—Software development life cycles often prioritize functionality, efficiency, and performance but may not adequately address complex ethical, social, and safety considerations. The iterative and adaptive nature of AI algorithms also challenges traditional development approaches, which typically test and assess a system only when a code change is made; a minimal sketch of a recurring evaluation gate appears after this list.
- Procurement—Enterprises may have a thorough procurement process that integrates with data protection and information security teams, but new modules or features are often added to existing tools that can change their use case or risk profile and require some kind of reassessment. This constant review and reassessment of features may not be built into the existing procurement or vendor management processes.
- Customer service—Chatbots that are integrated with customer information and AI tools may use customer data to personalize interactions and make relevant recommendations or decisions to improve the customer’s experience and save enterprise resources. Under the EU General Data Protection Regulation (GDPR) and the EU’s proposed AI Act, this would require explicit consent and a clear explanation of what is being used (e.g., cookies, browser history, behavioral patterns).
- Human resources—Employee engagement tools used to garner context and opinions from the workforce may lack the ability to understand nuanced human emotions, resulting in inaccurate conclusions that are taken as truth, or in reduced engagement as employees feel they are not being heard by a real person.
- Recruitment—Tools used during the hiring process may provide advice on or suggestions for whom to hire. This can save time and allow employers to focus on later stages of the hiring process, but it can also embed bias into the selection process, whether through a black-box model, inadequate training data, or the cognitive biases of the human decision makers.
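As noted in the software development item above, AI components should be re-evaluated whenever the model or its data changes, not only when application code changes. The sketch below shows one possible form of such a recurring evaluation gate; the thresholds, function names, and stand-in model are assumptions for illustration only.

```python
# Minimal sketch of an evaluation gate intended to run on every retraining or
# data refresh, not only when application code changes. Thresholds, names, and
# the stand-in model are illustrative assumptions.

from typing import Callable, Sequence

def accuracy(predict: Callable, inputs: Sequence, labels: Sequence) -> float:
    return sum(predict(x) == y for x, y in zip(inputs, labels)) / len(labels)

def release_gate(predict: Callable,
                 holdout_inputs: Sequence,
                 holdout_labels: Sequence,
                 previous_accuracy: float,
                 min_accuracy: float = 0.85,
                 max_regression: float = 0.02) -> bool:
    """Block release if the model misses an absolute floor or regresses
    noticeably against the previously approved version."""
    acc = accuracy(predict, holdout_inputs, holdout_labels)
    print(f"holdout accuracy = {acc:.2f} (previously approved {previous_accuracy:.2f})")
    return acc >= min_accuracy and acc >= previous_accuracy - max_regression

# Example with a trivial stand-in model: approve when a score exceeds 0.5
model = lambda x: int(x > 0.5)
print(release_gate(model, [0.2, 0.7, 0.9, 0.4], [0, 1, 1, 0], previous_accuracy=0.90))
```

A gate like this would sit alongside, not replace, the ethical and safety reviews noted above; its value is that it runs automatically whenever the model or its data changes.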
The Power to Explain, Train, and Augment
Risk and audit professionals must go beyond identifying risk to offer meaningful and practical recommendations. Key considerations in the successful navigation of AI include:
- Training—Training is not only for technology developers; it is for all employees. Recent advances in generative AI have made AI usable by almost everyone. Employees must understand how their interactions with AI can impact both the enterprise and its stakeholders. Training programs should be updated regularly to reflect the latest AI advancements and regulatory changes, ensuring that employees are always equipped with the necessary knowledge to use AI responsibly and effectively. Incorporating real-world scenarios and case studies into training can help employees better understand the practical implications of AI in their daily work. This is particularly effective with regard to cognitive biases such as automation bias.
- Policies—Policies should not only establish guidelines for the ethical use of AI and data protection, but also outline procedures for regular reviews and updates to ensure that AI systems remain aligned with current laws, ethical standards, and business objectives. Ideally, policies should be reviewed and updated whenever necessary, rather than on an annual or longer timeline. This allows swift adaptation to new AI developments and regulatory changes, thereby safeguarding the enterprise from legal and reputational risk while fostering innovation and ethical responsibility.
- XAI—Explainable AI (XAI) can provide transparency and understanding of AI-driven decisions within an enterprise. This can lead to better oversight, reducing the risk of unintended consequences and ensuring compliance with relevant regulatory requirements. This is especially important for systems that may have a direct impact on users, employees, or customers (e.g., hiring or promotion decisions).
- Business case analysis—Not every use case needs AI. Analysis should include an assessment of the feasibility, risk, alignment with organizational objectives, and potential return on investment. This should include a consideration of the potential ongoing costs, especially if the AI proposal requires in-house development and support, which may involve hiring specialist developers, data scientists, and business analysts.
An example of a use case analysis is Project Bluebird, a collaboration among the Alan Turing Institute, the University of Exeter (England), and NATS, the UK’s leading provider of air traffic control services. The overall objective is to assess the feasibility of automating air traffic control and ultimately create an AI system capable of performing the incredibly complex tasks involved, but a nearer-term use case is an AI assistant that augments a human controller’s knowledge and prompts operators to consider whether they are making the right decision.9
Other key requirements for an automated system are transparency and explainability. Part of verifying human operators’ actions is being able to explain why they made certain decisions, to ensure that their reasoning is based on sound logic. This is harder to do with so-called black-box AI models that use reinforcement learning and are trained on huge data sets. In Project Bluebird, examples of explainability include highlighting the specific data (e.g., particular aircraft) that led to a decision. This alone may not be sufficient for safety-critical processes such as air traffic control, but other cutting-edge explainability theories and mechanisms are being developed and could be adopted or extended for other use cases.
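One widely used, model-agnostic way of producing the kind of explanation described above (highlighting which inputs drove a decision) is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades, which indicates how heavily the model relies on that feature. The sketch below applies it to a made-up candidate-screening model; the feature names, model, and data are assumptions invented for illustration.

```python
# Model-agnostic explainability sketch using permutation importance.
# The "screening model", feature names, and data are illustrative assumptions.

import random
from typing import Callable, List

FEATURES = ["years_experience", "certifications", "postcode"]

def accuracy(predict: Callable, rows: List[list], labels: List[int]) -> float:
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(predict, rows, labels, n_repeats=20, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(predict, rows, labels)
    importance = {}
    for i, name in enumerate(FEATURES):
        drops = []
        for _ in range(n_repeats):
            column = [r[i] for r in rows]
            rng.shuffle(column)  # break the feature's link to the outcome
            shuffled = [r[:i] + [v] + r[i + 1:] for r, v in zip(rows, column)]
            drops.append(baseline - accuracy(predict, shuffled, labels))
        importance[name] = sum(drops) / n_repeats  # mean accuracy lost
    return importance

# Stand-in screening model that (problematically) keys heavily on postcode
model = lambda r: int(r[0] >= 3 and r[2] in {"AB1", "AB2"})
rows = [[5, 2, "AB1"], [4, 1, "ZZ9"], [2, 3, "AB2"], [6, 0, "AB2"], [3, 1, "ZZ9"]]
labels = [1, 1, 0, 1, 1]
print(permutation_importance(model, rows, labels))
```

In a real review, a large importance for a feature such as postcode would prompt questions about proxy discrimination and whether the decision can be justified to the people it affects.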
There are many good sources of guidance on developing AI principles that emphasize transparency, accountability, and explainability. Examples include the Organization for Economic Cooperation and Development’s (OECD’s) AI Principles,10 private-sector frameworks such as Google’s publicly available Responsible AI Principles,11 and governmental or regional guidance such as the Association of Southeast Asian Nations (ASEAN) Guide on AI Governance and Ethics.12 However, providing a deeper understanding of terms such as explainable AI is where risk and audit professionals can add value. A useful resource is the US National Institute of Standards and Technology (NIST) guidance “Four Principles of Explainable Artificial Intelligence.”13
In addition to a familiarity with the theories and concepts used in AI, a knowledge of assurance techniques is valuable. The UK government’s Department for Science, Innovation, and Technology has published guidance on AI assurance techniques that can be a starting point for assessing AI.14 This guidance includes the following techniques:
- Impact assessment—Used to anticipate the effect of a system on environmental, equality, human rights, data protection, or other outcomes
- Impact evaluation—Similar to an impact assessment, but conducted retrospectively, after a system has been implemented
- Bias audit—Assesses the inputs and outputs of algorithmic systems to determine whether there is bias in the input data, the outcome of a decision, or the classification made by the system (a minimal sketch of one such check appears after this list)
- Compliance audit—Reviews an enterprise’s adherence to internal policies and procedures or external regulations and legal requirements; specialized types of compliance audit include system and process audits and regulatory inspection
- Certification—A process whereby an independent body attests that a product, service, enterprise, or individual has been tested against and has met objective standards of quality or performance
- Conformity assessment—Provides assurance that a product, service, or system meets the expectations specified or claimed prior to entering the market; this assessment includes activities such as testing, inspection, and certification
- Performance testing—Assesses the performance of a system with respect to predetermined quantitative requirements or benchmarks
- Formal verification—Establishes whether a system satisfies requirements using formal mathematical methods
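As an illustration of the bias audit technique listed above, the following sketch compares selection rates across groups in a system’s decision log and flags a disparity below the commonly cited four-fifths threshold. The field names, sample data, and threshold are assumptions for illustration; a real audit would also examine the input data and the classification logic itself.

```python
# Minimal sketch of one bias-audit check: compare selection rates across
# groups recorded in a decision log. Field names, sample data, and the 0.8
# threshold are illustrative assumptions.

from collections import defaultdict
from typing import Iterable, Tuple

def selection_rates(decisions: Iterable[Tuple[str, bool]]) -> dict:
    """decisions: (group, approved) pairs taken from the system's output log."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions) -> Tuple[float, dict]:
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]
ratio, rates = disparate_impact_ratio(log)
print(rates, f"ratio={ratio:.2f}", "flag for review" if ratio < 0.8 else "within threshold")
```

A check like this addresses only outcome disparity; the other techniques in the list (e.g., impact assessment, conformity assessment) cover the stages it does not reach.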
The involvement of well-informed and educated employees can prompt the right discussions and considerations and help enterprises realize value while taking a sustainable and responsible approach to AI.
There is a famous quote often attributed to Albert Einstein: “Blind belief in authority is the greatest enemy of truth.”15 As technology becomes more sophisticated and increasingly embedded in enterprises and business processes, it is important to remain aware of bias and the emerging risk of new technologies. To combat complacency and the potential for blind faith, ongoing professional development for risk and assurance professionals is not merely beneficial, but indispensable. It helps ensure that these key experts remain adept at navigating an evolving threat landscape and can contribute to value creation. Building the skills and understanding to guide enterprises toward transparent, responsible, and explainable AI, among other emerging technologies, can have a positive impact beyond the enterprise and its customers, serving society and supporting individuals’ personal and professional growth.
Endnotes
1 Little Britain, “Computer Says No,” https://www.youtube.com/watch?v=0n_Ty_72Qds
2 Cummings, M.L.; “Automation Bias in Intelligent Time Critical Decision Support Systems,” AIAA 1st Intelligent Systems Technical Conference, 20–22 September 2004, https://arc.aiaa.org/doi/10.2514/6.2004-6313
3 Bohm, N.; Christie, J.; et al.; “The Legal Rule That Computers Are Presumed to Be Operating Correctly–Unforeseen and Unjust Consequences,” Bentham’s Gaze, Information Security Research and Education, University College London (UCL), 30 June 2022, https://www.benthamsgaze.org/2022/06/30/the-legal-rule-that-computers-are-presumed-to-be-operating-correctly-unforeseen-and-unjust-consequences/
4 Blighty, “Britain’s Post Office Scandal Is a Typical IT Disaster,” The Economist, 18 January 2024, https://www.economist.com/britain/2024/01/18/britains-post-office-scandal-is-a-typical-it-disaster
5 Verity, A.; “Post Office Accused of Cover-up Over Secret Horizon Documents,” BBC, 26 January 2024, https://www.bbc.com/news/business-68079300; BBC, “Ex–Post Office Boss Paula Vennells Stripped of CBE,” 23 February 2024, https://www.bbc.com/news/business-68384240; Halliday, J.; “Ministers Deny Claims Government Wanted to Stall Post Office Horizon Scandal Payouts,” The Guardian, 18 February 2024, https://www.theguardian.com/uk-news/2024/feb/18/former-post-office-boss-stall-horizon-compensation-payouts
6 Criminal Cases Review Commission (CCRC), “The CCRC and Post Office/Horizon Cases,” 3 January 2024, https://ccrc.gov.uk/news/the-ccrc-and-post-office-horizon-cases
7 European Commission, “Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” European Union, 21 April 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206
8 Robinson, T.; Bridgewater, S.; “Highlights From the RAeS Future Combat Air & Space Capabilities Summit,” Royal Aeronautical Society, 26 May 2023, https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
9 Alan Turing Institute, “Project Bluebird: Revolutionising Air Traffic Control With AI and Digital Twins,” Turing Podcast, 30 January 2024, https://open.spotify.com/episode/47efNw6R5YAWxiddLb9x3f
10 Organization for Economic Cooperation and Development (OECD) Council, “OECD AI Principles Overview,” OECD, May 2019, https://oecd.ai/en/ai-principles
11 Google AI, “Our Principles,” 2023, https://ai.google/responsibility/principles/
12 Association of Southeast Asian Nations (ASEAN), “ASEAN Guide on AI Governance and Ethics,” 20 December 2023, https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf
13 Phillips, P.J.; Hahn, C.A.; et al.; “Four Principles of Explainable Artificial Intelligence,” National Institute of Standards and Technology, September 2021, https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8312.pdf
14 UK Department for Science, Innovation, and Technology, “Portfolio of AI Assurance Techniques,” 7 June 2023, https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques
15 Albert Einstein, The Ultimate Quotable Einstein, Princeton University Press, USA, December 2019
RICHARD CLAPTON | CISA, IACERT, MICA
Is a technology and sustainability audit manager at a UK-based international technology and logistics company. He specializes in internal audits that center on the crucial link between the effective management of advanced technology and its contribution to the achievement of sustainable business operations.