ISACA Journal Author Blog


The Risk Associated With AI

Phillimon Zongo
| Published: 2/6/2017 9:40 AM | Category: Risk Management | Comments (3)

Exponential increases in computing power and the availability of massive data sets, among other factors, have propelled the resurgence of artificial intelligence (AI), bringing an end to the so-called AI winter—a bleak period of limited investment and interest in AI research. Commercial deployment of AI systems is fast becoming mainstream as businesses seek to gain deeper customer insights, lower operating costs, improve efficiency or boost agility.

The proliferation of AI raises intriguing opportunities; however, it also introduces risk that must be considered, because its impact can carry significant consequences. My recent Journal article provides practical strategies to mitigate 3 crucial risk factors associated with the commercial adoption of AI:

  • Flawed algorithms—As intelligent systems increasingly take on vital business roles, the risk that crucial business decisions might be based on flawed algorithms invariably rises. In contrast to traditional rule-based systems, where errors can be rolled back with minimal business impact, minor errors in critical AI algorithms can have severe consequences. In 2012, Knight Capital Group, a US-based market-making firm, provided an unsettling insight into this risk when it lost more than US $440 million in just 30 minutes as a result of an untested change to its high-frequency trading algorithms. To avoid such missteps, businesses should experiment with low-risk, easily codifiable tasks and perform rigorous testing before automating high-risk functions. Furthermore, the board should approve the automation of any high-risk function.
  • Cultural resistance—Any significant transformation program can be deeply unsettling for employees. AI programs amplify this risk because employees whose jobs are vulnerable to automation—especially those performing less skilled, repetitive tasks—may be worried about the fate of their jobs. Consequently, these employees may dig in to protect their turf and actively resist change, derailing the AI program's success. Major revolts against automation date back to the early 19th century, when a group of English textile artisans, the Luddites, protested the automation of textile production by seeking to destroy some of the machines. To lead an AI transformation successfully, business leaders must create an environment of trust and ensure high levels of employee engagement, buy-in and support. Employees also have a part to play: upskilling themselves to remain relevant in the face of disruptive innovation.
  • Expanded cyberthreat surfaces—The ability of AI systems to fully transform business hinges on the effectiveness of their security and privacy controls; failure to provide that assurance can inhibit adoption. Businesses are already struggling to keep up with fast-evolving cybercrime, and AI further complicates this challenge due to 3 primary factors:
    - To date, no industry standards exist to guide the secure development of AI systems.
    - Start-ups, which are primarily focused on rapid time to market, product functionality and a high return on investment, still dominate the AI market; embedding cyberresilience into their products is not a priority.
    - Cybercriminals might also exploit AI systems' self-learning capabilities, inferring the data used to train algorithms and deliberately manipulating system behavior, contrary to its design objectives (the sketch after this list illustrates this risk).
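
The third factor is, in essence, a data-poisoning attack. The sketch below is purely illustrative: it uses a toy online perceptron rather than any production algorithm, and every name in it is hypothetical. It shows how an attacker who can inject labelled feedback into a self-learning model can gradually drag its decision boundary until legitimate inputs are misclassified.

```python
# Illustrative sketch of data poisoning against a self-learning model.
# The classifier is a deliberately simple online perceptron; real AI
# systems are far more complex, but the failure mode is the same.
import numpy as np

class OnlinePerceptron:
    """Toy binary classifier that updates itself on every labelled example."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)
        self.b = 0.0

    def predict(self, x):
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def learn(self, x, label):
        # Standard perceptron rule: nudge the weights toward the label
        # whenever the current prediction is wrong.
        error = label - self.predict(x)
        self.w += error * x
        self.b += error

rng = np.random.default_rng(0)
model = OnlinePerceptron(n_features=2)

# Legitimate training: class 1 clusters near (2, 2), class 0 near (-2, -2).
for _ in range(200):
    label = int(rng.integers(0, 2))
    centre = np.array([2.0, 2.0]) if label == 1 else np.array([-2.0, -2.0])
    model.learn(centre + rng.normal(0.0, 0.5, 2), label)

probe = np.array([2.0, 2.0])
print("before poisoning:", model.predict(probe))  # expected: 1

# Poisoning: an attacker who controls part of the feedback channel keeps
# submitting points from class 1's region with the wrong label, dragging
# the decision boundary until that region is misclassified.
for _ in range(400):
    model.learn(np.array([2.0, 2.0]) + rng.normal(0.0, 0.5, 2), 0)

print("after poisoning:", model.predict(probe))   # now misclassified as 0
```

Practical defenses against this failure mode include validating and rate-limiting feedback channels, monitoring incoming training data for distribution shift, and retaining the ability to roll a model back to a known-good state.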

Businesses should build cybersecurity into innovation programs from the outset, but unified efforts by policy makers, business leaders, regulators and vendors are a prerequisite for long-term success.

To maximize AI's potential while minimizing exposure, businesses need to align their AI strategies with their risk appetite, anticipate major pitfalls and embed governance into transformation programs.

Read Phillimon Zongo’s recent Journal article:
"The Automation Conundrum," ISACA Journal, volume 1, 2017.

Comments

Risk #4 - Error in Input Data of AI's Learning Mechanism.

Another risk factor for AI is the possibility of errors in the input data of an AI's learning mechanism. The cause could be environmental, cultural, and/or related to the system's sensing capability. If the data we feed into AIs are not perfect, we should not expect an appropriate response. Thanks.
Abu S M M Monsur-ul-Hakim at 2/7/2017 12:11 AM

Totally agree

I totally agree, Abu S M M Monsur-ul-Hakim. The effectiveness of an AI algorithm is highly dependent on the quality of the data used to train it; using erroneous data to train an algorithm can result in lasting business repercussions.
Phillimon169 at 2/7/2017 3:36 AM

Zero Sum Game

Another example is the practice of information disruption, specifically the current scourge of "Fake News". It is particularly astonishing that this recent resurgence is driven almost exclusively by technology.
A3iodun at 2/8/2017 10:56 AM