ISACA Now Blog


The AI Calculus – Where Do Ethics Factor In?

Posted at 2:39 PM by ISACA News | Category: ISACA

While artificial intelligence and machine learning deployment are on the rise – and generating plenty of buzz along the way – organizations face difficult decisions about how, where and when to introduce AI.

In a session Tuesday at the 2018 GRC Conference in Nashville, Tennessee, USA, co-presenters Kirsten Lloyd and Josh Elliot laid out many of the ethical considerations that should be part of those deliberations.

The pair detailed several high-profile AI incidents from the past decade that highlighted the need to address the ethical components of AI deployment early in a product or service's design, rather than risking unforeseen fallout. The examples included a controversial algorithm that predicted higher rates of recidivism for black defendants in the judicial system and a Stanford University study exploring how accurately AI could infer a person's sexual orientation from photos of their face.

Yet, for all of the questionable or even potentially malicious use cases of AI, Lloyd and Elliot highlighted an extensive list of powerfully compelling uses for AI, such as advancing new medical treatments, preventing cyberattacks, improving energy efficiency and increasing crop yields. Elliot, Booz Allen Hamilton’s director of artificial intelligence, noted that AI also may prove transformative in missing person crises, such as swiftly locating missing children in AMBER Alert child abductions.

Whether the potential ethical implications of AI and machine learning outweigh the good that can be accomplished is very much a case-by-case judgment call, Elliot said, requiring a holistic evaluation of the possible outcomes through a risk management lens. Successful, ethical implementation of AI and machine learning also calls for strong governance, with emphasis on benefits realization, risk optimization and resource optimization. Elliot and Lloyd said organizations should identify and engage key stakeholders in AI projects, including by creating an ethical review board and appointing a chief ethics officer. Some high-impact deployments might also require direct access to the C-suite for input on risk considerations.

Elliot and Lloyd suggested that organizations consider the following questions when deciding how they might want to deploy AI and machine learning:

  1. What are our goals?
  2. How much risk are we willing to tolerate?
  3. What is the state of our data assets?
  4. What talent assets do we have?
  5. What are our values?

From a talent standpoint, Elliot noted there is a serious shortage of professionals with the expertise to help enterprises effectively and securely implement AI and machine learning, causing many organizations to turn to academia and research to fill personnel gaps. Lloyd, an AI strategist with Booz Allen Hamilton, acknowledged the workforce worries many harbor about AI and machine learning displacing large numbers of practitioners, but said there will remain an enduring need for humans’ critical thinking skills, even as machines continue to introduce process improvements in computational thinking.

Taking the long view, Elliot and Lloyd said AI and related disciplines have transitioned from their previous state of simple task execution to the current era of pattern recognition, with a future that will be reshaped by added capabilities of contextual reasoning. Elliot said many of today’s common uses, such as robotic process automation (RPA), are a mere “gateway drug” to more sophisticated technologies and applications that are being aggressively researched in Silicon Valley and beyond.

Comments

Ethics v. political correctness

It's not necessarily immoral to develop AI software that predicts recidivism rates. What matters is that the software is accurate. Let's not confuse what is unethical with what may cause pain or offense, but nevertheless is accurate.

The definition of political correctness is the refusal to accept truths that are painful to hear.

The NY Times piece, if it is true, is troubling. An algorithm can perhaps serve as one factor among many in the sentencing process.

But AI as the only factor, or even a primary factor, attacks and undermines the Due Process clause: due process of law is not administered by a computer. It is administered by the judicial process consisting of human lawyers, judges, and juries.

It is surprising that the High Court did not weigh in on this issue.
tpkatsa at 8/22/2018 12:39 PM