ISACA Now Blog


Artificial Intelligence: A Damocles Sword?

Ravikumar Ramachandran, CISA, CISM, CGEIT, CRISC, CISSP-ISSAP, SSCP, CAP, PMP, CIA, CRMA, CFE, FCMA, CFA, CEH, ECSA, CHFI, MS (Fin), MBA (IT), COBIT-5 Implementer, Certified COBIT Assessor, ITIL-Expert & Practitioner, Account Security Officer, DXC Technology, India
Posted at 2:58 PM by ISACA News | Category: Risk Management

“Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both sides of the AI blade are far sharper, and neither is well understood.” - McKinsey Quarterly, April 2019

In Greek mythology, the courtier Damocles was forced to sit beneath a sword suspended by a single hair to emphasize the instability of kings’ fortunes. Thus, the expression “the sword of Damocles” came to mean an ever-present danger.

To extend the idiom, users of artificial intelligence are like kings, enjoying the remarkable capabilities this cutting-edge technology brings, yet a sword hangs over their heads because of the perils that accompany its highly scalable nature.

Artificial Intelligence: Meaning and Significance
To quote a formal definition, AI is “the art of creating machines that perform functions that require intelligence when performed by people.” - Kurzweil 1990.

However, intelligence is a more elusive concept. Though we know that humans require intelligence to solve their day-to-day problems, it is not clear that the techniques computers use to solve those same problems endow them with human-like intelligence. In fact, computers take approaches very different from those used by humans. To illustrate, chess-playing computers use their immense speed to evaluate millions of positions per second – a strategy no human champion could employ. Computers also use specialized techniques to predict a consumer’s choice of products after sifting through huge volumes of data, and to recognize biometric, speech and facial patterns.

Having said that, humans use their emotions to arrive at better decisions, something a computer (at least at present) is incapable of doing. Still, by developing sophisticated techniques, AI researchers are able to solve many important problems, and the solutions are used in many applications. In health and medical disciplines, AI is already contributing advanced solutions and yielding groundbreaking insights. AI techniques have become ubiquitous, and new applications are found every day. Per the April 2019 McKinsey Quarterly report, AI could deliver additional global economic output of $13 trillion per year by 2030.

AI Risk and Potential Remediating Measures
Along with all the aforementioned positive outcomes, AI brings innumerable risks of different types, ranging from minor embarrassments to catastrophic events that could endanger humankind. Let us enumerate some of the known risks brought on by AI:

1. Lack of Complete Knowledge of the Intricacies of AI
AI is a recent phenomenon in the business world, and many leaders are not knowledgeable about its potential risk factors, even though market and competitive pressures force them to embrace it. The consequences could range from a minor mistake in decision-making to a loss of customer data leading to privacy violations. The remediating measures are to make everybody in the enterprise involved and accountable, to establish board-level visibility, and to conduct a thorough risk assessment before embarking on AI initiatives.

2. Data Protection
The huge volumes of data involved are predominantly unstructured and drawn from various sources such as the web, social media, mobile devices, sensors and the Internet of Things. Such data is not easy to protect from loss or leakage, which can lead to regulatory violations. A strong end-to-end process needs to be built, with robust access control mechanisms and a clear description of need-to-know privileges.
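To make the need-to-know idea concrete, here is a minimal sketch (not from the article; the role and dataset names are hypothetical) of an access check that denies by default and grants only what a role explicitly needs:

```python
# Minimal need-to-know access check for datasets feeding an AI pipeline.
# Roles and dataset names below are illustrative assumptions only.
NEED_TO_KNOW = {
    "data_scientist": {"training_data", "model_metrics"},
    "ml_engineer": {"model_metrics"},
    "auditor": {"access_logs"},
}

def can_access(role: str, dataset: str) -> bool:
    """Deny by default; grant only when the dataset is on the role's need-to-know list."""
    return dataset in NEED_TO_KNOW.get(role, set())
```

A real implementation would sit behind an identity provider and log every decision, but the deny-by-default shape is the essential point.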

3. Technological Interfaces
AI works mainly through interfaces, where many windows are available for data feeds coming from various sources. Care should be taken to ensure that the data flows, business logic and their associated algorithms are all accurate, to avoid costly mishaps and embarrassment.
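One practical control at these interfaces is to validate every incoming record before it reaches the model. A minimal sketch (the field names are assumptions for illustration):

```python
def validate_feed(record: dict) -> list:
    """Return a list of validation errors; an empty list means the record is usable."""
    errors = []
    required = {"source", "timestamp", "value"}
    missing = required - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    value = record.get("value")
    # Reject non-numeric payloads so bad feeds fail loudly at the boundary.
    if value is not None and not isinstance(value, (int, float)):
        errors.append("value must be numeric")
    return errors
```

Rejecting malformed feeds at the boundary keeps inaccurate data from silently propagating into downstream business logic.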

4. Security
This is a big issue, as evidenced by ISACA’s Digital Transformation Barometer, which shows that 60 percent of industry practitioners lack confidence in their organization’s ability to accurately assess the security of systems based on AI and machine learning. AI works at a huge scale of operations, so every precaution must be taken to secure the perimeter. All aspects of logical, physical and application security need to be examined with more rigor than would otherwise be warranted.

5. Human Errors and Malicious Actions
Protect AI from humans and humans from AI. Insider threats, such as a disgruntled employee injecting malware or faulty code, could spell disastrous outcomes or even lead to catastrophic events such as the destruction of critical infrastructure. Proper monitoring of activities, segregation of duties, and effective communication and counseling from top management are suggested measures.

The deployment of AI may lead to discrimination and displacement within the workforce, and could even result in loss of life for those who must work alongside AI-driven machines. This can be remediated by upskilling workers and placing humans at vantage points in supply chains, where they play an important role in sustaining customer relationships. To prevent AI-related workplace perils, rigorous checking of scripts and the installation of fail-safe mechanisms, such as the ability to override the systems, will be helpful.
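The override idea above can be sketched as a simple kill-switch wrapper, where a human operator can halt all automated actions. This is an illustrative pattern, not a prescribed design:

```python
class FailSafeController:
    """Wraps automated actions behind a human-operable kill switch."""

    def __init__(self):
        self.halted = False

    def override(self):
        # A human operator engages this to stop all further automated actions.
        self.halted = True

    def execute(self, action, *args):
        """Run the automated action only while no override is in effect."""
        if self.halted:
            return "halted: human override engaged"
        return action(*args)
```

In a physical setting the override would of course be a hardware interlock rather than a flag, but the principle is the same: the human stop signal is checked before every automated step.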

6. Proper Transfer of Knowledge and Atrophy Risk
The intelligence humans require to solve a problem is transferred to machines through programs, so that they can solve the same problem at a much larger scale with great speed and accuracy. Therefore, care should be taken that no representative data or logic is left out or misstated, lest the result be poor outcomes and decisions that cause losses to the business.
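One simple safeguard against leaving representative data out is to check that every expected category actually appears in the training data before handing the problem to the machine. A minimal sketch (the class names are hypothetical):

```python
def check_representation(training_labels, expected_classes):
    """Return the expected classes missing from the training data, sorted for reporting."""
    missing = set(expected_classes) - set(training_labels)
    return sorted(missing)
```

An empty result means every expected class is represented; anything else flags a gap to fix before training.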

Because skilled humans cede tasks to machines, those human skills can erode over time, resulting in atrophy. This can be partly remediated by keeping up-to-date documentation of such critical skills, including disaster recovery mechanisms.

Disclaimer: The views expressed in this article are the author’s own and do not represent those of the organization or of the professional bodies with which he is associated.

Comments

Potential Risk

With the race to perfect and produce smart intelligent devices the biggest thing that is neglected is security.
Jacobmosweu at 12/16/2019 6:20 AM