



Human societies require an ethical and moral order to survive and thrive. If a moral order within society begins to collapse, then growth, progress and prosperity begin to decay.
Governments are entrusted with the power to enact laws that govern the lives of citizens. This power comes with the expectation that laws and regulations will be developed and implemented to uphold justice and the collective aspirations of society. The legitimacy of this power lies in how it is exercised and in how laws are developed and implemented. Legitimacy is therefore a morally useful concept for distinguishing between acceptable and unacceptable exercises of authority.
AI introduces a new ethical dimension
The rise of artificial intelligence has added ethical complexity for enterprises across both the public and private sectors. Modern data-gathering techniques and AI capabilities allow public and private entities to gather, prune and process data points that reflect many aspects of our lives. AI enables these entities to classify an individual's conduct as good, bad, corrosive or progressive. When authorities use such judgments to make moral determinations about individuals, that is an expression of power over the individuals concerned.
Consider a scenario in which an individual is subjected to unfair treatment because of a data point related to previous speeding tickets, misdemeanors at a workplace or other violations of national laws. When an AI system aggregates these data points, it feeds them to an inference engine to produce an output. That output may have adverse impacts, including tangible and intangible harms to the individual, as in the well-known 2020 case of a person who was misidentified by a facial recognition system and wrongfully accused of stealing watches.
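To make the opacity problem concrete, the sketch below shows how disparate records can be reduced to a single score that drives a determination about a person. It is purely illustrative: the feature names, weights, threshold and the `risk_score` and `determination` functions are assumptions for the example, not any real system's logic.

```python
# Minimal, hypothetical sketch of an opaque risk-scoring pipeline.
# Feature names, weights and the threshold are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class PersonRecord:
    speeding_tickets: int
    workplace_incidents: int
    prior_violations: int

# Weights set somewhere inside the organization;
# the affected individual typically never sees them.
WEIGHTS = {"speeding_tickets": 0.4, "workplace_incidents": 0.35, "prior_violations": 0.25}
THRESHOLD = 1.0

def risk_score(record: PersonRecord) -> float:
    """Aggregate disparate data points into a single number."""
    return (WEIGHTS["speeding_tickets"] * record.speeding_tickets
            + WEIGHTS["workplace_incidents"] * record.workplace_incidents
            + WEIGHTS["prior_violations"] * record.prior_violations)

def determination(record: PersonRecord) -> str:
    """The only thing the individual sees is the final label."""
    return "flagged" if risk_score(record) >= THRESHOLD else "cleared"

if __name__ == "__main__":
    person = PersonRecord(speeding_tickets=2, workplace_incidents=0, prior_violations=1)
    print(determination(person))  # "flagged" -- but on what basis, and can it be contested?
```

The arithmetic here is trivial; the point is that the inputs, weights and threshold are invisible to the person being judged, which is precisely what makes the output hard to scrutinize or contest.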
A better process is needed
Organizations and public entities are under pressure to improve process efficiency and deliver better value to the stakeholders they serve. That pressure includes the drive to leverage AI to collect and process data whose outputs can be used to attain specific objectives. Individuals either benefit or pay a cost whenever their behaviors are judged and classified as positive, negative or neutral. When the variables and data points behind those determinations are not known to the individuals subjected to them, the result is opaque decision-making driven by black-box algorithms and inexplicable outputs. This raises alarm bells about the transparency of the outputs and the legitimacy of the AI systems that generated them.
Opaque AI raises serious moral concerns because transparency is necessary for legitimate exercises of power. Transparency enables affected individuals to scrutinize the exercise of power and, where necessary, seek remedy. If they do not know why a decision was made, they can neither challenge it nor hold decision-makers accountable.
It is extremely important that onboarding AI within an organization is preceded by a thorough risk assessment to identify potential risks and harms to individuals and groups. Such an exercise lends credibility and transparency to the adoption of AI within the organization.
AI intentions as important as the algorithms
When AI solutions are onboarded after careful examination of the risks to stakeholders and with the necessary regulatory approvals, the operations carried out by the AI gain legitimacy. Organizations onboarding AI solutions need to ensure that the technical details behind an algorithmic output are explained and that the variables that can significantly disturb model predictions are identified (one way of doing so is sketched below). The intentions of the AI system's designers are as important as the algorithm itself, so organizations must also institute mechanisms and processes through which people can understand and act on those explanations.
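As an illustration of what identifying the variables that significantly disturb model predictions can look like in practice, the sketch below uses permutation importance, one common sensitivity-analysis technique. It is a minimal example assuming scikit-learn and a synthetic dataset; the feature names, model choice and data are illustrative assumptions, not a prescribed approach.

```python
# Minimal sketch: permutation importance as one way to surface the variables
# that most disturb a model's predictions. Assumes scikit-learn is installed;
# the dataset and feature names are synthetic and purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features a decision system might aggregate about individuals.
feature_names = ["speeding_tickets", "workplace_incidents", "prior_violations", "noise"]
X = rng.poisson(lam=[1.0, 0.5, 0.3, 2.0], size=(n, 4)).astype(float)

# Synthetic target: the label depends mostly on the first two features.
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda item: item[1], reverse=True):
    print(f"{name:>20}: {mean_drop:.3f}")
```

A large drop in accuracy when a feature is shuffled indicates the model leans heavily on that variable. That list of influential variables is exactly the kind of technical detail that should be documented and communicated so affected people can understand and act on the explanation.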
The legitimate exercise of power requires that individuals can rationally analyze and endorse that exercise and seek remedy, which is an essential component of justice. AI transparency is instrumental to ensuring fairness and accountability; without it, trust erodes.
The exercise of power fueled by AI will only be deemed legitimate when there is transparency around the factors and variables that produced a decision. That transparency underpins the fairness and accountability that are essential to a healthy society.
Editor’s note: Find out about ISACA’s AI ethics training course and additional AI courses here.