ISACA Journal
Volume 5, 2016

Columns 

Information Ethics: The Challenge of Being “Good” 

Vasant Raval, DBA, CISA, ACMA 

Moral behavior is, perhaps, easy to talk about but difficult to put into practice. The answer to the question “Did I do the right thing?” may not be unequivocal. Moreover, what I judge to be fundamentally the right thing to do may not be accurately mirrored, by intentional action or otherwise, in the action that follows. Several factors are at work that produce the difference between the morally good thing to do and what eventually gets done. In this column, I will discuss some of the reasons for this gap. While this is not an exhaustive examination of the challenge of being good, an exercise in bridging “ought” and “is” will illustrate what we need to watch going forward.

The Moral Question

For any project (or case) we are dealing with at the time, formulating a moral question may not be an easy task. If a situation has been brewing for some time, the decision maker has likely had time to think about the case and construct potential moral questions. If the situation is imminent and arrives with no prior notice, it is difficult to sort out “on one’s feet” what a morally appropriate response might be. Additionally, if two or more people are involved in the case, there is a better chance that someone will raise concerns about the ethical side of the mainstream project. Nevertheless, unless the scenario is frequent, simple or familiar, one may find that answers to, or even questions of, moral action are hard to come by.

If there is room for reflection on the moral side of the problem at a later stage in the decision sequence, it would certainly provide an opportunity to revisit the moral issue in light of the progress made so far. This will help determine if the decision maker is comfortable with the way moral questions are identified and addressed and if there is any room for change in the problem statement and/or method to address it.

Compounding the difficulty here is the fact that moral (nonmaterial) questions are not identified in isolation; they are inherent in the material problem and how it is solved. There are good arguments to indicate that the immediacy and significance of the organization’s material problems may consume so much time and focus from the people engaged in solving the problem that they have no resources left to explore the ethics of the situation.1 This lack of attention may become even more severe as components of a large project are handed down to groups charged with solving just that part of the project puzzle. The material task assigned to a subteam is guided by the detailed specifications that accompany the charge. In contrast, even if the project-level team determines moral questions and how they should be addressed, the spirit of moral action may not reach the lower levels in project implementation. For these reasons, it is likely that nonmaterial issues are left behind while the material task gets accomplished in the rush to be the first in the market.

These days, legal battles between Uber and Airbnb on one side and governments on the other have escalated over various issues. One argument put forth by the complainants (governments) is that the new models Uber and Airbnb have introduced do not comply with existing rules. For example, Bloomberg News reported that US Internal Revenue Service (IRS) rules are not clear on reporting earnings via on-demand platforms; as a result, companies do not withhold taxes on the income they pay to service providers.2 Could Airbnb, Etsy and Lyft have foreseen this problem in the ecosystem they were putting together? The answer, of course, is that we do not know. It is likely, however, that some degree of brainstorming could have triggered questions, if not answers, about the potential lack of tax withholding for independent contractors. Such reflection could have led to the question of whether the existing IRS rules are ambiguous and whether the company needs to seek clarification from the agency. In light of technology-enabled innovations, unprecedented questions might arise; the hope is that the organizations concerned would be proactive in seeking answers. For example, the American Institute of Certified Public Accountants (AICPA) sent the IRS a letter requesting clarification on the tax status of issues related to eCurrency, including rules for donating digital currency.3

A couple of observations emerge from the conflict between emerging platforms such as Uber and the regulators. First, if regulation is an indication of the need to maintain trust and harmony in a system,4, 5 then the regulations present in the current ecosystem offer some understanding of the minimum legally acceptable behaviors. After filtering out what is irrelevant to the new ecosystem, one could derive a baseline understanding of why these rules exist and how they might shape future regulation of the new industry. Second, both the Uber and Airbnb models left the sensitive human components (drivers, hosts) largely outside their own perimeters. Since moral questions are inherently human issues, one could have regarded the new model as insulated from, or outside the scope of, the moral issues that concern the collaborators (drivers, hosts). But since responsibility for those who deliver the services presumably rests with the enterprise that owns the business model, some analysis of current practices in the traditional environment was warranted. Weakness here has damaged the reputations of both Uber and Airbnb.

While there are no foolproof responses to the widening wedge between progress on the material side and the moral side, it would help to have measures in place for responsible behavior. For example, an integrated process in which moral questions are asked, addressed and documented in tandem with the material questions would help recognize gaps, if any, and address them in a timely manner.
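Such an integrated process can be pictured, loosely, as a record that pairs each material task with the moral questions it raises, so that unanswered questions surface as explicit gaps rather than being silently dropped. The sketch below is purely illustrative; the `ProjectTask` structure, its fields and its methods are hypothetical, not an established tool:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectTask:
    """A material task paired with the moral questions it raises."""
    description: str
    moral_questions: list[str] = field(default_factory=list)
    resolutions: dict[str, str] = field(default_factory=dict)

    def raise_question(self, question: str) -> None:
        """Record a moral question in tandem with the material work."""
        self.moral_questions.append(question)

    def resolve(self, question: str, rationale: str) -> None:
        """Document how a raised question was addressed."""
        if question not in self.moral_questions:
            raise ValueError("cannot resolve a question that was never raised")
        self.resolutions[question] = rationale

    def open_questions(self) -> list[str]:
        """Gaps: questions raised but not yet addressed."""
        return [q for q in self.moral_questions if q not in self.resolutions]
```

Reviewing `open_questions()` at each project milestone would make the moral side of the work as visible as the material side.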

Who Is Responsible?

A good moral question must clearly articulate the problem and state for whom it is a problem. In a general question about the moral acceptability of a particular course of action or a technology, we do not necessarily identify for whom it is a problem.6 The locus of some problems may be an individual or a family; for others, it may be an organization; and for still others, it may be the society or the governing agencies.

Often, the weak link in exercising responsibility is the responsible party itself.7 For example, in protecting our privacy, we need to take certain steps. In fact, all six conditions associated with privacy (notice, choice, use, security, correction and enforcement) include the phrase “the individual has the right to”; however, for various reasons, people prefer to disregard what they need to do. The mind-set that pervades the majority also determines the overall state of integrity in the ecosystem. One reason people think one way and behave differently on ethical grounds is “bounded awareness,” which can be explained as “the common tendency to exclude relevant information from our decisions by placing arbitrary bounds around our definition of a problem, resulting in a systemic failure to see important information.”8 Additionally, it is asserted that people also suffer from “bounded ethicality,” or “systematic constraints on our morality that favor our own self-interest.”9 As a result, ethical gaps arise, and they become compounded at the organizational level. In fact, organizational gaps are more than the aggregate of individual members’ gaps because of the groupthink phenomenon, which pulls the group toward unanimity and inhibits open dialog on ethically challenging questions.10

Inasmuch as individuals and their families are responsible for being “good” in their private lives, organizations—businesses and nonprofits as well as the government—are accountable for responsible governance. Ultimately, how well nonmaterial issues are addressed in organizations depends in large part upon the climate of the organization. If the climate is inducing appropriate behavior, chances are that serious proactive attempts will be made to identify and treat moral issues entailed in material issues.

Researchers warn that we should pay attention to what is not being talked about within an organization, for it can provide valuable information about informal values,11 a powerful force in shaping the firm’s culture. It is the leader’s responsibility to set the tone at the top. However, it is also necessary for the organization to assess the quality of its climate on an ongoing basis. Unless some vitals are monitored regularly, it will be difficult to take comfort in the treatment of moral issues as and when they arise.

Cost of Morality

Being “good” has an aura of positivity for the right reasons. It makes life purposeful and allows us to preserve our internal peace. It spreads calmness into our constantly churning minds and makes us happy. But moral action exacts costs of all kinds (e.g., money, energy, lost opportunities). For example, a student may earn a low score on a test if he/she does not resort to cheating. Yet, for that student’s academic advancement, his/her grades could be too important to sacrifice; acting with honesty could cost him/her admission to a prestigious graduate program.

Whether you are a manager, a student, a whistleblower, a leader or an auditor, it is simply not easy to disregard the potential consequences of your voluntary actions. Fear of retribution, the threat of job loss, other threats to the person or his/her family, and anticipated turbulence in one’s life are all at play when considering a bold action. Adding up everything and stacking it against what one would gain from the action often leaves people unwilling to “rock the boat.” Passively observing a wrongful act from the sidelines is immoral, but how many jump in and fight against the actor? The “immorality of silence”12 pervades society to a greater extent than one can imagine. For example, if no one challenges organizational wrongdoing, such as an invasion of privacy, the practice of violating others’ right to privacy could become the norm.

Anonymity has proven to be a protective measure in encouraging people to speak up about wrongful acts. Whether anonymity is used to preserve personal liberties, protect trade secrets or improve the quality of responses, we need systems designed to ensure nonattribution.13 Technology can provide solutions, such as whistle-blower systems, that help preserve informants’ privacy.
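One common design for nonattribution is to store only a keyed pseudonym of the informant’s identity, so that follow-up reports from the same person can be linked without revealing who that person is. The sketch below is a simplified illustration of that idea; the function names and the single trusted intake server are assumptions for the example, not a description of any particular whistle-blower product:

```python
import hashlib
import hmac
import secrets

# Key held by the intake system only. Anyone reading the report store
# cannot reverse pseudonyms back to identities without this key
# (assumed: one trusted intake server; real systems often distribute trust).
SERVER_KEY = secrets.token_bytes(32)

def pseudonym(informant_id: str) -> str:
    """Derive a stable pseudonym: the same informant always maps to the
    same token, but the token does not expose the identity."""
    return hmac.new(SERVER_KEY, informant_id.encode(), hashlib.sha256).hexdigest()

def file_report(informant_id: str, text: str, store: list) -> None:
    """Record the report under the pseudonym, never the raw identity."""
    store.append({"who": pseudonym(informant_id), "report": text})
```

Because the pseudonym is stable, investigators can ask an anonymous informant follow-up questions through the same channel, which supports the “quality of responses” goal without attribution.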

The intervening medium of technology, if it is perceived by the prospective informer as safe, can result in timely detection and treatment of the immoral action. We should note, however, that what protects anonymity in the right ways can also create problems in other contexts. For example, anonymity in eCurrency may enable the illegal act of money laundering. Even in anonymity-granting ecosystems, there is always the risk of someone breaching the secrecy. The case of the Panama Papers14 is just one example of how technology can reveal usually unseen miscreants and their partners.

Conviction in the Cause

Ethical judgments are based on formal and informal frameworks. An intuitivist framework helps one identify acceptable moral actions intuitively. A dominant-value framework identifies appropriate moral actions by generating conviction about the most dominant value from among the competing values in a moral dilemma.15 Regardless of the framework used, one’s perception of the various values is an important trigger for moral action. Without a strong identification with a value, one might fail to see the significance of an action one chooses to implement.

A number of examples can be noted here: in politics (Martin Luther King Jr. and Rosa Parks), social activism (Candace Lightner and Mothers Against Drunk Driving), business (Blake Mycoskie of TOMS Shoes), and technology (Julian Assange and WikiLeaks, and Edward Snowden’s case involving surveillance and the US National Security Agency [NSA]). Whether or not you believe in their causes, each had a strong conviction that something was wrong and needed to be corrected. That is why they took risks and, perhaps at great pains, delivered their message to others to cause something to happen. Whereas conviction in the cause is fundamentally important for moral action, it is also necessary that the person have the courage to do the right thing. Mustering courage is no easy task, for the worldly consequences of defying wrongdoing can be devastating to one’s life. Therefore, courage is often mentioned in tandem with the cause; when acted upon, it implies valiant behavior.

Morality as a Human Endowment

By definition, morality pertains to humans, not machines. All systems are essentially an allocation of tasks between humans and machines; some have a much larger role for humans, while others are dominated by machines. Among the roles that remain with humans is that of moral agent. In this role, an IT professional not only strives to behave ethically, but also designs the automated tasks (the part that eventually belongs to the machines) in a morally responsible manner. Thus, imparting an understanding of what is moral to machines is the responsibility of the human who decides the human/machine allocation of tasks. For this reason, considering nonmaterial issues up front is critical to nurturing predictably responsible behavior in automated systems.

Interestingly, developments in the field of artificial intelligence (AI) have shrunk the role of humans in the human-machine partnership of automated systems. The diminished human role in new systems may appear small, but it is not insignificant; it is the part of the system that still requires human judgment and value-driven choices. The choices the human designer makes in creating an automated system tend to become permanently established in the life of the machine. The machines may learn to change their behavior, but only if machine learning has been programmed appropriately. The human element in the overall moral impact simply cannot be overstated or ignored. From automated cars to drones, a whole range of rules of moral behavior is programmed into machines, and any judgment errors at the design stage spell greater risk of moral compromise. Questions of ethical behavior are fundamentally human questions; whether outside of or within the legal perimeter of a business, human collaborators will continue to participate actively in the ecosystem. In the car-for-hire context, perhaps this question will go away or change drastically when Uber deploys autonomous cars. As for drones, rules dominate their behavior; until they are designed to learn, the responsibility for the moral grounding of drones rests with the technologists. Eventually, when machines become nearly autonomous, machine ethics may be extended to what robots can learn.

Endnotes

1 Martin, K. E.; R. E. Freeman; “The Separation of Technology and Ethics in Business Ethics,” Journal of Business Ethics, vol. 53, 2004, p. 353-364
2 “Billions From Airbnb and Others Go Unreported,” Bloomberg News, as reported in the Omaha World-Herald, 24 May 2016
3 Saunders, L.; “The Latest Stumbling Block for Bitcoin: How to Tax It,” The Wall Street Journal, 25 June 2016
4 Kohlberg’s moral stage development work includes compliance with laws and regulations as one of the stages. See Kohlberg, L.; “Moral Stages and Moralization: The Cognitive-Developmental Approach,” December 1975.
5 Kohlberg, L.; The Psychology of Moral Development: The Nature and Validity of Moral Stages, Harper and Row, USA, 1984
6 Van de Poel, I.; L. Royakkers; “The Ethical Cycle,” Journal of Business Ethics, vol. 71, February 2007, p. 1-13
7 Mims, C.; “In Securing Our Data, the Weak Link Is Us,” The Wall Street Journal, 19 January 2016
8 Bazerman, M.; A. Tenbrunsel; “Blind Spots: The Roots of Unethical Behavior at Work,” Rotman Magazine, Spring 2011, p. 53-57
9 Ibid.
10 Ibid.
11 Ibid.
12 Das, G.; The Difficulty of Being Good: On the Subtle Art of Dharma, Oxford University Press, United Kingdom, 2010, p. 59
13 Poore, R. S.; “Anonymity, Privacy, and Trust,” Information Systems Security, vol. 8, iss. 3, 21 December 2006, p. 16-20
14 Stack, L. et al.; “The Panama Papers: Here’s What We Know,” The New York Times, 4 April 2016, www.nytimes.com/2016/04/05/world/panama-papers-explainer.html?_r=0
15 Op cit, Van de Poel and Royakkers, p. 6

Vasant Raval, DBA, CISA, ACMA
Is a professor of accountancy at Creighton University (Omaha, Nebraska, USA). The coauthor of two books on information systems and security, his areas of teaching and research interest include information security and corporate governance. Opinions expressed in this column are his own and not those of Creighton University. He can be reached at vraval@creighton.edu.

 

