ISACA Journal
Volume 6, 2015


Information Ethics: Transparency and the IT Professional 

Vasant Raval, DBA, CISA, ACMA 

The word “transparency” originated in the field of engineering. It has to do with the physical property that allows the transmission of light through a material, such as glass or plastic. It has become popular in many other disciplines since the mid-1980s. While the engineering definition of the term remains unchanged, there seems to be hardly any clarity in its meaning or usage in other disciplines.1 It almost seems like the presence of transparency in all but the engineering field is suffering from opaqueness!

The distinction between transparency as a physical property and transparency as an information attribute is important here. The latter has a value connotation; its practice lies in economics, society, business, and politics in the form of the receivers’ right to know, to be informed. “Respect for transparency is not simply value added to a corporation’s line of goods and services, but a condition of a corporation’s justifiable claim to create value rather than harm, wrong, or injustice in its dealings.”2 Thus, the entity responsible for transparency carries the duty of a moral agent to its stakeholders. As a means to an end, information transparency is “not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles.”3 Transparency is a means to achieve justice or well-being.4

The product of transparency is more like an X-ray output, where we are not attempting to look through the body, but rather look into the body in an indirect manner; that is, without accessing the body as such and instead, interpreting and evaluating images (e.g., text, graphs, pictures), such as the X-ray film, to make decisions (e.g., diagnose the health issue and prescribe treatment).5 Information transparency requires that the content provided is understandable, adequate (granular) and reliable (trustworthy), for example. Finally, the recipients of information judge transparency; what matters is transparency as they perceive it to be. To the entity that strives to meet transparency requirements, what matters are the justifiable expectations of the receiver of the information. To continue with the example, the physician’s information needs should be met from the X-ray film, not just the technician’s standards for its production.

Thus, in nonengineering fields, the word has to do with (information) communication or information transparency. Since inaccessible stored data cannot be assigned any meaning by the receiver of information, information communication is an important context or a prerequisite condition to exhibit transparency; thus, terms such as “disclosure” or “communication” are used to describe an act of transparency. Since any communication involves the sender and the receiver, typically the sender is the entity responsible (often called “agent”) and the recipients are its stakeholders, or the beneficiaries of the communication. Using provided information, the recipient either (1) confirms confidence in the state reported or (2) assigns it a level of trust and uses it for decision making. The former disclosure necessitates describing something in detail and the latter, offering reasons.6

Financial and Technology Transparency

In the fields of economics and finance, an important link between transparency and governance is established by regulators of financial markets, making public companies responsible for certain disclosures. For example, the US Sarbanes-Oxley Act of 2002 requires that both the chief executive officer (CEO) and the chief financial officer (CFO) of a registered company certify the state of internal controls and the accuracy of financial information communicated by management. The purpose is to reduce information asymmetry across the investor community and, thus, contain the problem of some benefitting from the privileged information (e.g., insider trading). Mandating transparency in this way requires content-related judgments (e.g., what information, when) to protect the recipients’ rights to treat such information as reliable and timely.

It is important, however, to note that such mandates may not always produce consistent results. For example, the US Securities and Exchange Commission (SEC) recently added a requirement that any significant risk related to cybersecurity be discussed in the company’s quarterly and annual filings with the SEC. The result is a rather broad spectrum of disclosures, ranging from no disclosures to boilerplate statements to rather elaborate statements regarding the state of cybersecurity at the company.7

An argument can be made that the idea of transparency is technology neutral; it existed well before the emergence of technology, especially information technology. However, as an intermediary enabler, technology adoption changes the fabric of society and its interactions. Thus, technology has an impact not on the meaning of transparency, but certainly on how transparency is delivered. Virtual reality (e.g., Second Life), artificial intelligence (e.g., robotics, drones, driverless cars), social networks (e.g., Facebook, Twitter, LinkedIn) and the Internet of Things (IoT) have all contributed to rather challenging dilemmas. Privacy is just one example that pervades most of these scenarios.

Whereas the goal of transparency in financial markets’ regulation is to facilitate informed decision making, a corresponding goal in the field of information and communication technology is primarily to breed confidence in the system. For example, the disclosure of cybersecurity risk presumably allows an investor to assess pertinent risk exposures impacting the decision to invest in a company. Content is important, but what defines content (i.e., the norms, standards, protocols, practices [policies]) is equally important. Thus, the disclosure requirements drive decisions regarding what is important to communicate (e.g., data capture, storage, protection, dissemination). For example, transparency-related issues of privacy of information do not specify content, but rather dictate privacy policy and system requirements that will achieve the goal of protection of privacy.

In sum, it appears that in dealing with transparency, economic systems present a strong bias in favor of reliable information that levels the playing field, while as an enabler, information technology is biased toward processes, platforms and applications that warrant the interested party’s confidence (e.g., in matters of privacy) or that generate new contexts and challenges in the practice of transparency (e.g., social networks).

Achieving Transparency

In a study of nongovernmental organizations (NGOs) striving to achieve transparency through maximum disclosure via their web sites, various challenges surfaced. The researchers found that web-enabled disclosure is limited by privacy and security concerns and by pressure from financial supporters and benefactors, as well as from potential NGO competitors who vie for grants and donations from the same or similar sources.8 Balancing such conflicting demands could constrain transparency, even though the technology (in this case, the web site) exists to cost-effectively maximize disclosures. What not to disclose, or how much to disclose, is a sensitive issue, as illustrated by the question: How much information should Apple have disclosed when it learned about Steve Jobs’ illness? The privacy rights of the executive need to be balanced against the investors’ desire to know whether there would be a leadership vacuum at Apple.

The bottom line in the practice of transparency is establishing the receiver’s trust in the sender. A recent brewing controversy has to do with whether the US National Security Agency’s (NSA) access to, and use of, people’s phone call data violates the fundamental privacy rights of individuals. On this issue, to put their customers at ease, Apple and Google introduced new features in their smartphone software that prevent others from unlocking encrypted material, even when faced with a warrant.9

Such emerging conflicts have their roots in Internet-based data, communication and services. When Apple promises to make its phones so that the government cannot decrypt messages transmitted using its devices, one might applaud Apple for its courage to limit transparency and protect privacy. However, could the NSA seek or extend regulatory power to prevent phones from using encryption technologies that the agency cannot decrypt?

To be transparent, how much data should an agent disclose? To gain trust, the agent might strive to disclose in great detail the pertinent information. However, the disclosure of sheer volumes of data does not transfer reliable information to the recipient.10 For this to happen, the agent will often have to filter the data so that the disclosure is confined to what is relevant.11 To display what is not relevant or not display what is relevant would compromise the objective; the former creates noise in the communication and the latter produces incomplete information. Where necessary, the agent should filter data and transmit what is relevant, but data filtration to generate (relevant) information is no easy task.
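The filtering step described above can be sketched in code. This is a minimal illustration, not a prescribed method; the record fields and the relevance test below are hypothetical, chosen only to show how irrelevant items add noise while omitted relevant items leave the disclosure incomplete.

```python
# A minimal sketch of relevance filtering before disclosure.
# The record structure and the "material" flag are hypothetical.

def filter_for_disclosure(records, is_relevant):
    """Keep only relevant records: disclosing irrelevant ones creates
    noise in the communication; omitting relevant ones produces
    incomplete information."""
    return [r for r in records if is_relevant(r)]

# Hypothetical example: disclose only material risk items.
records = [
    {"item": "data breach, customer records", "material": True},
    {"item": "routine password resets", "material": False},
    {"item": "unpatched critical server", "material": True},
]

disclosure = filter_for_disclosure(records, lambda r: r["material"])
```

The hard part, as the text notes, is not the mechanics of filtering but deciding what counts as relevant, which is a judgment the agent must defend.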

Finally, Wikipedia provides an interesting context of how transparency issues dovetail with what technology delivers. While Wikipedia uses largely transparent writing and editing processes that potentially produce information that is reliable for the user, it remains silent on one aspect of these processes. This has to do with the nondisclosure of the identity of contributors, editors and administrators. This particular lack of transparency jeopardizes the (perceived) validity of the information being produced by Wikipedia.12 No one discounts the huge value addition Wikipedia brings to society, but lingering doubts remain about the quality of its information.

When Not to Be Transparent

Interestingly, not being transparent would most likely mean one is hiding something that others might think should be in plain sight. A primary defense for keeping a secret is likely to be the protection of something of value, such as assets (e.g., Coca-Cola’s recipe) or human lives. The chameleon changes its color to camouflage itself. And even after decoding the Enigma messages, Alan Turing convinced the British military not to openly claim this knowledge, but rather to create believable alternative evidence that could justify acting on the same targets that the decoded messages identified.13 Not masking the truth would have resulted in winning the battles, but not the war, for the enemy would have changed the encryption key. But even here, Kerckhoffs’ principle says that every secret creates a potential failure point and, thus, “brittleness” in the system that could result in a major collapse of the organization.14 Accordingly, in cryptography, the algorithm can be public knowledge, but the key, which can be changed without much cost, is not.
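The cryptographic point above can be illustrated with a toy cipher. This is a sketch for illustration only and is not secure for real use: the algorithm is fully public, while all secrecy rests in a key that can be rotated at little cost.

```python
import hashlib
from itertools import count

# Toy stream cipher for illustration only -- NOT secure for real use.
# The algorithm below is fully public (Kerckhoffs' principle); all
# secrecy rests in the key, which can be changed cheaply.

def keystream(key: bytes):
    """Public algorithm: derive an endless keystream by hashing key + counter."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

message = b"attack at dawn"
ciphertext = xor_cipher(message, b"compromised-key")

# Knowing the algorithm alone does not reveal the message; if the key
# leaks, rotating it restores secrecy without redesigning the public
# system -- the "brittleness" is confined to one cheap-to-change secret.
recovered = xor_cipher(ciphertext, b"compromised-key")
```

The design choice mirrors the text: concentrating secrecy in the key keeps the single point of failure small and replaceable, rather than spreading brittleness across a secret algorithm.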

A key consideration here is the agent’s (information provider’s) assessment of the recipient’s need to know, which, in turn, dictates what and how much information will be communicated. For example, a company’s proprietary code does not need to be divulged; however, if it launches an open source code, potential beneficiaries depend heavily on complete transparency of the code and its revisions, for the end user must be able to view and alter the source code.15

Many businesses thrive on anonymity. Examples include the Swiss banks that promise to protect the privacy of bank accounts, the Bitcoin ecosystem that believes in anonymity of the transacting party, and Ashley Madison—a company that runs a dating web site serving those looking for extramarital affairs, where secrecy of clients’ personal information is critical to the site’s success. Could such entities be forced to be “transparent” with regard to things they commit to keep anonymous? Perhaps not. However, some believe that such anonymity is unjustified and, therefore, set out to harm these businesses. For example, an intruder who hacked Ashley Madison’s system claims to have personal information of its customers and intends to divulge it unless the site is shut down.16 I believe this is not a case of a company failing to be transparent; rather, the challenge lies in whether people believe in the legitimacy of its business model and how it creates value.

So when should an agent not be transparent? A minimum of three rules should be applied to decide what to disclose and how much to disclose at a given point in time. First, is the information proximately relevant to the recipient’s interests in the agent? If the answer is yes, the company should proceed to the next question. Second, how much granular information will be enough to honor the rights of the receiver? This may be a question of judgment; however, it needs to be addressed in some manner. Third, in putting out the details, are the rights of any other stakeholders compromised? If yes, what would be the best way to balance the conflict between what is appropriate to disclose and what needs to remain undisclosed? These rules suggest that the practice of transparency remains clouded despite efforts to lay out some structure and rules of conduct. It appears that judgment cannot be removed from decisions about being transparent. Stay tuned for the possibility of more clarity on transparency in the future.
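The three rules above can be condensed into a small decision sketch. The function name and boolean inputs are illustrative simplifications of my own; as the text stresses, the real judgments behind each input are rarely yes-or-no.

```python
def disclosure_decision(proximately_relevant: bool,
                        granularity_sufficient: bool,
                        harms_other_stakeholders: bool) -> str:
    """Sketch of the three-rule test for deciding what to disclose.
    Booleans stand in for judgments that in practice require deliberation."""
    # Rule 1: is the information proximately relevant to the
    # recipient's interests in the agent?
    if not proximately_relevant:
        return "withhold: not proximately relevant to the recipient"
    # Rule 2: is the level of detail enough to honor the receiver's rights?
    if not granularity_sufficient:
        return "revise: add detail until the receiver's rights are honored"
    # Rule 3: does disclosure compromise the rights of other stakeholders?
    if harms_other_stakeholders:
        return "balance: redact or defer details that compromise others"
    return "disclose"
```

Even in this reduced form, the second and third branches return instructions to exercise further judgment rather than verdicts, which is consistent with the column’s conclusion that judgment cannot be removed from transparency decisions.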


1 Michener, G.; K. Bersch; “Conceptualizing the Quality of Transparency,” 1st Global Conference on Transparency, 17-21 May 2011, Rutgers University, New Jersey, USA
2 Elia, J.; “Transparency Rights, Technology, and Trust,” Ethics and Information Technology, vol. 11, 2009, p. 145-153
3 Turilli, M.; L. Floridi; “The Ethics of Information Transparency,” Ethics and Information Technology, vol. 11, 2009, p. 105-112
4 Menendez-Viso, A.; “Black and White Transparency: Contradictions of a Moral Metaphor,” Ethics and Information Technology, vol. 11, 2009, p. 155-162
5 Op cit., Menendez-Viso, p. 160
6 Pieters, W.; “Explanation and Trust: What to Tell the User in Security and AI?” Ethics and Information Technology, vol. 13, 2011, p. 53-64
7 Morse, E. A.; V. Raval; J. R. Wingender Jr.; Market Price Effects of Data Security Breaches, working paper, 2015
8 Vaccaro, A.; P. Madsen; “ICT and an NGO: Difficulties in Attempting to Be Extremely Transparent,” Ethics and Information Technology, vol. 11, 2009, p. 221-231
9 Yadron, D.; “Former Heads of Homeland Security, NSA Back Encryption,” The Wall Street Journal Tech Blog, 29 July 2015,
10 In this sense, the Internet, by itself, is not transparent.
11 See R. L. Ackoff’s classic article, “Management Misinformation Systems,” Management Science, 1967, p. 147-156.
12 Santana, A.; D. J. Wood; “Transparency and Social Responsibility Issues for Wikipedia,” Ethics and Information Technology, vol. 11, 2009, p. 133-144
13 Hodges, A.; Alan Turing: The Enigma, Princeton University Press, USA, 2014
14 Kerckhoffs’ principle,
15 Vuorinen, J.; “Ethical Codes in the Digital World: Comparisons of the Proprietary, the Open/Free and the Cracker System,” Ethics and Information Technology, vol. 9, 2007, p. 27-38
16 Yadron, D.; “Hackers Target Users of Infidelity Website Ashley Madison,” The Wall Street Journal, 20 July 2015,

Vasant Raval, DBA, CISA, ACMA, is a professor of accountancy at Creighton University (Omaha, Nebraska, USA). The coauthor of two books on information systems and security, his areas of teaching and research interest include information security and corporate governance. Opinions expressed in this column are his own and not those of Creighton University. He can be reached at



Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and from opinions endorsed by authors’ employers or the editors of the Journal. The ISACA Journal does not attest to the originality of authors’ content.