Risk and Responsibility 

 

Physical access controls are probably occupying our minds these days due to many recent tragedies around the world. For example, in Oak Creek, Wisconsin, USA, an intruder raided a Sikh temple and took seven lives. And earlier, in Aurora, Colorado, USA, a gunman killed 12 people at a movie theater. In both cases, the attacker managed to get inside the facility with a gun. Most public places in India seem to have learned a hard lesson following the 2008 terrorist attacks in Mumbai, where 195 people lost their lives and 295 were wounded. However, the question of what constitutes an acceptable level of risk is more complex; everyone thinks, “It just cannot happen to me.” A determination of an acceptable level of risk is normally the responsibility of some individual or group designated, formally or otherwise, to manage the risk.

If we agree that a primary concern in our profession is risk assessment and risk management,1 it is imperative that we comprehend the fundamental nature of risk. ISACA’s glossary describes risk as “the combination of the probability of an event and its consequence; an event is something that happens at a specific place and/or time.”2 Simply put, we understand risk as the combined effect of the probability of an unwanted event occurring and its likely impact.

At first glance, making the choice seems like a question of a few calculations: Determine the probability of an unwanted event, assess its consequences and combine the two to estimate the exposure. A comparison of this result with the cost of instituting and operating appropriate controls guides the decision regarding what to do to protect against the unwanted event. Quantifying risk in this manner allows us to proceed to mitigate any unacceptable levels of risk.
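As a minimal sketch of this first-glance calculation, consider the following illustration. The probabilities, impact and control cost are hypothetical figures chosen only to show how expected loss might be weighed against the cost of a control; the function names are mine, not drawn from any standard.

```python
# Illustrative sketch only: all figures below are hypothetical.

def expected_annual_loss(probability_per_year: float, impact: float) -> float:
    """Combine the probability of an unwanted event with its likely impact."""
    return probability_per_year * impact


def control_is_worthwhile(p_before: float, p_after: float,
                          impact: float, annual_control_cost: float) -> bool:
    """Compare the reduction in expected loss with the cost of instituting
    and operating the control."""
    reduction = (expected_annual_loss(p_before, impact)
                 - expected_annual_loss(p_after, impact))
    return reduction > annual_control_cost


if __name__ == "__main__":
    # Hypothetical scenario: a 5% annual chance of a US $200,000 incident,
    # and a US $4,000-per-year control that cuts the probability to 1%.
    baseline = expected_annual_loss(0.05, 200_000)   # US $10,000
    residual = expected_annual_loss(0.01, 200_000)   # US $2,000
    print(f"Baseline expected loss: ${baseline:,.0f}")
    print(f"Residual expected loss: ${residual:,.0f}")
    print("Institute the control?",
          control_is_worthwhile(0.05, 0.01, 200_000, 4_000))
```

Of course, as the following paragraphs argue, such arithmetic captures only the calculable component of risk.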

But there is more to this than meets the eye. What many of these disparate approaches (e.g., environmental impact assessment; multicriteria evaluation; probabilistic, comparative and environmental risk assessment; cost-benefit and cost-effectiveness analysis) hold in common is the tendency to treat the concept of risk as an objectively determinate quantity, with the task of appraisal being simply to identify the “best” of a series of options. To this extent, they share the objective of converting the socio-political problems of risk into precisely defined and relatively tractable analytical puzzles.3 The point is that significant uncertainties in the consequences cannot be adequately handled by standard cost-benefit analyses.4

In this thinking, Andrew Stirling5 is not alone. S. Rayner and R. Cantor reject the essential character of the quantitative definition of risk. An agreement on which consequences are unwanted, followed by an assessment of the factors of probability and magnitude, is not enough to meet the necessary and sufficient conditions of risk choices. Other factors that might be relevant are not mere byproducts, but rather could be inherent parts of the risk itself.6

Rayner asserts that the notion of risk can be better grasped if we think of risk as an open concept comprising two components: the scientific and the societal. The scientific component is illustrated by the traditional means of risk analysis. The societal component is fairly new; it concerns the trust placed in the institutions regulating the technology, the acceptability of the principle used to apportion liabilities and the acceptability of the procedure by which collective consent is obtained. However, we should note that the elements in this chain of concepts may not be equally important across all situations of risk.7

Talk of risk devoid of responsibility is incoherent. In reality, the two are inseparable, not mutually exclusive. In a thought-provoking treatise on risk and responsibility, Anthony Giddens sets the ground for considering the notion of responsibility as closely linked to risk. He asserts that new technologies penetrate more and more to the core of our lives, and more and more of what we feel and experience comes under the scientific spotlight. This situation leads to increasing insecurity in the world.8 He believes that all of us are now involved with systems that even we ourselves do not understand. A risk society is a society in which we increasingly live on a technology frontier that absolutely no one completely understands and that generates a diversity of possible futures.9

In a risk society, the interaction of social factors with technology factors produces possible futures. Take the issue of privacy in this electronic world. Do you get the feeling that others know more about you than you yourself do? Are you concerned that your every click on the Internet generates an instant lead for some opportunistic entity? Do others come to know that you have donated a certain sum of money to your favorite charity? How are you contributing to this problem with your own choices? How do you protect your privacy (or can you)? Is the abuse of privacy hurting some while benefiting others? Is an opt-out solution a good idea for those whose personal information is extracted? Who is winning and who is losing in this battle?

Take another example: the question of copyrights and intellectual property.

Documents we access, electronic copies we forward to our world of connections, photos we share, songs we download, journals we search: in all of this, are we self-regulating to preserve human decency and obey the rules of society? Since it does not cost us anything more to add email addresses, are we too generous in distributing information? Are we doing a good deed by flooding the receivers’ mailboxes? Do we make good decisions about our sharing? Does it not feel like a chaotic world where, if we are able to do something, such as forwarding copies of a copyrighted article, we do it? Does ease of use translate into ease of abuse?

As a final example of the sociotechnological factors of a risk society, consider the new realities of driverless cars. Is it possible for us to fathom the diversity of possible futures likely to be created by the numerous embedded pieces of logic in a vehicle? How will this translate into risk? We live in a world of “manufactured” uncertainty, which occupies a critical space in our lives today. Celine Kermisch10 suggests that risk can be calculable or unknown; unknown risk can be sourced in uncertainty or ignorance; and risk sourced in uncertainty can be external (e.g., a tsunami) or manufactured (e.g., privacy, robotics).

Inherent in the notion of ethics is the idea of responsibility: responsibility as a moral agent of our family, church, employer or society at large. As moral agents, we make decisions that impact others, and these impacts are probably beyond calculation. In a traditional setting, we may find that we are working with a closed system in which we make the risk choices, we are accountable for them, and we and our organizations exclusively face the consequences. This is no longer the case. In many cases, our role as a moral agent could reach well past the employer’s door, into the lives of many people and even the global community. The definition of responsibility may emerge from our role (e.g., privacy officer), cause (e.g., disaster recovery planner), capacity (having the credentials to perform the role of a moral agent, e.g., IT auditor) and liability (e.g., the duty to comply with regulations, policies and practices, as in the Payment Card Industry Data Security Standard [PCI DSS]). In addition, J. Ladd introduces a moral role, a form of responsibility that refers to “moral deficiency and not just to fault, for example, the absence of care or concern for the welfare of others.”11 Moral agency and moral responsibility are more vivid in situations in which the risk choices made by the agent impact other stakeholders.

In the era of manufactured uncertainty, we will see the churning of technology, innovation and risk in various forms. Consequently, there will be political debates about right vs. wrong; the impact on society at large; and stakeholder interests, voice and protection. Not every organization will have to wrestle with questions that span beyond its boundaries, but most will. Risk and responsibility, and the corresponding ethical issues, will rise to the fore. Using Vincent di Norcia’s term, we can say that we do not, and cannot, have a “utilitarian calculus,”12 but we do need guidance that is more specific than the broad ethical theories. And, if anything, we will be less secure, not more.

Author’s Note

Opinions expressed in this column are the author’s own, and not those of Creighton University.

Endnotes

1 ISACA’s entire glossary contains about 950 terms. Of these, 33 include the word “risk” as part of the term. The word “risk” appears 93 times in the glossary’s descriptions, with some terms referring to risk as many as five times. This is just one indicator of the importance of the notion of risk to the profession.
2 ISACA, Glossary, www.isaca.org/Pages/Glossary.aspx
3 Stirling, Andrew; “Risk at a Turning Point?,” Journal of Risk Research, 1(2), 1998, p. 98
4 Aven, Terje; “On the Ethical Justification for the Use of Risk Acceptance Criteria,” Risk Analysis, 27(2), 2007, p. 309
5 Op cit, Stirling
6 Rayner, S.; R. Cantor; “How Fair Is Safe Enough?,” Risk Analysis, 7(1), 1987, p. 3–9
7 Rayner, S.; “Cultural Theory and Risk Analysis,” in S. Krimsky and D. Golding (Eds.), Social Theories of Risk, Praeger, Westport, 1992, p. 83–115
8 Giddens, Anthony; “Risk and Responsibility,” The Modern Law Review, 62(1), 1999, p. 1–10
9 Ulrich Beck is to be credited for developing the notion of risk society.
10 Kermisch, Celine; “Risk and Responsibility: A Complex and Evolving Relationship,” Science and Engineering Ethics, 18, 2012, p. 91–102
11 Ladd, J.; “Bhopal: An Essay on Moral Responsibility and Civic Virtue,” Journal of Social Philosophy, 22(1), 1991, p. 73–91
12 Di Norcia, Vincent; “Ethics, Technology Development, and Innovation,” Business Ethics Quarterly, 4(3), 1994, p. 237

Vasant Raval, DBA, CISA, is a professor of accountancy at Creighton University (Omaha, Nebraska, USA). Raval is the coauthor of two books on information systems (IS) and security. His areas of teaching and research interests include information security and corporate governance. He can be reached at vraval@creighton.edu.




Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and/or the IT Governance Institute and their committees, and from opinions endorsed by authors’ employers, or the editors of this Journal. ISACA Journal does not attest to the originality of authors’ content.

© 2012 ISACA. All rights reserved.

Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For other copying, reprint or republication, permission must be obtained in writing from the association. Where necessary, permission is granted by the copyright owners for those registered with the Copyright Clearance Center (CCC), 27 Congress St., Salem, MA 01970, to photocopy articles owned by ISACA, for a flat fee of US $2.50 per article plus 25¢ per page. Send payment to the CCC stating the ISSN (1526-7407), date, volume, and first and last page number of each article. Copying for other than personal use or internal reference, or of articles or columns not owned by the association without express permission of the association or the copyright owner is expressly prohibited.