ISACA Journal Author Blog


Risk and Responsibility: How Do They Connect?

Published: 11/5/2012 7:01 AM
Vasant Raval, DBA, CISA, ACMA
Recent years have seen a surge in the use of algorithms. Their popularity stems from the computing power that lets algorithms touch our everyday lives and affect us deeply in many ways. We should remember, though, that research on algorithms is not a new domain. Just recently, Lloyd Shapley and Alvin Roth won the Nobel Prize for research on algorithms that originated with Shapley in the 1950s. Shapley’s algorithm ensured “stable matching,” for example, the matching of men and women for marriage. Roth used that research to address real-world problems; for example, he applied “stable matching” to assign medical school graduates to residencies at hospitals and New York City (USA) students to high schools. The 60 years of effort by these two researchers, who worked independently of each other, has done a great deal of good for society at large.
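Shapley’s “stable matching” result can be made concrete with a minimal sketch of the Gale-Shapley deferred-acceptance procedure; the residents, hospitals and preference lists below are purely illustrative:

```python
def stable_match(proposer_prefs, acceptor_prefs):
    """Gale-Shapley deferred acceptance: proposers propose in
    preference order; acceptors tentatively hold their best offer."""
    # Rank lookup: acceptor -> {proposer: rank}; lower rank = preferred
    rank = {a: {p: r for r, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                 # proposers not yet matched
    next_pick = {p: 0 for p in proposer_prefs}  # index of next proposal
    engaged = {}                                # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_pick[p]]
        next_pick[p] += 1
        if a not in engaged:
            engaged[a] = p                      # acceptor was free: accept
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])             # acceptor trades up
            engaged[a] = p
        else:
            free.append(p)                      # rejected; propose again later
    return engaged

# Tiny illustrative instance (names are hypothetical)
residents = {"ann": ["city", "mercy"], "bob": ["city", "mercy"]}
hospitals = {"city": ["bob", "ann"], "mercy": ["ann", "bob"]}
print(stable_match(residents, hospitals))  # {'city': 'bob', 'mercy': 'ann'}
```

The key property is stability: no resident and hospital would both prefer each other over their assigned match.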
In a brief review of “Algorithms That Run Our Lives,” Bloomberg Businessweek (10-16 September 2012, p. 87) notes five of the most influential formulas out there: Black-Scholes, Swarm, bin packing, the fast Fourier transform and EdgeRank. To name one, EdgeRank was developed by Facebook to figure out what should appear in your news feed. The formulas that dissect our relationships in this manner are the drivers behind the engines of social networks such as Facebook, LinkedIn and others.
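To give a flavor of one of the simpler entries on that list, here is a rough sketch of the “first-fit” heuristic commonly used for bin packing; the item sizes and bin capacity are made up for illustration:

```python
def first_fit(items, capacity):
    """First-fit bin packing heuristic: place each item into the
    first bin that still has room, opening a new bin if none does."""
    bins = []  # each bin is a list of item sizes
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits: open a new one
    return bins

# Hypothetical shipment: pack sizes into bins of capacity 10
print(first_fit([6, 5, 4, 3, 2], capacity=10))  # [[6, 4], [5, 3, 2]]
```

First-fit does not always find the minimum number of bins (the exact problem is NP-hard), but it is fast and comes with a known worst-case bound, which is why heuristics like it run behind shipping and scheduling systems.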
The goodness of algorithms should be acknowledged. However, the attendant risk that their use brings along also requires careful analysis. For example, algorithm-based speed trading of securities in the financial markets and deployment of robots in the health care industry have become major forces in today’s society. In the near future, driverless cars will be added to the list. The impact of what businesses innovate today materially extends far beyond each business’s boundaries. The question is:  Who is exposed to these new manufactured uncertainties and how are they going to be protected from them?
Never before in the history of risk have we stood at a crossroads like the one we face now. In the egalitarian age, the societal impact combined with the technological or scientific impact helps us realize that there is, indeed, a moral thread to managing risk. Who will take responsibility for considering this wider universe of impacted societies in the risk assessment equation? Where do we draw the line, for example, on the issues of privacy or protection of intellectual property rights in this age of data sharing?
What do you think? On a concrete level, ask yourself:  If you were a traffic police officer in 2016 investigating a 10-car pileup in which 9 were driverless vehicles, how would you assess the accountability?
Read Vasant Raval’s recent Journal column:
“Risk and Responsibility,” Information Ethics column, ISACA Journal, volume 6, 2012


Responsibility in the New Age

You bring up a great point about the accountability factor in the future with all of the proposed advances. Technology has come a long way in ten years, and it will go even further in the next ten. You bring up the idea of driverless cars. Although it seems crazy to think about, these inventions are already being tested and could be in our world very soon. How are we going to adapt to all the new technology and everything that will come about in the future? To answer your question about the accountability of driverless cars, I believe it will have to be a situation that is dealt with as the technology comes about. Text messaging was a very new concept when it came about years ago. Situations came up that many people did not know how to handle because they had never dealt with text messaging before. Rules and laws were put in place once situations arose because of text messaging, and I believe driverless cars will be handled in the same manner. We may have great ideas about how to handle situations in the future, but we don't actually know how to handle them until the situation is in front of us. New ideas and concepts become the next best thing, but how do we make sure that the next best thing doesn't become the next best problem?
Michael057 at 11/26/2012 9:50 PM


I definitely agree that the risk that algorithms bring, along with their good, requires careful analysis. We have yet to experience these new uncertainties, and we should take extra precautions before exploring them. I understand that algorithms are a major part of our lives, but that doesn’t mean we know all we need to know about them. We must continue to grow our knowledge of algorithms and definitely learn more as we go. With algorithms there has to be a limit; we need to know when to stop. If I were a traffic police officer in 2016 and there was a 10-car pileup in which 9 were driverless vehicles, I wouldn’t know who would be accountable for the accident. Would I blame the one human-driven car just because driverless cars are supposed to be perfect and have no problems? As of now, I’m not sure. I definitely agree with Michael; we can’t really know what we would do until it happens and we experience it for ourselves.
Christine830 at 11/26/2012 9:59 PM

Re: Risk and Responsibility: How Do They Connect?

This post brings up some very interesting situations and questions. We are definitely at the beginning of an age of automation. As mentioned, algorithms are used in finance, medicine and day-to-day life. Because this is very new for everyone, someone will need to create rules for all of the automation. As the post mentions, who is to blame for a car accident with driverless cars? Who is to blame for a surgery that went wrong with robots? At this point in time, no one is to blame. I don’t believe it would be fair to blame the programmers and the people who created the algorithms. However, they need to be responsible until their programs and algorithms are tested and confirmed. As for situations like the car accident, I think it will take a lot of investigation in the future. People will have to look at exactly what went wrong mechanically with the cars. So, I would say it looks like the car manufacturers may be to blame if something was overlooked during inspection and testing. Although 2016 is close, it is too hard to say where technology will be at that point. Maybe all the driverless cars will be required to have cameras that capture what happens in case of an accident; the owner of the car that caused it could then be to blame. At this point, it is too hard to say who is accountable for this accident. I would say many rules and regulations need to be put in place before driverless cars are permitted. Every field will have to make its own rules and judgment calls as technology in that area progresses.
Catherine066 at 11/26/2012 11:39 PM

Our Future

The rapid increase of technology in today’s world undoubtedly poses various risks to how we live our lives. When do we know whether we have crossed the line? An algorithm to ensure “stable matching” in marriages, robots in the health care industry, driverless cars: is it all too much? As Dr. Raval mentioned, it is hard to determine who will take responsibility for how all these innovations affect society as we start to rely more and more on technology. I agree with Christine when she says that we need to know when to stop, but I would say it should apply to everything, not only algorithms. Of course, technological advances like algorithms have brought amazing innovations to our lives and have enabled us to connect with people around the world in ways we never imagined possible. I am an advocate for technology. However, when so much is happening so fast, we need to be aware of the risks it brings and try to take things slowly in order to properly assess the implications of the changes happening in our lives. In addition, the issues of privacy and protection of intellectual property rights have to be considered. With so many media outlets that allow us to share and access information, it has become increasingly easy to obtain information and media, but not necessarily in the most honest manner. Piracy is a big issue that affects people in various industries. Who is responsible for it when we all allow it to happen and often participate in it? The world is heading to an age where technology rules our lives. It is a time of amazing change and increased quality of life. Nonetheless, we need to remember there is always another side to things and keep in mind the risks that come with all the changes.
Andrea907 at 11/26/2012 11:47 PM


We are at a point in life where everyone is turning to technology to assist with everyday tasks. Computer software is one of the leading topics in this area of discussion. Many advancements in software involve algorithms and how they improve efficiency and can assist with daily activities. I think algorithms can be useful in everyday life; they are behind many of the things we use every day. There is also risk in everything anybody does in life. I think algorithms, if designed correctly, can be used to minimize risk in all situations. You would also need to make sure the algorithm is aware of any possible connections that could put the activity at a higher risk.
In the coming years, technology will be taking over daily activities. The example given in the blog, cars driven by computers, is a great example of how technology and software can perform not only simple activities but also more difficult ones that involve more risk. On the question of who bears accountability for the car accident, the police officer should start by looking at the creator of the algorithms involved with the cars, see whether those cars have had accidents in the past, and investigate whether the algorithms were correctly designed so the car can drive properly with no driver. Then look at the driver of the 10th car and see if he made a mistake. Having only one driver in a car among nine driverless cars can create much more risk, with many more factors in play, than everyone driving their own car or everyone having a driverless car. There is a lot of investigating that can be done to see who is responsible for the accident. Technology will one day take over many simple and more difficult activities in our daily lives.
Michael053 at 11/27/2012 12:16 AM

Growing Reliance on Technology

Algorithm technology is a tool that has used data to unlock many doors that may otherwise have stayed closed. But as Dr. Raval mentions, there is a fine line between usefulness and overuse. The final sentence of the blog wraps up the overall message of the post quite nicely: “If you were a traffic police officer in 2016 investigating a 10-car pileup in which 9 were driverless vehicles, how would you assess the accountability?” Would you blame the one human driver? It should be safe to assume that this driverless technology has passed stringent safety tests and should be very low risk. Or would you blame the driverless cars and their inability to react to a changing environment?
 I agree with Dr. Raval that society is becoming too dependent on advanced technology in day-to-day life. It blows my mind that some professionals have quit their jobs to stay home and “day trade”. This phenomenon has both negative effects on the micro-economy existing in the day trader’s home, and the macro-economy with millions of quick short-term transactions happening each week.
Another example of an ever-growing dependency on algorithms and technology is the growing accuracy of the EdgeRank algorithm used on Facebook. As a Facebook user, I now trust and expect Facebook to sort my news feed into relevant and irrelevant information. I came to this level of expectation because it has been so accurate in the past! By streamlining this information, Facebook is accomplishing its goal of keeping the user captivated by the information presented. At the same time, however, the fact that Facebook can accurately predict which album or status I want to look at is a bit unnerving. It may be good business, but it is daunting to think that if Facebook can predict my viewing tendencies, what else can it discover about me from my account?
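Facebook has never published EdgeRank’s exact implementation, but its publicly described shape is a sum, over a story’s interactions, of affinity times edge-type weight times a time decay. A toy sketch under that assumption (all scores and the half-life are hypothetical):

```python
import math

def edgerank(edges, now, half_life_hours=24.0):
    """Toy EdgeRank-style score: sum over a story's edges of
    affinity * edge-type weight * exponential time decay."""
    score = 0.0
    for affinity, weight, created_at in edges:
        age_hours = (now - created_at) / 3600.0
        # Decay halves the edge's contribution every half_life_hours
        decay = math.exp(-math.log(2) * age_hours / half_life_hours)
        score += affinity * weight * decay
    return score

# One comment (high weight) an hour ago vs. one like (low weight) a day ago
now = 1_000_000.0
fresh_comment = (0.8, 4.0, now - 3600)   # (affinity, weight, timestamp)
stale_like = (0.8, 1.0, now - 86400)
print(edgerank([fresh_comment], now) > edgerank([stale_like], now))
```

The sketch just shows why a fresh, heavily weighted interaction outranks a stale, light one; the real system surely uses far more signals.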
John133 at 11/27/2012 10:32 AM

Future Risk v. Future Responsibility

The ideas of risk and responsibility are very much interconnected in today’s society. But how will they connect in the future? With the surge of algorithms in technology today, people are becoming more responsible as risks become more apparent. These algorithms are touching our everyday lives in more and more ways, most of which we do not even realize. The fact that you can pull up a certain website or social media site and it will feed you advertisements is absolutely astonishing. These algorithms are connecting your everyday events together. They are adhering to your likes and dislikes and making the things you want more available to you. Big companies are paying top dollar to use these mathematical equations in order to bring their products to their customers more easily. Dr. Raval makes a very good argument about Facebook’s use of these algorithms just to create a personalized profile page. This is all about creating relationships in a way that requires the company to do nothing; the algorithm does it all. This is where we run into a problem. As described in the blog, tasks such as driving will soon be automated by technology. The situation described in the blog is that of being a traffic officer in 2016. What will happen when a ten-car pileup occurs? There are 9 automated, robot-driven cars and 1 human-driven car. Who is accountable? Who is at fault? Herein lies the risk of using such technology in a human society. With great power comes great responsibility, and in this equation, power is equal to risk. As a traffic officer in this situation, 9 times out of 10 the responsibility will fall on the human error in the accident. This is the trouble that will fall on humans as the world of technology continues to take control of the world around us.
Taylor150 at 11/27/2012 11:38 AM

Responsibility for Future Risk

Your points regarding the evolution of technology and the uncertainty it brings create some very interesting ethical questions that will need to be answered in the coming years. I think that we, as humans, need to be responsible, and also held responsible, for the programs and devices that we create. The risk that problems can arise without human intervention will grow as we continue to create technology that is designed to “think” for itself. We must remember, however, that we have inherently imparted part of our human error to these designs when we created them, and we must keep that in mind as these technologies become more common in our future. I think we as a world community will continue to carry the responsibility of making sure that all humans are treated fairly and adequately. I am not sure there is any way to impart this feeling of responsibility to the machines we create, and I think that is a problem that needs to be addressed before the public will fully accept the risk that goes along with technological advancement.
Ryan063 at 11/27/2012 11:49 AM

Societal Component of Risk

Prior to reading this article, I looked at risk simply from its scientific definition. I addressed risk by looking at the “probability of an event and its consequences.” However, I think your article brings up a very interesting analysis regarding the societal component of risk. Our growing reliance on technology has made the definition of risk much more complex. When individuals buy cutting-edge technology, they expect this technology to work and to be free of flaws. Individuals rarely, if ever, consider the risks associated with a new piece of technology. Inherently, we put trust in the manufacturers of technology products and automatically assume that the manufacturer has worked out all of the bugs associated with the product. Today, many people fail to acknowledge the possible risks associated with technology and the exploitation of technology. Cyber Monday is the biggest online shopping day of the year. Millions of people all over the world trust the companies they are doing business with to keep their credit card information confidential and secure. Most of these online shoppers never even consider the risks of putting a credit card number out over the Internet. This is a perfect example of how our society is becoming more and more trusting of technology. In order to avoid the accountability conflict created by the driverless car scenario, we must place less trust in technology and manufacturers, and more trust in our own risk assessment procedures. Throughout history, humans have analyzed risk using their own criteria and have maintained a general sense of skepticism. Technology has made it acceptable for us to ignore our own risk assessment procedures and skepticism. If we lose the ability to assess accountability, we lose the ability to place blame on manufacturers or individuals for doing wrong.
Overall, your question regarding the driverless vehicles is essentially asking: At what point does our reliance on and trust in technology become too great? I believe that we are almost at the point of overreliance on technology. In order to avoid that point, there must be a culture shift that moves society away from unquestioning trust in technology and toward placing more trust in ourselves and in individual risk assessment procedures.
Andrew149 at 11/27/2012 12:03 PM

Algorithms and Risk

Dr. Raval brings up an interesting point with his discussion of algorithms. Algorithms are being used for good and are making tasks easier for people to complete. Their use can reduce human error, whether through robots in surgeries or formula-driven online calculations as opposed to manual ones. However, with this reduction of human involvement in certain processes, it is harder to trace accountability, as is discussed above; after all, who can be sued if a robot performs the surgery or drives the car? I feel that algorithms should be used, as they have significantly reduced error and made processes more accurate and easier. I do not think that algorithms should ever replace humans completely; someone needs to be held accountable and to verify that the algorithms and programs are operating as they should.

In his recent article, Raval discusses the moral implications of advanced technology and algorithms with respect to the reduction and protection of individuals’ privacy. While websites, like your Facebook news feed, are tailored to the individual user, there is also a risk of loss of privacy. I feel that it is imperative that individuals’ right to privacy be respected, even if it reduces the effectiveness of the websites or the companies that operate them. I do not want my personal choices or transactions made online, or the pages I view or interact with, to be exposed to the general public without my consent. I would rather have less personalized answers to my searches on Google and still have my personal private information intact.
Marie151 at 11/27/2012 9:09 PM

Risk and Ethics

Dr. Raval makes an interesting connection between algorithms and the changing idea of privacy (what you do online) and how it ties to ethics. On the one hand, how much information should we openly display about ourselves; on the other hand, how much of our personal lives should be available to the world?
I think that, while our ability to create algorithms and to access a wide variety of information at the drop of a hat (essentially) is a great thing, and has many benefits for the world of business, accounting to be specific, there are also downsides.  There is a risk of getting too caught up in the data that the person collecting it forgets the boundaries of ethics.  Just like we learned in Dr. Raval's class that anyone can commit fraud, anyone can cross the line from accessing enough information to accessing too much.
I think that it is good to note the importance of a "moral thread" associated with managing risk.  If I was hiring someone to do such a task, I would definitely want a person with high moral standing in the position.  This is not only because he/she would be less likely to ignore or fake certain things (aka fraud) but also because he/she would be more likely to think about the moral consequences of his/her actions and their impact on others.
I also found the idea of the driverless cars interesting in the sense of who would take the blame, not in that situation but in one where someone thought that he/she was just "doing my job" and ended up violating someone else's rights.
Sarah065 at 11/27/2012 9:46 PM

Risk and Responsibility

It is amazing how technology has improved in the past 10 years. It is also kind of scary that these improvements are being introduced into our normal daily lives faster every time. As Dr. Raval says, we are not too far from having driverless cars. Like Christine, I believe that if technology keeps growing this fast, we are going to have to put a stop to it. We cannot let technology keep growing to the point where human life basically becomes pointless. If everything were automated and easy, humans would not have to think about anything. Also, because of fast-growing technology, people don’t stop buying. For example, if you buy an iPad today, it is almost a given that one year from today you are going to want the new iPad that just came out. I believe that technology is making people spend their money on unnecessary things. Moreover, when a new product is introduced to the market, there is a huge risk. Humans usually expect everything they buy to be perfect and have no defects. However, this is not always the case. Many times, someone will introduce something new to the market and end up causing harm either to a person or to the environment. I believe that companies involved with technology should be very careful with what they do and should look at every possible algorithm involved in the new technology. Also, they should run many experiments before introducing the new technology to the market, and if something goes wrong, the company should take full responsibility for it. It is also important to note that algorithms can be used to plan terrorist attacks, for example. We need to be really careful about where we let technology take us in the near future, and especially make sure that it is used for good and not to cause harm.
Natalia909 at 11/27/2012 10:27 PM

Risk and Responsibility in Ethics

With risk comes responsibility. As technology changes exponentially, it is hard for IT professionals, auditors and others to keep up with the trends. This leads to holes in the system that people will find and use to commit fraud or other illegal actions. Preventive measures must be taken to stop these actions, such as conducting penetration tests so we find the errors before the criminals do.
All good things must come to an end. The rapid increase of technology will lead to corruption. We will then start the cycle again as we revert back to our natural state: cavemen. This is the future for our race if technology keeps increasing. Therefore, it starts now, with our generation, to recognize the trends of our world and the doomsday we are setting ourselves up for. It is our job, as the world’s leaders of tomorrow, to act as one to prevent such Armageddon-like events from occurring.
To stay on topic with Dr. Raval’s short blog, algorithms are a part of our daily lives. We see them in many technological contexts, such as Netflix, iTunes, Spotify, etc. Algorithms make life easier for humans to live, and also make it easier for us to become lazy.
Dr. Raval points out numerous examples of how technology has hindered our society’s ability to stay ethically sound. The first example is privacy. Technology has made the Internet public to the world, and this takes away privacy. We no longer have privacy, and anything we say or write on sites such as Facebook, Twitter or LinkedIn can be seen by anyone around the world. Of course there are positives and negatives to every side, but I believe there are more negatives than positives in this situation. Just turn on the TV and watch the news: every day we see pedophile and sex offender cases originating from the Internet, and the numbers are increasing as more people use it.
The second example is technology increasing so rapidly that we may soon have driverless cars. Google is innovating to develop a car that drives itself. Is it cool? Yes, but will it lead to corruption in the future? Maybe. If there are driverless cars by 2016, there may be no need for human police officers, only robot officers. We see numerous examples in sci-fi movies, such as Terminator and I, Robot, of how robots take over the world. I understand these movies are fictional, but with technology increasing, anything is possible. Just 10 years ago we would never have guessed that you could store 10,000 songs on something that plays them on command. The future looks dark for our world, but as the leaders of tomorrow we must act now and tell the world that we can live in simplicity before we destroy it all.
Kent373 at 11/27/2012 11:52 PM

Re: Risk and Responsibility: How Do They Connect?

Dr. Raval makes good points in his article about how we perceive risk. He is spot on when he says, “everyone thinks, ‘It just cannot happen to me.’” I completely agree with the notion that in the world today, we do not concentrate on risk as much as we should. This pertains to everyone, from individuals to corporations. As mentioned in the article, think of the amount of information that we share, individually, on Facebook, Twitter or other social media. I am willing to bet that the majority of us do not take into consideration the amount of risk we are assuming once we post or upload something. Businesses are a bit better at this because many have procedures or systems in place that address risk. Looking at targeted algorithms, some of us enjoy them and others do not. Do you ever see one and think that it is spot on, completely related to something you would buy? When you see something like that, does it scare you that the computer can access so much information that it is able to match you with another product or website? I think this is another thing that we do not pay much attention to. There is so much of our information just floating around on the Internet, and technology is so advanced, that it has become our personal responsibility to look at everything we are putting on the Internet and know that the information can be used anywhere, at any time, by anyone.
Brittney986 at 11/28/2012 10:19 AM

Technology and our impacted society

With so much change in the last decade, it is unbelievable what technology has in store for us in the future. The push to expand and strengthen broadband connectivity throughout the United States will allow a greater amount of information sharing. In the future, we may find average affordable speeds of 100 Mbps download and 50 Mbps upload. Everyone will be interconnected through mobile devices or other devices. Driverless cars, as you mentioned, will be able to apply real-time information technology to make driving decisions based on current conditions: weather, accidents, construction, etc. With so much to look forward to, it is scary what these technological advancements can bring. Risk and exposure to fraud, identity theft and infringement of intellectual property have become prevalent issues. In the future, the possibilities are endless! But no need to fear! With the advancement of technology, we can see a trailing force of security, standards and regulation. There is always a counterbalancing force to maintain equilibrium. So, in regard to your question about the ten-car pileup, it is ultimately up to the companies to produce quality products to prevent this from happening. Cars must go through quality inspection before they are sold to the public, and standards must be set for these companies. In reality, some products do slip through and cause accidents, but with standards and regulations, this occurrence should be rare.
VIET155 at 11/28/2012 5:20 PM

Risk and Responsibility

With social media and the Internet being such strong players in our everyday lives, the questions Dr. Raval brings up in his article are important to consider. We often don't even think twice about what we post on the Internet, what we click on, or where we enter our email address or phone number. Doesn't this seem a little strange? Why would some sites ask for all of that information, and what do they plan to do with it? Where is it being stored, and for how long? Unfortunately, we rarely think about these questions and their consequences. I think we need to become more aware and more responsible when it comes to the Internet and giving away personal information. It is a huge risk to have all of that information out there, and we don’t even realize it. The article states that most people have the mentality that certain situations won't happen to them. I sometimes think that also, but that mentality is what will get you in trouble, because you aren't being careful and responsible. As Dr. Raval states in the article, our decisions have the ability to impact others, so we must take responsibility for our decisions and always think before clicking enter on the Internet. As technology continues to advance and new products come about, it is important to continuously think about risk and responsibility. I agree with Michael that we cannot really plan ahead for future technology. We must deal with it as it evolves and put rules and regulations in place once we know what we are dealing with. We may believe we know how to handle the situation when driverless cars come about, but until they are here, we are actually clueless.
Melissa050 at 11/29/2012 12:22 AM

Re: Risk and Responsibility: How Do They Connect?

With the changes that have been occurring in technology within even the last few years, it seems that just about anything is possible looking forward. The new developments in technology not only provide huge potential gains in efficiency and availability of knowledge, but they can also greatly increase the risk every person faces. With an increase in information availability, there is also an increase in the risk a potential identity thief can pose. One would think the developments in technology would help prevent thieves from easily getting this information, but in many cases, the ones who design these risk controls are the very ones taking advantage of them. This can be a frightening fact to deal with.
The increased use of algorithms also presents its own unique problems. There is an old saying that any technology is only as smart as the person using it, and it seems to hold true especially for algorithms in computers. For example, Knight Capital tried to implement an algorithm to keep up with the growing competition in the high-frequency trading industry, but it didn't properly test the algorithm before deploying it. This turned out to be a disaster for Knight, and the firm might not fully recover from it. The algorithms that would be used in driverless cars to determine how they drive, based on real-time information, would undoubtedly be somewhat flawed for a few years. If these cars weren't tested to an extreme, a public fear of this sort of technology could even develop. It would be up to the companies developing these driverless cars to prove that they are safe. Even then, any time there is a human element involved at all, even in something as basic as the logic sequence a computer is to follow, the chance of error is present. To answer Dr. Raval's question about the pileup, an officer would need to know whether the driverless cars have a history of either malfunction or process error. In theory, the driverless cars would be much less likely to cause an error since they lack much of the human element; however, since this scenario is set in the near future rather than far into the future, I'd be inclined to think the technology is new and potentially faulty. Unless it was obviously the human driver's fault, I would be more likely to place blame on a faulty piece of logic within the driving algorithm or an equipment malfunction.
Christian087 at 11/29/2012 12:40 AM

Risk vs. Responsibility

The topics addressed in both “Information Ethics: Risk and Responsibility” and Raval’s post “Risk and Responsibility” challenge society to address risk at the same rate as reward. In this post, Raval discusses the implications of moving into a technologically advanced society. I believe that as we move forward with these advances, their success will hinge on their intrinsic assessment of risk. For example, the 10-car pileup involving 9 driverless vehicles could easily yield a convicted offender. If the technology has advanced enough to build these cars, it has advanced enough to assess the risk; in this case, the cause of the accident could readily be determined via video cameras and similar records. Essentially, as technology advances, the assessment of risk advances with it. Having worked mathematically with algorithms such as Google’s PageRank system, it is clear to me that the human input that creates the system also mitigates the risk.
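Since the comment mentions working with PageRank mathematically, a minimal sketch of the power-iteration idea behind it may help. The three-page link graph and iteration count are invented for illustration, and the 0.85 damping factor is the value commonly cited for PageRank:

```python
# A toy PageRank computed by power iteration over a hypothetical
# three-page link graph: A links to B and C, B links to C, C links to A.

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform ranks
damping = 0.85

for _ in range(50):  # iterate until the ranks stabilize
    new_rank = {}
    for p in pages:
        # each page q passes its rank evenly to the pages it links to
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

assert abs(sum(rank.values()) - 1.0) < 1e-6  # ranks form a probability distribution
```

Page C ends up ranked highest because both A and B link to it. The "human input" the comment refers to includes exactly these design choices: the damping factor, the iteration budget, and the link structure fed to the algorithm.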
I completely agree that algorithms have a huge positive benefit to society. If you want them to make a decision, they can make it for you and execute on it. A trader I spoke with, who designs algorithms for trading both domestically and internationally, said that although he designed these algorithms with a mathematically savvy background, monitoring is still required; until an algorithm reaches perfection, it will be monitored. In Malcolm Gladwell’s book “Blink,” it is noted that giving humans more information or more time to make a decision does not necessarily result in a better decision. It seems we have concluded that algorithms need the correct input, and likely more input than humans do, in order to mitigate risks. Until we as humans develop technology that mitigates risks on its own, there will be a human element involved, and the creator of the technology will be responsible for the risks.
Nicole153 at 11/29/2012 12:48 AM


I like Dr. Raval’s point that, “The impact of what businesses innovate today materially extends far beyond each business’s boundaries.”  What a business does affects more than just its employees.  A business’s actions affect its stakeholders, which include stockholders, customers, and the community. In the case of the driverless cars, a whole society is exposed to this manufactured uncertainty.  How we go about protecting society from these manufactured uncertainties is a good question.

Google actually came out with a driverless car in 2011, and it got into a five-car crash.  Who was responsible for the crash?  Should the user/owner be held accountable?  Or maybe Google?  What about the cars themselves?  I don't know enough about how the cars work, but I think that the user/owner of the car that caused the crash should probably be held accountable; by buying and using the car, they assume responsibility for it.  Still, a lot of testing is needed to ensure reliability before driverless cars will be allowed on the road.
Cecilia057 at 11/29/2012 1:02 AM

Risk and Responsibility

Dr. Raval introduces some very interesting points regarding responsibility and accountability as more actions become automated and begin, in some ways, to eliminate the human element from what were once purely human practices.  First, the introduction of algorithm-based trading on the stock market has eliminated the human elements of possible uncertainty and even mathematical error.  However, assigning responsibility, or blame, for poor returns generated by funds picked by algorithms becomes difficult.  Does one blame the programmer or the stock brokers from whom he took advice when constructing the trading algorithm?  And how far can algorithmic trading go before market trends are made entirely by computers rather than by the people with an actual stake in the companies?  To whom are business executives accountable then?  Without owners and brokers giving an indication of company performance by selling stocks, the selling of stocks may become impersonal when driven by computers.  This may desensitize companies to the role the stock market plays in dictating business performance.

Additionally, Dr. Raval introduces a very interesting and possibly troubling point regarding the advent of driverless cars.  In some ways they seem a godsend, eliminating the dangers of distracted and even drunk driving along with the element of human error.  However, they also present the problem of accountability when failures do occur, and believe me, they will.  Are the manufacturers to blame?  The programmers who wrote the software that handles the vehicle's motion?  Perhaps the owner of the car itself?  One must find ways to assign responsibility; without someone accountable for the safety and proper function of the car, things could become chaotic.

Essentially, I do agree with Dr. Raval’s point that – in an age where technology drives our lives and the impersonality of actions and processes is ever increasing – it is more important than ever to assign responsibility to an accountable party for all processes and actions, even ones driven by computers and algorithms.
jgorman1 at 11/29/2012 11:38 PM

Risk and Responsibility

Dr. Raval posts some interesting questions about risk and reward in this article. Most fascinating, I believe, is the argument that we would eliminate the human element from what were once human methods. To accomplish this, one needs to segregate responsibilities so that the risk of eliminating the human element is lessened. The stock-market example proves that things can definitely become more efficient, but at what price does that efficiency come? I believe the best illustration of the risk of algorithm-based technology is the driverless car example.


Driverless cars point to numerous social and physical risks. Will the technology ever be up to par? What sort of technology would we need to develop? Personally, as a driver for the last five years, I cannot imagine a computer being able to do what the human mind does when driving a car. Have we reached the point in society where we believe computers are smarter than the human mind? After all, it was the human mind that developed computers and the technology that creates these sorts of issues. There are great rewards and benefits in both scenarios, including the possibility of a more productive society overall. However, that productivity is very risky.

The risks and responsibilities in our technical society are endless. There are risks and rewards to every single piece of technology that we use, and if we don't assign responsibility for them, the dangers of fraud, physical harm, and other damage are immense. I agree with Dr. Raval's point that in an age where technology drives our lives, and the human element is becoming more and more obsolete, it is critically important to keep this under control and assign responsibility.

Megan408 at 11/30/2012 4:17 PM

The Association between Risk and Responsibility

It is difficult to form a definitive conclusion when analyzing the relationship between risk and responsibility.  The definition of risk combines the likelihood that an unwanted incident may occur with its consequences.  New technological advances produce situations that involve new risks and responsibilities, which bring about ethical debates.  The two components of risk mentioned in the article are the scientific and the societal.  The scientific aspect has been evaluated for some time, but societal assessments are relatively new.  Confidence is placed in institutions and technology; however, the reality is that the amount of risk involved is uncertain because new developments in technology are often not fully understood.

The article also mentions that “the interaction of social factors with technology factors produces possible futures.”  This quote raises many questions, ranging from how data is accessed to how long it is stored, as well as the concern over who has access to the information.  Uneasiness can also be detected surrounding algorithm technology and its invasion into many aspects of our lives, including websites such as Pandora and Facebook.  It also pertains to targeted ads driven by cookies on numerous websites, where individuals are frequently tracked and analyzed.

The dependence on technology creates new security risks, ranging from DoS attacks to personal identity theft.  Individual data can be exploited and stolen.  One example concerns Sony’s online gaming network, from which many credit card numbers were taken; in that case it was determined that Sony could not reasonably have predicted the threat and had taken proper steps to prevent it.  Personal negligence can also occur when individuals do not properly log out of a computer, for example in a computer lab or hotel; the individual assumes the risk in that case.  Risk assessment procedures, responsibility, and awareness must be implemented at all levels possible, from the corporate level to the individual.  Our reliance on technology should not totally eliminate our innate skepticism. Ultimately, ethical decisions have many facets and require a considerable amount of contemplation from those who evaluate the actions and responsibilities of others.
Paul051 at 12/1/2012 12:54 PM

Risk and Responsibility

It is important to note that people, as well as companies, try to cut their costs as low as possible.  A company isn’t going to spend money to put certain controls in place when risk is minimal.  Referencing the horrible tragedies in Colorado, Mumbai, and Wisconsin, the people involved probably never, even in their worst nightmares, could have seen those events coming.  After the fact, it is easy to say that more controls should have been in place to prevent these things from happening.  The “It just cannot happen to me” line in the article perfectly reflects the thought process behind an entity’s decision not to implement further controls to lessen risk.  When the cost of implementing controls outweighs the benefits, it doesn’t make financial sense for a company to go through with implementation.

As far as driverless cars go, I agree with Megan that it is hard to imagine cars being driven by computers.  Computers are capable of a lot; they automate many processes and improve efficiency by cutting tasks that would take hours or days by hand down to a single second.  However, I don’t believe a computer has the ability to completely replace the logic that the human brain uses.

Data sharing is certainly a hot topic today, one that will only become more controversial the further into the future we advance.  Morality comes into the picture when companies like Google and Facebook, which have amassed data on vast numbers of people, essentially sell this data for profit.  It’s hard to say exactly where the line should be drawn on privacy.  The people who have Facebook accounts were not coerced into writing down all of that information about themselves; they freely chose to do it.  Is Facebook in the wrong for simply taking what people give it and using it for profit?

The World Economic Forum, in collaboration with Deloitte, launched the report “Risk and Responsibility in a Hyperconnected World: Pathways to Global Cyber Resilience,” which discusses information sharing, trust, and the economics of cyber security.  The report examines the need for individuals and organizations to adapt to new ways of thinking in a new environment.  Two things need to be done: new behaviors for individual nodes must be identified, and the terms of contracts between nodes must be reconsidered.  In its entirety, the report is very interesting and can be found in PDF form at:
Philip154 at 12/2/2012 12:54 PM

Risk and Responsibility - John Hazuka

Risk is the combination of the probability of an event and its consequence, according to ISACA’s glossary.  However, Rayner and Cantor reject the essential character of this quantitative definition of risk.  Rayner asserts that the notion of risk can be better grasped if we think of risk as a concept made up of two components: the societal and the scientific.  The scientific component is illustrated by the traditional means of risk analysis.  The societal component is fairly new; it concerns the trust put in the institutions regulating a technology, the acceptability of the principle used to apportion liabilities, and the acceptability of the procedure by which collective consent is obtained.

Risk and responsibility are inseparable and not mutually exclusive.  Giddens believes that responsibility and risk are closely linked.  He states that as new technologies penetrate closer to the core of our lives, more of what we feel and experience comes under the scientific spotlight, and this situation leads to increasing insecurity.  This reminds me of Facebook.  Facebook is a social media website on which nothing is completely private, and employers can see all of the pictures and statuses you have posted while you are applying for a job.  Before this technology, no one had the ability to see everything you have ever thought, or every picture someone has taken of you.  This leads to the question of whose responsibility it is: the website’s or the person’s?  I would argue that it is the person’s fault, because they are in control of themselves and the website is not.  Also, the website does not make anyone post things they don’t want the public to know about.

Now, to answer Dr. Raval’s question about the driverless cars and who is at fault for that accident, I would argue that it would again be the human driver rather than the driverless cars, assuming the computers that control the driverless cars are not prone to error.  However, in 2016 I don’t see that being the case, because the first couple of models of any product almost always have some type of defect.  A good example is the Xbox 360: when it first launched, many consoles were defective and broke within the first year of ownership.  By 2036, however, I think the computers for driverless cars will be perfected, and it will never be the driverless car’s fault but always the human driver’s.
JTH86974 at 12/3/2012 7:14 PM

Risk and Responsibility - Josh Wiggins

Information technology is advancing at a blinding pace, with new products, services and technologies being developed constantly. This has a unique effect on ethics and morality, because along with new technologies come new ethical questions. Whenever a significant advancement is made in the technological arena, it often leads to a new ethical situation that must be dealt with and that simply did not exist before. These policy vacuums create an important question of responsibility and risk, because policy is often not implemented as soon as a new dilemma surfaces; deliberation and planning are required. In this sense, ethics in IT must be treated as dynamic and constant, with ethics being considered in the development of every new technology to lessen the negative effects that occur when these new products and services are used in an unethical manner (Raval, 2012, Policy Vacuums).

Risk and responsibility is a complicated subject, especially in the world of IT. As users of technology, every individual certainly has a responsibility to act in an ethical manner, as well as in a manner that protects themselves and others. IT presents new ethical dilemmas and risks constantly, and users must be cognizant of them, because regulation generally lags behind new developments, leaving a latency period during which unethical behavior is a larger risk.  For instance, the use of ultrasound technology in Southeast Asia, combined with favoritism toward male children, resulted in female infanticide, especially before regulations could be put in place (Raval, 2012, Changing Times and the Eternality of Ethics).  Regardless of what new technologies are developed, the human element is still the most important aspect of any question of risk or responsibility. Developers of a new technology must look at the ethical implications of its usage, and users must do the same. Improvements in technology often create scenarios where risk is increased despite improvements in security in certain areas, leaving the responsibility with individuals regardless of the level of sophistication.

For example, the issue with the car accident is quite complex, and liability in reality should be shared. The single human driver is most likely at fault, due not only to the likelihood that the other cars are sophisticated enough to avoid accidents, but also to the fact that, as an individual, he is the only one who could reason and prevent an accident. Technology will always be limited by those who develop and use it.  However, the developers of the self-driving cars could also be considered at fault, because this possibility should have been foreseen and dealt with prior to the sale of the system. In a simple insurance conflict, the issue does not seem that important, but what if one of the occupants were killed in the accident? That could fall on the driver, but it would also bring the future of the technology into question. Increased technology requires increased care, because risks will continue to be heightened in the near future.

Raval, V. (2012). Changing Times and the Eternality of Ethics. ISACA Journal, 2, 1-2.

Raval, V. (2012). Policy Vacuums. ISACA Journal, 3, 1-2.

Raval, V. (2012). Risk and Responsibility. ISACA Journal, 6, 1-2.
joshwiggins at 2/21/2013 10:04 PM

Risk and Responsibility - Jean Fitzpatrick

     This blog and the columns you have written for the Information Ethics blog bring up the recurring themes of risk, responsibility and trust in the face of rapidly changing technology with a global reach.   In this blog you note that “there is a moral thread to managing risk,” and you pose a question that touches both technology professionals and technology users: “Where do we draw the line, for example, on the issues of privacy or protection of intellectual rights in this age of data sharing?”

     As users of technology, each time we consider opening an online account, whether social media, banking or e-commerce, we are forced to decide whether to share personal information and whether we trust that the company offering the online service will protect the information we provide and use it only for the intended purpose.  It is our responsibility to read and understand the privacy policy of each account we open, but the sheer number of accounts we amass, and the length and variation of the various privacy policies, limit our ability to truly understand (or later remember) them and render our acknowledgement a mere token.  Also, as users of technology, we cannot give “fully” informed consent, because we cannot fully understand the technology used to store and process our data or the business model that determines how the data will be used and shared.  Ultimately, we are acknowledging only that we know the company has a policy and that there is some risk involved.

     So what is the responsibility of the company that signs up users who have acknowledged and agreed to assume the risk?  A baseline requirement is to abide by the written policy, whether or not the users have actually read and understood it, and to put in place controls and audits to detect when the policy is violated.  Beyond that, it is incumbent on the managers of the company to ensure that the company is alert to new threats as they emerge, perhaps threats not covered by the existing policy, and to take action to minimize the risk to the security and privacy of the data users have shared.  According to Kant’s moral imperative, managers have a duty to treat their customers and users not as a means to an end (such as profitability) but as ends in themselves.  The personal data shared by customers is a reflection of the self, and managers must treat it with respect by implementing the appropriate controls to ensure privacy.  In the face of rapidly advancing technology and communication capabilities, the law will never be able to keep pace with legislating the specifics of keeping data private.  Managers of companies that employ technology have a moral responsibility to understand the technology and to foster a corporate culture in which the ethical protection and use of personal data is part of the conversation as new technologies and capabilities emerge, particularly capabilities that allow them to use data already in hand, submitted by users before the capability existed.  In these situations, the scope of the user’s original consent must be honored until the user approves the extended data use.   When a user’s privacy is violated, management has a moral responsibility to notify the user and offer some form of remediation.

     As the use of online technologies continues to expand, both users and technology professionals must continue to educate themselves on the moral and legal aspects of privacy.
jeanfitzpatrick at 2/22/2013 11:33 AM

Risk and Responsibility-Bryan Brothers

In today's nonstop evolution of technology and the way we interact with it, risk and responsibility is a common theme, as is trust throughout the readings we have completed thus far. As a society we must decide how to establish trust in autonomous actions as well as in other technological advances.

I'd like to discuss some impacts of technology in military operations at this point:

Drone warfare has become a common tactic for the current Administration. The President is authorized to utilize forces for a period of no longer than 60 days (with an added 30-day withdrawal period) without Congressional approval, and has a duty to notify Congress of the action within 48 hours. Drone technology allows the President to take military action as he or she sees fit without prior approval from Congress. The question is whether the President should have this right. Arguably, the previous guidance was written with the assumption of a manned deployment to an area and subsequent military operations; technology now allows the President to authorize strikes and be in and out of an area of operations within minutes. The question then arises: how does the United States accept the risks and responsibilities that accompany these operations, which other sovereign nations can see as acts of war? These operations also occasionally take the lives of civilians, a side society generally does not see in mainstream media coverage. Expanding upon this argument, this technology was used to assassinate an American citizen working as a terrorist operative, within legal guidelines as they currently stand. There have been talks of appointing a drone court, which would determine the fate of those listed for drone strikes. The expansion of technology onto the battlefield has greatly increased our need to determine who is responsible for these risks. Is the young man or woman flying the drone responsible for civilian casualties they could not see from above? Or should we accept civilian casualties as collateral damage of a technology that enables us to strike high-value targets at almost any time with minimal human risk?

This dilemma is analogous to Dr. Raval's scenario of the 10-car pileup. As the United States expects to see more drones in its own airspace, used by sheriff's departments and other entities, we risk losing our privacy, and even our right to a trial, if drones gain the right to terminate American citizens committing acts of terrorism and weapons systems are allowed on domestic drones used by law enforcement.

I'd also like to touch on the impacts of government intervention in preventing digital copyright theft and on censorship of internet access by governments. Digital theft is an issue: organizations are losing property to sites like Pirate Bay, which enable users to download content that was bought by one person and then redistributed without permission. As a society we need to decide how to proceed with this issue. Right now, trust is broken in the models of companies that appear to be overcharging for outdated services that another party can provide for free on the internet. The outstanding question is whether we are experiencing a market shift in which providers need to find a new model for distributing products. Is stopping this theft slowing digital evolution in favor of big corporations? For instance, the YouTube/Hulu+/cord-cutter model is showing great promise, with consumers paying only for the content they want to watch, or watching programs paid for with advertising revenue. Organizations are constantly trying to squash this practice, and in some ways our government seems willing to support them with legislative intervention. One could argue this is an instance in which the risks should be placed on the shoulders of corporations rather than expending public capital blocking sites that will just pop up on another host. Where do the risks and responsibilities fall when corporations ask the government to prevent any copyright infringement in a changing digital world?

Dr. Raval references government censorship in one of his writings. Not only is government censorship an issue, but government tracking of online behavior, as Dr. Raval also mentions, is becoming a stronger one. Tweets are being recorded by the Library of Congress, but the government is going much further, requesting warrantless information on citizens on a regular basis. Who accepts the risks and responsibilities in these instances? Should providers willingly give information on their users without court-approved requests? Laws like CISPA are even more concerning, essentially allowing data to be collected and shared if a threat is determined to exist. This has the potential to blow the lid off the security of the individual American in cyberspace, allowing untapped access if a body in power determines it is required.

As we see the implications of allowing government to establish a heavier presence in information technology security, we need to ask whether the risks truly weigh in favor of giving our government the responsibility to use technology to attack terrorists without oversight by a governing body, and to gain access to personal data without warrants or other court-appointed measures.
Bryan049 at 2/22/2013 8:29 PM

Risk and Responsibility - Mary-Martha Panther

In reading the above blog and the other columns, it is clear there is an inherent risk with any new technology deployed into mainstream society. As technology becomes more prevalent in our daily lives, we inadvertently accept more risk. As noted by Vasant Raval in his journal article Risk and Responsibility (2012), “A risk society is a society in which we increasingly live on a technology frontier that absolutely no one completely understands and that generates diversity of possible futures.” It is apparent that society has become immune to the risks of technological advancement because it has become the norm in our daily lives. Technology is all around us, and it is too difficult for any one person to fully comprehend the nuances and issues that may cause undue harm. It comes back to the question of whose responsibility it is to mitigate these risks: the manufacturer of the technology, or the individual who has decided to utilize it? In my opinion, responsibility should rest on both parties. It is the responsibility of the manufacturer to be proactive in minimizing potential risks to society; on the other side of the coin, individuals should be sensible in how they utilize technologies.

The traffic police officer scenario ties in well with the ethical issues that arise with new technologies. Who is accountable for the accident when 9 of the cars were driverless vehicles? Does the police officer automatically assume the one car physically driven by a human was to blame? Do we inherently trust a driverless car over a human driver? As stated by Vasant Raval in his journal article Changing Times and the Eternality of Ethics (2012), “The quandary is not in the technology or new technology as such; rather, it is in the use of technology that we find the human side inherent to the judgment of right vs. wrong, or even right vs. right.”  Going back to the traffic police officer, he or she has to take the driverless-car detail out of the equation when determining who may be accountable for the accident. As with any car accident, a thorough investigation of the placement of the cars after the accident, the accounts of the victims, and any witnesses who may have seen the accident should be the determining factors. Technology can fail, and one should not assume that it will always provide the right answer.

In thinking further on this topic, trust is a key factor in the roles of risk and responsibility. As stated by Vasant Raval in his journal article Trust in Information Systems (2013), “Trust is a rather complex, age-old concept with political, social and moral dimensions. In recent decades, the rainbow of implications of trust and trustworthiness has grown in size and shades […] Among all the developments affecting the role of trust in modern times, the Internet stands out both in terms of scale of impact and enormity of challenge of maintaining trust.” Society runs on trust, which in turn allows us to keep progressing into new territories, such as technological advancement. Our trust is an inherent acceptance of the risks and responsibilities that may arise with any new advancement in society.

Raval, V. (2012). Changing times and the eternality of ethics. ISACA Journal, 2, 1-2.

Raval, V. (2012). Risk and responsibility. ISACA Journal, 6, 1-3.

Raval, V. (2013). Trust in information systems. ISACA Journal, 2, 14-15.
Mary-Martha297 at 2/22/2013 9:25 PM

Risk and Responsibility - Jon

To put it plainly, an algorithm needs to be checked on a regular basis.  Many of them are extremely powerful, and they need some oversight; they are not always perfect in their results.  Ethics should be a large part of how they are implemented and how they are used.  Depending on the industry, there should also be some serious discussion around regulatory requirements for where they can be used and what their role can be.  For example, how is speed trading really within the spirit or original intent of the stock market?  I would generalize and say most investors are day traders or people interested in long-term investments; they are not trying to get their transactions in within milliseconds.  How does this speed trading, or black-box trading, which was the likely cause of the “flash crash of 2:45,” fit into honest and ethical trading?  I am not sure it does.  There is also a risk level in applying algorithms in this field, because during that flash crash the country temporarily lost 9% of market value, and we still do not know exactly how that happened.
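The oversight this kind of comment calls for is often implemented as an automated check wrapped around the algorithm itself. Below is a toy sketch of a market-style "circuit breaker" that halts trading when prices slide too far from their peak; the 7% threshold and the class name are illustrative, not any exchange's actual rules:

```python
# Hypothetical sketch: a circuit breaker that stops an algorithm from
# trading once the market has fallen too far, too fast from its peak.

class CircuitBreaker:
    def __init__(self, max_drop_pct=7.0):
        self.max_drop_pct = max_drop_pct  # halt if price falls this far from its peak
        self.peak = None
        self.halted = False

    def check(self, price):
        """Return True if trading may continue at this price."""
        if self.peak is None or price > self.peak:
            self.peak = price
        drop = (self.peak - price) / self.peak * 100
        if drop >= self.max_drop_pct:
            self.halted = True  # stop trading until a human reviews the situation
        return not self.halted

breaker = CircuitBreaker()
decisions = [breaker.check(p) for p in [100, 101, 99, 93]]
assert decisions == [True, True, True, False]  # the slide from 101 to 93 trips the breaker
```

Rules in this spirit, halting trading automatically on sharp moves in individual stocks, were in fact adopted by US exchanges after the 2010 flash crash; the sketch only shows the shape of the idea.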

Circling back to ethics for algorithms, I believe the framework outlined in the Ethics Implementation Tool Kit and the references in COBIT 5 are a great place to start.  The hard part is developing a willingness to do it.  Ethics and culture have a direct correlation, and I would make the case that our culture really enables questionable ethics to take place.  Our culture focuses on winning, being the best, establishing wealth, and collecting fancy material goods, and society judges people based on those things.  Some people stop at nothing to achieve them.  Some get caught in ethically questionable situations and some do not.  NASCAR driver Richard Petty has said, “If you ain’t cheatin’, you ain’t tryin’.”

How can we trust that the next time an algorithm fails us, it won’t mean a long-term recovery?  The Information Ethics article on Trust in Information Systems discusses how a breakdown in trust results in regulation, policing, monitoring, checking, and other reactions.  I suspect those reactions will be misguided at best if we do not understand what we are trying to control.  I would suggest we better wrap our heads around where algorithms are used today, what their role is, and what kind of risk they present.  Many of them are harmless and actually make our lives easier.  I question how successful an inquiry would be in figuring out what is going on with Wall Street trades.

So where do we apply accountability?  So far, I would say the general public has had to absorb most of the consequences of algorithms that turn out poorly and of the lack of a practical application of ethics.  Many of those who use algorithms do so as part of doing business, and businesses are set up to minimize their own risk and liability—through terms of use, nondisclosure agreements, careful navigation around regulatory requirements and other means.  Because of all that, we end up being accountable for the actions of a few.  Those few also happen to have the resources to prevent changes to the way accountability is applied.

Kevin Slavin gives an interesting presentation on how algorithms are in our daily lives on
Jon082 at 2/23/2013 11:00 AM

Risk and Responsibility - Troy Vingom

Today’s society is challenged in ways not seen in the past.  Continued globalization, technological innovation, increased education, societal convergence and variation in practical ethics have resulted in an increase in moral and ethical dilemmas—dilemmas that continue to challenge society’s ability to balance policy, trust and ethics, as well as risk and responsibility.

The fast pace of innovation far exceeds society’s ability to ensure that new products, services and applications meet society’s ethical standards.  Examples include the development of genetic testing, the ability to determine whether the unborn have deformities or undesirable characteristics, new innovations enabling 3-D printing, and technical developments that have resulted in the loss of privacy.

As Norbert Wiener states, “The future offers very little hope for those who expect our new mechanical slaves will offer us a world in which we may rest from thinking.”  This quote clearly extends to today’s technology innovations.  We cannot continue to create new products and services without considering ethical implications such as selective reduction, genetic profiling or further enhancing defectors’ ability to circumvent laws and regulations.

The challenge for today’s technologists is to determine how ethics and policy development and implementation can be integrated into technology innovation without hindering the pace of innovation or stifling creativity.  This challenge is made even more complex by differences in ethics throughout the world arising from religious or societal differences.  It is each of our responsibilities to anticipate ethical issues that may arise, striving to minimize the negative consequences created by technology.

In the past, society has developed new technologies with good intentions only to have them used in ways that violate moral and ethical principles.  One example is the use of sonograms to determine gender, which has resulted in the termination of untold lives.  Could one of those lives have been the next Margaret Thatcher, Indira Gandhi, Hillary Clinton, Rosa Parks, Benazir Bhutto or Mother Teresa?  Unfortunately, we will never know!

Other examples include the development of 3-D printing, which may enable defectors to overcome pending gun control laws and regulations that may soon limit an individual's ability to purchase high-capacity magazines.  The new technology may allow defectors to create black-market industries and to commit horrible crimes such as the one recently perpetrated in Newtown, Connecticut.

As J.H. Moor states, we must adopt a three-fold approach to bridge the gaps created by policy vacuums.  Ethics is an ongoing and dynamic activity, not a post-mortem.  There must be increased collaboration at the design stage among ethicists, scientists and others.  We must also ensure that a more thorough ethical analysis is completed up front.  Together, these actions will help fill the policy gap that currently lags behind technology innovation.

The use of policy development to enforce ethics must be balanced with trust, freedom and personal responsibility, ensuring that we don’t create an authoritarian society.  However, each of us has a responsibility as a technology professional to analyze the potential ethical implications of our developments.  We must continually evaluate the risk/reward proposition, placing an emphasis on our responsibility to all mankind.

Raval, V. (2012). Changing Times and the Eternality of Ethics. ISACA Journal, 2, 1-2.

Raval, V. (2012). Risk and Responsibility. ISACA Journal, 6, 1-3.

Raval, V. (2013). Trust in Information Systems. ISACA Journal, 2, 14-15.

Raval, V. (2012). Policy Vacuums. ISACA Journal, 3, 1-2.
Troy075 at 2/23/2013 11:02 AM

Risk and Responsibility - James Pfotenhauer

While ethics is clearly right to pursue for its own sake, it should be noted that in his work Liars and Outliers, Bruce Schneier regarded security systems both as the final line of defense against defection and as an amplifying force that can strengthen existing pressures promoting actions in line with societal interest.  This delineates a relationship between social pressures and security: social pressures can work in tandem with security, and they can also improve security simply by reducing its workload, addressing issues before they ever reach the final line of defense.
This additional utility may prove useful when attempting to justify resource investment to senior leadership in an organization, or even to middle or lower management strongly focused on day-to-day operations.  Convincing these entities of the value of such expenditure can be key to obtaining buy-in.  However, this raises a danger: once relative return enters the discussion, people may come to regard the situation in purely ROI terms, undoing any hope for an ethical approach should it prove not to be cost-effective.
The question then arises: What is the best means of promoting ethics in society (at large, in an industry or in a single organization) so that such ethical benefits may be realized?
As indicated in the other readings, recent history has seen an unparalleled period of rapid technological development.  This has resulted in a constant rewriting of the rules for almost every facet of civilization, including commerce, warfare, politics and social interaction, to name just a few.  This degree of change in turn creates numerous opportunities for defectors: because they typically operate on a smaller scale and are more adaptive, they can grasp and utilize new technologies faster than society at large can adapt to and integrate them.  As Bruce Schneier predicts, this will result in an increasingly large gap between defectors and society at large as technological advance continues to accelerate.
The solution is to take a more proactive approach to ethics.  Instead of waiting until an ethical crisis has erupted, a prior review of people’s inherent rights and privileges needs to be conducted to see how the implementation of a new technology or methodology may disrupt the status quo.  Unfortunately, this approach is likely to have problems.  Most people reviewing the situation will have too narrow a focus and will not be able to identify second- and third-order effects.  Likewise, the same change may be insignificant in one culture yet catastrophic in another.  Consequently, a broader perspective needs to be applied.
Instead of ethics being developed in isolated pockets, a broader range of collaboration and interaction is needed across industry lines.  Core inalienable universal rights need to be identified, as well as key privileges, vague though they may be initially.  Industry-specific ethics groups then need to take this bedrock and further develop it while simultaneously coordinating with other industries both for
James Pfotenhauer at 2/23/2013 4:58 PM

Risk and Responsibility - James E. Johnson

The inherent risks of new technology have been, and will remain, a major hurdle in today’s world. Per C.S. Lewis, “moral rules are directions for running the human machine. These directions have three purposes: to keep our own internal engines running smoothly, to keep our communities in harmony, and to help us follow the general purpose of human life as a whole.” I think C.S. Lewis provides a great analogy with the “machine” and what it takes to keep society running effectively and progressively moving forward. In a moral sense, as a society we need harmony and a general purpose for life in order to keep threats and defection at bay. Relatedly, the advancement of technology has enabled great progress in the medical and scientific fields, but it is also used negatively. For example, as stated in the related article, “Apple suffered a major security breach last week when China-linked hackers infiltrated an unknown number of corporate computers.” So many of these major security breaches are happening at an increasing rate, and they keep the technology "machine" from running effectively.

Key questions remain, as Dr. Raval points out in the blog: “Who will take the responsibility to consider in the risk assessment equation this wider universe of impacted societies? Where do we draw the line?” These are hard questions to answer; the defectors causing these issues may be doing so for a plethora of reasons. I think this inherent risk in technology will continue; however, we have to confront the unethical side as well, accepting the fact that this will happen and continuing to revamp guidelines and system modifications to deal with it. This relates to the notion that “IT has remained a moving target in recent decades. While dramatic values are created by weaving new technologies and innovating richer applications of all technologies, greater room for unethical behavior has emerged concomitantly.”

Also, to quote Warren Buffett, “Predicting rain doesn’t count; building arks does.” So as IT professionals, and even as a society, we need to build and modify the ark in order to keep pace with the changes in technology. In doing so we will be able to weather the next storm, whatever its nature and ferocity may be.

Raval, V. (2012). Changing Times and the Eternality of Ethics. ISACA Journal, 2, 1-2.
Mr.Johnson4444 at 2/23/2013 11:25 PM

Risk, Responsibility and Ethics,  Patrice K. Evans

This blog and the columns we read discuss risk, responsibility and the role of morality and ethics in managing technology today.  I learned early in my career the role of ethics in technology management.  Shortly after I started in IT, I worked for the US Forest Service, and one of the IT “people” abused her access to snoop into her boss’s email.  She was caught and fired, and all of us were given a stern lecture on abusing our privileges and the repercussions if we did.  It also taught the Forest Service that managing computer systems was more than just another duty to give to the administrative assistants.  Several such incidents happened in the course of about six months, making management realize that computers were much more than just a fancy typewriter.

Fast-forward twenty years: I have moved out of government service into the private sector, into an area where IT management has become as critical to the success of the business as keeping the generators running at the plants.  IT has more access to data than it ever had.  We are now responsible not only for developing and maintaining critical business systems but also for the data held within them, ensuring that only the right people have access and that they can reach their data from wherever they choose.

I found the statement that IT ethics is like “old wine in a new bottle” to be an unsettling opinion for any IT manager to hold.  As moral agents of IT systems, our integrity must play a vital role.  No longer is the shredding machine the only way to hide damaging information.  As IT professionals, we may be asked by upper management to do things we know are not morally right.  We must be ethical and brave enough to do the right thing.  If an IT manager doesn't realize the importance of moral agency, she should not be in that job.
In Trust in Information Systems, Dr. Raval states that although “an overwhelming majority of people obey the implicit societal contract on trust, there are always defectors.  Because of them, the ecosystem of trust breaks down; as a consequence, people stop trusting each other.  This is the beginning of a whole new set of actions:  regulating, policing, monitoring, surveying, screening, checking and so forth” (2012). As IT managers, we deal with the problems created by defectors.  We frequently blame the regulations for all of the extra work, but we must remember that the extra work is not the fault of the regulations but of the defectors.  While the regulations do become burdensome, if defectors didn't create the need, Congress wouldn't be creating the regulations.

This week I had firsthand experience with risk and trust.  My fiancé had a heart attack.  I had to trust doctors and nurses, none of whom I had ever met before, to do the right thing.  As they wheeled him behind closed doors, I could only imagine how much technology they would be relying on, as well as their own medical expertise.  For two and a half hours, all I could do was wait.  He survived the surgery and was moved to the ICU.  There I got to see all the technology hooked up to him, medicating, monitoring and alarming when things didn't fall within the defined parameters.  I had to trust that the electronics were programmed correctly and that the ICU staff knew and understood what everything meant.

I also thought about what it meant for all of the data being collected on him.  It is a good thing in that, anywhere he goes, his medical history is available and current; but if medical information gets into the hands of unethical defectors, it could be used to destroy someone’s life rather than save it. I realize now how grateful I am for HIPAA.  I signed more papers that morning than I had when I bought my house.  HIPAA was created to protect the trust that we should have in medical and medical IT professionals.  The IT managers of the hospital are moral agents of all of that data.

As I watched all the equipment in the ICU, I began to wonder whether the hospital had a good BC/DR plan and what would happen to the patients in the ICU if a disaster occurred.  Few looked as though they could survive being away from their electronics for very long.  The hospital is so dependent on technology that a BC event could be critical to people’s lives.  Moving patients, or leaving them behind, could cause a moral dilemma that few medical professionals are prepared to face.
I did ask several hospital workers if they had a BC/DR plan; some had no idea what I was even talking about.  No one I spoke to was very comfortable with the topic, and all referred me to someone else.  I was able to find out that they have an emergency generator and arrangements with the local ambulance service and rest homes if they need to move patients, but no one was too sure how it all worked.  I was directed to ask the business manager for more details.  The hospital is required to post information about the operations and inspections of the elevator in the open for everyone to see, but the critical information about the actual running of the hospital's technology is hidden from patients and medical staff alike.  This is another example of technology advancing faster than those who use and depend on it can fully understand the implications of its failures.  It is also an example of where government should have regulations in place requiring hospitals to have BC/DR plans, which should be as available to the public as the elevator’s license.

Raval, V., DBA, CISA. (2012). Information ethics: risk and responsibility. ISACA Journal, 6, 1-1.
pke88026 at 2/24/2013 2:07 AM

Trust in Information Systems- Thomas White Eyes

Moving into the future, we are depending more and more on information systems to keep us secure. We depend on information systems to track criminals and sex offenders. We get weather alerts and other emergency alerts on our cell phones, through the Internet, text messaging and call services. If our property goes missing, we wait for a pawn shop or police database to flag it in their systems. We are putting all of our faith in these systems to secure us, but many of us don't stop to wonder who is going to put their energy into securing those systems.

A while ago I sat in a demonstration and watched an operator increase the level of water being treated at a plant more than 200 miles away, remotely from his laptop. The treatment plant serves thousands of families on one of the largest water systems in the country. This was a public demonstration on an open network. The security risk if somebody intercepted the operator's connection or stole his laptop could have been a disaster: in the wrong hands, a person could poison entire communities by remotely increasing the amount of water treatment chemicals or by accessing the system with the operator's credentials. In retrospect, this makes me wonder: What are we willing to risk for convenience?

I work in emergency services, and one of the most critical systems we have is our dispatch phone computers. Many people in the emergency services field, mainly those who work in public safety answering points, talk about the next generation of 911 calls. In the future we might see people sending live streaming video of their situation, which will allow dispatchers more control in giving guidance and help to the victim, and people will be able to send cries for help using text messaging. While these technologies have the potential to improve emergency services, they also bring new risks. New questions need to be asked: What kind of hostage situations will be created once a 911 call for a simple robbery turns into a 911 video call? What happens when something meant to deter criminal behavior becomes a tool for the criminal to use? Dispatchers are trained to keep their cool when there is screaming on the other end of the phone, but will a dispatcher be able to stomach the same situation when they can see the bloody and violent reality of what's happening in the field? And what about the opposite, when all they have is a vague text message to go on? Can they properly assess the situation and dispatch resources?

The risks of these new systems are not only external but internal as well. Added technology means added technical and security risks. Imagine not being able to get through to a dispatcher because 20 people are trying to send video calls for a ten-car pileup elsewhere. Next-generation 911 calls will rely on Internet protocols to process communication, which means dispatch consoles will be vulnerable to the same attacks any computer on the Internet faces. More incoming Internet traffic to dispatch computers means more potential attack vectors into our critical information systems. Denial-of-service attacks are just one example of recent attacks (Clayton, 2013) on our dispatch centers, and they will only increase as we put more emphasis on our digital infrastructure. Emergency service systems will have to be locked down and closely monitored if we are going to put any amount of trust in an NG 911 system.

In my opinion, technology is meant to enhance our lives. I have dedicated a good amount of my time to securing and improving critical systems for our community. A recent experience taught me that we need to be careful with technology moving forward. I responded to a work request late one night because a communications officer was having trouble with the new firewall I had installed earlier that day. The dispatcher was shaking with fright: “A woman was being choked to death and I couldn't help her because I couldn't get her [cell phone] coordinates into Google Maps!” Nobody died that night, but it is enough to make me wonder where we would be if we didn't have the technology in place designed to save lives. There is no going back; we just have to be careful and tiptoe forward. Nobody in my organization is ever going to ask me if I made sure the connections into our system are secured, encrypted and monitored and then pat me on the back for a job well done. I have to make a moral decision to ensure that the system is secure, because too many people trust the technology to keep them safe.

Clayton, M. (2013, April 4). Internet-based attacks hit emergency call centers. What's the damage? Retrieved October 10, 2013.
Thomas545 at 10/11/2013 12:11 AM