Information Security Matters: Artificial Intelligence—Ethics and Security

Author: Steven Ross, CISA, CDPSE, AFBCI, MBCP
Date Published: 1 May 2025

It seems to me that no matter what information security project I am working on these days, there is always someone who asks, “Sure, security is important, but what about artificial intelligence?” On occasion, this is a relevant question, but I have often found that the person who is asking is ignorant about artificial intelligence (AI) and is just trying to demonstrate my ignorance before I expose his. This is a useful technique these days. I suggest everyone should ask his or her brain surgeon, rocket scientist, and football coach about AI. The question is certain to obfuscate almost any issue under discussion, while making the questioner seem like he or she is the holder of some heretofore undisclosed wisdom.

There is, however, a close relative to that question that I believe is worth pondering: “Yes, artificial intelligence is important, but what about security?”

The Security of Information in AI Systems

There is little question that AI is prevalent in many spaces today, and its use will continue to grow. It is already in use on my cell phone, at the pharmacy, at the airport, and in my car. No one asked me if I wanted to deal with AI; it seems it just sprang up, unannounced. But somebody did decide to put it there. I have often wondered what that somebody was doing with the information the AI system collected. It knows where I was, where I am now, and has a pretty good idea of what I am doing there. I am not so happy about anyone knowing that much about me.

The security requirements for the use of AI in my daily life seem clear to me. Any information gained should be managed in such a way that only those who are authorized to see and use it may do so. And the use of AI systems should be monitored by someone (or something, since it might be another AI system doing the monitoring) to make sure that it is not being abused.
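
Stated as code, the requirement is small. What follows is a minimal sketch in Python, not a prescription: the roles, the resources, and the audit_log destination are all hypothetical, invented here only to illustrate the pairing of an authorization check with an audit record.

```python
import logging
from datetime import datetime, timezone

# Hypothetical audit log; in practice this would feed whatever
# monitors the system (possibly another AI system, as noted above).
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_access_audit")

# Hypothetical mapping of resources to the roles allowed to query them.
AUTHORIZED_ROLES = {
    "pharmacy_records": {"pharmacist", "prescribing_physician"},
    "location_history": {"account_owner"},
}

def query_personal_data(user: str, role: str, resource: str) -> str:
    """Release personal data only to authorized roles, and record
    every attempt, granted or denied, for later review."""
    allowed = role in AUTHORIZED_ROLES.get(resource, set())
    audit_log.info(
        "%s user=%s role=%s resource=%s granted=%s",
        datetime.now(timezone.utc).isoformat(), user, role, resource, allowed,
    )
    if not allowed:
        raise PermissionError(f"role {role!r} may not access {resource!r}")
    return f"<contents of {resource}>"  # placeholder for the real lookup
```

The particular mechanism matters less than the pairing: the check enforces who may see the information, and the audit trail gives the monitor, human or machine, something to review.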

Yes, it is clear, but why? What difference does it make to me if anyone else knows which medicines I take or where I fly? Well, it is my personal information, and I have a right to keep it private. But where did I get that right? It really is not a right as such, but it may (depending on jurisdiction) be a matter of law, and laws are made by people I may or may not have had a voice in choosing.

The Ethics of AI Systems’ Use of Information

Ultimately, who may see my—or anyone else’s—private information is a matter of ethics. Aristotle told us that “He is his own best friend and takes delight in privacy whereas the man of no virtue or ability is his own worst enemy.” The Ten Commandments prohibit theft, which I interpret as including stealing someone else’s personal information. The Quran says, “Do not spy on one another.” At some level, learned citations notwithstanding, we simply know that using someone else’s information without his or her permission is just wrong.

Oh, yes, we know it because we are sentient human beings, but AI systems are extremely useful idiots. They know not what the ethical import of the information in their models might be nor even what it means. They are only able to correlate some specks of data with other specks to come up with something that, rightly or wrongly, has meaning to the reader. Those AI systems were crafted (so far) by human beings who can or should be able to distinguish right from wrong and restrict what the system can do within ethical bounds. But human beings, including AI system developers, are fallible, and the line between right and wrong is not always clear.

The ethics of AI have been a concern for theorists and developers for some time. Karel Čapek wrote the play R.U.R., about robots turning on humans, in 1920.1 Isaac Asimov proposed his “laws of robotics” in 1942.2 As early as 1956, leading AI theorists were expressing worries about responsibility and accountability for the use of the technology. They emphasized the need for a societal discussion of the implications of AI, including ethical frameworks and the safeguards that would ensure AI was developed and deployed responsibly.3 James Moor of Dartmouth College defined the field of “computer ethics” in 1985, anticipating the advent of AI.

The Interfaces of AI Ethics and AI Security

Many organizations have formed AI Ethics Committees to deal with the thorny ethical issues presented by AI, including unintended bias, privacy violations, harm to people and property, reputational injuries, and violations of laws and regulations.4 But many of these matters are also very much in the domain of the information security function. So where is the line, if there is one, between the ethics and the security of AI systems? Where are the points of overlap, or at least tangency, between AI ethics and the security of AI systems?

To me, the starting point in resolving the questions raised is whether a system must be ethical for it to be secure. The reverse seems self-evident: A system cannot be assured to be ethical if it is not secure. Whatever guardrails are built into a system to prevent its unethical use can be overridden if unauthorized, and presumably unethical, people can access it.
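
A toy example, assuming nothing about any real system, makes the dependency concrete: if the ethical guardrail lives in a wrapper but the underlying model remains reachable, the guardrail binds only those who choose to go through it. The function names and the blocked-topic policy below are hypothetical.

```python
def raw_model(prompt: str) -> str:
    # Stand-in for the underlying model; it answers whatever it is asked.
    return f"response to: {prompt}"

# Hypothetical policy: topics the wrapper refuses to discuss.
BLOCKED_TOPICS = {"trade secrets", "medical records"}

def guarded_model(prompt: str) -> str:
    """The ethical guardrail: refuse prompts touching blocked topics."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request refused by policy."
    return raw_model(prompt)

print(guarded_model("summarize these trade secrets"))  # refused by the guardrail
print(raw_model("summarize these trade secrets"))      # the guardrail never runs
```

Keeping unauthorized hands off raw_model is a security control; without it, the ethics layer is merely advisory.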

But can a system designed to do unethical things be considered secure? Is it ethical to develop an AI system intended to purloin the secrets of foreign governments? Intelligence agencies do it all the time. Would anyone ever develop an AI system to cheat on taxes? Probably not, but there are many tax departments that wring every dollar out of the tax codes, and maybe a few extra pennies as well.

Of course, this is a philosophical discussion, well worth having over a glass of cognac on a stormy night. But it also has practical ramifications. The lines of demarcation between the information security and AI ethics functions need to be carefully drawn. Which function should take action if an AI system discloses corporate secrets or customers’ personal information? Who is responsible for ensuring that only authorized persons have access to AI models, training data, and algorithms? As AI systems perform jobs that no human has ever done before, who is to provide recoverability for those positions if the AI system goes down?

These issues should probably have been addressed prior to organizations beginning their explorations of the uses of artificial intelligence. They are certainly timely now. Some matters seem beyond information security’s remit. For example, systems that devolve into racism or misogyny are a real problem but do not seem to me to be relevant to security. Others are surely in security’s realm, such as control of access to models and training data. Boundary concerns such as these usually, so I have found, lend themselves to shades of grey, with both ethical and security specialists needing to have a say.

I do not think that merely adding the CISO to the AI Ethics Committee is sufficient to resolve these complex problems, nor is placing an ethicist in the information security department. It is easy for me to say that these are matters that require cooperation and the destruction of silos. I have heard that one before and know it is easier said than done. But as enterprise after enterprise embarks on its AI journey, I am quite certain that I would rather see too much attention paid to ethical security issues than not enough.

Endnotes

1 Čapek, K.; R.U.R. (Rossum's Universal Robots), English translation: Penguin Random House Canada, 2004; Čapek invented the term robot, from the Czech word robota, meaning forced labor.
2 Goel, A.; “Looking Back, Looking Ahead: Humans, Ethics, and AI,” AI Magazine, vol. 43, iss. 2, 2022, p. 267-269; Asimov, in turn, invented the term robotics.
3 Szabo, L.; “AI’s Origins: John McCarthy, The Visionary Mind Behind Artificial Intelligence,” NowadAIs, 29 February 2024
4 Blackman, R.; “Why You Need an AI Ethics Committee,” Harvard Business Review, 1 July 2022

STEVEN J. ROSS | CISA, CDPSE, AFBCI, MBCP

Is executive principal of Risk Masters International LLC. He has been writing one of the Journal’s most popular columns since 1998. Ross was inducted into the ISACA® Hall of Fame in 2022. He can be reached at stross@riskmastersintl.com.