Digital Trust: Automation and Its Impact
Author: K. Brian Kelley, CISA, CDPSE, CSPO, MCSE, SECURITY+
Date Published: 1 July 2024

This article will not focus solely on digital trust. However, when we consider solutions involving automation, we are considering how to improve the performance of certain processes and systems within our organization. Being faster and more accurate than others in our field certainly has a positive impact on our reputation and on the digital trust others place in our organizations. Likewise, if our automation implementations are problematic and faulty, we will experience a negative impact. Therefore, how we implement automation, the controls we establish, and the areas we choose to automate all influence the level of digital trust in our organizations.

Computers Do Exactly What We Tell Them

In their book, Extreme Ownership: How US Navy SEALs Lead and Win1, retired SEALs Jocko Willink and Leif Babin introduced many people to the concept of "Commander's Intent." This principle underscores the importance of ensuring that the team understands what needs to be done and why. For instance, if one understands that the primary purpose of the internal audit function is to protect the organization, especially from itself, one should realize why the internal audit team needs to examine and seriously consider the controls the organization implements. Applying this logic, if a particular risk cannot be adequately addressed by existing controls, understanding what needs to be protected against allows the combined team to come up with some other means of protection, usually referred to as a "compensating control." In these cases, we understand the "commander's intent" behind the audit function. As a result, if we find that a particular process does not work, we can develop alternate solutions that are still in keeping with that intent.

Unfortunately, this principle does not work so well with machines. Machines will do exactly what we tell them to do. If we are thinking about machine learning (ML) and artificial intelligence (AI), those solutions will do exactly what they are taught to do. For instance, generative AI built on large language models is taught to represent words mathematically. This mathematical representation is called a word vector,2 and word vectors allow a machine to gauge how closely related, semantically, two or more words are. When we give a generative AI implementation a prompt and ask it to generate output, it does not understand what we mean by our words. It cannot grasp "commander's intent." The results generative AI produces are based on the mathematical model it has been taught.
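To make that concrete, here is a minimal sketch of how word vectors can be compared. The vocabulary and vector values are invented for illustration; real models learn vectors with hundreds or thousands of dimensions from training data.

```python
import math

# Toy word vectors, invented for illustration only.
word_vectors = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.78, 0.70, 0.12],
    "apple": [0.10, 0.05, 0.90],
}

def cosine_similarity(a, b):
    """Score how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "king" and "queen" score as far more related than "king" and "apple",
# which is the only sense in which the model "understands" the words.
print(cosine_similarity(word_vectors["king"], word_vectors["queen"]))
print(cosine_similarity(word_vectors["king"], word_vectors["apple"]))
```

The model is doing arithmetic over these numbers, not grasping intent; that is the point.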

You Are Just a Co-Pilot, Copilot

One domain where the software industry has begun to harness the power of generative AI is assisting developers with writing code. Typing in a quick prompt and receiving a code block back can speed up development. However, code "written" by generative AI can have problems, such as outdated patterns or deprecated language features. Also, if code with security vulnerabilities was used to train the model, generative AI can and will produce more code with those same vulnerabilities.3

To guard against these issues, we need to use code quality scanners as well as have knowledgeable humans conduct manual reviews, especially in the areas the code quality scanners flag. The claim that generative AI can develop an application may be true. However, if we understand the issues with relying on generative AI alone for code generation, we understand that the application may be riddled with inefficient and insecure code. That is why solutions such as Microsoft's Copilot should be viewed in the same way as a computer-aided design (CAD) tool, not as a replacement for humans.
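As a hypothetical illustration of the kind of flaw a scanner or human reviewer should catch, consider a generated database lookup that concatenates user input into a SQL string next to the parameterized version a reviewer would insist on. The function and table names here are invented for this sketch.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of pattern an assistant can reproduce from insecure training data:
    # user input concatenated directly into SQL, opening the door to SQL injection.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The reviewed version: a parameterized query keeps the input as data, not SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions "work" on happy-path input, which is exactly why automated scanning plus human review is needed to tell them apart.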

“Rewarding A, While Hoping for B.”

Another example of where computers can go wrong is when we make the mistake of "rewarding A, while hoping for B."4 AI researcher Dario Amodei encountered this issue while teaching a boat simulation how to win a racing game. His "rewarding A" was instructing the model to accumulate points, which it did. The fastest way to do so was to turn loops in the harbor and collect power-ups. This did not result in winning the race, the "B" in this example.5 This is an extension of the idea that machines will do exactly what we tell them to do instead of what we want them to do, but in a different way than we would typically expect. Here a goal was set, to accumulate as many points as possible, rather than a step-by-step set of instructions. It is a different mechanism but the same issue.
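A minimal sketch of the same trap follows. The state fields and reward functions are invented for illustration and are not the actual simulator; the point is only that an optimizer pursues whatever the reward function literally pays for.

```python
# Hypothetical reward functions for a toy racing agent.

def reward_points_only(state: dict) -> float:
    # "Rewarding A": the agent is paid only for points, so circling the
    # harbor collecting power-ups scores better than finishing the race.
    return state["points"]

def reward_race_progress(state: dict) -> float:
    # Closer to what we actually hoped for ("B"): progress toward the finish
    # line dominates, with points contributing only a small bonus.
    return (100 * state["laps_completed"]
            + 10 * state["checkpoints_passed"]
            + 0.1 * state["points"])
```

The gap between those two functions is the gap between what we told the machine and what we wanted.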

Failure to Check Your Work

Imagine if your organization put in a platform to help make better decisions. However, no one in your organization, or in any organization like yours, knew exactly how the platform worked. It was a black box: you entered data relevant to your decision and it gave you back a suggestion. Beyond that, no one had any idea how it worked. Of great importance, then, would be verifying that the platform's data model had been appropriately built and checked for bias. After all, any such platform depends on that data model as well as any rules or conditions that have been added concerning the use of that data model. If there is bias in the data, for whatever reason, the platform is going to produce bad recommendations at least part of the time. Likewise, we will have the same problem if the implementation or processing of that data model has bias issues.

Unfortunately, journalists found that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool had these sorts of issues.6 COMPAS is used by judges and law enforcement to predict the likelihood of a particular criminal defendant's recidivism, in other words, the probability that a particular individual will commit additional crimes. Compared with actual outcomes, the system incorrectly judged black defendants to be at higher risk of recidivism while incorrectly judging white defendants to be at lower risk. The percentage of times the system correctly predicted recidivism was similar between the two races. It was when the predictions were wrong that the journalists found the divergence and saw that it went in opposite directions. In other words, there was definitely a racial bias in the platform's predictions. However, quite a few municipalities used the system without any sort of validation. No one, not even the company, checked the system's work.
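A simplified sketch of the kind of validation that was missing appears below: compare how often the tool is wrong, and in which direction, for each group. The field names and records are invented for illustration and are not the actual COMPAS data.

```python
# Illustrative bias check: compare false positive and false negative rates across groups.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": True},
    {"group": "B", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    # ... a real validation would track thousands of outcomes over time
]

def error_rates(rows):
    """Return how often predictions were wrong in each direction."""
    fp = sum(1 for r in rows if r["predicted_high_risk"] and not r["reoffended"])
    fn = sum(1 for r in rows if not r["predicted_high_risk"] and r["reoffended"])
    negatives = sum(1 for r in rows if not r["reoffended"])
    positives = sum(1 for r in rows if r["reoffended"])
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

for group in ("A", "B"):
    group_rows = [r for r in records if r["group"] == group]
    print(group, error_rates(group_rows))
```

If the error rates diverge sharply by group, as the journalists found, the model is biased even when its overall accuracy looks similar.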

Automation is powerful. Because it is powerful, we should carefully consider its use and put appropriate controls in place. In the case of COMPAS, it is the same sort of issue we try to protect against with data scientists: bias. The fact that it is a computer does not exempt us from the necessity of implementing appropriate controls. We are not worried about a Skynet scenario,7 but we are worried about a system that might recommend denying someone parole because of a bias in the system. We are worried about a self-driving car's algorithm mishandling a particular traffic situation and making a decision that results in injury and/or loss of life. Just as with any risk to the organization, risk involving the use of AI, ML, and other forms of automation should be considered and mitigated.

Faster and With Less Error

With all the warnings I have given thus far, it might seem as though I am against more automation. I am not. Let us conclude with several examples of automation done well.

There are a lot of applications that still require a graphical interface to manage security. There are no application programming interfaces (APIs) or PowerShell cmdlets to call, and no proprietary language that can work through some custom interface. These systems are not built for automation, at least when it comes to security. If we intend to reduce the likelihood of human error in managing such systems, then we need a solution that can act like a human. Enter robotic process automation (RPA). RPA can be "taught" to look for particular cues in the graphical interface, enter the appropriate text, make the correct clicks, and submit the security changes just as a real human would. Moreover, many RPA systems have their own APIs, so a security platform can make an API call, pass the appropriate properties, and invoke the correct RPA job to manage security on these formerly manual applications, usually faster than a human can.
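As a sketch of the API-driven side of this, a security platform might kick off an RPA job that performs the GUI steps on its behalf. The endpoint, job name, token handling, and parameters below are hypothetical; every RPA vendor exposes its own API, so this is only the general shape of the call.

```python
import json
import urllib.request

# Hypothetical RPA endpoint and credentials; substitute your vendor's actual API.
RPA_API_URL = "https://rpa.example.internal/api/v1/jobs"
API_TOKEN = "REPLACE_WITH_SECRET_FROM_VAULT"

def trigger_disable_account_job(username: str) -> dict:
    """Ask the RPA platform to run the GUI job that disables an account in a legacy app."""
    payload = json.dumps({
        "job": "LegacyApp-DisableAccount",   # invented job name for this sketch
        "parameters": {"username": username},
    }).encode("utf-8")
    request = urllib.request.Request(
        RPA_API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

The security platform never touches the legacy application's screens; the RPA job performs the clicks consistently, every time.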

Automated scoring for loan applications is another example of automation that helps the organization. Automated fraud detection is yet another. One solution helps increase an organization's revenue while the other reduces its losses. These systems operate significantly faster than humans and can spot patterns that humans might miss. They are yet more examples of how automation helps the bottom line.
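As one simplified illustration of the fraud detection case, even a basic statistical check can flag a transaction for review faster and more consistently than a person paging through statements. The transaction amounts and threshold below are invented for this sketch; real systems combine many more signals.

```python
from statistics import mean, stdev

# Invented transaction history for one account.
history = [42.10, 18.75, 55.00, 23.40, 61.25, 30.00, 47.80, 25.15]

def flag_for_review(amount: float, history: list, z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is far outside the account's normal range."""
    avg = mean(history)
    spread = stdev(history)
    if spread == 0:
        return amount != avg
    z_score = (amount - avg) / spread
    return abs(z_score) > z_threshold

print(flag_for_review(49.99, history))    # typical amount -> not flagged
print(flag_for_review(2500.00, history))  # extreme outlier -> flagged for review
```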

Implement Automation and Implement It Well

Automation, implemented correctly, is a huge boon to most organizations. However, some of the challenges with automation, especially for systems that rely on data models, differ from those of other systems. As a result, to handle automation properly we must think specifically about how automation works, where the risk lies, and what types of controls work for that risk. Otherwise, automation has the potential to cause our organizations harm, possibly erasing any gains it may provide.

Endnotes

1 Willink, J.; Babin, L.; Extreme Ownership: How US Navy SEALs Lead and Win, St. Martin's Press, USA, 2015
2 Shoham, S.; "What are Word Vectors?," Kubiya, 5 August 2023, https://www.kubiya.ai/resource-post/what-are-word-vectors
3 Degges, R.; "Copilot Amplifies Insecure Codebases by Replicating Vulnerabilities in Your Projects," Snyk, 22 February 2024, https://snyk.io/blog/copilot-amplifies-insecure-codebases-by-replicating-vulnerabilities/
4 Kerr, S.; "On the Folly of Rewarding A, While Hoping for B," Academy of Management Journal, Vol. 18, No. 4, December 1975, https://journals.aom.org/doi/10.5465/255378
5 Christian, B.; The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, USA, 2020
6 Larson, J.; Mattu, S.; Kirchner, L.; et al.; "How We Analyzed the COMPAS Recidivism Algorithm," ProPublica, 23 May 2016, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
7 Sherlock, B.; "Terminator: Why Skynet Was Created (& How It Became Self-Aware)," ScreenRant, 9 April 2023, https://screenrant.com/terminator-why-skynet-formed-became-self-aware/

K. BRIAN KELLEY | CISA, CDPSE, CSPO, MCSE, SECURITY+

Is an author and columnist focusing primarily on Microsoft SQL Server and Windows security. He currently serves as a data architect and an independent infrastructure/security architect concentrating on Active Directory, SQL Server, and Windows Server. He has served in a myriad of other positions, including senior database administrator, data warehouse architect, web developer, incident response team lead, and project manager. Kelley has spoken at 24 Hours of PASS, IT/Dev Connections, SQLConnections, the TechnoSecurity and Forensics Investigation Conference, the IT GRC Forum, SyntaxCon, and at various SQL Saturdays, Code Camps, and user groups.