In today’s rapidly evolving technological landscape, risk management experts are aware that employees are using large language models (LLMs) and other artificial intelligence (AI)-enabled tools to automate their tasks. They are also painfully aware that, at any given moment, employees could be including proprietary information in their prompts. Cybersecurity professionals know that most users of these technologies do not realize that prompts submitted to generative AI systems can be retained, turning those systems into an ever-growing repository of sensitive information. Professionals who want to use these tools must also understand and mitigate their risk. However, it is important to ask: Are we addressing the greater threat of AI, that its ungoverned use could erode the essential human element of critical thought and expose the enterprise to unhealthy levels of business risk?
AI tools are rapidly becoming a de facto source for answers, which some treat as gospel.1 While the rapid evolution of AI systems is widely acknowledged, the greater concern lies in the lack of context accompanying their responses.2 The ultimate hope for AI is that it becomes capable of presenting information that is both novel (unique) and monetizable (commercially viable). Current AI systems typically do one or the other very well, but not both simultaneously.
For risk management professionals or those developing AI governance policies, it is important to understand several key issues:
- The current limits of AI
- The lack of transparency regarding the security of AI systems
- The importance of prioritizing human skills and critical thinking alongside the adoption of these tools
If these elements are not factored into enterprise risk management activities, such as assessing risk impact and likelihood, risk modeling will be too inaccurate to support timely action.
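To ground that point, consider the classic scoring at the heart of most risk models. The following Python sketch is illustrative only; the 1-5 scales and action thresholds are assumptions, not a prescribed standard, but they show how an impact-and-likelihood assessment turns into a prioritized decision:

```python
# Minimal sketch of impact x likelihood risk scoring. The 1-5 scales
# and the action thresholds below are illustrative assumptions.

def risk_score(impact: int, likelihood: int) -> int:
    """Classic qualitative risk score: impact (1-5) x likelihood (1-5)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    return impact * likelihood

def rating(score: int) -> str:
    """Map a 1-25 score onto an action band (thresholds are assumptions)."""
    if score >= 15:
        return "high: act now"
    if score >= 8:
        return "medium: plan mitigation"
    return "low: monitor"

# Example: ungoverned LLM use leaking proprietary data in prompts.
score = risk_score(impact=4, likelihood=4)
print(score, rating(score))  # 16 high: act now
```

If the limits and opacity of AI systems are ignored, both inputs to this calculation are guesses, and the resulting rating is no better.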
The development of robust AI governance policies is essential for organizations seeking to leverage AI's capabilities cautiously and securely. While AI's strengths in data analysis and pattern recognition present vast opportunities, human reasoning remains vital for fostering meaningful relationships and guiding impactful organizational decisions. Establishing governance frameworks that prioritize risk mitigation and system security ensures that AI is harnessed not only effectively but also ethically.
AI Cannot Replace Human Reasoning
AI excels at telling users something they did not know and creatively answering their queries. But so does a dictionary, encyclopedia, thesaurus, technical manual, mentor, or educator.
AI, however, cannot teach critical thinking and analysis skills. It cannot break down complex problems and solve them; instead, it attempts to give the user solutions to similar problems or provide answers to questions that have already been solved. This places organizations at risk, as enterprise personnel may lose their operational command of the relevant subject matter. In effect, they may no longer know how a system works. For example, an organization that relies heavily on AI-managed services to complete daily tasks may eventually lose command of its own daily operations, leading to inefficiencies and risk.
All healthy enterprises take on healthy risk. Too little risk, and the organization stagnates; too much, and it may encounter a threat it cannot recover from. It is important to remember that risk management requires making business decisions today about events that have not yet happened, and those decisions require consensus, capital, and professional buy-in. AI solutions get organizations part of the way, far enough to handle standard operations or mid-level complexity, but not all the way to senior-level decision making or high-sensitivity work. While AI excels at data analysis and pattern recognition, the importance of human reasoning cannot be overstated. It plays a crucial role in nurturing relationships with colleagues, stakeholders, and customers, while also guiding organizations to make meaningful decisions that protect valuable data.
Creating AI Governance Policies With Security in Mind
Given how easily accessible and responsive AI systems are, organizations must have robust policies in place to govern their use. Organizations should develop specific acceptable use language concerning AI systems, typically codified in enterprise policy, and require that these policies be read and acknowledged (physically or digitally signed) to support compliance. It can also be helpful to enlist the services of a legal firm with privacy expertise. The policy itself should address several key considerations:
- Compliance—Ensure that AI systems are used within the bounds of regulations and organizational obligations.
- Disclosure—Ensure that when output from AI systems is utilized, its use is disclosed as part of business correspondence or other analysis.
- Prohibited use—Ensure there are rules for enterprise AI tools that clearly define which uses are off limits for employees.
- Nondiscrimination—Ensure that AI output is free from bias and discriminatory language.
Creating AI governance policies with these considerations in mind will give organizations confidence that risk is mitigated and sensitive systems remain secure.
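One way to begin operationalizing these considerations is to express the acceptable-use policy as data that tooling can check before a prompt ever reaches an AI system. The sketch below is purely illustrative; the AIUsePolicy structure, field names, and rules are hypothetical assumptions, not a reference implementation:

```python
# Hypothetical acceptable-use policy expressed as data, plus a simple
# pre-submission check. Field names and rules are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AIUsePolicy:
    require_disclosure: bool = True            # disclosure consideration
    prohibited_terms: list[str] = field(       # prohibited-use consideration
        default_factory=lambda: ["customer ssn", "source code", "deal terms"]
    )
    approved_tools: list[str] = field(         # compliance consideration
        default_factory=lambda: ["enterprise-llm"]
    )

def check_prompt(policy: AIUsePolicy, tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations for a proposed AI prompt."""
    violations = []
    if tool not in policy.approved_tools:
        violations.append(f"tool '{tool}' is not approved for use")
    lowered = prompt.lower()
    for term in policy.prohibited_terms:
        if term in lowered:
            violations.append(f"prompt contains prohibited content: '{term}'")
    return violations

policy = AIUsePolicy()
print(check_prompt(policy, "public-chatbot", "Summarize our deal terms"))
# ["tool 'public-chatbot' is not approved for use",
#  "prompt contains prohibited content: 'deal terms'"]
```

Encoding even part of the policy this way means acknowledgment is backed by enforcement rather than signatures alone.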
Use AI With Caution
Organizations must stop treating AI as an all-knowing oracle. AI shines in a collaborator role, and its output must be fact-checked and verified; it should not be making business decisions for the organization. There are several ways AI can be implemented into processes while protecting sensitive organizational data and preserving consumer trust:
- Do not feed AI systems intellectual property (IP). A query today is someone’s answer tomorrow. (A redaction sketch follows this list.)
- Do not misrepresent AI answers as your own analysis. Doing so can harm professional credibility and diminish trust in the organization.
- Always fact-check AI output. AI models carry plenty of bias, intentional or not, so it is crucial to cross-verify.3
- Do not assume AI systems will think in “human” ways in the future. Futurists such as Yuval Noah Harari are starting to sound the alarm: Do not expect that when AI systems truly begin thinking for themselves, they will do so the way a human brain would.4
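As referenced in the first item above, keeping IP out of prompts can be partially automated. The following sketch redacts likely-sensitive strings before a prompt leaves the organization; the patterns and placeholders are illustrative assumptions, and any real deployment would need vetted detection rules and legal review:

```python
# Sketch of redacting likely-proprietary details before a prompt leaves
# the organization. The patterns and placeholders are illustrative
# assumptions; real deployments need vetted detection rules.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Z]{2,5}-\d{3,6}\b"), "[TICKET]"),     # internal ticket IDs
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # US SSN-like numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"), # email addresses
]

def sanitize_prompt(prompt: str) -> str:
    """Replace matches of known sensitive patterns with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Draft a reply to jane.doe@example.com about incident SEC-4412."
print(sanitize_prompt(raw))
# Draft a reply to [EMAIL] about incident [TICKET].
```

Pattern matching of this kind catches only the obvious leaks; it complements, rather than replaces, employee judgment about what belongs in a prompt.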
If implemented correctly, AI has great potential to help create novel and commercially viable intelligence. For the time being, however, the best source of outside-the-box thinking is humans using tools such as AI to augment their own creativity in solving complex business problems.
Endnotes
1 Klingbeil, A.; Grützner, C.; et al.; “Trust and Reliance on AI — An Experimental Study on the Extent and Costs of Overreliance on AI,” Computers in Human Behavior, vol. 160, 2024; Howard, A.; “In AI We Trust — Too Much?,” MIT Sloan, 26 March 2024
2 Mucci, T.; “The Future of Artificial Intelligence,” IBM; Sorrel, C.; “Can These New AI Models Answer Questions Better? Not Really,” Lifewire, 28 May 2024
3 Holdsworth, J.; “What Is AI Bias?,” IBM
4 Pangambam, S.; The Rich Roll Podcast, podcast, “Transcript of Our AI Future Is Way Worse Than You Think: Yuval Noah Harari,” The Singju Post, 22 April 2025
Matthew Dechant
Is the CEO of Security Counsel, an information security management consultancy. He has 25 years of experience building information technology and security programs. As an in-house CISO and through Security Counsel, Dechant has managed the creation of cybersecurity programs for numerous clients and their executive teams, corporate boards, and high-net-worth individuals. He leads response events and conducts tabletop exercises with clients to help them prepare for potential worst-case cybersecurity scenarios. He serves on numerous cybersecurity best practices committees and boards and is passionate about supporting quality cybersecurity education.