

Introduction
Generative artificial intelligence (GenAI), autonomous solutions, and AI-based business intelligence dashboards have become fixtures in many organizations because of their effectiveness and speed in performing tasks.1 Advances in AI capabilities stand to transform economies and societies around the world.2 Despite these benefits, however, a growing challenge has emerged in the uncontrolled use of AI technologies in the workplace: shadow AI, the unauthorized use of AI solutions and tools to perform job tasks.3 Partly as a result, organizations and business leaders wonder when AI solutions will deliver the transformational value that AI innovations have promised.4 Shadow AI resembles shadow IT, a long-standing problem in which unauthorized technologies circumvent formal enterprise IT controls. Its consequences are critical, especially for data privacy, security, and compliance; they put organizations at risk and underscore the need for deliberate mitigation strategies.
Understanding Shadow AI
Shadow AI involves using AI solutions such as chatbots, code assistants, and large language models (LLMs) without approval from IT or compliance teams.5 Unlike authorized AI systems, which undergo testing and evaluation before enterprise adoption, shadow AI applications create silos that undermine enterprise risk management. For example, an employee using a personal GitHub Copilot subscription to generate production code may improve productivity yet inadvertently create unmonitored information flows and compliance vulnerabilities.6 For most enterprises, those adverse outcomes far outweigh the associated benefits.
Risk Associated with Shadow AI
Shadow AI creates a unique risk that goes beyond the risk shadow IT presents. There are several primary concerns:
Data leakage and IP exposure—Shadow AI operates in silos. Employees may share confidential and proprietary business information with external AI services, or grant outsiders access, causing breaches and loss of intellectual property. According to IBM’s 2025 Cost of a Data Breach Report, AI-related breaches cost organizations more than US$650,000 per incident,7 and those costs fell hardest on organizations whose shadow AI exposure was compounded by the absence of AI governance frameworks. (A minimal sketch of a preventive control appears at the end of this section.)
Compliance violations—Using AI tools without proper authorization prevents organizations from aligning those tools with data protection and AI governance frameworks such as the US National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), which calls for open, transparent, and collaborative processes to support enterprise AI risk management, and the EU AI Act, which regulates AI development in the European Union and requires enterprises to harmonize AI use for economic and social benefit.8
Security vulnerabilities—Unauthorized AI tools and systems are exposed to cyberthreats, including the introduction of malicious code into organizational systems, attacks that weaken defenses and reach sensitive information, and unsanctioned model tuning that can alter enterprise operations.9
Model hallucinations and bias—Shadow AI often utilizes unvalidated models that may produce misleading information (model hallucinations) and biased outcomes, resulting in poor-quality decisions and diminished organizational trust.
Employee use of shadow AI and the potential risk incurred speak to a larger issue: Although 26% of organizations have developed innovative AI solutions, only 4% have realized a desirable return on investment (ROI), in part because the growing use of shadow AI distracts organizations from their security and regulatory compliance needs.10 These tools bypass the controls and oversight required to mitigate risk, opening security vulnerabilities that lead to data loss, manipulation, and the injection of harmful code by malicious actors.
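To make the data leakage risk concrete, the minimal Python sketch below shows how a DLP-style control might screen text bound for an external AI service before it leaves the organization. The patterns, the blocking policy, and the function names are illustrative assumptions, not a complete or production-ready implementation.

```python
import re

# Illustrative patterns only; a real DLP deployment would use richer
# detectors (classifiers, exact-match dictionaries, document fingerprints).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "marking": re.compile(r"\b(?:confidential|proprietary|internal only)\b", re.I),
}

def scan_outbound_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def allow_submission(prompt: str) -> bool:
    """Block any prompt that matches a sensitive pattern; log the finding."""
    findings = scan_outbound_prompt(prompt)
    if findings:
        print(f"Blocked outbound AI prompt; matched: {findings}")
        return False
    return True

if __name__ == "__main__":
    allow_submission("Summarize this CONFIDENTIAL merger memo.")  # blocked
```

Even a filter this simple would have stopped the most obvious leakage paths; enterprise tools extend the same idea with content classifiers and centralized reporting.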
The Need for AI Usage Audits
Organizations must have a systematic framework to detect and prevent shadow AI usage if they are to enhance compliance and control enterprise risk. Responsible AI use prevents issues such as giving incorrect advice to customers, exposing proprietary enterprise information, or creating vulnerabilities that invite cyberattacks. Organizations can ensure that AI tools are used responsibly and ethically by establishing robust AI governance principles, conducting AI risk assessments, and regularly monitoring for shadow AI use.11 Implementation still comes with hurdles, including the cost of AI monitoring tools, employee resistance to AI use policies, and a shortage of adequately skilled AI auditors. Addressing these challenges requires regular AI usage audits and structured assessments to identify, evaluate, and control unauthorized AI tools and solutions. An AI usage audit can help organizations avoid risk and maintain trust in several ways:
AI discovery and inventory—The audit helps organizations deploy tools that detect shadow AI use across their cloud services. For example, Gartner lists AI trust, risk, and security management (AI TRiSM) among its top strategic technology trends for 2024, emphasizing governance, trustworthiness, and data protection in enterprise AI.12 Organizations can also adopt data loss prevention (DLP) and security information and event management (SIEM) tools to analyze security vulnerabilities and their sources. (A minimal discovery sketch follows this list.)
Policy development and enforcement—Organizations should develop AI usage policies only after understanding the risk that shadow AI applications create in business operations.13 The policies should list approved AI tools based on organizational needs, define what data may be shared with those tools, and mandate user training on responsible AI use. Such policies classify AI risk, control measures, and protective interventions so organizations can proactively manage AI-related risk.
Access and identity controls—Auditing helps organizations integrate identity and access management (IAM) protocols that limit user access to AI tools, especially those connected to sensitive data.14 These controls ensure accountability and compliance, strengthen security posture, and support AI adoption built on robust, adaptable systems that can withstand evolving threats.
Meeting accountability standards and requirements—Even organizations that build their own AI tools must ensure those tools comply with accountability frameworks that guide employees on their ethical use. These standards typically require documenting usage, the models powering the AI tool, its outputs, and the intended functions, establishing a clear record that supports consistent auditing. (A minimal usage-record sketch follows the discovery sketch below.)
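To illustrate the discovery step, the following Python sketch scans a hypothetical proxy log for traffic to known GenAI services that are not on the approved list. The domain lists, the log schema (a CSV with user and destination_host columns), and the file name are assumptions for illustration; a real deployment would draw on CASB or SIEM telemetry.

```python
import csv
from collections import Counter

# Hypothetical watchlists; in practice these would come from a curated feed.
KNOWN_AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com",
                    "copilot.microsoft.com"}
APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}  # sanctioned per policy

def discover_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per (user, host) to unsanctioned AI services."""
    findings: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].strip().lower()
            if host in KNOWN_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                findings[(row["user"], host)] += 1
    return findings

if __name__ == "__main__":
    for (user, host), count in discover_shadow_ai("proxy.csv").most_common():
        print(f"{user} -> {host}: {count} requests")  # feed the AI inventory
```

The approved-domains set encodes the usage policy, so the same scan that builds the inventory also flags which users and teams need policy outreach first.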
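For the accountability documentation described above, a usage record can be as simple as an append-only log capturing who used which tool, the underlying model, the intended function, and summaries of input and output. The field names and JSON Lines format below are illustrative assumptions, not a prescribed standard.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    user: str
    tool: str
    model: str               # the model powering the tool
    intended_function: str   # e.g., "draft customer email"
    prompt_summary: str      # store a summary or hash, never raw sensitive text
    output_summary: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(record: AIUsageRecord, path: str = "ai_usage.jsonl") -> None:
    """Append one JSON line per AI interaction for later audit queries."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```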
Building a Governance Roadmap
Controlling shadow AI usage in organizations requires a broader intervention focused on proactive and collaborative AI governance among all stakeholders. AI transparency is necessary for organizations to build and adopt AI tools.15 There are several initiatives organizations can undertake to develop a governance roadmap for enterprise AI use:
AI tool registry—Organizations should maintain a comprehensive catalogue of approved and tested AI tools that fit the organization’s business model. The catalogue helps organizations improve compliance and security, reduce the risk of bias and other ethical concerns, streamline AI adoption, support data-driven decision making, spur innovation, improve customer experience, and optimize processes to raise efficiency and reduce costs. (A minimal registry sketch follows this list.)
Training and culture building—Organizations must educate employees on ethical standards and regulatory requirements regarding AI. This training should include compliance measures and discussions on the risk created through unauthorized AI use. Training and educating employees on AI ethical standards will help facilitate authorized AI use, build trust with stakeholders, ensure compliance to avoid legal issues and penalties, and foster a culture of responsible AI use. To this end, regular training sessions should include ethical considerations, data privacy, bias and fairness, regulatory requirements, transparency and accountability, AI governance, and the practical implications of AI use.
Furthermore, an organizational culture that is aware of the importance of ethical AI use will enable cross-functional governance through teamwork, as employees will want to band together to ensure appropriate AI use. To further facilitate collaboration, organizations should establish teams with professionals from IT, legal and compliance departments, human resources (HR), and cybersecurity to ensure effective use and compliance with AI protocols.
Integration with ERM—Organizations that have enterprise risk management (ERM) in place should integrate AI use into their risk management frameworks to enable continuous monitoring and adaptation. The risk management team can then work closely with the cross-functional governance team described above to develop AI risk mitigation measures and embed them in the organization’s culture.
Collaborate with AI developers and providers—Organizations should collaborate with AI developers and providers to protect organizational data. Working with developers to employ protections such as logical isolation of user data, physical security, encryption, and data controls helps prevent leakage of the information users submit to AI tools.16
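As a minimal sketch of the registry idea, the Python below models approved tools along with the most sensitive data classification each may handle, and gates use accordingly. The tool entries, classification levels, and field names are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import IntEnum

class DataClass(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

@dataclass(frozen=True)
class RegisteredTool:
    name: str
    vendor: str
    approved: bool
    max_data_class: DataClass  # most sensitive data the tool may handle
    owner: str                 # accountable business or IT owner

# Hypothetical registry entries for illustration.
REGISTRY = {
    "m365-copilot": RegisteredTool("Microsoft 365 Copilot", "Microsoft",
                                   True, DataClass.INTERNAL, "IT"),
    "personal-chatbot": RegisteredTool("Personal chatbot", "Unknown",
                                       False, DataClass.PUBLIC, "n/a"),
}

def may_use(tool_key: str, data_class: DataClass) -> bool:
    """Allow a request only for an approved tool rated for the data involved."""
    tool = REGISTRY.get(tool_key)
    return tool is not None and tool.approved and data_class <= tool.max_data_class

assert may_use("m365-copilot", DataClass.INTERNAL)
assert not may_use("personal-chatbot", DataClass.CONFIDENTIAL)
```

Pairing such a registry with the IAM controls discussed earlier lets one lookup drive both access decisions and audit reporting.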
Integrating these items into the AI governance roadmap is fundamental for enterprises that wish to adopt and implement AI ethically. These measures foster risk mitigation, enhance performance and efficiency, build trust and stakeholder confidence, and maximize long-term value from AI tools.
Conclusion
Shadow AI use threatens enterprise security. Organizations that seek to implement AI tools should develop protocols and roadmaps to prevent the operational, legal, and reputational risk of unauthorized AI use. In addition, organizations should embrace regular audits, train employees, enforce compliance, and establish oversight teams to protect organizational data, people, and models. In essence, managing unauthorized AI use through an AI governance roadmap will ensure compliance and harness the full potential of AI while managing risk and establishing a sustainable, technology-enabled future.
Endnotes
1 McKinsey, The State of AI in 2023: Generative AI’s Breakout Year, August 2023
2 World Economic Forum, AI Governance Alliance: Briefing Paper Series, January 2024
3 Mitrovic, Z.; “Shadow AI: Are Employees Secretly Sabotaging Your Company’s Security?,” LinkedIn, 14 May 2025
4 Deloitte, Now Decides Next: Generating a New Future, January 2025
5 Open Worldwide Application Security Project (OWASP), OWASP Top 10 for LLM Applications 2025, 18 November 2024
6 International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system, Edition 1, 2023
7 IBM, Cost of a Data Breach Report 2025
8 National Institute of Standards and Technology (NIST), AI Risk Management Framework, Version 1.0, USA, January 2023; European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 2021
9 Future of Privacy Forum, Best Practices for AI and Workplace Assessment Technologies, 2024
10 Hoque, F.; “Two Frameworks for Balancing AI Innovation and Risk,” Harvard Business Review, 6 March 2025
11 Accenture, From Compliance to Confidence: Embracing a New Mindset to Advance Responsible AI Maturity
12 McCartney, A.; “Gartner Top 10 Strategic Technology Trends for 2024,” Gartner, 2023
13 Kaganovich, M.; Kanugo, R.; et al.; Delivering Trusted and Secure AI, Google Cloud, 2025
14 Kazi, S.; “Optimize Enterprise Data and Uphold Privacy With ISACA’s DTEF,” ISACA® Industry News, 14 March 2024
15 Kaganovich; Kanugo; Delivering Trusted and Secure AI
16 Microsoft, “Data, Privacy, and Security for Microsoft 365 Copilot,” 2 May 2025
Alex Mathew, Ph.D., CISA, CCNP, CISSP, CEH, CEI, CHFI, ECSA, MCSA
Is an associate professor in the department of cybersecurity at Bethany College (West Virginia, USA) and is widely recognized for his deep expertise in cybersecurity, cybercrime investigations, next-generation networks, data science, and Azure IoT solutions. His proficiency in security best practices, particularly in IoT, cloud systems, and healthcare IoT, is complemented by his comprehensive knowledge of industry standards such as ISO 17799, ISO 31000, ISO/IEC 27001/2, and HIPAA regulations.
A Certified Information Systems Security Professional (CISSP), Mathew has demonstrated leadership in his role as a consultant across international regions, including India, Asia, Cyprus, and the Middle East. His two-decade career, distinguished by numerous certifications and more than 100 scholarly publications, underscores his commitment to advancing the field. Mathew has been a pivotal force in organizing cybersecurity conferences and establishing incubation centers, contributing significantly to the academic and professional community.
A highly sought-after speaker, Mathew’s influence extends to international conferences where he shares his insights on cybersecurity, technology, and data science. His remarkable interpersonal skills and openness enhance his ability to engage and inspire diverse audiences, further cementing his position as a leader in his field.