

Artificial intelligence (AI), especially generative AI, has been dominating headlines and enterprise conversations in recent years. This innovation has been touted as a transformative force that will revolutionize everything from menial tasks to advanced analysis. Vendors, investors, venture capitalists, and media outlets have collectively fueled a narrative that positions AI as the solution to nearly every business challenge, promising unprecedented efficiency, automation, and insight.1
While AI undoubtedly has had impressive achievements, many of those wins are narrow, context-dependent, and, frequently, hard to achieve. Organizations are beginning to discover that the road to meaningful AI adoption is far more complex—and costly—than what was initially anticipated.2 From the need for massive infrastructure investments and robust datasets to unforeseen security vulnerabilities, intellectual property risk, and lagging returns on investment, the promises of AI have frequently outpaced the outcomes.3
The reality is that AI implementation often creates new layers of risk. Security professionals are grappling with increased attack surfaces, governance professionals are facing mounting concerns around explainability and compliance, and risk professionals are being asked to sign off on implementations where the decisions made by models cannot be thoroughly audited.
Despite these challenges, it seems that the AI hype is not slowing down. In many cases, vendors are overpromising and underdelivering by adding AI features to tools where they are not necessary.4 AI has seemingly gone from an exciting new tool to a glorified data-harvesting vacuum sucking up input from nearly every crevice of everyday life. Executives and users alike are growing wary of AI's creeping presence, especially when the value it brings is unclear and, more concerning, its risk is poorly managed.5
These concerns were validated in a June 2025 study conducted by Apple researchers, who found that large reasoning models (LRMs)—a class of advanced generative AI that uses chain of thought—suffer from “complete accuracy collapse” when faced with even low-complexity reasoning tasks.6 In many cases, the models simply halted altogether, unable to complete logical sequences. Cognitive scientist Gary Marcus called this a “knockout blow for LLMs,” arguing that such models rely on surface-level pattern matching rather than actual comprehension or reasoning capabilities.7 This inability to reason becomes problematic as pattern matching models do not actually understand the content they are generating. Instead, they statistically predict the next likely word or phrase based on the training data, not context or logic. For enterprises, this means AI systems will likely produce confident-sounding answers that are incorrect, potentially misleading decision makers with hallucinated insights, or failing in cases where nuance or reasoning is required.8 Organizations that rely on these models without robust validation risk introducing errors into financial forecasts, compliance reporting, policy decisions or customer communications, all of which are areas where accuracy and accountability are non-negotiable.
These findings suggest that many enterprise-grade AI deployments may be built on fragile foundations: highly capable in their demos, but prone to failure when faced with real-world complexity.
Acknowledging AI’s Value—But at What Cost?
AI has undeniably provided value by optimizing efficiency and assisting with decision making. However, the recent surge in generative AI has shifted the focus from purposeful innovation to a race for feature saturation. In an effort to appear technologically advanced and competitive, vendors are rapidly embedding AI into an increasing number of products.
This trend has led to bloated tools with superficial AI features that contribute more to complexity, cost, and risk than to meaningful outcomes.9 Oftentimes, AI-enabled features are added to products merely for show—lightweight features that offer little functional value are included to feed into market hype or meet investor expectations.10 Rather than driving innovation, AI is increasingly being used to meet market expectations rather than operational needs. The unchecked proliferation of AI is now outpacing the ability of enterprise teams to manage associated risk. Security professionals are struggling to conduct adversarial testing, monitor model-specific vulnerabilities (such as prompt injection or data leakage), or audit black box AI systems that behave unpredictably under pressure. Furthermore, privacy professionals face challenges as these models increasingly engage in inference-based profiling such as drawing sensitive conclusions about users from seemingly innocuous data, often without consent or visibility into its reasoning. Governance professionals are attempting to apply risk models to tools that do not behave like traditional software, lacking transparency, traceability, or any clear accountability.
Further compounding these challenges is a fragmented global regulatory landscape. The European Union’s AI Act has emerged as the most comprehensive attempt at AI regulation to date.11 However, other jurisdictions—such as in Canada, where the AI and Data Act (AIDA) remains in legislative limbo, and the United States, which lacks a unified framework—are falling behind. The United States instead relies on state-level laws targeting issues such as algorithmic bias (Colorado), employment surveillance (Illinois), or deepfake misuse (California and New Hampshire).12 This regulatory disparity creates uncertainty and compliance burdens for global enterprises deploying AI across borders, and pending litigation over claims of unlawful model training threatens to cause major upheaval.13
The result is a rapid escalation in enterprises harvesting data to train AI systems, often without sufficient oversight, user consent, or transparency. For instance, Meta’s LLaMA models were reportedly trained on datasets that included pirated books and copywritten materials that were used without obtaining permission from authors or publishers.14 Similarly, OpenAI allegedly scraped user data, web copy, and personal information from across the internet to build its GPT models, all without obtaining the explicit consent of the affected parties.15 Google has also faced backlash for updating its privacy policy in mid-2023 to retroactively authorize the use of public user data for AI training—after such data had already been collected.16 While some AI use cases may be valid, the pace and scale of AI integration are introducing new attack surfaces and even more privacy violations.17 Moreover, ethical gray areas are being introduced faster than industry standards and policies can mature and be adopted. Instead of being a strategic asset, AI is increasingly becoming a liability masked as innovation.
Conclusion
The fascination with AI, especially generative AI, has undoubtedly increased experimentation and innovation, but it has also driven significant overreach in its implementation and adoption. While AI has demonstrated that it has real value in some areas, industries around the globe are grappling with the consequences of inflated expectations, premature adoption, and superficial integration. Vendors have too often positioned AI as a default solution rather than a purposeful one, leading to bloated products, increased operational complexity, new security vulnerabilities, governance gaps, and mounting technical debt.
For AI to deliver on its promised potential, the industry must shift from hype to discipline, where AI addresses a defined problem, and its deployment is governed by rigorous controls around data privacy, model transparency, and security. It also means resisting market pressure to embed AI into everything and instead focusing on responsible innovation that aligns with core business goals.
For digital trust professionals, the path forward is clear: Demand better accountability, question exaggerated claims, and ensure that AI deployments are auditable, ethical, and aligned with a strategy. The illusion that AI makes everything better must end in order to realize its true value as a tool, rather than a gimmick.
Endnotes
1 McKinsey & Company, “The Economic Potential of Generative AI: The Next Productivity Frontier,” 14 June 2023
2 Marcus, G.; “GenAI’s Day of Reckoning May Have Come,” Marcus on AI, 27 March 2025
3 Invictus Sovereign, “What are the Infrastructure Demands of AI?,” 16 January 2025; Farrar, O.; “Understanding AI Vulnerabilities,” Harvard Magazine, 21 March 2025; Marcus, G.; “Poor ROI for GenAI,” Marcus on AI, 10 April 2025
4 Gomes, G.; “Hype Over Reality: ‘AI Washing’ and Why Is it a Problem?,” CTO Magazine, 23 May 2025; Yancey, J.; “AI-Powered Everything, Whether You Need It or Not: The SaaS Race to Irrelevance,” Medium, 17 September 2024
5 EY, “CEO Confidence in Artificial Intelligence Tempered by, Social, Ethical, and Security Risks,” 24 July 2023
6 Shojaee, P.; Mirzadeh, I.; et al.; “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,” Apple Machine Learning Research, June 2025; Milmo, D.; “Advanced AI Suffers ‘Complete Accuracy Collapse’ in Face of Complex Problem, Study Finds,” The Guardian, 9 June 2025
7 Marcus, G.; “A Knockout Blow for LLMs?,” Marcus on AI, 7 June 2025
8 Wilkinson, L.; “AI Project Failure Rates are on the Rise: Report,” CIO Dive, 14 March 2025
9 Doidge, F.; “Threats Versus Potential Benefits: Weighing up the Enterprise Risk of Embracing AI,” ComputerWeekly.com, 9 May 2025
10 Ahmed, S.; “AI Washing: The New ‘Dot-Com’ Hype — How Companies Are Misleading Investors and Consumers,” Medium, 14 September 2024
11 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
12 S. 24-205, Reg. Sess. (Colo. 2024); Kourinian, A.; Miller, K.; et al.; “Illinois Passes Artificial Intelligence (AI) Law Regulating Employment Use Cases,” Mayer Brown, 9 September 2024; Serrato, J.; Mastromonaco, C.; et al.; “California’s AI Laws Are Here—Is Your Business Ready?,” 7 February 2025; Guidry, T.; “New Hampshire AI Legislation: Four AI Bills You Need to Know About,” The National Law Review, 3 June 2024
13 Bobrowsky, M.; “Reddit Sues Anthropic, Alleges Unauthorized Use of Site’s Data,” The Wall Street Journal, 4 June 2025; Creamer, E.; “US Authors’ Copyright Lawsuits Against OpenAI and Microsoft Combined in New York With Newspaper Actions,” The Guardian, 4 April 2025
14 Milmo, D.; “Zuckerberg Approved Meta’s Use of ‘Pirated’ Books to Train AI Models, Authors Claim,” The Guardian, 10 January 2025
15 Gillham, J.; “Open AI and ChatGPT Lawsuit List,” Originality.ai, 7 May 2025
16 Thorbecke, C.; “Google Hit With Lawsuit Alleging it Stole Data From Millions of Users to Train its AI Tools,” CNN Business, 12 July 2023
17 Jarovsky, L.; “OpenAI and AI as a Privacy Dystopia,” Luiza’s Newsletter, 11 April 2025
Collin Beder, CSX-P, CET, Security+
Is an emerging technology practices principal at ISACA®. In this role, he focuses on the development of ISACA’s emerging technology-related resources, including books, white papers, and review manuals, as well as performance-based exam development. Beder authored the book Artificial Intelligence: A Primer on Machine Learning, Deep Learning and Neural Networks, and develops hands-on lab items.