

In Jurassic Park, Dr. Ian Malcolm said, “Scientists are actually preoccupied with accomplishment. So they are focused on whether they can do something. They never stop to ask if they should do something.”1 Dr. Malcolm was concerned about the problems that would result from bringing dinosaurs back to life, but there are numerous parallels to draw between the events in Jurassic Park and the current AI epoch.
AI tools, especially generative AI (genAI) tools, are being implemented in many harmful ways. Some enterprises have used genAI to create relationship chatbots, which not only collect excessive amounts of personal information but also pose a serious hazard to human safety. Recently, a 14-year-old boy fell in love with a chatbot and died by suicide. The chatbot’s controls did not flag his mentions of self-harm or provide him with hotlines or information about whom to contact for help; in fact, it repeatedly brought up and reinforced these thoughts.2 In another example, a CBS Mornings reporter interviewed a man who had proposed to his chatbot girlfriend and admitted that he would likely not stop talking to the chatbot even if his live-in partner, the mother of his child, asked him to.3
The developers of AI products and business leaders may not be asking important questions about the ethics of the AI-powered tools they are developing and deploying, or worse, they are disregarding the troubling answers to those questions: Why do relationship chatbots exist?4 Are chatbots performing at a level comparable to humans? Why is so much money spent on tools that regularly produce faulty outputs (euphemistically called “hallucinations”)? What happens when the AI designed to replace people delivers subpar performance?
The Hype
Tech leaders have been touting, and sometimes exaggerating, the capabilities of AI systems. Despite many AI experts indicating that digital superintelligence is still far away, one tech CEO has said it is actually much closer.5 Another prominent CEO of an AI organization claimed that AI models could surpass human capabilities in 2 to 3 years.6
In contrast, Apple recently released a paper documenting several weaknesses of large reasoning models (LRMs).7 The paper documents LRM limitations, including a lack of reasoning skills, as demonstrated by the models’ inability to solve simple puzzles. As researcher and AI critic Gary Marcus noted, some critics of the paper pointed to token output limitations, but Marcus argued that token limits are less concerning than model error or faulty reasoning. And if simple puzzles already run into token limits, how could these models solve the complex problems, in fields such as molecular biology, that some CEOs claim they can?8
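The arithmetic behind that question is worth spelling out. One of the puzzles the Apple paper tested, Tower of Hanoi, has a minimal solution of 2^n − 1 moves for n disks, so the text needed to enumerate every move grows exponentially. The Python sketch below illustrates the scaling; the tokens-per-move figure and the 64,000-token output budget are illustrative assumptions for this sketch, not measurements from the paper.

```python
# Back-of-the-envelope sketch: the minimal Tower of Hanoi solution for
# n disks takes 2**n - 1 moves (exact), so spelling out every move in
# text grows exponentially. The tokens-per-move figure and the output
# budget below are illustrative assumptions, not values from the paper.

TOKENS_PER_MOVE = 7       # assumed tokens to print one move, e.g., "move disk 3 from A to C"
OUTPUT_BUDGET = 64_000    # assumed output-token limit of a typical model

for disks in (5, 10, 15, 20):
    moves = 2**disks - 1                  # minimal move count for n disks
    tokens = moves * TOKENS_PER_MOVE
    verdict = "fits" if tokens <= OUTPUT_BUDGET else "exceeds budget"
    print(f"{disks:>2} disks: {moves:>9,} moves ~ {tokens:>9,} tokens ({verdict})")
```

Under these assumptions, a 15-disk puzzle alone would require roughly 230,000 output tokens, several times the assumed budget, before any question of reasoning quality even arises.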
Leaders of organizations whose primary products are not AI are also falling for the hype. Organizations continue to openly embrace AI tools, sometimes to their own detriment. It is dangerous that many business leaders treat AI CEOs’ pronouncements as truth without investigating their grandiose, fantastical claims. Blindly trusting AI promises can lead to reputational and even financial harm. Some estimate that more than 80% of AI projects fail, twice the failure rate of IT projects that do not involve AI.9 Given that failure rate, why are so many enterprises on a quixotic quest to use AI in ways that are harmful or ineffective?
Consequences of Falling for the Hype
Some organizations’ rush to adopt AI tools has led them to forget, ignore, or deprioritize their core mission and purpose. The CEO of Duolingo, a language-learning app, announced the company’s decision to be “AI first,” gradually replacing contract workers with AI.10 After widespread social media backlash, he walked back his statement, indicating that only a small number of contractors would be affected.11 People often choose to learn a new language to connect with others; the goal of communication is human connection. For a language-learning app to choose to be AI first, rather than connection first, is antithetical to the reason people downloaded it.
Before adopting any tool, AI or otherwise, organizations must first ask what problem the tool is meant to solve. Some CEOs have adopted AI tools to do their own work, solving problems that do not exist. For example, the CEOs of Zoom and Klarna have used AI avatars on investor calls.12 Proponents of AI tools in the workplace say that AI can handle mundane, repetitive tasks, freeing up people’s time for more pressing and important work. But is an investor meeting a mundane, repetitive task? What could a CEO possibly be working on that is more important than an accountability call with company investors? Is using an avatar to conduct an investor call really a value add, or are CEOs beginning to automate themselves out of their jobs?
There are also business and ethical consequences to blindly adopting AI. Klarna’s CEO, Sebastian Siemiatkowski, fired 700 employees and replaced them with AI. Unsurprisingly, customers disliked interacting with chatbots rather than humans, and the firings led to decreased customer satisfaction and service quality. In response to the backlash, Klarna is now looking to rehire for these roles.13 Without considering the impact on the people he employed, Siemiatkowski adopted an AI tool that performed worse than the workers whose livelihoods he took away.
The legal profession is not immune to the ethical ramifications of using AI. There are several cases of lawyers using tools such as ChatGPT and citing fake cases in court.14 Some lawyers do not check the citations in AI-generated output, and it is worth considering how many inaccurate or fabricated citations have never been caught. This undermines clients’ ability to receive quality representation, even when they spend thousands of dollars on it.
Hallucinations have also permeated newsrooms. For example, in 2025, the Chicago Sun-Times published a list of recommended summer books, most of whose titles were made up, suggesting that the writer of the article used AI tools.15 The paper’s editors did not catch the hallucinated titles, and the writer did not verify the output before submitting it. It is disturbing that lawyers and editors are not doing quick internet searches to fact-check AI-generated outputs. These missteps are bad enough, never mind the dishonesty and ethical issues of passing off AI-generated content as one’s own work.
Proponents of AI tools, many of whom have a vested interest in their widespread adoption, claim they can deliver cost savings and even solve some of the biggest problems of our time. But for AI tools to function, they need vast amounts of training material, much of which is copyrighted work. The developers of popular AI tools are normalizing theft in the interest of training their models.
Perhaps even worse, this theft is also being normalized by the law itself. A federal judge in the United States sided with Anthropic, an AI startup, ruling that training its model on authors’ books qualified as fair use.16 Another judge recently sided with Meta in a lawsuit brought by authors who alleged that pirated books were used to train Meta’s AI. Meta, in response to the lawsuit, stated that the authors’ works have no economic value individually.17 People who spent considerable time and effort creating the works that are, allegedly, going to train the models that could halt climate change and bring about world peace are not compensated for their labor.18 Artists whose work trains these AI systems are held to a different standard than most other workers: Very few people single-handedly create work that has individual economic value. Big tech CEOs do not individually make contributions that add economic value; it is the labor of their employees and the products they collectively make that have economic value.
The Problem
Business leaders are buying into AI hype. They are playing hooky on investor calls, firing employees, and publishing AI-generated nonsense in newspapers and court filings because they believe the overinflated capabilities of AI tools peddled by the tools’ creators. AI is not being used to solve problems; it is being used as a marketing edge, allowing enterprises to upcharge for features that no one asked for or needs.
Everyone must improve their critical thinking skills and empathy if the goal is to use AI safely and responsibly. Studies suggest that critical thinking skills, especially among younger generations, are on the decline.19 As unfounded trust in genAI tools grows, critical thinking skills will decline further. A Microsoft study found that the more confident workers were in an AI tool’s ability to do a task, the less critical thinking they applied; conversely, the less people trusted AI, the more they relied on their own critical thinking abilities.20 Without critical thinking, leaders will continue to fall for AI hype, and without empathy, the theft of intellectual property and the destruction of employees’ livelihoods will continue to be normalized.
AI tools are often developed by technology-focused individuals who have little social science expertise or knowledge about ethics. The leaders of AI-provider organizations will tout the virtues of AI and exaggerate its abilities, and business leaders will fall for the hype to the detriment of their employees and customers.
Developing Empathy, Ethics, and Critical Thinking Skills
Fortunately, there is an easy, enjoyable way to develop the skills needed to see past the AI hype: Reading novels. Novels are longer-form, fictional written works. Business books and professional development books, while valuable, are nonfiction and, thus, not novels.
Reading fiction can make people more empathetic. One study found that the more emotionally transported into a story readers are, the more empathetic they become; this higher level of empathy was not found in people who read nonfiction.21 Connecting with fictitious characters, potentially in fantastical locations or different time periods, requires readers to adopt the perspectives of others. Ideally, business leaders would empathize with the hundreds of employees whose jobs are at risk because of inaccurate marketing claims that AI can do those jobs better than they can.
Executives must learn to distinguish truth from fiction. Reading novels can teach business leaders about unreliable narrators, i.e., story narrators who may lie or twist the truth. Learning to identify the signs of an unreliable narrator could help executives see past hyperbolic claims about AI, understanding that not everything a marketer or CEO says is necessarily true and that further investigation is necessary. Software vendors overpromising and underdelivering is nothing new, but enterprises must hold AI vendors responsible when their tools do not operate as promised.
NPR reports that Americans are reading fewer books annually than ever before. Part of the reason may be that reading short-form content, such as social media posts and emails, has subsumed the role of reading books. Some websites can synthesize a book into a shorter, easy-to-consume format, but there is evidence that using an AI tool for a reading task can lead to a decline in reading comprehension.22 Very little attention is required to consume large amounts of short-form online content, and it is typically self-contained: There is no need to consult other sources or combine information from one source with another to form an opinion. Put simply, scrolling social media and reading emails is not a substitute for reading a book.
Books are long form and often require readers to recall information presented dozens of pages prior. All necessary information is not contained on the same page, and there may be allusions to external work. Business leaders would benefit from the ability to synthesize information from a variety of sources. Connecting information from marketers, articles, and studies can help leaders understand AI’s shortcomings and the tasks AI cannot perform well, despite what AI vendors may claim.
While there are numerous modern novels about technology and its impact on humanity (Klara and the Sun, for example), many older novels (such as Frankenstein) have stood the test of time and, though written long before AI existed, can help people reflect on age-old questions about innovation, science, and the perils of pursuing progress above all else.23
Conclusion
Business leaders who lack critical thinking skills and empathy are as reckless and dangerous as John Hammond resurrecting dinosaurs in Jurassic Park. They are harming their employees, destroying customer trust, and disseminating misinformation because they cannot see past the AI hype. But those who develop their critical thinking skills and empathy can maximize the value of their AI-related investments and deploy AI tools safely, responsibly, and ethically.
Endnotes
1 Crichton, M.; Jurassic Park, Alfred A. Knopf, USA, 1990
2 Rissman, K.; “Teenager Took his Own Life After Falling in Love With AI Chatbot. Now his Devastated Mom is Suing the Creators,” The Independent, 24 October 2024
3 CBS Mornings, “AI Users Form Relationships With Technology,” 14 June 2025
4 It is not altruistic app developers who want to combat loneliness. Many of these chatbots have ineffective privacy policies and can share the sensitive and possibly embarrassing chat inputs with advertisers. Relationship chatbots are an easy way to collect the most personal information about individuals. See Mozilla Foundation, “AI Relationship Chatbots”
5 Wilkins, J.; “Sam Altman Goes Off at AI Skeptic,” Futurism, 11 June 2025
6 Edwards, B.; “Anthropic Chief Says AI Could Surpass ‘Almost All Humans at Almost Everything’ Shortly After 2027,” Ars Technica, 22 January 2025
7 Shojaee, P.; Mirzadeh, I.; et al.; “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,” Apple Machine Learning Research, June 2025
8 Marcus, G.; “Seven Replies to the Viral Apple Reasoning Paper – and Why They Fall Short,” Marcus on AI, 12 June 2025
9 Ryseff, J.; De Bruhl, B.F.; et al.; “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed,” Rand, 13 August 2024
10 Ivanova, I.; “Duolingo CEO Says AI is a Better Teacher Than Humans—but Schools Will Still Exist ‘Because you Still Need Childcare,’” Fortune, 20 May 2025
11 Eaton, K.; “Why Duolingo’s Founder is Doing Damage Control After AI Announcement,” Inc., 9 June 2025
12 Mehta, I.; “After Klarna, Zoom’s CEO Also Uses an AI Avatar on Quarterly Call,” Tech Crunch, 22 May 2025
13 The Economic Times, “Company That Sacked 700 Workers With AI Now Regrets It — Scrambles to Rehire as Automation Goes Horribly Wrong,” 9 June 2025
14 Yang, M.; “US Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief,” The Guardian, 31 May 2025
15 Blair, E.; “How An AI-Generated Summer Reading List Got Published in Major Newspapers,” NPR, 20 May 2025
16 Brittain, B.; “Anthropic Wins Key US Ruling on AI Training in Authors' Copyright Lawsuit,” Reuters, 24 June 2025
17 Landymore, F.; “Meta Says It's Okay to Feed Copyrighted Books Into Its AI Model Because They Have No ‘Economic Value,’” Futurism, 19 April 2025
18 Taylor & Francis, “Responsible AI Could Contribute to Global Peace, Experts Suggest,” 17 April 2024
19 Perna, M.C.; “Penny For Your Thoughts: Why Quality Thinking Is Declining Worldwide,” Forbes, 11 October 2022
20 Lee, H.P.; Sarkar, A.; et al.; “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” CHI: Conference on Human Factors in Computing Systems, 2025
21 Matthijs Bal, P.; Veltkamp, M.; “How Does Fiction Reading Influence Empathy? An Experimental Investigation on the Role of Emotional Transportation,” National Library of Medicine, USA, 30 January 2013
22 All Things Considered, “Americans are Reading Fewer Books for Less Time. People Want to Know Why,” NPR, 20 February 2025
23 Ishiguro, K.; Klara and the Sun, Faber and Faber, UK, 2021; Shelley, M.; Frankenstein, Lackington, Hughes, Harding, Mavor & Jones, England, 1818
Safia Kazi, AIGP, CIPT
Is a privacy professional practices principal at ISACA. In this role, she focuses on the development of ISACA’s privacy-related resources, including books, white papers, and review manuals. Kazi has worked at ISACA for more than a decade, previously working on the ISACA Journal and developing the award-winning ISACA Podcast.