ChatGPT set the record for being the fastest-adopted technology in history, with many people using it to research new topics, draft emails and revise their writing. ChatGPT and other generative AI tools have been embraced by enterprises and individuals for myriad uses, but despite their widespread adoption, they are not without risk.
And though large language models (LLMs) have been a hot topic in recent years, AI systems and their impacts are not new. AI-powered recommendation algorithms, such as those on social media sites and music or video streaming platforms, inform the content people consume and can create content echo chambers, in which people are only exposed to information that confirms their already-held beliefs.
Some enterprises have invested heavily in AI-powered technology at the expense of human investment. One big tech company may lay off up to 20% of its workforce, reportedly due to AI-related costs (research and development and AI infrastructure). A survey found that enterprises are reducing headcount before seeing any benefits of AI.
AI systems are already affecting society and, in some cases, are causing considerable harm. Chatbots have been linked to the suicides of multiple people, some of them teenagers. Scammers may leverage deepfake technologies to trick victims, making fraudulent schemes more persuasive and realistic than past attempts.
AI technology has already proliferated; it cannot be rolled back at this point. Enterprises developing or deploying AI systems must consider the widespread impact of these systems and work to limit their negative impacts.
Compliance Is Lagging
Enterprises whose AI systems are compliant with applicable laws and regulations should not assume that these systems are safe and immune from causing harm. The EU General Data Protection Regulation has provisions related to automated decision-making, while the EU AI Act prohibits certain AI uses and regulates high-risk AI systems. Many US states have introduced AI-related legislation. But laws and regulations typically cannot keep pace with technological advancement.
Some emergent uses of generative AI are risky, and regulations did not (and likely cannot) anticipate them. People are pursuing therapy with chatbots and exploring romantic relationships with chatbots, with the “My Boyfriend Is AI” subreddit having more than 55,000 followers. But legislation like the EU AI Act does not specifically call out these use cases, and it is unrealistic to expect regulations to address every possible dangerous AI use case.
Legislation related to AI also primarily focuses on how systems are used, not necessarily on how the models are built or on the people who are instrumental in data labeling. A recent article from 404 Media explored the difficult work done by low-paid workers who help train AI models and moderate content. These workers may suffer from post-traumatic stress disorder after viewing gruesome or explicit content, and they may sign non-disclosure agreements that, in some cases, forbid them from discussing what they have seen with mental health professionals. When considering the costs of developing AI systems, this invisible labor might be excluded from calculations.
Identifying AI Impacts
Developing and deploying AI systems may have ethical consequences, and laws and regulations do not address many of these outcomes. Therefore, enterprises must proactively anticipate the impacts of AI systems. To that end, ISACA has released an AI Impact Assessment Tool to help practitioners consider the possible effects of deploying an AI system. This tool can help enterprises address the ethical risk AI systems may pose. It considers elements such as privacy, transparency, cybersecurity and auditability while also incorporating less commonly explored topics like hidden costs, health and safety, and anthropomorphism.
The tool contains questions to guide enterprise AI steering committees as they evaluate an AI system, and questions can be ranked based on risk (high, medium, and low). The tool will automatically calculate risk scores across the 14 dimensions outlined in the tool and provide a benchmark to help practitioners determine which areas should be prioritized.
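To illustrate the general idea of aggregating questionnaire ratings into per-dimension risk scores, here is a minimal sketch. The actual scoring formula, weights and dimension names used by ISACA's tool are not documented in this article, so everything below is an illustrative assumption rather than the tool's real implementation.

```python
# Hypothetical sketch of questionnaire-based risk scoring per dimension.
# Weights and dimension names are assumptions for illustration only; they
# are not the formula used by ISACA's AI Impact Assessment Tool.

RISK_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def score_dimensions(answers):
    """answers: dict mapping dimension name -> list of 'high'/'medium'/'low'
    ratings. Returns the average risk weight per dimension."""
    scores = {}
    for dimension, ratings in answers.items():
        total = sum(RISK_WEIGHTS[r] for r in ratings)
        scores[dimension] = total / len(ratings)
    return scores

# Illustrative ratings for three assumed dimensions.
answers = {
    "privacy": ["high", "medium", "high"],
    "transparency": ["low", "medium"],
    "hidden costs": ["medium", "medium", "low"],
}

scores = score_dimensions(answers)
# Rank dimensions from highest to lowest average risk to prioritize review.
priorities = sorted(scores, key=scores.get, reverse=True)
```

A spreadsheet-based tool would typically do the equivalent with per-dimension averages and conditional formatting; the point is simply that ranked answers can be rolled up into comparable scores that signal which areas to prioritize.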
Considering the myriad impacts of AI systems can help enterprises safely and responsibly deploy these systems in a way that aligns with their mission and vision. To download the AI Impact Assessment Tool, visit www.isaca.org/ai-impact-assessment-tool.