There is a chilling episode of The Twilight Zone, aired in March 1962, titled “To Serve Man.” It is about the arrival on Earth of tall aliens who appear to be very friendly and helpful. They bring with them a book titled “To Serve Man,” which is interpreted as a guide to how the aliens can help mankind. The aliens then begin ferrying willing people aboard their spaceship, presumably to their home planet. However, toward the end of the episode (spoiler alert!), someone realizes too late that “To Serve Man” is a COOKBOOK!
The question is whether we are reliving this episode when it comes to AI. We know that AI can be very helpful in writing papers, making videos, writing computer code, and so on, which can increase productivity tremendously. But there are also concerns about huge numbers of job losses, claims of AI psychosis, and existential threats to humans, concerns that are glossed over by many who are fascinated with the impressive capabilities of generative AI and humanoid robots.
So, which is it? The likely answer is “both.” The solution might be to minimize the negative impacts. But how do we go about it? You might think that laws and regulations could do the trick, but they haven’t so far and are unlikely to be very successful going forward. Some 26 years ago, I wrote a letter to the editor of The New York Times propounding the idea that software manufacturers should be responsible for ensuring the security and safety of their products ... and AI systems are, after all, made up of computer software and hardware.
AI manufacturers claim that, in many cases, there is no way of knowing how AI systems do what they do. They are trying to convince us that AI systems in general, and agentic AI systems in particular, have minds of their own and that the developers are not responsible for any errors, hallucinations, and the like, much as social network providers are protected under Section 230 of the Communications Decency Act and are not liable for content.
It’s really a matter of manufacturers avoiding full responsibility for the security and safety of their systems and failing to test them thoroughly across the vast range of possible scenarios. This is mostly due to the high cost of, and time required for, software security assurance and safety verification, and the pressure to get ahead of the competition. It is far faster, easier, and cheaper to throw AI systems “over the transom,” as it were, and have users discover and report the problems than to do it right the first time before release. This is the preferred ploy of software and platform manufacturers ... and they have gotten away with it to date. As this situation continues, we must consider whether we are dealing with servants or chefs!
An interesting take on who serves whom showed up in a recent New York Times article. A quote that caught my eye was: “It’s time ... for algorithms to serve people instead of people serving algorithms.” Just as AI systems are essentially software systems, algorithms are key components of those systems.
So, how should we interpret all this in light of the above-mentioned Twilight Zone episode? I suggest that we be extra vigilant and hold AI systems’ developers to account. We should not succumb to their claims that they are unable to know what their systems are doing and therefore cannot be held responsible for those systems’ actions. Such claims are unacceptable. They seem happy to reap the huge potential financial benefits of AI technologies, but are reluctant to spend the necessary time and money to ensure that their creations do not “eat” us all. That must change ... and quickly.