My interest in information-technology outsourcing began more than two decades ago, when I published a book titled Outsourcing Information Security, which outlined outsourcing risks such as those resulting from giving up control and losing in-house knowledge and skills. Retaining such functions is critical to bringing a system or facility back in-house should the need or desire arise. If in-house abilities are not retained, it becomes very difficult, costly and time-consuming to try to return to original capabilities. Sometimes moving back in-house or to a domestic facility is virtually impossible technically or is prohibitively expensive.
The loss of internal expertise is a major issue at both corporate and personal levels. Nicholas Carr's influential article “Is Google Making Us Stupid? What the Internet Is Doing to Our Brains” appeared in the July-August 2008 issue of The Atlantic magazine. The cover of the magazine humorously asks, “Is Google Making Us Stoopid?” [sic] The premise of the article is that, while Carr finds the Web to be a godsend in reducing days of research to minutes (as do I), he finds himself less able to concentrate and contemplate. He is no longer able to read a traditional book cover to cover, losing patience after the first few pages.
Fast forward 17 years to AI, and we see some of these same arguments cropping up. Emanuel Maiberg’s article “Microsoft Study Finds AI Makes Human Cognition ‘Atrophied and Unprepared’” from earlier this year points to a study described in a lengthy, 23-page report. Of course, with the above-mentioned Google-inspired loss of concentration, who would read such a report?
New technologies invariably lead to both positive and negative consequences. Even though AI systems may diminish the processing and remembering previously demanded of one’s brain, they bring with them new capabilities that are highly beneficial ... even leading to a Nobel prize. Being able to have so much information at one’s fingertips beats having to trek to a library. Having AI produce cohesive, comprehensive text with correct grammar and spelling can be a great help ... especially for those not writing in their primary language. However, as a reviewer for a respected journal, I saw an increasing number of articles that were clearly generated by AI, adding nothing new to the art. It is becoming ever more difficult to spot these shams as AI systems become more sophisticated.
My advice is to gain and retain relevant and applicable knowledge and skills so that you can function at some meaningful level without having to depend on the results of Google searches or AI-generated content. You need to be able to evaluate search results and prompt responses for their precision (accuracy), recall (completeness), and trustworthiness, and not be easily fooled by misinformation or by harmful or misleading claims. This requires considerable work, but it is well worth the effort, especially if the internet were to go down for an extended period or your essential websites were to crash.
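For readers who want to make this evaluation concrete, precision and recall have standard definitions from information retrieval. The following is a minimal sketch (the document and item names are hypothetical examples, not from the article): precision is the fraction of returned results that are actually relevant, while recall is the fraction of all relevant items that were returned.

```python
def precision_recall(retrieved, relevant):
    """Compute precision (accuracy of what was returned) and
    recall (completeness against the full set of relevant items)."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: a search returns 4 documents, 3 of which
# are among the 6 truly relevant ones.
p, r = precision_recall({"d1", "d2", "d3", "d7"},
                        {"d1", "d2", "d3", "d4", "d5", "d6"})
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.50
```

A result set can score high on one measure and low on the other, which is why both matter: a search that returns only one correct document is precise but incomplete, while one that returns everything is complete but imprecise.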
A similar recommendation is provided in an excellent article about using AI for medical diagnosis. In the article, Dr. Adam Rodman is quoted as saying “... you should train doctors who know how to use A.I. but also know how to think.”
The same advice on knowledge and skills retention also applies to cyber-physical systems and agentic AI. After all, what would you do if robots, autonomous land, sea, and aerial vehicles, etc., were to fail or, even worse, act dangerously? Hit the kill switch, Scotty!