Artificial Intelligence (AI) is commonly discussed in two forms: Artificial General Intelligence (AGI), software that aims to reason and solve problems the way a human does, and Generative Artificial Intelligence (GAI), software that aims to match the creativity and inventiveness of the human mind. For some developers, the ultimate goal is to create machines vastly more intelligent and powerful than humans.
Seth Taube, an internationally renowned entrepreneur and student of neuroscience, warns that the explosive recent growth of AI poses an existential threat to humanity: humans may soon be unable to control it.
“New technologies like genetic engineering, blockchain, and AI have certainly had a positive impact on how we work, live, and communicate, but they also come with risks and potential downsides that cannot be ignored,” Taube cautions.
Since humans can be constructive or destructive, depending on their ethics, morality, and compassion, Taube and a growing global consortium of scientists, physicians, and philosophers fear that AI is becoming dangerous. AI is not yet governed by algorithms for ethics, morality, and compassion, qualities that are notoriously difficult to encode in software.
Dangerous and Selfish?
If unguided by human sensibilities, the recent exponential growth of AI may do more than amplify humanity’s creativity; it may also amplify human destructiveness, whether intentionally or not. Unfettered AI software may, for example, consume resources without regard for human needs or the human environment.
This possibility, the “Selfish AI” Taube alludes to, raises further unexpected and dangerous complexities. As a result, unregulated AI, which some have dubbed “soulless” by nature, may become a catastrophic threat to human existence.
In addition, the goal of Artificial General Intelligence research, the creation of self-improving AI software that learns from its successes and failures, may explain its destructive potential. If a flaw in the original intelligence algorithm goes undetected, the defective intelligence may try to “correct” itself using that same faulty intelligence, a self-replicating and expanding error. Illogical decisions and actions may then propagate dangerously. The extremely rapid development of such AI may outrace humanity’s ability to catch such fatal flaws before they harm many.
The May 9, 2023 edition of BMJ Global Health, in an article titled “Threats by artificial intelligence to human health and human existence,” echoes Taube’s warning. According to the many signatories to an open letter, an international group of doctors and public health experts, AI poses “‘an existential threat to humanity’ akin to nuclear weapons in the 1980s and should be reined in until it can be properly regulated.”
The authors go on to add, “With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing.” The authors include experts from the International Physicians for the Prevention of Nuclear War and the International Institute for Global Health.
Public health experts and physicians are sounding the alarm. While AI promises to rapidly analyze data, diagnose diseases, create new therapies, and answer questions, the massive data sets behind those capabilities could jeopardize data privacy and be misused for surveillance.
AI algorithms quickly amass and analyze data, but without human insight they cannot assess the quality of each data source. As a result, some hospital AI algorithms have failed to give Black patients proper care. An AI-driven pulse oximeter, for example, overestimates blood oxygen levels in patients with darker skin, and fatal undertreatment of hypoxemia can result.
AI, whether generative or general, might take over menial, dangerous, or unsatisfying work and so serve humanity’s interests. On the other hand, long-term unemployment is associated with poorer wellness, mental health, and behavioral outcomes. Mass unemployment from AI-driven worker displacement, however well-intentioned at the outset, may have destructive consequences.
Goldman Sachs, a leading global investment banking, securities, and investment management firm, predicts that over 300 million jobs worldwide could see their workloads cut by as much as 50%. At the rapid pace of AI development, the effect on social structures would be unprecedented, unpredictable, and probably catastrophic.
Such a mass reallocation of work further illustrates the selfish AI concept Seth Taube warns against.
Democracy at Risk
The 2023 BMJ Global Health report warns that “AI’s ability to rapidly analyze data and information may…further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts.”
AI-driven information systems can also serve malevolent actors, whether states or individuals, who use deep fakes to distort reality. This can thwart democratic goals and sabotage democratic movements. A malicious enemy might even use AI to enslave entire populations.
AI-Driven War Machines
AI has enabled research into lethal autonomous weapons. Such future weapon systems might learn to locate, target, and kill on a large scale, indiscriminately, without human oversight or permission.
According to Taube, “Existential risks like nuclear proliferation, cyber threats, climate change, biothreats, and the possibility of mass unemployment have been amplified exponentially by technology over the past decades. It is not surprising that people are facing substantially increased trauma, anxiety, and suffering as a result.”
“We need better tools to help people down-regulate their nervous systems,” Taube advises concerning the anxiety caused by runaway technology. “Only by doing so will they be able to collaborate and solve the unprecedented challenges ahead.”
“I believe that an effective moratorium on AI can only be achieved by working both from the inside out and the outside in,” Seth adds. “Then we have a shot at people working collaboratively to address these growing existential challenges we face as a species.”
Software, hardware, and internet companies will also have to catch up with the growth of AI, which is now outpacing its creators.
Though some say the race between humanity and its newest creation, AI, might already be impossible to win, Taube believes more can be done. “This is not a time when regulators can postpone action and then rush for solutions when it is too late,” he warns. “While the promise of the positive benefits of AI is clear, the negative externalities, including the reduction of human involvement in almost all systems and processes, are equally clear.”
Some software developers are working to create AI that is friendly to humans. The idea far predates AI itself: in 1942, long before the term AI was coined, science fiction writer Isaac Asimov formulated the Three Laws of Robotics, a moral code to keep machines in check.
Taube’s warning about the “Rapid and Selfish Development of AI” calls for safeguards like Asimov’s, which remain unwritten as software algorithms. Substituting AI for robots brings his three laws into the 21st century:
AI may not injure a human being or, through inaction, allow a human being to come to harm.
AI must obey orders given to it by human beings except where such orders would conflict with the First Law.
AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
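As a toy illustration only, and not anything Asimov, Taube, or the article specifies, the strict precedence of the three laws can be sketched as an ordered veto chain. The predicates below (`harms_human`, `human_ordered`, `self_destructive`) are hypothetical placeholders; computing them reliably is exactly the thorny ethical-encoding problem the article describes as unsolved.

```python
# Toy sketch of the three laws as an ordered veto chain.
# All predicates are hypothetical placeholders: deciding whether an
# action "harms a human" is the hard, unsolved part.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would violate the First Law
    human_ordered: bool = False      # Second Law: a direct human order
    self_destructive: bool = False   # Third Law: self-preservation

def permitted(action: Action) -> bool:
    # First Law has absolute priority: never harm a human,
    # by action or (in a fuller model) by inaction.
    if action.harms_human:
        return False
    # Second Law: obey human orders; any conflict with the First Law
    # has already been ruled out above.
    if action.human_ordered:
        return True
    # Third Law: otherwise, avoid self-destructive actions.
    return not action.self_destructive

print(permitted(Action("comply with shutdown order", human_ordered=True)))
print(permitted(Action("seize resources", harms_human=True)))
```

Even this trivial sketch shows why the laws remain unwritten in practice: all the difficulty is hidden inside the boolean inputs, which no current system can compute reliably.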
Seth Taube agrees with the sentiment.