Scientists and tech industry leaders, including Microsoft and Google executives, issued a new warning Tuesday about the dangers that artificial intelligence (AI) poses to humanity.
“Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads, as reported by the Associated Press.
Hundreds of prominent figures signed the statement, which was posted on the Centre for AI Safety’s website, including Sam Altman, CEO of ChatGPT developer OpenAI, and Geoffrey Hinton, the computer scientist widely known as the godfather of artificial intelligence.
With the advent of a new breed of highly capable AI chatbots such as ChatGPT, concerns about artificial intelligence systems outsmarting humans and running wild have grown. That has prompted governments around the world to scramble to draft legislation for the evolving technology, with the European Union leading the way with its AI Act, which is expected to be enacted later this year.
According to Dan Hendrycks, executive director of the San Francisco-based nonprofit Centre for AI Safety, who organised the effort, the latest warning was intentionally brief, just one sentence, so as to encompass a broad coalition of scientists who may not agree on the most likely risks or the best ways to prevent them.
“There are a variety of people from all top universities in various different fields who are concerned about this and believe it is a global priority,” Hendrycks explained. “So we had to sort of bring people out of the closet, so to speak, on this issue because many were silently speaking among themselves.”
Earlier this year, more than 1,000 experts and technologists, including Elon Musk, signed a much longer letter calling for a six-month moratorium on AI development, claiming it poses “profound risks to society and humanity.”
That letter was written in reaction to OpenAI’s publication of a new AI model, GPT-4, but leaders at OpenAI, Microsoft, and Google did not sign on and rejected the demand for a voluntary industry freeze.
In contrast, Microsoft’s chief technology and scientific officers, as well as Demis Hassabis, CEO of Google’s AI research centre DeepMind, and two Google executives who head the company’s AI policy initiatives, all signed the current statement. The statement makes no specific recommendations, though some signatories, including Altman, have urged the creation of an international regulator along the lines of the United Nations’ nuclear agency.
Some critics have charged that AI makers’ apocalyptic warnings about existential hazards have helped exaggerate their products’ capabilities while distracting from calls for more immediate regulation of the real-world problems those products already cause.
Hendrycks believes society can manage the “urgent, ongoing harms” of products that generate new text or images while also beginning to address the “potential catastrophes around the corner.”
He contrasted it to nuclear experts telling people to be cautious in the 1930s despite the fact that “we haven’t quite developed the bomb yet.”
“No one is claiming that GPT-4 or ChatGPT are causing these kinds of issues today,” Hendrycks added. “We’re attempting to address these risks before they occur rather than dealing with disasters after the fact.”
Experts in nuclear science, pandemics, and climate change also signed the statement. Among the signatories is author Bill McKibben, who raised the alarm about global warming in his 1989 book “The End of Nature” and warned about AI and related technologies in another book two decades ago.
“Given our failure to heed the early warnings about climate change 35 years ago, it feels to me like it would be prudent to actually think this one through before it’s all done,” he wrote in an email Tuesday.
An academic who helped push for the statement said he had been derided for his concerns about AI’s existential threat, even though significant advances in machine-learning research over the past decade have exceeded many people’s expectations.
According to David Krueger, an assistant computer science professor at the University of Cambridge, scientists are hesitant to speak out because they don’t want to be seen as implying AI “consciousness or AI doing something magical,” but AI systems don’t need to be self-aware or set their own goals to pose a threat to humanity.
“I’m not attached to any particular type of risk. I believe there are numerous ways for things to go wrong,” Krueger said. “However, I believe the one that has historically sparked the most debate is the risk of extinction, specifically from AI systems that go rogue.”