AI should be treated as a risk on par with nuclear war: experts
Signed by hundreds of executives and academics, including leaders of the ChatGPT developer OpenAI, the statement comes amid growing fear of the risks the technology poses to humanity.
A statement released on Tuesday by the Center for AI Safety warns that artificial intelligence (AI) should be classified as a societal risk and placed in the same class as pandemics and nuclear war.
Signed by hundreds of executives and academics, the statement comes amid growing fear of the risks the technology poses to humanity. Signatories include the chief executives of Google's DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read. It follows calls by OpenAI's leaders and industry experts to regulate the technology out of fear that it could disrupt job markets, harm health, and weaponize disinformation, discrimination, and impersonation.
Two weeks ago, after doctors and health specialists warned that AI development should be halted unless it is regulated, a key committee of European Parliament legislators adopted a first-of-its-kind AI law. The law takes a risk-based approach, imposing requirements proportional to the danger posed by a system, and establishes criteria for providers of so-called "foundation models" such as ChatGPT.
Geoffrey Hinton, dubbed the godfather of AI and also a signatory, quit Google this month citing the technology's “existential risk,” a danger that No. 10 acknowledged for the first time last week. He said: "It's hard to see how you can prevent bad actors from using it for bad things."
Although the statement is not the first of its kind, it is considered the most impactful yet because of the sheer number of signatories and the gravity of its core concern, according to Michael Osborne, a professor of machine learning at the University of Oxford and co-founder of Mind Foundry.
“It really is remarkable that so many people signed up to this letter,” he said. “That does show that there is a growing realization among those of us working in AI that existential risks are a real concern.”
Osborne said he signed the statement because of the risk that AI could accelerate engineered pandemics and military arms races. “Because we don’t understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we’ve designed that might play some devastating role in our survival as a species,” he said.
The surge in calls for regulation follows the November launch of the language model ChatGPT, which is now used by millions and has advanced rapidly beyond the predictions of top experts in the industry.