Shut down AI or 'everyone on Earth will die', researcher warns
The high-profile AI researcher says fighting a computer-programmed entity that “does not care for us nor for sentient life in general” would require “precision and preparation.”
As advanced technology spreads, the only way to save humankind from extinction is to shut down all advanced artificial intelligence systems worldwide, according to high-profile AI researcher Eliezer Yudkowsky.
Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), explained in a piece for Time Magazine why he did not join the petition calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” the model released by OpenAI this month.
Although the letter was signed by Elon Musk, Apple co-founder Steve Wozniak, and many others, Yudkowsky argues that the petition was “asking for too little to solve” the threat posed by the uncontrolled development of AI.
📢 We're calling on AI labs to temporarily pause training powerful models!
Join FLI's call alongside Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus & over a 1000 others who've signed: https://t.co/3rJBjDXapc
A short 🧵on why we're calling for this - (1/8)
— Future of Life Institute (@FLIxrisk) March 29, 2023
He wrote, “The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
No exceptions for governments or militaries
One argument he makes is that fighting a computer-programmed entity that “does not care for us nor for sentient life in general” would require “precision and preparation and new scientific insights” that humanity lacks and will not be able to develop for quite some time.
“A sufficiently intelligent AI won’t stay confined to computers for long,” Yudkowsky warned, adding that the existing ability to email DNA strings to labs for protein production would allow an AI to “build artificial life forms or bootstrap straight to postbiological molecular manufacturing” and branch out into the world.
Yudkowsky calls for an immediate moratorium on developing AI, stressing that “there can be no exceptions, including for governments or militaries,” and that international agreements should set limits on how much computing power anyone may use to train AI systems.
“If intelligence says that a country outside the agreement is building a GPU (graphics processing unit) cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike,” he stated.
According to Yudkowsky, it should be made “explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.”
This comes after investment bank and financial services company Goldman Sachs warned that artificial intelligence (AI) could cause “significant disruption” to the labor market and put millions of jobs at risk.
According to the company's estimates, generative AI tools such as ChatGPT could replace up to 300 million full-time jobs worldwide.
Artificial intelligence (#AI) is revolutionizing how we interact with the world around us, from our smartphones to our homes and cars. But what does this imply for the future of education? pic.twitter.com/hPDdQbIq4l
— Al Mayadeen English (@MayadeenEnglish) January 4, 2023