MIT professor warns of AI firms' 'race to the bottom'
Physicist Max Tegmark says competition between firms is too intense for technology executives to pause AI development and weigh the risks of artificial intelligence.
The scientist behind a landmark letter calling for a pause in the development of powerful artificial intelligence systems has said that technology executives did not halt their work because they were locked in a competitive "race to the bottom," The Guardian reported.
Max Tegmark, a co-founder of the Future of Life Institute, organized an open letter in March urging a six-month pause in the development of giant AI systems.
Although the letter drew more than 30,000 signatories, including Elon Musk and Apple co-founder Steve Wozniak, it failed to secure a pause in the development of the most ambitious AI systems.
In an interview with The Guardian six months after the letter, Tegmark said he had not expected it to stop tech companies from working toward AI models more powerful than GPT-4, the large language model that powers ChatGPT, because competition in the field had become so fierce.
“I felt that privately a lot of corporate leaders I talked to wanted [a pause] but they were trapped in this race to the bottom against each other. So no company can pause alone,” he said.
'Losing control of our civilization'
The letter warned of an out-of-control race to develop AI systems that no one could understand, predict, or reliably control. It called on governments to intervene if leading AI companies, such as Google, ChatGPT owner OpenAI, and Microsoft, could not agree to pause the development of systems more powerful than GPT-4.
It asked: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Concerns about AI range from the immediate, such as deepfake videos and the mass spread of disinformation, to existential threats posed by super-intelligent systems that could evade human oversight or make irreversible decisions of enormous consequence.
Tegmark cautioned against dismissing the emergence of digital "god-like general intelligence" as a distant concern, noting that some AI experts believe it could arrive within just a few years.
He also urged governments to address open-source AI models, which anyone can access and adapt. Meta, led by Mark Zuckerberg, recently released Llama 2, an open-source large language model; a UK expert warned that the move was akin to handing people a blueprint for building a nuclear bomb.
“Dangerous technology should not be open source, regardless of whether it is bio-weapons or software,” Tegmark concluded.