AI firms urged to calculate catastrophe odds like Oppenheimer
Physicist Max Tegmark urges tech companies to quantify the risk of losing control over Artificial Super Intelligence, drawing parallels with nuclear-era safety calculations.
Physicist and AI safety advocate Max Tegmark at MIT (helena.org)
Artificial intelligence companies are being urged to conduct rigorous risk assessments before deploying advanced AI systems, echoing the caution exercised ahead of the first nuclear bomb test in 1945.
Physicist and AI safety advocate Max Tegmark has warned that without concrete calculations, the risk of an Artificial Super Intelligence (ASI) escape could remain dangerously underestimated.
Tegmark likened the current phase of AI development to the scientific and ethical crossroads faced by Robert Oppenheimer and his team before the Trinity test, when the possibility of a catastrophic ignition of the atmosphere had to be mathematically ruled out.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” said Tegmark. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
Tegmark, an AI researcher at MIT and co-founder of the Future of Life Institute, explained that the “Compton constant” denotes the probability of losing control over a super-intelligent AI system. He and his MIT students have carried out similar calculations, drawing inspiration from the estimate US physicist Arthur Compton made before the first nuclear test. Compton famously approved the test only after concluding that the chance of triggering a global catastrophe was “slightly less” than one in three million.
In a newly published paper, Tegmark’s team proposes this constant as a metric for measuring Artificial Super Intelligence risk. They argue that such quantification is essential to fostering consensus among AI developers and building the political will necessary to establish global AI safety standards.
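As a purely illustrative back-of-the-envelope sketch (not a calculation from Tegmark’s paper, and using hypothetical figures), one reason a single quantified probability is informative is that small risks compound: if each independent deployment of a system carries a loss-of-control probability p, the cumulative probability over n deployments is

    P_{\mathrm{loss}}(n) = 1 - (1 - p)^{n}

so, for example, a seemingly negligible p = 10^{-6} accumulates to roughly a 9.5% chance of at least one loss-of-control event over 100,000 deployments. Only a concrete estimate, like Compton’s one-in-three-million threshold, makes that kind of comparison possible.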
Global AI safety collaboration reignited
Tegmark’s warnings come alongside renewed efforts at international cooperation on AI regulation. A report known as the Singapore Consensus on Global AI Safety Research Priorities, co-authored by Tegmark, AI pioneer Yoshua Bengio, and contributors from OpenAI and Google DeepMind, outlines key AI safety standards and identifies three top research priorities: assessing AI’s impact, defining safe behavior, and managing system control.
The consensus aims to address escalating fears over losing control of AI and to bring clarity to what constitutes the responsible development of highly autonomous systems.
Tegmark has long advocated for a cautious approach to artificial intelligence. In 2023, his Future of Life Institute published an open letter warning of an “out-of-control race” among AI labs. Signed by more than 33,000 individuals, including Elon Musk and Apple co-founder Steve Wozniak, the letter raised alarms over the development of “ever more powerful digital minds” that no one could “understand, predict, or reliably control.”
The idea of a superintelligent AI escaping human control is no longer considered fringe; for Tegmark and other researchers, it is a credible and calculable threat. Despite US Vice President JD Vance’s dismissal of such concerns at the recent AI safety summit in Paris, Tegmark noted a shift in momentum.
“It really feels the gloom from Paris has gone and international collaboration has come roaring back,” he said, as quoted by The Guardian, at the report’s launch.