UNDP economist warns AI-driven weapons could trigger mass casualties
A UNDP economist warned that the rapid militarization of artificial intelligence could unleash catastrophic global risks, urging governments to impose strict oversight as UN leaders increasingly frame unregulated AI as an existential threat on par with nuclear weapons and climate change.
A picture taken on June 5, 2018 shows an Israeli quadcopter drone flying over Palestinian demonstrations near the separation line with Gaza, east of Khan Yunis (AFP)
A senior UN development official has cautioned that the rapid militarization of artificial intelligence risks placing humanity in unprecedented danger, as a new UNDP assessment urges governments to increase oversight of emerging technologies.
Philip Schellekens, chief economist at the UNDP’s Asia-Pacific bureau, delivered the warning on Tuesday while presenting a report examining how AI may deepen global inequality and accelerate destabilizing trends. The document devotes a section to the spread of AI-enabled armaments and the strategic implications of allowing such tools to evolve without meaningful regulation.
Speaking to reporters in Geneva, Schellekens said current debates around artificial intelligence are marked by a stark duality. On one hand, he noted, advanced systems offer possibilities to confront global emergencies such as climate breakdown or future pandemics. But he stressed that the same technologies also carry catastrophic risks when used in warfare.
"That is a very prevalent feature of the discussion on the AI right now. There is, on the one hand, a sense that AI presents an existential opportunity for humankind to solve existential threats like climate change, advanced medical research to be even faster in pandemics. But quite clearly, there is also a very dark side to this and where AI itself poses an existential threat and can be a negative force for humanity … Military applications are certainly areas of concern where AI could provoke mass casualties," Schellekens told a briefing in Geneva.
He added that the growing influence of automated systems demands stronger safeguards and responsible governance to prevent misuse.
The UN leadership has been issuing similar alarms for months. In January, Secretary-General António Guterres warned that unrestrained AI development should be treated as an existential threat, placing it in the same category as climate change and nuclear weapons.
And in late November, UN human rights chief Volker Türk argued that the deployment of generative AI by technology firms risks enabling new forms of abuse, cautioning that it could become a "modern-day Frankenstein."
AI’s role in the Gaza genocide raises legal and moral alarms
Schellekens’ warning comes as mounting investigations by human rights groups and open-source researchers have exposed “Israel’s” sweeping reliance on AI-driven targeting systems throughout its assault on Gaza, a campaign that has killed more than 70,000 Palestinians in just over two years.
Platforms such as Lavender, which automatically marked tens of thousands of Palestinians for potential assassination, and The Gospel, which generated strike lists for residential buildings and civilian infrastructure, reportedly allowed “Israel” to prosecute its bombing campaign at unprecedented speed. Former Israeli intelligence officers have confirmed that these systems dramatically increased both the scale and intensity of strikes, often with only superficial or symbolic human oversight.
Rights organizations say deploying AI in this manner, against a besieged population confined to one of the most densely populated territories on earth, constitutes a direct violation of international humanitarian law. Automating the identification of human targets using opaque data models, coupled with the mass-casualty outcomes already documented, has led legal experts to warn that such practices may amount to war crimes, particularly indiscriminate attacks and the systematic failure to differentiate civilians from combatants.
Groups including Human Rights Watch argue that Gaza has become a stark example of how AI-enabled warfare can erase accountability, enabling political and military authorities to hide behind algorithms while carrying out lethal operations. The use of automated systems, they say, has allowed “Israel” to industrialize its killing apparatus while evading responsibility for the enormous civilian toll.
Read more: Gaza: testing ground for Israeli AI tools, raising ethical concerns