Gaza: testing ground for Israeli AI tools, raising ethical concerns
Over the past 18 months, the Israeli occupation has integrated AI into facial recognition, target selection, and Arabic-language analysis.
Palestinians bid farewell to their relatives who were killed in an Israeli airstrike early this morning on Yaffa School in Gaza, on April 23, 2025 (AP)
In late 2023, "Israel" used a newly enhanced AI-powered audio tool to locate and assassinate top Hamas commander Ibrahim Biari after traditional intelligence methods had failed. The operation also killed over 125 civilians and marked a broader shift toward "Israel's" rapid deployment of AI-driven military technologies during the war on Gaza.
Anonymous American and Israeli officials told The New York Times that this was only one example of AI technology used in Gaza.
Over the past 18 months, the Israeli occupation has integrated AI into facial recognition, target selection, and Arabic-language analysis, developed largely through collaborations between Unit 8200 and reservists working at tech firms such as Google, Microsoft, and Meta, within an innovation hub known as "The Studio".
When AI technology goes rogue
However, even as "Israel" raced to build its AI arsenal, deployment of the technologies at times resulted in mistaken identifications and wrongful arrests, as well as civilian deaths, according to Israeli and US sources. Some officials have grappled with the ethical implications of these tools, which could lead to expanded surveillance and more civilian killings.
No other military has been as active as "Israel's" in testing artificial intelligence tools in real-time combat, according to European and American defense experts, providing a glimpse into how such technology may be employed in future wars, and how it may fail.
The rapid development of AI-driven technology during the war has given the occupation more opportunities to exert its brutality on Palestinians, and experts warn that it requires human oversight and accountability.
The technology raises "serious ethical questions," Hadas Lorber noted, explaining that AI systems need checks and balances and that humans should make the final decisions.
AI in drones and Arabic-language tools: higher civilian casualties
"Israel" has integrated AI into its drone systems, enabling them to autonomously identify and track targets with high precision, while also developing Arabic-language AI tools—though officials acknowledge the ethical concerns such technologies raise in warfare.
Following the outbreak of the war on Gaza, "Israel" began installing cameras at temporary crossings between the northern and southern Gaza Strip capable of scanning high-resolution photographs of Palestinians and feeding them into an AI-backed facial recognition tool.
This tool, too, occasionally had difficulty identifying people whose faces were concealed. Two Israeli intelligence officers alleged that, as a result, Palestinians were arrested and interrogated after the facial recognition technology incorrectly flagged them.
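The failure mode the officers describe is easy to see in a toy model. The sketch below is a purely hypothetical Python illustration, unrelated to any actual deployed system: it shows how threshold-based face matching generally works, comparing a probe embedding against a watchlist by cosine similarity, and how an occlusion-degraded embedding can drift toward the wrong identity, clear the match threshold, and flag an innocent person. All names, dimensions, and threshold values here are assumptions made for illustration only.

```python
# Minimal, hypothetical sketch of threshold-based face matching.
# It does NOT represent any real deployed system; it only shows why
# degraded (e.g., occluded) face embeddings can cause false matches.
import numpy as np

rng = np.random.default_rng(0)

def normalize(v: np.ndarray) -> np.ndarray:
    """Project an embedding onto the unit sphere so dot product = cosine similarity."""
    return v / np.linalg.norm(v)

# A toy "watchlist": 5 identities, each a 128-dim unit embedding (assumed values).
watchlist = {f"person_{i}": normalize(rng.standard_normal(128)) for i in range(5)}

MATCH_THRESHOLD = 0.35  # assumed; real systems tune this empirically

def best_match(probe: np.ndarray):
    """Return the watchlist entry with the highest cosine similarity,
    or None if no score clears the threshold."""
    probe = normalize(probe)
    name, score = max(
        ((n, float(probe @ emb)) for n, emb in watchlist.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= MATCH_THRESHOLD else (None, score)

# A clean probe of an unlisted person: similarity to every entry stays low,
# so the system correctly reports no match.
stranger = rng.standard_normal(128)
print(best_match(stranger))

# Heavy occlusion is modeled here as mixing the stranger's embedding with
# a component of a listed identity; the corrupted score can now clear the
# threshold and wrongly flag that listed person.
occluded = 0.6 * normalize(stranger) + 0.6 * watchlist["person_3"]
print(best_match(occluded))
```

In this toy model the clean probe is rejected while the occluded probe is wrongly matched, which is the same pattern described above: the more degraded the input, the more the outcome depends on an arbitrary threshold rather than on reliable identification, making human review essential.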
In September, Human Rights Watch raised concerns about the Israeli occupation's use of surveillance technologies, artificial intelligence, and other digital tools that maximize civilian death and destruction during the war on Gaza.
These tools, which "Israel" claims are intended to assess civilian presence, guide attack timing, and differentiate between civilians and combatants, are increasing the risk of civilian harm.
'Targets never end': 'Israel' uses Lavender for Gaza bombing
In April last year, intelligence sources familiar with the ongoing aggression reported that the Israeli military's airstrikes in Gaza relied on a previously undisclosed AI-powered database, dubbed Lavender, which reportedly identified 37,000 alleged targets linked to the Palestinian Resistance.
These sources also revealed that Israeli military officials authorized the killing of a significant number of Palestinian civilians, particularly in the initial weeks and months of the genocide.
Their testimonies offer a shocking account of Israeli intelligence personnel employing machine-learning systems to pinpoint "targets".
'The machine did it coldly'
The Israeli use of sophisticated AI technology in its genocidal campaign in Gaza marks new territory in modern warfare, inviting legal and ethical scrutiny and reshaping the dynamics between military personnel and automated systems.
"The machine did it coldly. And that made it easier,” said one intelligence officer who used Lavender.
“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time,” said another soldier.
The testimony from the six intelligence officers, all of whom were involved in using AI systems to identify "targets" allegedly affiliated with Hamas and the Palestinian Islamic Jihad (PIJ) during the war, was provided to Israeli journalist Yuval Abraham and published in a report by +972 Magazine and Local Call.