Google mum about Israeli use of photo software in Gaza 'hit list'
While Google prohibits the use of its technology to cause serious and immediate harm, "Israel" is employing its facial recognition capabilities to build a wide-reaching surveillance system targeting Palestinians.
In a piece published by The Intercept, Sam Biddle, a technology reporter, contended that the Israeli military has established a widespread facial recognition surveillance system across the Gaza Strip.
The system scans Palestinian civilians as they navigate devastated areas, trying to escape the continuous Israeli bombardment and acquire necessities for their families.
The Israeli military reportedly utilizes two different facial recognition tools, according to The New York Times. One is developed by the Israeli contractor Corsight, while the other is built into Google Photos, the widely used consumer photo-organization service. According to an anonymous Israeli official cited by the Times, Google Photos outperformed other facial recognition technologies, aiding Israeli authorities in compiling a "hit list" of alleged Resistance fighters involved in Operation Al-Aqsa Flood.
This campaign of mass surveillance of Palestinians by "Israel", carried out in an attempt to identify Resistance fighters, has contributed to the detention of thousands of Gaza residents since October 7, according to the article.
Many of those detained or imprisoned, often without substantial evidence, have reported being subjected to brutal interrogation or torture. The New York Times highlighted the case of Palestinian poet Mosab Abu Toha, who was arrested and beaten by Israeli occupation forces, with his arrest initiated through facial recognition technology. Despite being released without any charges, Abu Toha recounted to the newspaper that Israeli soldiers informed him that his arrest, facilitated by facial recognition, was a "mistake".
In Biddle's view, even setting aside the technology's well-documented lower accuracy on nonwhite faces, the deployment of facial recognition systems for military surveillance raises serious concerns in itself.
Utilizing Google Photos' machine learning-powered analysis to subject civilians to military surveillance contradicts the company's explicitly stated guidelines. Under the section titled "Dangerous and Illegal Activities," Google states that Google Photos may not be used to promote activities, products, services, or information that cause serious and immediate harm to people.
Biddle writes, "It’s unclear how such prohibitions — or the company’s long-standing public commitments to human rights — are being applied to Israel’s military."
“It depends how Google interprets ‘serious and immediate harm’ and ‘illegal activity,’ but facial recognition surveillance of this type undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, assembly rights, and more,” said Anna Bacciarelli, the associate tech director at Human Rights Watch.
“Given the context in which this technology is being used by Israeli forces, amid widespread, ongoing, and systematic denial of the human rights of people in Gaza, I would hope that Google would take appropriate action,” she added.
Tech ethos clash: Doing good or doing Google?
Apart from prohibiting the use of Google Photos to inflict harm on individuals in its terms of service, the company has long asserted its commitment to various international human rights norms.
“Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” wrote Alexandria Walden, the company’s global head of human rights, in a 2022 blog post.
“That’s why our products, business operations, and decision-making around emerging technologies are all informed by our Human Rights Program and deep commitment to increase access to information and create new opportunities for people around the world.”
Google claims to be deeply committed to upholding human rights standards, including those outlined in the Universal Declaration of Human Rights, which explicitly prohibits torture, and the UN Guiding Principles on Business and Human Rights, which note that conflict-affected areas carry a heightened risk of gross human rights abuses.
The gist of Biddle's argument is that the Israeli military's use of a freely accessible Google product like Photos casts doubt on Google's adherence to its corporate human rights commitments and on its willingness to act on them in any meaningful way.
According to Google, it endorses and adheres to the UN Guiding Principles on Business and Human Rights, which urge companies to prevent or mitigate adverse human rights impacts associated with their operations, products, or services, including those stemming from their business relationships, even if they are not directly responsible for causing those impacts.
“Google and Corsight both have a responsibility to ensure that their products and services do not cause or contribute to human rights abuses,” said Bacciarelli. “I’d expect Google to take immediate action to end the use of Google Photos in this system, based on this news.”
Employees of Google participating in the No Tech for Apartheid campaign, a protest movement led by workers against Project Nimbus, urged their employer to prevent the Israeli military from utilizing facial recognition technology from Google Photos to prosecute the war on Gaza.
"During the Israeli tech industry conference, MindTheTech, in #NewYork, a Google Cloud software engineer interrupted #Google 'Israel' CEO Barak Regev, condemning 'Israel's' aggression on #Gaza. Project Nimbus comprises four phases, involving cloud infrastructure acquisition and…"

— Al Mayadeen English (@MayadeenEnglish) March 5, 2024
“That the Israeli military is even weaponizing consumer technology like Google Photos, using the included facial recognition to identify Palestinians as part of their surveillance apparatus, indicates that the Israeli military will use any technology made available to them — unless Google takes steps to ensure their products don’t contribute to ethnic cleansing, occupation, and genocide,” the group said in a statement, as quoted by The Intercept.
“As Google workers, we demand that the company drop Project Nimbus immediately, and cease all activity that supports the Israeli government and military’s genocidal agenda to decimate Gaza.”
Not an isolated incident
Google's professed commitment to human rights principles has clashed with its business dealings before, particularly where "Israel" is concerned, according to Biddle. Since 2021, Google has provided the Israeli military with advanced cloud computing and machine learning tools under its contentious "Project Nimbus" contract.
While Google Photos is a freely accessible consumer product, Project Nimbus is a customized software initiative designed specifically for the Israeli government's requirements. Nevertheless, both Project Nimbus and Google Photos' facial recognition capabilities are developed using the company's extensive machine-learning resources.
The sale of these advanced tools to a government frequently accused of human rights violations and war crimes contradicts Google's AI Principles. These principles prohibit the use of AI in ways likely to cause harm, including applications that conflict with widely accepted principles of international law and human rights.
Google has previously implied that its principles are narrower in scope than they appear, applying only to "custom AI work" and not to the general use of its products by third parties. A spokesperson for the company stated to Defense One in 2022, "It means that our technology can be used fairly broadly by the military."
According to Biddle, the extent to which Google translates its executive-stated commitments into real-world consequences remains unclear.
Ariel Koren, a former Google employee who was forced out of her job in 2022 after protesting Project Nimbus, sees Google's silence on the Photos issue as part of a broader pattern of avoiding accountability for how its technology is utilized.
“It is an understatement to say that aiding and abetting a genocide constitutes a violation of Google’s AI principles and terms of service,” Koren, now an organizer with No Tech for Apartheid, said as quoted by The Intercept.
“Even in the absence of public comment, Google’s actions have made it clear that the company’s public AI ethics principles hold no bearing or weight in Google Cloud’s business decisions, and that even complicity in genocide is not a barrier to the company’s ruthless pursuit of profit at any cost.”
Read more: 'Targets never end': 'Israel' uses Lavender AI system for Gaza bombing