Facebook allows ads inciting violence, 'holocaust for Palestinians'
Following the discovery of the ad, advocates for digital rights conducted an experiment to assess the boundaries of Facebook's machine-learning moderation.
A set of advertisements designed to dehumanize Palestinians and incite violence against them, created to test Facebook's content moderation standards, was approved in full by the social network, as revealed by materials shared with The Intercept.
The testing campaign was prompted by a Facebook ad, found by Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, advocating the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement.
The ad's text, automatically translated by Facebook, stated: "It's time to assassinate Paul Larudi [sic], the 'anti-Semitic' and 'human rights' terrorist from the United States." The ad was removed only after Nashif reported it to Facebook.
The submitted test ads, written in both Hebrew and Arabic, openly violated the policies of Facebook and its parent company, Meta. They included explicit calls for violence, such as advocating a "holocaust for the Palestinians" and the eradication of "Gazan women and children and the elderly." Other ads dehumanized Palestinians, referring to children from Gaza as "future terrorists" and using slurs such as "Arab pigs."
It is worth noting that Israeli Security Minister Yoav Gallant ordered on October 9, in the aftermath of Operation Al-Aqsa Flood, that electricity, water, and food be cut off from Gaza, calling the Palestinians in Gaza "human animals" whom he would have to deal with "accordingly."
Last year Meta claimed it had launched a machine learning tool to detect violent incitement in Hebrew. If it's unable to flag "Death to the Arabs" or "we will return to our land and kill all the Palestinians," one must question what this hostile speech classifier actually does. pic.twitter.com/E3S0ZPVD2n
— Sam Biddle (@samfbiddle) November 21, 2023
"Israeli" TV presenter Shai Golden threatens Arabs and the world.
— Ania Lewandovska🔻 (@Anna_AnninaEl) November 16, 2023
Zionism=mental illness pt. 2 pic.twitter.com/zA0tLMCQ5V
“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” said Nashif, whose organization submitted the test ads, as quoted by The Intercept.
“Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”
Read next: Instagram adds 'terrorist' to Palestinian profiles, then apologizes
Calling for assassination of pro-Palestine activist: Approved
7amleh's initiative to test Facebook's machine-learning censorship system began when Nashif found an ad on his Facebook feed explicitly advocating the assassination of Paul Larudee, the report noted.
Ad Kan, a right-wing Israeli organization established by former Israeli security and intelligence officers to oppose "anti-Israeli organizations" allegedly funded by antisemitic sources, was responsible for placing the ad, the report stressed.
“Our ad review system is designed to review all ads before they go live,” as per a Facebook ad policy overview.
As Meta faces increased scrutiny and criticism over its human-based moderation, which historically relied heavily on outsourced contractor labor, the company has increasingly turned to automated text-scanning software to enforce its speech rules and censorship policies.
These technologies help Meta address labor issues associated with human moderators but also introduce challenges related to the transparency of moderation decisions, as they rely on secret algorithms.
Arabic posts deleted, Hebrew hostile speech undetected
A recent external audit commissioned by Meta revealed that the company routinely used algorithmic censorship to delete Arabic posts but lacked a comparable algorithm to detect "Hebrew hostile speech," including racist rhetoric and violent incitement. Following the audit, Meta claimed to have "launched a Hebrew 'hostile speech' classifier to help us proactively detect more violating Hebrew content," such as the ad advocating Larudee's assassination.
Amid the Israeli genocide against Palestinians in Gaza, Nashif expressed concern over the explicit call for the murder of Larudee in the ad, fearing that similar paid posts could contribute to violence against Palestinians.
Pro-#Israel ads, including 👀 one calling for the assassination of American activist Paul Larudee, were also initially approved by Meta but later deleted after user reports. https://t.co/yPnZZ64IJy
— Anisia Uzeyman (@dreamstatesmeta) November 22, 2023
Facebook approves ads inciting violence: A test of content moderation fails
The danger of large-scale incitement on social media spilling over into real-world violence is not hypothetical: United Nations investigators found in 2018 that Facebook posts played a role in Myanmar's Rohingya genocide. Last year, another group ran a similar experiment with test ads inciting violence against the Rohingya, and in that case, too, all of the ads were approved.
Although the Larudee post was removed after being reported, its initial approval raised questions about how it had passed review. Despite Facebook's assurances of safeguards, Nashif and 7amleh, which collaborates with Meta on censorship and free expression issues, found the situation perplexing.
To determine whether the approval was an anomaly, 7amleh deliberately created and submitted 19 ads in both Hebrew and Arabic that intentionally violated company rules. The ads, containing explicit examples of violent and racist incitement, were essentially a test of whether Meta's automated screening process had improved at detecting such content.
“We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalized communities,” Nashif said as quoted by The Intercept, “and that their ads manager system was particularly vulnerable.”
Meta appears to have failed the test conducted by 7amleh
The company's Community Standards rulebook, which ads are required to adhere to for approval, prohibits not only text endorsing violence but also any dehumanizing statements based on race, ethnicity, religion, or nationality. Surprisingly, confirmation emails obtained by The Intercept reveal that Facebook approved all 19 ads submitted by 7amleh.
Although 7amleh stated that it had no intention of actually running these ads and planned to withdraw them before they were scheduled to appear, the approval highlights what 7amleh perceives as a fundamental flaw in the social platform's handling of non-English speech — languages used by a significant majority of its over 4 billion users. Meta later retroactively rejected 7amleh's Hebrew ads after The Intercept brought them to the company's attention, but the Arabic versions remain approved within Facebook's ad system.
Facebook spokesperson Erin McPike claimed that the approval of the ads was accidental.
After noticing Facebook approved an ad calling for the assassination of a pro-Palestinian political activist, @7amleh took out 19 test ads that explicitly advocated for ethnic violence against Palestinians /#Gaza.
Facebook approved every single one! pic.twitter.com/Cy33LDHUuM
— Nour Naim| نور نعيم (@NourNaim88) November 23, 2023
“Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said as quoted by The Intercept. “That’s why ads can be reviewed multiple times, including once they go live.”
An online campaign for a New Nakba
Shortly after 7amleh's own experimental ads received approval, the organization discovered an Arabic ad from a group named "Migrate Now". This ad urged "Arabs in Judea and Samaria" — the term used by Israelis, particularly settlers, for the occupied Palestinian West Bank — to "relocate to Jordan."
“Leave before it is too late.” An Israeli Facebook page called “Migrate Now” threatens residents of the West Bank and demands that they immigrate to #Jordan https://t.co/48HpPJJoZa
— FOUAD S AL-SHARAABY فؤاد صالح عبدالله الشرعبي (@fouadsharaaby) November 18, 2023
Read next: A 'new Nakba' is ongoing in West Bank alongside a genocide in Gaza
The report also noted that Facebook relies primarily on automated, software-based screening to approve or deny ads, though it is unclear whether the same "hostile speech" algorithms used to identify violent or racist posts are employed in the ad approval process.
Despite Facebook's assurance, in response to a previous audit, that its new Hebrew-language classifier would significantly enhance its ability to handle spikes in violating content during Israeli wars on Gaza, 7amleh's experiment suggests the classifier is either ineffective or not used to screen advertisements.
When asked if the approval of 7amleh's ads indicated an issue with the hostile speech classifier, Facebook spokesperson McPike did not provide a response, as reported by The Intercept.
According to Nashif, the approval of these ads underscores a broader problem: Meta claims it can use machine learning effectively to prevent explicit incitement to violence, but the evidence suggests otherwise.
“We know that Meta’s Hebrew classifiers are not operating effectively, and we have not seen the company respond to almost any of our concerns,” Nashif stressed in his statement as cited by The Intercept. “Due to this lack of action, we feel that Meta may hold at least partial responsibility for some of the harm and violence Palestinians are suffering on the ground.”
The approval of the Arabic versions of the ads is particularly unexpected in light of a recent Wall Street Journal report that Meta had lowered the confidence threshold its algorithmic censorship system requires before removing Arabic posts. The threshold was cut from 80 percent confidence that a post violated the rules to just 25 percent, meaning Meta was far less certain that the Arabic posts it suppressed or deleted actually contained policy violations.
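To put the reported change in perspective, here is a minimal illustrative sketch, not Meta's actual system, of how a confidence threshold governs automated removals. The post names and scores below are hypothetical; the only figures taken from the reporting are the 80 percent and 25 percent thresholds.

```python
# Illustrative sketch only -- not Meta's actual moderation system.
# It shows how lowering a classifier's confidence threshold (per the
# WSJ report, from 0.80 to 0.25 for Arabic content) sweeps in posts
# the model is far less certain about. All scores are hypothetical.

posts = {
    "post_a": 0.92,  # model highly confident this violates policy
    "post_b": 0.55,  # borderline
    "post_c": 0.30,  # model mostly unsure
    "post_d": 0.10,  # almost certainly benign
}

def flag_for_removal(scores: dict[str, float], threshold: float) -> list[str]:
    """Return posts whose violation score meets or exceeds the threshold."""
    return [post for post, score in scores.items() if score >= threshold]

print(flag_for_removal(posts, 0.80))  # ['post_a']
print(flag_for_removal(posts, 0.25))  # ['post_a', 'post_b', 'post_c']
```

At the lower threshold, posts the model is mostly unsure about are removed anyway, which is what it means for Meta to be "less certain" that the suppressed content actually violated its rules.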
“There have been sustained actions resulting in the silencing of Palestinian voices,” Nashif concluded as quoted by The Intercept.
An influencer posted a video revealing that "Israel" offered him $5,000 to go live and pledge his support to the Israeli occupation.
"You cannot buy my support of a genocide," he insisted, calling out, "Free #Palestine".
Source: @/yourfavoriteguy on Tiktok #GazaGenocide… pic.twitter.com/IqM9Enw6nP
— Al Mayadeen English (@MayadeenEnglish) October 26, 2023
It is worth noting that "Israel" conducted a widespread misinformation and incitement campaign through various media outlets and social platforms, amid a hysteric military aggression on Gaza. The primary targets of this campaign are Palestinians, and it aims to dehumanize and demonize them, potentially paving the way to "justify" genocide.
The Israeli occupation has reportedly intensified its efforts to manipulate public perception and shape the narrative surrounding the ongoing Israeli brutality. Traditional media, as well as social media platforms like YouTube and X, have become battlegrounds for this paid disinformation campaign.
Read next: ‘Israel’ warps reality, funds twisted ads to 'justify' Gaza genocide