Meta restricts, bans Palestinian posts on its platforms: Report
Not only has the social media giant proven to harbor double standards, but a new report now shows that, amid the wars on Palestine, it discriminates against posts merely for being written in Arabic.
Palestinian users were deprived of their freedom of expression during last year's two-week war on Gaza waged by "Israel": their content was blocked on Facebook, and Arabic-speaking users were penalized more heavily than Hebrew speakers, an independent audit of Meta's content moderation revealed.
The audit, carried out by the consultancy Business for Social Responsibility (BSR), examines how Meta controls and manages content and enforces its rules on its social media platforms. On the matter of censorship, it represents one of the first insider accounts of a social platform's failures during the conflict, and it lends weight to Palestinian activists' complaints that online censorship targeted them specifically for speaking Arabic, according to reports by The Washington Post and other media outlets at the time.
7amleh, the Arab Center for the Advancement of Social Media and a support group for Palestinian digital rights, tweeted: "The BSR report confirms Meta’s censorship has violated the Palestinian right to freedom of expression among other human rights through its greater over-enforcement of Arabic content compared to Hebrew, which was largely under-moderated".
Israeli occupation forces attacked the residents of the Sheikh Jarrah neighborhood in occupied Al-Quds in May of last year. Soon afterward, the IOF stormed the Al-Aqsa mosque and began aggressively bombing Gaza, leaving more than 200 Palestinians killed. A few months later, the Israeli Supreme Court in occupied Al-Quds ruled to postpone the expulsion of Sheikh Jarrah's residents from their homes over concerns about escalating tensions with the Palestinian resistance. The court suggested the residents remain as tenants while the houses would be owned by settlers, a proposal immediately rejected by Sami Arsheed, the residents' attorney.
When Mona Al-Kurd, a Palestinian activist, tried to live-stream the ongoing events in the neighborhood to expose Israeli crimes against Palestinian families, her stream was abruptly cut off. Al-Kurd later explained in an Instagram story that her live-streaming feature had been blocked, exposing Instagram's complicity in the censorship of Palestinian content.
Palestinians posted photos of their destroyed homes and of children’s coffins, eventually sparking global outrage and calls to end the conflict. That outrage, however, was short-lived, as content moderation campaigns began rolling in.
Meta's Instagram banned content containing the hashtag #AlAqsa. The company initially blamed automated software deployment errors, but after The Post highlighted the issue, a Meta spokeswoman added that a “human error” was behind the glitch. According to the BSR report, an employee of a third-party contractor that handles content moderation mistakenly listed the #AlAqsa hashtag under terms linked to terrorism, wrongly drawing “from an updated list of terms from the US Treasury Department containing the Al Aqsa Brigade, resulting in #AlAqsa being hidden from search results."
The report confirmed years of accounts from Palestinian journalists and activists that Facebook and Instagram censor their posts more frequently than those of Hebrew speakers. BSR found not only that Facebook was banning or adding strikes to more posts from Palestinians than from Israelis, but also that "rule-breaking" content in Arabic was routinely flagged at higher rates than content in Hebrew. The report attributes this to Meta’s AI-based hate speech systems, which rely on lists of terms linked to foreign terrorist organizations, making it more likely that an Arabic-language post would be flagged as associated with a terrorist group.
The report also found that Meta had deployed detection software to identify hateful and violent speech in Arabic, but had not done so for Hebrew.
Human Rights Watch had already accused Meta, then Facebook, last year of wrongfully removing and suppressing content by Palestinians and their supporters, including content documenting human rights abuses committed by "Israel" against Palestinians during the Seif Al-Quds battle.
The BSR report recommended policy changes, including revising how Meta labels "dangerous" organizations and individuals, giving users more transparency when posts are banned, redistributing Hebrew and Arabic content moderation resources based on “market composition,” and routing potential content violations in Arabic to moderators who speak the same dialect as the one used in the post.
In response to the list of changes, Meta’s human rights director Miranda Sissons said that 10 of the recommendations would be fully implemented; as of now, four changes are partly implemented, the "feasibility" of another six is being reviewed, and “no further action” will be taken on one unidentified recommendation.
Sissons stated: “There are no quick, overnight fixes to many of these recommendations, as BSR makes clear,” adding: “While we have made significant changes as a result of this exercise already, this process will take time — including time to understand how some of these recommendations can best be addressed, and whether they are technically feasible.”
7amleh released a statement arguing that the report wrongly characterized Meta's bias as unintentional: “We believe that the continued censorship for years on [Palestinian] voices, despite our reports and arguments of such bias, confirms that this is deliberate censorship unless Meta commits to ending it".