Facebook, Twitter ignored 90% of flagged Islamophobic posts
A majority of the posts contained hashtags that were not only offensive but also violent, including death threats.
According to a recent analysis, Facebook, Twitter, Instagram, YouTube, and TikTok have failed to act on over 90% of anti-Muslim and Islamophobic content on their platforms.
The research, published Thursday by the Centre for Countering Digital Hate (CCDH), indicates that 530 posts, viewed a combined 25 million times, featured dehumanizing content about Muslims in the form of racial caricatures, conspiracy theories, and false claims.
Examples included Instagram posts depicting Muslims as "pigs" and calling for their expulsion from Europe; an image of an atomic blast comparing Islam to a cancer that should be "treated with radiation"; and tweets claiming Muslim migration was part of a plot to change other countries' politics.
The CCDH used inflammatory hashtags, including #deathtoislam, #islamiscancer, and #raghead, to locate posts to report.
The CCDH reported that of 125 posts flagged to Facebook, only seven were acted on; of 227 Instagram posts, only 32; of 50 TikTok videos, only 18; of 105 Twitter posts, only three; and of 23 videos flagged to YouTube, none were acted on.
There were also numerous groups on Facebook dedicated to Islamophobia, with titles like "ISLAM means Terrorism," "Stop Islamization of America," and "Boycott Halal Certification in Australia." The groups have a combined 361,922 members, mostly from the United Kingdom, the United States, and Australia. Despite being reported to Facebook, all of them were still active.
Researchers also discovered 20 posts showing the Christchurch shooter, only six of which were acted on, despite Facebook, Instagram, and Twitter publicly pledging to remove terrorist and extremist content.
The gunman also released a 74-page diatribe in which he railed against Muslims and immigration, which rapidly went viral online. Facebook stated at the time that it removed 1.5 million videos depicting the New Zealand mosque attacks within the first 24 hours of the killings.
The video, which the gunman streamed live on Facebook, had been viewed 4,000 times before it was taken down, and social media services then struggled to remove reuploaded copies.
To evade YouTube's detection and removal systems, several uploaders made minor changes to the video, such as adding watermarks or logos to the footage or altering the size of the clips.
Facebook's community standards, which also apply to Instagram, forbid "a direct attack against people on the basis of... race [or] ethnicity." Twitter states that users "may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity [and] national origin." YouTube states that "hate speech is not allowed on YouTube," and TikTok "do[es] not permit content that contains hate speech or involves hateful behavior, and we remove it from our platform."
Kemi Badenoch, the minister for communities and equalities, said in a statement that the report is welcome and "shines an important light on the unacceptable abuse many Muslims receive online every day. Social media companies have to do more to take meaningful action against all forms of hatred and abuse their users experience online."
In 2020, TikTok stated, “We’ve always been open about the fact that we won’t catch every instance of inappropriate content or account activity, and we recognize that we have more to do to meet the standards we have set for ourselves today. This is why we continue to invest at scale in our Trust and Safety operations, which include both technologies and a team of thousands of people around the world."
Also in 2020, researchers discovered that Facebook posts and sites promoting fascism are "actively recommended" by the platform's algorithm. In response, Facebook said it was revising its hate speech standards.