Social media's failure to control Islamophobic content: What is the impact of inaction?
Social media platforms continue to turn a blind eye to rising Islamophobia around the world, fuelling growing discrimination, injustice, and even massacres of Muslims.
Violent, anti-Muslim propaganda continues to spread on social media platforms, resulting in threats, aggression, and even genocide against Muslims worldwide. Muslim Advocates has spent years alerting these corporations to anti-Muslim groups and content, but those efforts achieve little as long as the companies fail to enforce their own content standards. Social media networks have grown far too large, too quickly, and with far too little accountability.
According to a 2022 report by the Center for Countering Digital Hate (CCDH), social media networks have failed to act on anti-Muslim content. All of the major platforms, including Facebook, Instagram, TikTok, Twitter, and YouTube, failed to respond to 89% of reports of anti-Muslim hatred and Islamophobic content.
Previously, in 2019, Facebook, Twitter, and Google pledged to support the Christchurch Call to remove terrorist and violent extremist content from the internet. The social media conglomerates also pledged to remain steadfast in their "commitment to ensuring they do everything they can to combat the hatred and extremism that lead to terrorist violence". Unfortunately, their press releases have once again proven to be nothing more than hollow promises.
Most importantly, the CCDH research identified 530 posts containing disturbing, bigoted, and degrading material directed at Muslims, including racist jokes, conspiracy theories, and false claims. These posts received 25 million views. Although much of the harmful content was easy to identify, it remained online and in circulation. For example, users on Instagram, TikTok, and Twitter can still use hashtags such as #deathtoislam, #islamiscancer, and #raghead. Content distributed through these hashtags was seen by at least 1.3 million people.
Nearly 5,000 people are members of the Facebook group "Fight against Liberalism, Socialism, and Islam". Because it is a closed group, anything posted there is visible only to members. The group claims that moderate Islam does not exist and invites Facebook users to join in order to learn about the crimes it alleges Islam is committing.
Similarly, Taitz is one of dozens of Facebook groups in the United States, Canada, Australia, and the United Kingdom dedicated to spreading anti-Muslim hate speech. None of these groups has been removed despite being reported to Facebook. These groups, however, are just one facet of a much broader failure by social media platforms, on Facebook and across the board, to address Islamophobia.
On the other hand, YouTube recently deactivated the official account of Dr. Israr Ahmed, a well-known Islamic scholar. The channel had about 2.9 million subscribers and had received YouTube's Silver and Gold buttons.
Notably, when social media conglomerates fail to respond to hateful and violent content, they know that offline violence is a real possibility. Anti-Muslim intolerance, moreover, aims to demean and isolate communities that have already faced violent threats, attacks, prejudice, and antagonism.
Allowing this content to be promoted and spread on platforms without appropriate intervention or consequences puts these communities even more at risk: it deepens social divisions, normalizes abusive behaviour, and increases offline attacks and abuse. Platforms profit from this hatred, readily monetizing the content, the interactions, and the attention it attracts. Hatred is profitable for them.
The conspiracies and racist content foster and sustain anti-Muslim and anti-faith sentiment. They further suppress these communities and make it difficult for Muslims to exercise their right to freedom of religion and expression online.
Significantly, big tech's reluctance to respond to anti-Muslim bigotry fosters an environment that limits freedom of expression and pushes marginalized people off platforms, while allowing white supremacist, extremist, and bigoted content to thrive and deliver record profits to shareholders.
Legislators, regulators, and civil society no longer trust social media corporations when they say they will combat extremism and hate speech. Systemic, unaddressed failures such as those documented in this research must be remedied, and technology corporations must be held accountable.
Even though Meta has been sued by victims of the Rohingya massacre for failing to respond to anti-Muslim attacks on its platforms, Facebook still failed to act on 94% of the posts in this sample. The current situation does little to push technology companies to take their obligations to Muslim communities and other groups seriously.
There is a need for transparency around algorithms (which determine what content is amplified and what is not), around the enforcement of community standards (which rules are applied and when), and around the economics of advertising, which makes up the bulk of these platforms' revenue. In addition, social media platforms must be held accountable for the impact of the content they monetize at the individual, community, and national level.
There is also a need to hold social media executives accountable for their decisions as administrators of platforms that wield enormous power over public discourse and over equity of experience for underrepresented communities, in order to address Islamophobic content (CCDH, 2022). What we do online influences every aspect of our lives and significantly shapes our societies, democracies, and planet. To tackle these major challenges, we must therefore address what is happening online.