Facebook far-right groups pushing radicalization in UK: The Guardian
A network of far-right Facebook groups is spreading racist disinformation and fueling online radicalisation across the UK, a Guardian investigation reveals.
The Facebook app icon is seen on a smartphone, Tuesday, Feb. 28, 2023, in Marple Township, Pennsylvania. (AP)
A network of far-right Facebook groups is exposing hundreds of thousands of Britons to racist and extremist disinformation and has become an engine of radicalisation, a Guardian investigation published on Sunday revealed.
Experts who reviewed The Guardian’s months-long data project said such groups help to create an online environment that can radicalise people into taking extreme actions, such as last year’s summer riots, which targeted hotels housing migrants in the United Kingdom.
The network was exposed just weeks after 150,000 protesters from across the country descended on London for a far-right demonstration whose scale dwarfed police estimates and whose size and toxicity shocked politicians. The Guardian’s data projects team identified the groups from the profiles of those who took part in the riots that followed the killing of three girls in Southport last summer.
An ecosystem of misinformation and far-right cliches
The Guardian analyzed more than 51,000 text posts from three of the largest public groups in the network and found hundreds of posts that experts said were peppered with misinformation and conspiracy theories, far-right tropes, racist slurs, and evidence of white nativism.
A key element of the network’s success is the groups’ admins: a team of mostly middle-aged Facebook users responsible for inviting members, moderating often far-right language, and spreading rumour and misinformation, which they repost to other groups in the network. The research showed these individuals are scattered across England and Wales, with most located in the southeast of England and the Midlands.
They come from vastly different social backgrounds and home lives, with residences ranging from a large seaside townhouse on the south coast to a neat new-build on the outskirts of Loughborough and a small red-brick council house in urban Birmingham.
While most of the admins contacted declined to speak, one moderator in a Leicestershire village, who oversees six groups with nearly 400,000 members, including “Nigel Farage for PM,” stated from her doorstep that far-right users are "deleted and blocked".
Contrary to her claims, the investigation found swathes of extreme far-right posts, including disinformation and well-known debunked conspiracy narratives, some of which were spread word for word, or with slight variations in wording, across multiple connected groups.
Immigrants face the most hateful posts
The rhetoric against immigrants is intensely bitter and angry, according to The Guardian, relying on demonizing and dehumanizing slurs such as "criminal," "parasites," "primitive," and "lice." The characterization of Muslims includes a range of hostile descriptions, labeling them as "barbaric and intolerant," "an army," "archaic," "medieval," and "not compatible with the UK way of life."
“We need a humongous nit comb. To scrape the length and breast [sic] of the uk, to get rid of all the blood sucking lice out of our country once and for all!!” one post cited by The Guardian reads.
“Our own government has put us all at risk by allowing these primitive minded people onto our land," another post says.
“If you think immigration is bad. Then just wait until all our towns and cities are full! And they start leaching out into our quaint beautiful quiet little villages and bringing all their crimes and third world culture with them," one of the users posted.
Questionable moderation policies
Months after Meta announced sweeping content moderation changes, the content shared in the groups raises fresh concerns about these very policies.
In the past, far-right ideas online emerged from platforms more readily associated with that part of the political spectrum, such as 4chan, Parler, and Telegram, which typically attracted younger audiences.
According to Dr Julia Ebner, a researcher at the Institute for Strategic Dialogue and an expert on online radicalisation, such spaces act as a breeding ground for extremist ideologies and "definitely play a role in the radicalisation of individuals".
“What is new is that the online spaces amplify a lot of these dynamics,” she emphasized to The Guardian.
“The algorithmic amplification, the speed at which people can end up in a radicalisation engine. Then there are the new technologies from fabricated videos to deepfakes to bot automation," she noted, adding that the digital age has created a dangerous dynamic where people inherently trust content from individual influencers over established institutions.
After reviewing the three groups used in The Guardian's analysis, Meta, the parent company of Facebook, confirmed through a spokesperson that the content did not violate its hateful conduct policy.