Twitter's Algorithm Favors Right-Wing Politics, Its Own Research Finds
Twitter's research suggests that tweets from right-leaning political groups and news outlets are amplified more than those from the left.
The finding emerged as the social media giant examined how its algorithm recommends political content to users.
In a report by the BBC, the company is quoted as admitting it does not know why, calling it a "more difficult question to answer". Twitter has faced accusations of anti-conservative bias in the past.
The study by the firm looked at tweets from political parties and users sharing content from news sites in seven countries: Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States. Millions of tweets were analyzed between April 1 and August 15, 2020.
Research method
Researchers compared how much tweets were amplified in the algorithmically ranked home timeline versus the reverse-chronological timeline, both of which users can choose between.
They found that tweets from mainstream right-wing political parties and news outlets received higher degrees of "algorithmic amplification" than their left-wing counterparts.
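The article does not spell out how the study quantified amplification. As a rough illustration only, the Python sketch below computes a hypothetical amplification ratio per political group by comparing impressions received in an algorithmically ranked timeline against impressions in a reverse-chronological one. The data, field names, and formula are assumptions for illustration, not Twitter's published methodology.

```python
from collections import defaultdict

def amplification_ratio(tweets):
    """For each group, compare total impressions in the algorithmically
    ranked timeline against the reverse-chronological timeline.
    A ratio above 1.0 suggests the algorithm amplified that group's
    tweets relative to the chronological baseline.

    `tweets` is a list of dicts with hypothetical fields:
    'group', 'algo_impressions', 'chrono_impressions'.
    """
    algo = defaultdict(int)
    chrono = defaultdict(int)
    for t in tweets:
        algo[t["group"]] += t["algo_impressions"]
        chrono[t["group"]] += t["chrono_impressions"]
    return {
        group: algo[group] / chrono[group]
        for group in algo
        if chrono[group] > 0
    }

# Toy example with made-up numbers, purely to show the shape of the comparison.
sample = [
    {"group": "party_right", "algo_impressions": 1200, "chrono_impressions": 800},
    {"group": "party_left", "algo_impressions": 900, "chrono_impressions": 850},
]
print(amplification_ratio(sample))
# e.g. {'party_right': 1.5, 'party_left': 1.06}
```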
Explaining the pattern is difficult
The company's next step, according to Rumman Chowdhury, director of Twitter's Meta (machine-learning, ethics, transparency, and accountability) team, is to identify the cause of the pattern.
She stated that tweets from elected officials on the political right are algorithmically amplified more than tweets from officials on the political left in six of the seven countries, and that right-leaning news outlets also receive more amplification than left-leaning ones.
"Establishing why these observed patterns occur is a significantly more difficult question to answer and something Meta will examine," the director commented to the BBC.
Flawed image-cropping algorithm favored white people over black people
The platform announced in April that it was researching whether its algorithms were causing "unintentional harm."
In May, the company admitted that its image-cropping algorithm had flaws that favored white people over black people and women over men.