Meta faces scrutiny after ending fact-checking on Facebook
Meta's decision to end fact-checking on Facebook has sparked concerns from the Meta Oversight Board and experts, warning of misinformation risks and human rights issues.
An illustration photograph taken on April 17 shows the Facebook app available to download from the App Store displayed on a phone screen, in a residential property in Guildford, south of London. (AFP)
Independent monitors warned on Tuesday that Meta’s recent decision to end fact-checking on Facebook could pose serious human rights risks.
Meta’s unexpected announcement in January to terminate its US fact-checking program drew harsh criticism from disinformation researchers, who warned that the policy shift could accelerate the spread of misinformation on Facebook.
The Meta Oversight Board, which reviews the company's content moderation decisions, said in a statement released on Tuesday that the announcement of policy and enforcement changes to the handling of hateful and potentially harmful posts had been made "hastily."
"People have the right to express controversial opinions," said board co-chair Helle Thorning-Schmidt. "People should also be safe from harm."
Mark Zuckerberg’s moderation policy under fire
Meta Chief Executive Mark Zuckerberg unveiled the new moderation approach as part of a major policy change, which analysts viewed as a move to appease then-US President-elect Donald Trump, who has criticized fact-checking as censorship.
As Meta implements these moderation changes globally, the Oversight Board stressed the importance of addressing potential risks to human rights that may arise from the reduction or absence of formal fact-checking systems.
Community notes vs. fact-checking: Effectiveness questioned
The board issued 17 recommendations, including an evaluation of Community Notes' effectiveness compared to third-party fact-checking, particularly in scenarios where false information spreads rapidly and threatens public safety.
Meta previously collaborated with third-party fact-checkers to identify and challenge misinformation on its platforms.
Under the new policy, Facebook and Instagram will use Community Notes, a crowd-sourced tool similar to one used by X (formerly Twitter), to provide context on posts, according to Zuckerberg.
However, experts have questioned the reliability of Community Notes. "You wouldn't rely on just anyone to stop your toilet from leaking, but Meta now seeks to rely on just anyone to stop misinformation from spreading on their platforms," said Michael Wagner from the University of Wisconsin-Madison.
"Asking people, pro bono, to police the false claims that get posted on Meta's multi-billion dollar social media platforms is an abdication of social responsibility," he indicated.
While Meta has pledged to honor the Oversight Board’s rulings on specific content decisions, it is not required to follow the board's broader policy recommendations.
As Meta’s fact-checking policy evolves, global observers remain concerned about the potential impact on truth, safety, and accountability in the digital space.