Meta issues apology for graphic video surge on Instagram Reels
Meta clarified that the surge of violent videos was unrelated to its recent changes in content moderation policies.
The Meta logo is seen at the Vivatech show in Paris, France, June 14, 2023 (AP Photo/Thibault Camus, File)
Meta issued an apology on Wednesday night after a technical glitch caused a flood of graphic and violent videos to appear on Instagram Reels feeds worldwide, including for underage users. The disturbing videos depicted extreme violence, including shootings, accidents, and fatalities, with some clips lacking the expected "sensitive content" warnings.
Numerous users expressed shock as their feeds were inundated with violent imagery, with some reporting a continuous stream of videos showing people being shot, severely injured by machinery, and thrown from amusement park rides. The videos originated from accounts such as "BlackPeopleBeingHurt," "ShockingTragedies," and "PeopleDyingHub," none of which the affected users followed.
The visibility of these videos skyrocketed due to Instagram's recommendation algorithm, which significantly boosted their viewership. Some clips amassed millions of views, far surpassing the engagement levels of other content from the same accounts.
Addressing the situation, an Instagram spokesperson stated, "We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake." However, the company did not disclose the full extent of the issue.
Meta clarified that the surge of violent videos was unrelated to its recent changes in content moderation policies. Those adjustments were intended to curb over-enforcement by concentrating automated removals on "high-severity" rule violations. The company also announced it would scale back the proactive use of AI for scanning and removing prohibited content, opting instead to wait for human reports before taking action.
Meta did not immediately confirm whether these policy changes applied to violent and graphic content, prompting questions about the platform's capacity to protect users from harmful material.