Meta has issued an apology after an “error” caused Instagram’s recommendation algorithm to flood users’ Reels feeds with disturbing and violent videos, including fatal shootings, gruesome injuries, and tragic accidents. The issue, which affected a wide range of users, including minors, sparked outrage as many were shown graphic material they had never sought out.
Some videos carried “sensitive content” warnings, but others appeared with no restrictions, exposing users to distressing material. A Wall Street Journal reporter noted that their feed was inundated with clips of people being shot, crushed by machinery, and violently thrown from amusement park rides.
These videos originated from pages with names such as “BlackPeopleBeingHurt” and “PeopleDyingHub,” accounts that the journalist did not follow. Alarmingly, metrics on these posts indicated that Instagram’s algorithm had significantly boosted their visibility, with some violent videos receiving millions more views than other posts from the same accounts.
Meta’s Response and Unanswered Questions
Following widespread complaints, Meta issued a statement late Wednesday: “We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.” Despite this apology, Meta did not specify how many users were affected or provide a clear explanation of what caused the error. Even after the company claimed to have resolved the issue, some users and journalists continued to see violent content on their feeds.
Adding to the controversy, these graphic clips appeared alongside paid advertisements for brands, including law firms, massage studios, and online retailers such as Temu, raising concerns about Meta’s content moderation and ad placement strategies.
Content Moderation Changes
The incident comes amid Meta’s broader adjustments to its content moderation policies. On January 7, the company announced a shift in its approach, stating that it would focus its automated moderation on “illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud, and scams,” while less serious policy breaches would be addressed only if reported by users.
This change follows criticism that Meta’s moderation systems were overly aggressive in censoring posts that merely might violate its rules. The company has since scaled back automated content suppression, though it has not confirmed whether its policies on violence and graphic content have been altered.
According to Meta’s transparency report, more than 10 million pieces of violent and graphic content were removed from Instagram between July and September last year, with nearly 99% of them flagged and deleted by the platform before being reported by users.
Political Implications and Staffing Cuts
Many observers view these moderation changes as an attempt by Meta CEO Mark Zuckerberg to mend relations with President Donald Trump, a longtime critic of the company’s policies. Earlier this month, Zuckerberg visited the White House to discuss Meta’s role in strengthening American technological leadership.
These policy shifts also coincide with significant staff reductions at Meta. In 2022 and 2023, the company laid off approximately 21,000 employees—nearly a quarter of its workforce—including personnel in its civic integrity, trust, and safety teams.
The sudden exposure to violent content left many users unsettled. Grant Robinson, a 25-year-old in the supply-chain industry, described the experience: “It’s hard to comprehend that this is what I’m being served. I watched 10 people die today.” Robinson noted that similar content had appeared in the feeds of all his male friends, ages 22 to 27, none of whom regularly engaged with violent material.