
Instagram’s Recommendation Algorithm Exposes Young Users to Explicit Videos, Report Finds

Young Instagram users are being exposed to sexually explicit and harmful videos more frequently than the platform acknowledges, according to a recent report. The Wall Street Journal and Northeastern University computer science professor Laura Edelson conducted two separate experiments to test the platform’s safeguards. Over a period of seven months, the researchers created new accounts registered as minors and scrolled through Instagram’s video Reels feed, intentionally skipping “normal” content and lingering on more “racy” adult videos. Within just 20 minutes of scrolling, the accounts were served promotions for “adult sex-content creators” and offers of nude photos. Instagram accounts registered to minors are supposed to have strict content-control limits applied automatically.

These tests conducted by the Wall Street Journal mirror those carried out by former Instagram safety staff in 2021, which revealed that the site’s universal recommendation system was undermining child safety measures. Internal documents from 2022 indicate that Meta, the parent company of Instagram, was aware that its algorithm was recommending more explicit content to young users than to adults, including pornography, gore, and hate speech. Meta spokesperson Andy Stone responded to these findings by dismissing the experiments as artificial and not reflective of how teens use Instagram. He stated that Meta has been actively working on reducing sensitive content seen by teens and has made significant progress in recent months.

Similar tests were conducted on other video-focused platforms, including TikTok and Snapchat, but their recommendation algorithms did not surface the same kind of explicit content. These new findings build on a November report that found Instagram’s Reels algorithm recommending sexually explicit content to adult users who followed only child accounts. In February, another Wall Street Journal investigation revealed that Meta employees had raised concerns about exploitative parents and adult account holders on Instagram who profited from sharing images of children online. That report documented the emergence of “Momfluencers” who engaged in sexual conversations with followers and sold subscriptions to view suggestive content of their children, such as dancing or modeling in bikinis.

Advocates and regulators have increasingly focused on social media’s role in online child exploitation, and Meta itself has faced multiple lawsuits alleging that it facilitated such exploitation. In response, the company formed a child safety task force in 2023 and introduced several new safety measures, including anti-harassment controls and what it describes as its “strictest” content-control settings. Meta’s competitor X, by contrast, recently revised its adult content policy to allow users to post adult nudity or sexual behavior as long as it is properly labeled and not prominently displayed. The platform does not specify consequences for accounts that post unlabeled adult content, though it promises to block users under 18 from viewing content that carries an appropriate content warning.

These findings highlight the urgent need for social media platforms to prioritize child safety and protect young users from exposure to explicit and harmful content. While efforts have been made to address these issues, it is clear that more needs to be done to ensure the well-being of young individuals online.
