Whistleblower Revelations on Social Media Algorithms

Social media giants made decisions that allowed more harmful content into people's feeds, even after internal research into their algorithms showed how outrage fueled engagement, whistleblowers told the BBC. More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.

An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more borderline harmful content, such as misogyny and conspiracy theories, in users' feeds in order to compete with TikTok. "They sort of told us that it's because the stock price is down," the engineer said.

A TikTok employee gave the BBC rare access to the company's internal dashboards of user complaints, which showed cases involving political figures being prioritized over reports of harmful posts involving children. According to the employee, those decisions were driven by the desire to maintain strong relationships with politicians and fend off regulation, rather than by user safety.

Algorithmic Arm Wrestling

The whistleblower revelations offer a close-up view of how the industry responded to TikTok's explosive growth, which left competitors scrambling to capture user engagement. Matt Motyl, a senior Meta researcher, said that Instagram Reels was launched in 2020 without adequate safeguards. Internal research found significantly higher rates of harassment and violence in comments on Reels.

Despite growing internal concerns, safety work on products like Reels was overshadowed by the pursuit of market share and revenue. That reality underscores a troubling trade-off between user welfare and corporate profit.

Internal documents suggested that the algorithms disproportionately rewarded content that incited outrage, degrading the user experience. Although Meta and TikTok have publicly affirmed their commitment to user safety, the whistleblower accounts challenge those assurances, pointing to a willingness to tolerate harmful content in the pursuit of financial gain.
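
To make that ranking dynamic concrete, here is a minimal, purely illustrative sketch in Python of an engagement-weighted feed ranker. The `Post` fields, the scoring weights, and the `outrage_score` signal are hypothetical assumptions for illustration only; none of this is drawn from Meta's or TikTok's actual systems. What it demonstrates is that an objective built solely on engagement counts has no term penalizing outrage, so divisive posts that provoke comments and shares rise to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int         # angry replies count the same as friendly ones
    shares: int
    outrage_score: float  # hypothetical 0-1 classifier output, unused below

def engagement_score(post: Post) -> float:
    """Score purely on predicted engagement; the weights are made up."""
    return post.likes + 3 * post.comments + 5 * post.shares

def ranked_feed(posts: list[Post]) -> list[Post]:
    # The objective deliberately ignores outrage_score: an engagement-only
    # ranker has no reason to demote content that provokes strong reactions.
    return sorted(posts, key=engagement_score, reverse=True)

feed = ranked_feed([
    Post("Cute dog video", likes=900, comments=40, shares=20, outrage_score=0.02),
    Post("Divisive conspiracy post", likes=300, comments=600, shares=250, outrage_score=0.93),
])
for post in feed:
    print(f"{engagement_score(post):>6.0f}  {post.title}")
```

Running this ranks the conspiracy post first (score 3350 versus 1120 for the dog video), despite far fewer likes, because comments and shares, however hostile, dominate the objective.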

The Ethical Dilemma of Engagement

As TikTok's algorithm reshaped social media dynamics, many users were exposed to harmful or radicalizing content. Experts caution that teenage users are particularly vulnerable, as moderation capabilities struggle to keep pace with the volume of content. The whistleblowers' accounts suggest that protecting young audiences has taken a back seat to commercial concerns.

The internal struggle within these companies reflects a tension between hitting engagement metrics and reckoning with the societal impact of the content they distribute. The revelations raise critical questions about the ethical responsibility of social media platforms to safeguard their users while vying for market dominance.

In light of these allegations, both Meta and TikTok have denied the accusations, asserting their commitment to user safety and to the systems they operate to protect against harmful content.