Social media giants Meta and TikTok made decisions that allowed more harmful content onto people's feeds, even after internal research into their algorithms showed how outrage fuels engagement, whistleblowers say. More than a dozen insiders revealed that both companies took risks on safety issues such as violence, sexual blackmail, and terrorism in a bid to capture users' attention.
One Meta engineer described directives to allow more borderline harmful content, such as conspiracy theories, driven by pressure to improve the company's stock performance. TikTok employees said the prioritization of political content over user safety led to inadequate responses to child abuse reports.
The accounts, gathered for the BBC's 'Inside the Rage Machine', offer an inside look at the algorithm arms race triggered by TikTok's rapid growth, and at practices that led recommendation algorithms to push controversial content in pursuit of higher engagement.
Further concerns centered on content moderation after the launch of Instagram Reels, Meta's attempt to replicate TikTok's success: insiders described a prevalence of harmful comments on the feature, a troubling pattern that has persisted as the platforms continue to evolve.
Both Meta and TikTok maintain that they actively combat harmful content and put user safety first, but the whistleblowers' accounts suggest profitability has often taken precedence over protections for vulnerable users.