Social media giants made decisions that allowed more harmful content onto people's feeds, after internal research into their algorithms showed how outrage fueled engagement, whistleblowers told the BBC.
More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.
An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more borderline harmful content - which includes misogyny and conspiracy theories - into users' feeds to compete with TikTok.
"They sort of told us that it's because the stock price is down," the engineer said.
A TikTok employee gave the BBC rare access to the company's internal dashboards of user complaints - as well as other evidence of how staff had been instructed to prioritize several cases involving politicians over a series of reports of harmful posts featuring children.
"Decisions were being made to maintain a strong relationship with political figures to avoid threats of regulation or bans, not because of the risks to users," the TikTok staffer said.
The whistleblowers, speaking in the BBC documentary Inside the Rage Machine, describe how the industry responded to TikTok's explosive growth - revealing a pattern in which engagement was prioritized over user safety.
Matt Motyl, a senior Meta researcher, said Instagram Reels was launched without sufficient safeguards, and that bullying, hate speech, and violence were more prevalent there than in the standard Instagram feed.
He described an algorithmic arms race in which user safety was deprioritized, a problem exacerbated by internal metrics that rewarded provocative content for driving high engagement.
Both Meta and TikTok dispute the whistleblowers' claims, saying they are committed to protecting their users from harmful content.