Social media giants made decisions which allowed more harmful content on people's feeds after internal research into their algorithms showed how outrage fuelled engagement, whistleblowers told the BBC.
More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.
An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more borderline harmful content - which includes misogyny and conspiracy theories - in users' feeds to compete with TikTok.
"They sort of told us that it's because the stock price is down," the engineer said.
A TikTok employee gave the BBC rare access to the company's internal dashboards of user complaints, and evidence of how staff prioritised cases involving politicians over harmful posts featuring children.
"Decisions were made to maintain a strong relationship with political figures to avoid threats of regulation or bans, not for user safety," the TikTok staffer said.
The whistleblowers who spoke to the BBC documentary, Inside the Rage Machine, describe how industry competition following TikTok's explosive growth led the platforms to allow more harmful content.
Matt Motyl, an internal researcher at Meta, said that Instagram Reels, launched to compete with TikTok, lacked sufficient safeguards, resulting in higher levels of bullying, harassment, hate speech and violence than in other Instagram content.
In response to the investigation, Meta said: "Any suggestion that we deliberately amplify harmful content for financial gain is wrong." TikTok described the allegations as "fabricated claims".

Despite the official denials, the whistleblowers' accounts present a concerning picture of engagement being prioritised over user safety on major social media platforms.