Instagram's tools designed to protect teenagers from harmful content are failing to stop them from seeing suicide and self-harm posts, a study has claimed.
Researchers also said the social media platform, owned by Meta, encouraged children to post content that received highly sexualised comments from adults.
The testing, by child safety groups and cyber researchers, found 30 out of 47 safety tools for teens on Instagram were substantially ineffective or no longer existed.
Meta has disputed the research and its findings, saying its protections have led to teens seeing less harmful content on Instagram.
"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today," a Meta spokesperson told the BBC.
"Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls."
The company introduced teen accounts to Instagram in 2024, saying it would add better protections for young people and allow more parental oversight.
The study into the effectiveness of its teen safety measures was carried out by the US research centre Cybersecurity for Democracy - and experts including whistleblower Arturo Béjar on behalf of child safety groups including the Molly Rose Foundation.
The researchers said that, after setting up fake teen accounts, they found significant issues with the tools.
In addition to finding 30 of the tools were ineffective or simply did not exist anymore, they said nine tools reduced harm but came with limitations.
The researchers said only eight of the 47 safety tools they analysed were working effectively - meaning teens could still be shown content which broke Instagram's own rules about what should be shown to young people.
This included posts describing demeaning sexual acts, as well as autocompleting suggestions for search terms promoting suicide, self-harm or eating disorders.
"These failings point to a corporate culture at Meta that puts engagement and profit before safety," said Andy Burrows, chief executive of the Molly Rose Foundation - which campaigns for stronger online safety laws in the UK.
It was set up after the death of Molly Russell, who took her own life at the age of 14 in 2017.
Mr Burrows said the findings suggested Meta's teen accounts were "a PR-driven performative stunt rather than a clear and concerted attempt to fix long-running safety risks on Instagram".
Meta is one of many large social media firms which have faced criticism for their approach to child safety online.
In January 2024, Meta chief executive Mark Zuckerberg was among tech bosses grilled in the US Senate over their safety policies - and apologised to a group of parents who said their children had been harmed by social media.
"But these tools have a long way to go before they are fit for purpose," said Dr Laura Edelson, co-director of Cybersecurity for Democracy, one of the report's authors.
Meta told the BBC the research failed to understand how its content settings for teens work and misrepresented them.