London, United Kingdom (Enmaeya News) — Major social media platforms are under renewed scrutiny after a report revealed widespread failures in moderating content related to suicide and self-harm.
The Molly Rose Foundation, which monitors online safety, analyzed more than 12 million content moderation decisions across six leading social media networks and found significant disparities in how harmful material is handled.
Pinterest and TikTok were responsible for more than 95% of suicide- and self-harm-related content removals, while Instagram and Facebook each accounted for roughly 1%. X, formerly Twitter, removed only one in 700 flagged posts.
Even when potentially dangerous content is identified, enforcement is often minimal. The report said TikTok flagged nearly 3 million posts containing harmful material but suspended only two accounts.
“The inconsistencies in enforcement demonstrate a systemic failure across social media platforms,” the foundation said.
Experts warn that exposure to suicide and self-harm content online can have severe consequences, particularly for adolescents and other vulnerable groups, amplifying the risk of mental health crises.
The findings have prompted calls for stronger regulation. Advocates are urging lawmakers to strengthen the Online Safety Act so that platforms are held accountable for failing to remove harmful content and face consistent penalties for breaches.
“Users, especially minors, continue to face significant exposure to unsafe material,” the foundation said. “Without meaningful oversight, these platforms are effectively placing profits and engagement above the safety of their users.”
Social media companies have pledged to improve safety features and content moderation. TikTok has introduced AI-driven tools to detect harmful content, while Facebook and Instagram have said they will expand mental health resources and reporting mechanisms.
However, the report notes that technological tools alone are insufficient, and enforcement gaps leave users at continued risk. Experts stress the need for transparent reporting, robust moderation policies, and legislative pressure to ensure platforms prioritize user well-being over algorithmic engagement metrics.
The findings add to growing concerns about the mental health impact of social media. Analysts say the report may serve as a wake-up call for regulators and platform operators, underscoring that protective measures must be comprehensive, consistent, and enforced across all networks to keep users safe.