AT A GLANCE
- A new report says Instagram’s safety tools for teens are “woefully ineffective.”
- The review found only 8 of 53 safety features worked as promised.
- Whistleblower Arturo Bejar and advocacy groups accuse Meta of misleading parents.
- Meta disputes the findings, calling the report “misleading” and “speculative.”
Meta’s Efforts at Addressing Teen Safety and Mental Health on Its Platforms Have Long Been Met With Criticism
Meta’s Instagram platform is under renewed scrutiny after whistleblower Arturo Bejar and four nonprofit groups issued a report accusing the company of failing to protect young users. The review examined 53 safety features and found that only eight worked without major flaws; the rest were ineffective, outdated, or had been quietly discontinued.
Meta’s Safety Measures Called “Woefully Ineffective”
The report, conducted with Cybersecurity For Democracy at NYU and Northeastern, Fairplay, the Molly Rose Foundation, and ParentsSOS, argues Meta prioritizes flashy PR campaigns over real safety improvements.
Critics say Instagram’s design continues to allow unwanted adult contact, encourage harmful content, and incentivize risky behaviors among teens.
Meta Pushes Back Against the Claims
Meta dismissed the findings as “dangerously speculative,” insisting its Teen Accounts lead the industry in parental controls and protections. The company says its features reduce harmful content exposure and unwanted contact, while offering parents tools to limit screen time and monitor activity.
Still, Meta has not disclosed how many parents actively use these tools.
Evidence of Risky Interactions and Content
Researchers created test accounts that highlighted persistent dangers. Adult strangers were often recommended to minors through Instagram’s “people to follow” suggestions and Reels. Offensive messages, including encouragement of self-harm, bypassed the platform’s filters.
Teens were also exposed to disturbing content, from sexual material to violent videos and body image-related posts.
Legal and Parental Concerns Grow
New Mexico Attorney General Raúl Torrez, who has filed a lawsuit against Meta for failing to protect children from predators, said the report underscores the company’s refusal to make real safety changes.
Parents and advocates stress that flashy safety tools are useless if design choices continue to place young users at risk.
Recommendations for Change
The report urges Meta to adopt red-team testing against predatory behavior, provide effective ways for teens to report harmful interactions, and publish transparent data about youth experiences.
The authors also argue that content recommended to teen accounts should be PG-rated and filtered to prevent exposure to self-harm and sexualized material.