Thursday, March 5, 2026

Instagram Is Safeguarding Teens by Limiting Them to PG-13 Content

Meta’s PG-13 Makeover with Stricter Teen Controls and Parental Locks

Teenagers on Instagram will now be limited to PG-13-level content by default, part of Meta’s new push to protect younger users from explicit or harmful material. The company announced Tuesday that minors won’t be able to adjust these settings without parental approval.

The change means posts depicting sex, drugs, or dangerous stunts will be hidden from teen accounts. “This includes hiding or not recommending posts with strong language, certain risky stunts, and additional content that could encourage potentially harmful behaviors,” Meta explained in a blog post, calling it the biggest update since launching teen accounts last year.

Stricter Parental Controls

Anyone under 18 who joins Instagram is automatically assigned a private teen account with usage restrictions and filtered feeds. These filters already hide topics such as cosmetic procedures, and the new update blocks even more, including profiles that promote adult content or link to sites like OnlyFans.

Teens who already follow such accounts will no longer see their posts, messages, or comments, and those accounts won’t be able to contact minors. Meta says it is also expanding its list of blocked search terms beyond self-harm and eating disorders to include “alcohol,” “gore,” and other words that could surface inappropriate content, even when misspelled.

The company is also rolling out a “limited content” setting for parents who want tighter restrictions; it disables teens’ ability to see or post comments entirely.

Meta Faces Criticism for Its Track Record

Despite Meta’s assurances, critics remain unconvinced. Josh Golin, executive director of Fairplay, dismissed the announcement as “about forestalling legislation” and pacifying worried parents.

“Splashy press releases won’t keep kids safe, but real accountability and transparency will,” Golin said, urging Congress to pass the federal Kids Online Safety Act.

Ailen Arreaza of ParentsTogether echoed the skepticism: “We’ve heard promises from Meta before. Each time we’ve watched millions be poured into PR campaigns while actual safety features fall short. Our children have paid the price for that gap between promise and protection.”

Reports Contradict Meta’s Safety Claims

A recent report found that test teen accounts were still being recommended sexualized and self-harm content — including explicit cartoons and imagery linked to body image issues. Meta dismissed the findings as “misleading” and “dangerously speculative.”

The controversy adds to the company’s growing legal and public pressure, as it faces lawsuits and investigations into its impact on minors’ mental health.

A Push for Healthier Digital Habits

Some experts see a silver lining. Desmond Upton Patton, a University of Pennsylvania professor studying social media and AI, said the update “creates a timely opening for parents and caregivers to talk directly with teens about their digital lives.”

He also praised Meta’s decision to make AI chatbots “clearly non-human,” ensuring they “don’t give age-inappropriate responses that would feel out of place in a PG-13 movie.”

“It’s a meaningful step toward a more joyful social media experience for teens,” Patton said.
