A year ago, Meta rolled out Teen Accounts on Instagram. The move was framed as a safeguard, a way to protect younger users with built-in restrictions, after lawmakers in the U.S. and Europe accused social media giants of putting profit ahead of children’s safety.
Now, Meta says that hundreds of millions of teens are in Teen Accounts across Instagram, and as of this week, the model is expanding globally to Facebook and Messenger. On paper, it looks like progress: every teen automatically ushered into an environment that promises fewer risks and tighter controls.
The protections sound reassuring. Messages are restricted to friends and prior contacts. Tags, mentions, and comments can only come from people teens follow. Parents must approve any setting changes for those under 16. Notifications nudge teens off the apps after an hour of use, and “Quiet Mode” enforces downtime overnight.
Meta also announced a School Partnership Program, allowing teachers and administrators to report bullying or other safety concerns directly to Instagram for faster action. After pilot tests in U.S. schools, the program is now open to all middle and high schools nationwide. It is another step that reads well on paper. The question is whether any of it works in practice.
A Closer Look at the Risks
That optimism has already faced tough questions. In September 2025, a study led by Arturo Béjar, a former Meta engineer turned whistleblower, examined 47 of Instagram’s youth safety features and found that only eight worked as advertised. The findings, reported by Reuters and The Guardian, painted a troubling picture:
- Content filters allowed sexual and harmful content to slip through.
- Adults could sometimes message teens despite supposed safeguards.
- The “Hidden Words” feature failed to reliably block harassment.
- Some protections had been scaled back or quietly removed.
Béjar’s research was conducted independently, not as part of Meta’s official reporting. Meta disputed the findings, insisting its tools were improving and that teens were already seeing less harmful content. But the study raised a lasting concern: safety measures can create the illusion of protection without consistently delivering it.
Why It Matters
The stakes here are real. The U.S. Surgeon General has warned that teenagers are facing a mental health crisis and has linked heavy social media use to anxiety, depression, and sleep problems.
In response, states such as Utah and Arkansas have passed laws requiring parental consent before teens can join social media platforms. At the same time, safety groups argue that tech companies have not done enough to fix addictive app designs, stop predators, or remove harmful content.
This is why Meta’s new Teen Accounts matter so much. They aren’t just a small update; they are a major public test of how serious Meta is about protecting its youngest users. If the system works as promised, stopping harassment, filtering out dangerous posts, and helping teens build healthier habits, it could mark a turning point for the industry.
But there is a major risk. If the new safety features are weak, unreliable, or easy for teens to bypass, the whole effort will be dismissed as just for show: a false sense of security built on rules that look good on paper but don’t actually work.
And because hundreds of millions of teenagers are being automatically moved to these accounts, the cost of failure is high. It isn’t just about bad headlines; it’s about real harm to kids. It’s about the moments when a safety feature breaks, a harmful message slips through, or a teen scrolling late at night sees something they can never forget.
Meta now has to convince parents, educators, and regulators that it is serious this time. But as critics point out, the real test isn’t what features are promised. The real test is whether they actually work, day after day, to protect the very people they were built for.