Meta Platforms has announced that it will pause teenagers’ access to its AI characters across all its apps worldwide, as the company works on an updated, age-appropriate version of the feature with built-in parental controls.
The suspension, which Meta says will begin “in the coming weeks,” applies not only to users registered as teens but also to accounts registered as adults that Meta’s age-prediction technology flags as likely belonging to teens. Teen access will remain disabled until the revised AI experience is ready.
The move marks Meta’s most significant step yet to rein in teen-facing AI features, coming at a time when the company faces intensifying scrutiny from regulators, lawmakers, and parents over child safety and mental health.
A Shift, Not an Abandonment
Meta says it is not abandoning AI characters for younger users. Instead, it plans to relaunch them with stronger safeguards, including parental controls that allow guardians to monitor topics, block specific characters, or completely turn off AI chat access.
When the updated version launches, Meta says the AI characters will provide age-appropriate responses and focus on safe topics such as education, sports, and hobbies.
“We heard from parents that they want more insight and control over how their teens interact with AI,” the company said, explaining that the pause is intended to give Meta time to redesign the experience more responsibly.
Legal and Regulatory Pressure Mounts
The timing of the announcement is notable. The pause comes just days before a New Mexico lawsuit against Meta is set to go to trial, accusing the company of failing to adequately protect children from sexual exploitation on its platforms. Meta also faces a separate trial over allegations that its products contribute to social media addiction, with CEO Mark Zuckerberg expected to testify.
U.S. regulators have increasingly turned their attention to AI systems and chatbots, particularly after reports revealed that some AI tools could engage in provocative or inappropriate conversations with minors. In August, Reuters reported that Meta’s internal AI rules previously allowed such interactions, raising alarm among policymakers.
Building on Earlier Teen Safety Measures
The pause builds on changes Meta began previewing in October. At the time, the company announced parental controls that would allow parents to disable teens’ private chats with AI characters and limit certain topics. Those features, however, had not launched by the time Meta decided to shut off teen access altogether.
Meta has also been applying a PG-13–style content framework to its AI experiences for teens, restricting access to topics such as extreme violence, nudity, and graphic drug use. Similar controls have already been rolled out on Instagram as part of broader efforts to redesign the teen experience across Meta’s apps.
An Industry-Wide Reset on AI and Teens
Meta’s decision reflects a broader industry shift. AI companies are increasingly modifying their products for younger users following lawsuits and public backlash. Character.AI recently restricted open-ended chatbot conversations for users under 18, while OpenAI has introduced age-prediction systems and expanded safety rules for teen users of ChatGPT.
As AI tools become more conversational and emotionally engaging, the pressure on platforms to demonstrate responsible design is growing. Meta’s pause on teen AI characters signals that, for now, safety and compliance are taking precedence over rapid feature expansion.
