OpenAI is tightening the rules for younger users of ChatGPT. The company will soon introduce new restrictions for people under 18, a move it says is aimed at improving safety and aligning with emerging regulations around the world on minors’ use of AI.
For years, ChatGPT has been available to anyone who could create an account, with few age-specific barriers. OpenAI required users to be at least 13, but enforcement was loose, and younger audiences often reached the chatbot through shared devices or parents’ accounts. With AI now woven into daily life, from schoolwork to social media, OpenAI’s new policy marks a shift toward greater responsibility for how minors use the product.
The company has not detailed every change, but reports suggest younger users may face limits on the kind of content they can generate, stricter filters for sensitive topics, and possibly reduced access to advanced features like custom instructions or plugin integrations. In regions with more stringent data protection laws, access could even be suspended entirely for those under 18.
The timing is significant. Regulators in the U.S., Europe, and Asia have increasingly scrutinized both how AI tools handle children’s data and what content those tools expose minors to. The EU’s Digital Services Act, for example, already places heavy obligations on platforms to protect minors from harmful or manipulative content. OpenAI’s preemptive step appears to be as much about compliance as about ethics.
From my perspective, this change is both overdue and complicated. On one hand, minors do need more protection: generative AI can produce content that isn’t always age-appropriate, and the risk of over-reliance on it for learning or decision-making is real. On the other hand, the change raises a question of accessibility: in many parts of the world, young people use ChatGPT not as a novelty but as a learning tool, especially where educational resources are scarce. Cutting them off entirely could widen digital inequalities rather than narrow them.
What’s more, enforcement is a challenge. Unless OpenAI rolls out robust age-verification systems, which bring their own privacy concerns, these restrictions may prove more symbolic than practical. Still, they send a clear message: the era of AI companies treating underage access as a gray area is ending.
As OpenAI implements these measures, the conversation about AI, youth, and safety will only grow louder. If done right, the restrictions could set a new standard for responsible AI governance. If done poorly, they risk alienating the very generation that is most eager to engage with these tools.