OpenAI announced the rollout of age prediction technology on ChatGPT consumer plans to help determine whether an account likely belongs to a user under 18.
The company said this initiative will improve safety and apply appropriate safeguards for younger audiences while maintaining a privacy-conscious approach for all users.
“When the age prediction model estimates that an account may belong to someone under 18, ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content,” said OpenAI.
TL;DR
- OpenAI launches an AI-driven age prediction system on ChatGPT to estimate whether accounts likely belong to minors.
- Accounts predicted as under 18 receive automatic safeguards against sensitive or harmful content.
- Adults retain unrestricted experiences while still benefiting from core safety filters.
- The rollout builds on OpenAI’s Teen Safety Blueprint and early work initiated in September 2025.
This announcement follows OpenAI’s continued commitment to making artificial intelligence (AI) more responsible and contextually aware. It also arrives as the company prepares to enable adult content access for verified adult users, prompting the need for clearer digital boundaries and stronger protective layers for minors.
A Safety-First Design For Teen Users
The company explained that age prediction builds on protections already in place for users who self-identify as under 18 during sign-up. Those accounts automatically receive stricter safeguards that filter out harmful or sensitive material. The new model extends this capability to detect potential minors who may not have disclosed their real age, ensuring an added layer of protection.
OpenAI noted that this capability stems from its broader Teen Safety Blueprint and Under-18 Principles for Model Behavior, both of which are focused on building technology that “expands opportunity while protecting well-being.”
The model examines a mix of behavioral and account-level signals such as the age of the account, typical activity hours, usage patterns, and the stated age.
“Deploying age prediction helps us learn which signals improve accuracy, and we use those learnings to continuously refine the model over time,” the company said.
The move also follows a series of controversial and unfortunate incidents tied to compulsive use of ChatGPT.
Privacy, Verification, And Transparency
Users incorrectly placed in the under-18 experience will be able to verify their age easily through Persona, a secure identity verification service that uses a quick selfie check. Users can also check whether safeguards have been applied and initiate verification under Settings > Account.
When triggered, the model automatically restricts content in categories such as:
- Graphic violence or gory content
- Viral or risky challenges
- Sexual, romantic, or violent roleplay
- Self-harm or suicide depictions
- Unrealistic beauty standards or body shaming material
OpenAI stated that the framework was developed with expert input and is rooted in academic literature on adolescent development, considering differences in impulse control, emotional regulation, and peer influence.
“When we are not confident about someone’s age or have incomplete information, we default to a safer experience,” the company clarified, emphasizing a “safety-first by design” principle.
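The "safety-first by design" default reduces to a simple decision rule: restrict the listed categories unless the account is verified as an adult or the model is confident enough. The sketch below is a hypothetical illustration, not OpenAI's implementation; the category names are drawn from the article, but the confidence threshold and function shape are assumptions.

```python
# Hypothetical sketch of a safety-first default. Category names follow
# the article; the threshold value is invented for illustration.
RESTRICTED_FOR_MINORS = {
    "graphic_violence",
    "viral_risky_challenges",
    "sexual_romantic_violent_roleplay",
    "self_harm_or_suicide_depictions",
    "unrealistic_beauty_standards",
}

ADULT_CONFIDENCE_THRESHOLD = 0.9  # assumed value


def active_safeguards(adult_confidence: float, verified_adult: bool) -> set[str]:
    """Return the content categories to restrict for this account.

    Unless the user has verified as an adult (e.g. via an ID check) or
    the model is highly confident, default to the safer experience.
    """
    if verified_adult or adult_confidence >= ADULT_CONFIDENCE_THRESHOLD:
        return set()
    return set(RESTRICTED_FOR_MINORS)
```

Note how incomplete information (a low `adult_confidence`) falls through to the restricted set, matching the stated principle of defaulting to a safer experience.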
The company also confirmed ongoing collaborations with organizations such as the American Psychological Association, ConnectSafely, and Global Physicians Network to inform its approach and measure real-world impact.
Parental Controls And Future Features
Parents can now customize teen experiences through optional controls. These include setting quiet hours (periods when ChatGPT cannot be used), managing features like memory and model training, and receiving notifications if signs of acute distress are detected in their teen’s interactions.
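The quiet-hours control described above amounts to a time-window check. A minimal sketch follows; the function and its midnight-wrapping behavior are assumptions for illustration, not OpenAI's actual logic.

```python
# Illustrative check for a parent-configured quiet window.
# Windows that cross midnight (e.g. 21:00-07:00) wrap around.
from datetime import time


def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the quiet window [start, end)."""
    if start <= end:
        return start <= now < end
    # Window wraps past midnight: match either side of it.
    return now >= start or now < end
```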
The company shared that in rare emergencies, when a teen is identified as being in immediate distress and parents cannot be reached, law enforcement may be contacted as a next step. These features, OpenAI said, were designed under guidance from child safety experts to maintain trust between teens and their parents.
The system will continue to evolve, with OpenAI learning from early rollout results to improve model accuracy and strengthen safeguards.
In the European Union (EU), age prediction will roll out in the coming weeks to comply with regional data protection and AI safety laws.
Building Towards Age Prediction: A Year In The Making
OpenAI first announced its intention to build responsible age detection systems in September 2025, outlining a roadmap that included parental controls, emergency alerts, and engagement reminders for teens during extended sessions.
That early framework served as the foundation for today’s feature, helping OpenAI refine its systems to “treat adults like adults while safeguarding young users in ways that respect their growth and privacy.”