Meta has announced it will temporarily block teenage users from accessing custom AI characters across its platforms globally, including Instagram and Facebook, marking a significant rollback of its artificial intelligence companion features for younger audiences.
The restriction applies to users who have registered a teen date of birth on their accounts, as well as to those Meta’s age prediction technology flags as likely teenagers. The move is one of the company’s most substantial changes to AI feature accessibility since it launched its character-based chatbots.
Standard AI Assistant Remains Available
Under the new policy, teenagers will no longer be able to interact with custom-built Meta AI characters, which the company has positioned as digital companions designed for conversation and entertainment. However, teen users will retain access to Meta’s standard AI assistant, which operates with default age-appropriate protections already in place.
The distinction between custom AI characters and the standard assistant highlights Meta’s attempt to balance innovation with safety considerations. Custom characters often feature distinct personalities and conversation styles, while the standard assistant focuses on informational queries and basic assistance.
Parental Oversight Tools in Development

Meta stated it plans to provide parents with visibility into the conversations their teenagers have with AI chatbots. The company announced in October that it was developing tools to give parents greater insight into how teens use AI features and more control over which AI characters they can engage with.
The October blog post announcing those tools was updated Friday to reflect the latest restrictions on teen access to AI characters. Meta indicated that, starting in the coming weeks, teens will lose access to AI characters across its apps until an updated experience becomes available.
When Meta delivers on its commitment to parental oversight, those controls will apply to that updated AI character experience. The company has not announced a specific timeline for launching the new parental visibility features.
Legal Pressures Mount
Meta’s decision comes amid growing scrutiny of how children interact with digital platforms. The company is preparing to face trial in Los Angeles alongside TikTok and YouTube over allegations that their applications have harmed children.
These legal challenges have intensified focus on social media companies’ responsibilities regarding youth safety and mental health. Prosecutors and advocacy groups have raised concerns about the addictive nature of social feeds, exposure to harmful content, and the psychological impact of constant digital engagement on developing minds.
Separately, ahead of trial in New Mexico, Meta has petitioned the judge to exclude certain research studies and media articles on social media and youth mental health. According to reports, the company also seeks to bar references to a recent high-profile case involving teen suicide and social media content, as well as mentions of CEO Mark Zuckerberg’s time as a student at Harvard University.
Broader Platform Safety Debate
Meta’s restriction on teen AI character access reflects broader industry conversations about appropriate boundaries for artificial intelligence features aimed at young users. As AI-powered chatbots become more sophisticated and human-like in their interactions, questions about their impact on social development, emotional attachment, and age-appropriate content have intensified.
The company’s age prediction technology, which flags accounts likely belonging to teenagers even when users claim to be adults, reflects increasingly sophisticated efforts to protect younger users. However, critics note that such systems are not foolproof and raise privacy concerns about the data collection they require.
The global nature of the restriction suggests Meta is taking a proactive approach rather than waiting for regulatory mandates in individual jurisdictions. The company has faced increasing pressure from lawmakers worldwide to implement stronger protections for minors using its platforms.
Stay updated on the latest developments in AI safety, digital platforms, and technology regulation at NextGen Bulletin.