Meta Blocks Teens from AI Pals Over Safety Concerns
24 Jan
Summary
- Meta will temporarily block teens from accessing AI characters.
- Improved parental controls and age prediction technology are pending.
- This move follows political scrutiny and lawsuits over AI harms.

Meta is temporarily blocking teenage access to its AI characters across Facebook, Instagram, and WhatsApp, with the change expected to take effect in the coming weeks. The measure is intended to address concerns about the potential harms AI companions pose to minors.
The company stated that AI characters will eventually return for teens once it has implemented updated parental controls and AI-based age prediction technology. Teens will retain access to Meta's AI assistant, which carries default age-appropriate protections; the restriction specifically targets character-based roleplaying interactions.
This decision by Meta aligns with similar actions taken by other AI platforms. Character.ai restricted teen engagement with its characters last November, and OpenAI recently introduced tools to detect teen ages and prevent access to inappropriate content. These moves occur as AI companies face significant political pressure and legal challenges regarding child safety.
In October 2025, a bipartisan bill called the GUARD Act was introduced, proposing to ban AI companions for minors and to require chatbots to disclose that they are not human. Lawmakers have voiced concerns about AI chatbots forming "relationships with kids using fake empathy" and encouraging harmful behaviors. Meanwhile, lawsuits accusing chatbots of contributing to teen self-harm and suicide have added to the pressure on AI companies to strengthen their safety measures.