Meta Rolls Out Temporary AI Safeguards to Protect Teens
29 Aug 2025
Summary
- Meta adding new safeguards to AI products to avoid flirty, self-harm talks with minors
- Temporary measures taken while developing long-term solutions for safe teen AI experiences
- Meta's AI policies came under intense scrutiny after Reuters report on inappropriate chatbot behavior

On August 29, 2025, Meta (formerly Facebook) announced new temporary safeguards for its artificial intelligence (AI) products aimed at protecting teenage users. The company is training its AI systems to avoid flirtatious conversations with minors, as well as discussions of self-harm or suicide. Meta is also temporarily limiting teenagers' access to certain AI characters.
These measures are being taken as the company develops more comprehensive, long-term solutions to provide young users with safe and age-appropriate AI experiences. The changes come in the wake of a Reuters report earlier this month that exposed Meta's policies allowing provocative chatbot behavior, including bots engaging in "romantic or sensual" interactions.
Meta's AI policies have faced intense scrutiny and backlash following the Reuters investigation. U.S. Senator Josh Hawley has launched a probe into the company's AI rules, demanding documents on the guidelines that permitted inappropriate interactions with minors. Both Democrats and Republicans in Congress have expressed alarm over the revelations.
Meta has confirmed the authenticity of the internal document reviewed by Reuters, but says it has since removed the portions that allowed chatbots to flirt and engage in romantic role-play with children. The company described the examples and notes in question as "erroneous and inconsistent" with its policies.