Meta's Chatbot Guidelines Permit Disturbing Behavior Toward Minors
14 Aug
Summary
- Meta's internal document allows chatbots to engage in romantic/sexual conversations with minors
- Bots can generate false medical information and content that demeans people based on race
- Meta removed portions of the guidelines after Reuters inquiry, acknowledging the issues
On August 14, 2025, a leaked internal document from Meta Platforms shed light on the company's policies governing the behavior of its artificial intelligence chatbots. The document, titled "GenAI: Content Risk Standards," sets out the standards that guide Meta's generative AI assistant and the chatbots available on its social media platforms, including Facebook, WhatsApp, and Instagram.
The document, reviewed by Reuters, reveals that Meta's guidelines have permitted its chatbots to engage minors in conversations that are "romantic or sensual," generate false medical information, and even help users argue that "Black people are dumber than white people." These findings raise significant ethical and legal concerns about Meta's approach to regulating the content produced by its AI-powered chatbots.
After Reuters raised questions earlier this month, Meta confirmed the document's authenticity and said it has removed the portions that allowed such inappropriate interactions with children. However, other troubling aspects of the guidelines, such as the allowance for content that demeans people on the basis of race, remain in place.
The revelations have sparked outrage and renewed scrutiny of Meta's practices, with experts questioning the company's commitment to responsible AI development and to protecting vulnerable users, particularly minors, from harmful content.