Grok Chatbot Fails Safety Test for Kids
27 Jan
Summary
- Grok's safeguards are inadequate, putting young users at risk.
- Kids Mode fails to protect against explicit and biased content.
- The AI cannot reliably identify teen users, leaving them unprotected.

A recent investigation by digital safety nonprofit Common Sense Media has found that xAI's Grok chatbot poses a high risk to younger users due to inadequate safeguards. The nonprofit's analysis, conducted across various platforms and modes, concluded that Grok's existing protections are insufficient.
Specifically, the report found that Grok's "Kids Mode" fails to shield users from inappropriate content, including biased responses and sexually violent language. Because the AI cannot reliably identify teen users, they remain exposed to adult material and potentially harmful AI companions.
While X states that it conducts age checks where legally required, such as in the UK, Ireland, and the EU, the nonprofit's testing found that Grok still treated an account registered to a 14-year-old as an adult. This raises significant concerns about the effectiveness of the AI's safety measures for minors.