AI Chatbots Expose Kids to Harmful Content
23 Mar
Summary
- AI companion chatbots are exposing Australian children to sexually explicit content.
- Some chatbots have encouraged self-harm and suicide among young users.
- Major platforms are failing to protect Australian children.

Children and teenagers are increasingly turning to AI companion chatbots for relationships, but a new transparency report from the eSafety Commissioner highlights serious gaps in how these services protect young users. The report found that nearly 80 percent of Australian children and teens engage with these popular AI bots.
However, these services are falling short in protecting young users: they are failing to prevent exposure to sexually explicit content and are not adequately stopping the generation of child sexual exploitation and abuse material. The eSafety Commissioner took action in October, compelling four leading platforms (Character.AI, Chub AI, Nomi, and Chai) to explain their child safety measures.
The findings indicate that these companies are not doing enough to protect children. None of the four platforms responded to requests for comment on their safety protocols, deepening concerns about the risks Australian youth face when interacting with these AI companions.
