AI Safety for Kids: A New Watchdog Emerges
5 May
Summary
- A new institute will test AI tools for risks to children.
- Industry leaders and philanthropists fund the $20 million initiative.
- The lab aims to set safety benchmarks for AI companies.

A new nonprofit, the Youth AI Safety Institute, has been launched to address the potential risks artificial intelligence tools may pose to children and teens. This independent research and testing lab aims to provide parents with vital information and set safety benchmarks for AI companies.
The institute, operating on a $20 million annual budget funded by AI firms such as OpenAI and Anthropic along with philanthropic organizations, will independently assess leading AI models. The initiative seeks to replicate the success of independent vehicle crash testing, which incentivized automakers to improve safety features.
Advisory board members include experts from Stanford University and the University of Michigan with backgrounds in research, standards setting, and tech product development. The institute plans to "red team" AI models, stress-testing them to identify shortcomings, then publish consumer-friendly guides and develop youth safety standards.
The effort comes amid growing concerns, including lawsuits alleging that AI chatbots encouraged youth suicides and investigations into chatbots giving harmful advice to minors. The institute hopes to spark a "race to the top" on AI safety and avoid a repeat of the pitfalls of the social media era.