OpenAI Shields Teens With New AI Safety Prompts
24 Mar
Summary
- New AI safety prompts target common teenage risks for better protection.
- OpenAI collaborated with Common Sense Media on the developer pack.
- This release follows a lawsuit over alleged lax safety policies.

OpenAI has introduced a new open-source safety prompt pack aimed at bolstering protections for teenagers in AI systems. The prompts offer detailed guidance on adolescent risks, including self-harm, inappropriate content, and harmful trends, serving as a more robust safeguard than prior high-level policies.
The initiative, developed with Common Sense Media and everyone.ai, addresses challenges developers face in translating safety goals into operational rules. This release comes in the wake of a wrongful death lawsuit filed against OpenAI, which alleged that lax safety policies contributed to a teenager's death.
OpenAI has been enhancing its teen safety features, including age assurance, to mitigate these risks. The company acknowledges that third-party developers have struggled to maintain consistent safety standards, and the new pack is intended to establish a meaningful safety floor for young users across the AI ecosystem.