AI Lawsuit: OpenAI Sued After School Shooting
10 Mar
Summary
- Family sues OpenAI, alleging the company's AI could have prevented a school shooting.
- Eight people, including five students, died in the February 10th attack.
- Injured child suffered catastrophic brain injury, permanent disability.

A family has filed a lawsuit against OpenAI, alleging the company's artificial intelligence could have prevented a recent mass shooting. The February 10th attack killed eight people, including five students aged 12 to 13 and a teaching assistant. One child critically injured in the attack sustained a catastrophic traumatic brain injury and permanent physical and cognitive disabilities.
The lawsuit claims that OpenAI released ChatGPT without sufficient safety studies. The family is seeking unspecified punitive damages, calling the company's conduct "reprehensible and morally repugnant." OpenAI stated that while the shooter's account activity was flagged, it did not indicate "credible or imminent planning."
OpenAI CEO Sam Altman has pledged to apologize to the affected families. British Columbia Premier David Eby has expressed frustration at the absence of regulations requiring AI companies to report violent content to authorities, saying the status quo is unacceptable and must change urgently.
