AI Firm Sued Over Shooter's ChatGPT Account
11 Mar
Summary
- OpenAI account suspended before mass shooting.
- Lawsuit claims AI company ignored violent intentions.
- Victim's family seeks truth about the attack.

A lawsuit has been filed against OpenAI by the family of Maya Gebala, a student injured in a mass shooting in British Columbia. The family accuses the artificial intelligence company of negligence for not alerting authorities about disturbing content found in the shooter's ChatGPT account.
On February 10th, 18-year-old Jesse Van Rootselaar fatally shot her mother, brother, five students, and an educator before taking her own life. Maya Gebala was critically wounded and remains hospitalized in Vancouver, undergoing multiple brain surgeries.
Eight months prior to the attack, OpenAI suspended an account associated with Van Rootselaar for violating user agreements, noting her documented fascination with violence. The lawsuit claims OpenAI was aware of her violent intentions and use of its AI for planning scenarios, including mass casualty events.
OpenAI stated that its automated systems flagged messages from the shooter's account, which were then manually reviewed. The company said it considered alerting Canadian officials but decided against it, citing user privacy concerns and its determination that the messages did not meet its threshold for imminent planning. Following the incident, Canadian officials met with OpenAI, which pledged to strengthen its safeguards and improve detection of banned users who create new accounts.
British Columbia's premier announced that OpenAI's CEO, Sam Altman, agreed to apologize and offer support to the Tumbler Ridge community. A public inquest has been ordered by the chief coroner of British Columbia to investigate the circumstances leading to the attack.