OpenAI Banned Suspect's ChatGPT Account Months Before Tumbler Ridge Shooting

OpenAI banned a suspect's account for promoting violent ideas about eight months before the Tumbler Ridge shooting, but did not warn police.

Warning Signs From AI Use Went Unreported Before the Shooting

About eight months before the deadly shooting in Tumbler Ridge, British Columbia, an account associated with the suspect, Jesse Van Rootselaar, was flagged and banned by OpenAI, the company behind ChatGPT, because its activity was judged to be in furtherance of violent activities. Despite these internal flags, OpenAI did not refer the matter to law enforcement at the time, having determined that the activity did not show credible or imminent planning of violence. Only after the tragedy did OpenAI contact the RCMP with information about the suspect's use of ChatGPT.

Timeline of Events and Actions

The events leading up to and following the Tumbler Ridge tragedy reveal a pattern of internal flagging that was never escalated to authorities.

  • June 2025: OpenAI identified an account linked to Jesse Van Rootselaar through tools designed to detect the misuse of AI for violent activities. The account was banned for violating usage policies.

  • June 2025: OpenAI considered referring the account to law enforcement but concluded that the activity did not show credible or imminent planning of violence, thus not meeting their threshold for referral.

  • February 10, 2026: Jesse Van Rootselaar carried out a mass shooting in Tumbler Ridge, B.C., resulting in eight deaths.

  • Post-February 10, 2026: OpenAI proactively reached out to the RCMP with information about Van Rootselaar's use of ChatGPT.

Digital Footprints and Platform Responses

The investigation into the Tumbler Ridge shooting has brought to light the suspect's online activities across multiple platforms.


  • OpenAI (ChatGPT):
      • Banned Van Rootselaar's account in June 2025 for violating usage policies, specifically for activity in "furtherance of violent activities."
      • Determined the activity did not meet the threshold for referral to law enforcement, citing a lack of credible or imminent planning.
      • Stated that ChatGPT is trained to discourage advice that could result in immediate physical harm.
      • Contacted the RCMP after the shooting to share information.
      • Is reviewing its referral criteria in light of the incident.

  • Roblox:
      • A Roblox account associated with Van Rootselaar was also banned.
      • Reports indicate this account was used in a game promoting a virtual shooting scenario.
      • Roblox stated it is fully supporting law enforcement in the investigation.

  • YouTube:
      • YouTube also provided statements to Global News, but specific details about its handling of Van Rootselaar's account have not been made public.

Deliberations on Reporting Thresholds

OpenAI's decision not to report Van Rootselaar's account to authorities before the shooting hinged on its internal criteria for escalating such matters.


"The company considered referring the account to law enforcement at the time, but didn’t identify credible or imminent planning and determined it didn’t meet the threshold."– OpenAI statement, as reported by Business-Standard

The company indicated that while tools flagged misuse for violent activities, the specific content did not meet their established criteria for a mandatory report. This decision-making process also reportedly considers the potential distress of over-reporting, such as unannounced police visits to individuals' homes.

"OpenAI said it avoids over-enforcement of these policies because it can be distressing when, for example, police show up at the account holder’s home unannounced."– Toronto.CityNews.ca

This approach reflects a balancing act between risk mitigation and user privacy concerns.


Post-Tragedy Communication with Law Enforcement

Following the mass shooting, OpenAI initiated contact with the Royal Canadian Mounted Police (RCMP).

"RCMP confirmed to CBC News that the platform reached out after the shooting, but said OpenAI had only flagged the account internally at first."– CBC News

OpenAI confirmed they proactively contacted the RCMP with information regarding Van Rootselaar's use of ChatGPT after learning of the shooting. The company pledged to continue supporting the investigation.


Expert Analysis and Unanswered Questions

The sequence of events prompts significant questions about the adequacy of AI platform moderation policies and their effectiveness in preventing real-world violence.


  • Was the threshold for reporting to law enforcement too high, or was it misapplied in this specific instance?

  • Could the internal flagging mechanisms at OpenAI have provided more actionable intelligence if interpreted differently?

  • How do platforms balance the detection of harmful content with privacy considerations and the potential for false positives?

The internal flagging of Van Rootselaar's account suggests that OpenAI's detection tools are capable of identifying concerning patterns. However, the decision not to escalate those concerns to authorities before the shooting remains a critical point of scrutiny.

Conclusion and Implications

The case of Jesse Van Rootselaar and their ChatGPT account highlights a complex intersection of artificial intelligence, online behavior, and public safety. OpenAI's internal processes flagged and banned the account due to its use in furthering violent activities, a measure taken months before the tragic shooting in Tumbler Ridge. However, the company's determination that this activity did not meet the threshold for referral to law enforcement means authorities were not alerted to the potential risk.

After the shooting, OpenAI did engage with the RCMP, sharing information about the suspect's interaction with their platform. This post-event cooperation, while present, does not alter the fact that a potential warning sign was identified and actioned internally without external notification. The company's statement regarding the review of its referral criteria suggests an acknowledgment of the need to re-evaluate these processes.


The ban on a related Roblox account further underscores a pattern of problematic online behavior by the suspect. The incident raises crucial questions about the responsibility of AI companies in identifying and reporting threats, the efficacy of their moderation policies, and the balance between preventing harm and respecting user privacy. Future investigations will likely focus on the specific nature of the flagged content and the precise criteria that governed OpenAI's decision not to involve law enforcement sooner.

Sources

  • Globalnews.ca: Reports on Roblox and YouTube statements, and notes that OpenAI considered referring the account but concluded the activity did not meet its threshold.

  • CBC News: Details OpenAI's ban in June 2025, their internal flagging, and post-shooting outreach to RCMP.

  • Business-Standard: Mentions the ban occurring eight months prior, the use of tools to detect misuse, and the threshold determination.

  • Toronto.CityNews.ca: States the ban occurred in 2025, the reason was misuse for "violent activities," and mentions avoiding over-enforcement due to distressing outcomes.

  • RED 89.1FM / 93.1FM Vancouver: Notes the suspect's online footprint extended beyond ChatGPT and mentions a banned Roblox account used in a virtual shooting game.

  • News18: Highlights that OpenAI chose not to report Van Rootselaar and that warning signs surfaced months earlier, with the suspect spending days describing gun-violence scenarios.

  • The Globe and Mail: Confirms the account was banned for violating usage policy, and that the posts did not meet the threshold for notifying law enforcement, also mentioning privacy concerns.

  • Castanet.net: Reports OpenAI contacted police after the shootings, and that employees considered alerting authorities months prior, citing the Wall Street Journal.

  • The Star: States the account was banned months before the tragedy and that the company did not warn authorities.

Frequently Asked Questions

Q: Why did OpenAI ban the suspect's ChatGPT account before the Tumbler Ridge shooting?
OpenAI banned the account in June 2025 because it was used to further violent ideas. This action was taken months before the shooting in February 2026.
Q: Did OpenAI tell the police about the suspect's banned ChatGPT account before the shooting?
No, OpenAI did not tell the police at the time. The company determined the account's activity did not show credible or imminent planning of violence and therefore did not meet its threshold for referral.
Q: What happened after the Tumbler Ridge shooting regarding OpenAI and the suspect?
After the shooting, OpenAI contacted the RCMP to share information about the suspect's use of ChatGPT. The company is also reviewing its rules for reporting concerns.
Q: Were other online accounts linked to the suspect also banned?
Yes, a Roblox account associated with the suspect was also banned. Reports suggest this account was used in a game that promoted a virtual shooting scenario.
Q: Why did OpenAI not report the suspect's activity to law enforcement sooner?
OpenAI stated that while their tools flagged misuse for violent activities, the specific content did not meet their criteria for reporting to law enforcement. They also mentioned concerns about over-reporting causing distress.