TechCrunch

OpenAI debated calling police about suspected Canadian shooter’s chats

What Happened

Jesse Van Rootselaar's descriptions of gun violence were flagged by tools that monitor ChatGPT for misuse.

Our Take

This is the scariest headline of the five. Nobody's figured out whose job it is to report suspected violence, and that's going to bite someone.

The fact that OpenAI had to debate it at all means there's no playbook. At 100M+ daily users, you can't manually review everything. And you're liable either way: report wrongly and face harassment suits; fail to report and face complicity claims. Meanwhile, safety tooling can't reliably distinguish a credible threat from dark humor.

There's no law yet defining 'an AI company's duty to report.' They're setting precedent in real-time.

What To Do

Document your decision tree for reporting suspicious behavior now, before you're in a crisis.
