OpenAI debated calling police about suspected Canadian shooter’s chats
What Happened
Automated tools that monitor ChatGPT for misuse flagged Jesse Van Rootselaar's descriptions of gun violence.
Our Take
This is the scariest headline of the five. Nobody's figured out whose job it is to report suspected violence, and that's going to bite someone.
The fact that OpenAI had to debate it means there is no playbook. At 100M+ daily users, you can't manually review everything. You're exposed either way: report wrongly and you face harassment suits; stay silent and you face complicity claims. And automated safety tools still can't reliably distinguish credible threats from dark humor.
There's no law yet defining an AI company's 'duty to report.' Whatever OpenAI does here sets precedent in real time.
What To Do
Document your decision tree for reporting suspicious behavior now, before you're in a crisis.
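Here's a minimal sketch of what "document your decision tree" could look like in practice. Everything in it is a placeholder assumption: the field names, the thresholds, and the triage logic are hypothetical, not OpenAI's process or any legal standard. The point is that the branches are written down, reviewed by counsel, and applied consistently instead of being invented mid-crisis.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    DISMISS = auto()           # log and close
    HUMAN_REVIEW = auto()      # queue for a trained reviewer
    LEGAL_ESCALATION = auto()  # route to counsel before any report
    REPORT = auto()            # contact law enforcement; reached only by
                               # humans downstream of LEGAL_ESCALATION,
                               # never returned by this function

@dataclass
class Flag:
    """One flagged conversation. All fields are hypothetical outputs
    of whatever moderation tooling you already run."""
    threat_score: float            # 0.0-1.0 from an automated classifier
    names_specific_target: bool    # a real, identifiable person or place
    mentions_means_or_plan: bool   # weapons, dates, locations
    prior_flags: int               # history for this account

def triage(flag: Flag) -> Action:
    """Example decision tree for a flagged conversation.
    Thresholds are illustrative placeholders."""
    # Low score with no specific target: likely dark humor or fiction.
    if flag.threat_score < 0.3 and not flag.names_specific_target:
        return Action.DISMISS

    # Specific target plus means/plan is the classic credibility test;
    # never let software make the final report call on its own.
    if flag.names_specific_target and flag.mentions_means_or_plan:
        return Action.LEGAL_ESCALATION

    # Repeat flags or a high score raise the stakes enough for a human.
    if flag.prior_flags >= 2 or flag.threat_score >= 0.7:
        return Action.HUMAN_REVIEW

    return Action.HUMAN_REVIEW if flag.threat_score >= 0.3 else Action.DISMISS
```

Your version will differ; what matters is that it exists before the 3 a.m. phone call, so the person on duty is executing a policy rather than improvising one.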
