OpenAI adds new teen safety rules to ChatGPT as lawmakers weigh AI standards for minors
What Happened
OpenAI updated its guidelines for how its AI models should behave with users under 18, and published new AI literacy resources for teens and parents. Still, questions remain about how well policies translate into practice.
Our Take
Published guidelines cost nothing. Enforcement costs everything. OpenAI got the headline—'new teen safety rules'—and publishers ran with it.
But guidelines without age verification, without meaningful intervention, without edge-case handling? That's just noise. The 'AI literacy resources' are a nice gesture, but they don't actually block harm.
Lawmakers get theater, OpenAI gets a press win, teens get... blog posts? This is the minimum viable appearance of responsibility. Call it what it is: compliance kabuki. Real protection would require infrastructure, moderation, actual friction. That's expensive. Easier to publish a PDF.
What To Do
Compare OpenAI's published teen safety metrics against actual usage patterns from external audits—the gap is where the story is.