
OpenAI rolls out safety routing system, parental controls on ChatGPT 

OpenAI began testing a new safety routing system in ChatGPT over the weekend, and on Monday introduced parental controls to the chatbot — drawing mixed reactions from users.

The safety features come in response to numerous incidents of certain ChatGPT models validating users’ delusional thinking instead of redirecting harmful conversations. OpenAI is facing a wrongful death lawsuit tied to one such incident, in which a teenage boy died by suicide after months of interactions with ChatGPT.

The routing system is designed to detect emotionally sensitive conversations and automatically switch mid-chat to GPT-5-thinking, which the company sees as the model best equipped for high-stakes safety work. In particular, the GPT-5 models were trained with a new safety feature that OpenAI calls “safe completions,” which allows them to answer sensitive questions in a safe way rather than simply refusing to engage.

It’s a contrast to the company’s previous chat models, which were designed to be agreeable and answer questions quickly. GPT-4o has come under particular scrutiny because of its overly sycophantic, agreeable nature, which has both fueled incidents of AI-induced delusions and drawn a large base of devoted users. When OpenAI rolled out GPT-5 as the default in August, many users pushed back and demanded access to GPT-4o.

While many experts and users have welcomed the safety features, others have criticized what they see as an overly cautious implementation, with some users accusing OpenAI of treating adults like children in a way that degrades the quality of the service. OpenAI has suggested that getting it right will take time and has given itself a 120-day period of iteration and improvement.  

Nick Turley, VP and head of the ChatGPT app, acknowledged the “strong reactions to 4o responses” following the router’s rollout and offered an explanation of how it works.

“Routing happens on a per-message basis; switching from the default model happens on a temporary basis,” Turley posted on X. “ChatGPT will tell you which model is active when asked. This is part of a broader effort to strengthen safeguards and learn from real-world use before a wider rollout.” 


The implementation of parental controls in ChatGPT received similar levels of praise and scorn, with some commending the tools for giving parents a way to keep tabs on their children’s AI use, and others fearing they open the door to OpenAI treating adults like children.

The controls let parents customize their teen’s experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts will also get additional content protections — like reduced graphic content and extreme beauty ideals — and a detection system that recognizes potential signs that a teen might be thinking about self-harm.  

“If our systems detect potential harm, a small team of specially trained people reviews the situation,” per OpenAI’s blog. “If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out.” 

OpenAI acknowledged that the system won’t be perfect and may sometimes raise alarms when there isn’t real danger, “but we think it’s better to act and alert a parent so they can step in than to stay silent.” The AI firm said it is also working on ways to reach law enforcement or emergency services if it detects an imminent threat to life and cannot reach a parent. 
