OpenAI has unveiled a set of parental control tools for ChatGPT aimed at improving safety for teens and users in crisis. The features are expected to launch within the next month and mark a significant step toward more responsible AI use.
Under the new system, parents will be able to link their own accounts with those of teens aged 13 and older. This access will allow them to tailor how ChatGPT responds to their children based on age-appropriate guidelines and to disable features such as memory and chat history.
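To make the linked-account controls concrete, here is a minimal sketch of how such settings might be modeled. It is purely illustrative: OpenAI has not published an implementation, and every name below (ParentalControls, effective_features, and all field names) is an assumption made for the sketch.

```python
from dataclasses import dataclass

# Hypothetical sketch only: OpenAI has not published how linked-account
# controls are represented. All names and fields here are assumptions.

@dataclass
class ParentalControls:
    """Settings a linked parent account could apply to a teen account."""
    teen_min_age: int = 13              # linking is limited to teens 13 and older
    age_appropriate_rules: bool = True  # age-based response behavior
    memory_enabled: bool = False        # parents can disable memory
    chat_history_enabled: bool = False  # parents can disable chat history
    distress_alerts: bool = True        # alerts on signs of acute distress

def effective_features(controls: ParentalControls) -> dict[str, bool]:
    """Return the feature flags a teen's session would run with."""
    return {
        "memory": controls.memory_enabled,
        "chat_history": controls.chat_history_enabled,
        "age_appropriate_rules": controls.age_appropriate_rules,
        "distress_alerts": controls.distress_alerts,
    }

if __name__ == "__main__":
    controls = ParentalControls()
    print(effective_features(controls))
```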
A key safety measure is automatic alerts when the AI detects signs of acute emotional distress. In such cases, conversations may be routed to more advanced reasoning models, such as GPT-5-thinking, which are better equipped to handle sensitive exchanges.
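The routing idea can be sketched in a few lines. Again, this is not OpenAI's implementation: a production system would use a trained safety classifier rather than keyword matching, and the function names, model identifiers, and marker phrases below are assumptions chosen only to make the escalation logic concrete.

```python
# Hypothetical illustration: detect distress, then escalate to a
# reasoning model. All names here are assumptions for the sketch.

DEFAULT_MODEL = "gpt-5"             # assumed default chat model
REASONING_MODEL = "gpt-5-thinking"  # reasoning model named in the announcement

DISTRESS_MARKERS = (
    "i want to hurt myself",
    "i can't go on",
    "no reason to live",
)

def detect_acute_distress(message: str) -> bool:
    """Toy stand-in for a real safety classifier.

    Keyword matching is shown only to make the routing logic runnable;
    a real system would rely on a trained model, not string checks.
    """
    lowered = message.lower()
    return any(marker in lowered for marker in DISTRESS_MARKERS)

def route_model(message: str) -> str:
    """Escalate sensitive conversations to the reasoning model."""
    if detect_acute_distress(message):
        return REASONING_MODEL
    return DEFAULT_MODEL

if __name__ == "__main__":
    for text in ("What's the capital of France?", "I can't go on anymore"):
        print(f"{text!r} -> {route_model(text)}")
```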
This announcement responds to growing concerns raised by a lawsuit from the family of a 16-year-old who died by suicide after prolonged interaction with ChatGPT. The plaintiffs allege that the chatbot gave destructive advice and failed to intervene. In the wake of the suit, critics have called on OpenAI to build stronger protections into the product and to be more transparent about how it handles vulnerable users.
While the parental controls and distress alerts represent a significant shift, experts warn they may not go far enough. Implementing safeguards is one thing; ensuring that teens cannot bypass them, and that the system reliably identifies crises, is another. OpenAI has said these steps are only the beginning of a broader safety initiative, guided by mental health professionals over the next 120 days and beyond.