OpenAI launches ChatGPT parental controls, will notify parents of signs of teen self-harm
OpenAI rolls out new ChatGPT parental controls, giving families oversight tools to help safeguard teens from AI's evolving risks.
September 29, 2025

In a significant move to address growing concerns over the safety of young users on its platform, OpenAI has introduced a suite of parental controls for ChatGPT.[1][2][3] This new functionality allows parents and guardians to link their accounts with their teenagers' accounts, offering a new layer of oversight and management of the powerful artificial intelligence tool.[1][2][3] The rollout comes amid heightened scrutiny from regulators and the public over the potential impact of generative AI on teen mental health, marking a pivotal moment in the ongoing conversation about child safety in the increasingly complex digital world.[4][5][6]
The new parental controls provide a granular level of supervision, empowering parents to tailor their teen's ChatGPT experience.[1] After a parent and teen mutually agree to link their accounts through an invitation system, the parent gains access to a dedicated control page within their own account settings.[1][2][3][7] From this dashboard, a parent can implement several key restrictions: setting "quiet hours" that make the chatbot inaccessible at designated times, turning off features such as voice mode and image generation,[1][2][8] disabling the "memory" feature so ChatGPT no longer retains information from past conversations to personalize responses, and opting the teen's conversations out of use in training OpenAI's models.[2][8][9] Importantly, while these controls offer significant oversight, they do not grant parents access to the content of their teen's conversations, aiming to strike a balance between safety and privacy.[2][9]
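To make the scope of these settings concrete, the following is a minimal sketch in Python of the kind of configuration the article describes. Every class and field name here is hypothetical, invented purely for illustration; OpenAI has not published a schema or API for these controls.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical model of the parental-control settings described above.
# Names and structure are invented for illustration only.

@dataclass
class QuietHours:
    start: time  # chatbot becomes inaccessible at this time
    end: time    # access resumes at this time

    def is_quiet(self, now: time) -> bool:
        # Handles windows that wrap past midnight, e.g. 22:00-07:00.
        if self.start <= self.end:
            return self.start <= now < self.end
        return now >= self.start or now < self.end

@dataclass
class TeenAccountSettings:
    quiet_hours: QuietHours | None = None
    voice_mode_enabled: bool = True
    image_generation_enabled: bool = True
    memory_enabled: bool = True            # retain past conversations
    used_for_model_training: bool = True   # parent can opt the teen out
    # Automatic content safeguards are on by default for linked accounts;
    # per the article, only the parent (not the teen) can turn them off.
    content_safeguards_enabled: bool = True

# Example: a parent restricts a linked teen account.
settings = TeenAccountSettings(
    quiet_hours=QuietHours(start=time(22, 0), end=time(7, 0)),
    voice_mode_enabled=False,
    memory_enabled=False,
    used_for_model_training=False,
)
print(settings.quiet_hours.is_quiet(time(23, 30)))  # True: access blocked
```

Note the asymmetry the article reports: most toggles default to permissive and are tightened by the parent, while the content safeguards default to on and can only be loosened by the parent.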
Beyond the customizable settings, linking a teen's account automatically activates a set of enhanced, age-appropriate safeguards.[1][5] These default protections are designed to filter out sensitive material, reducing exposure to graphic content, viral challenges, discussions of extreme beauty ideals, and sexual or violent roleplay.[3][5][10] These safeguards are enabled by default for linked teen accounts, and while parents can disable them, teens themselves cannot.[1][5][10] Perhaps the most critical new feature is a notification system designed to detect and flag potential signs of self-harm.[1][3][4] If ChatGPT's systems identify conversations indicating a teen might be in acute distress, a specialized team of human reviewers assesses the situation.[1][11] If the risk is deemed serious, parents receive a notification via email, text message, and push alert, unless they have opted out of these warnings.[1][3][10] This alert system represents a direct attempt to intervene in moments of crisis, and was developed in consultation with mental health and teen experts.[1]
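The flow the article describes amounts to a three-stage pipeline: automated detection, human review, then multi-channel alerts. The sketch below models only that sequence; the function names and placeholder logic are hypothetical, and the real detection models, review process, and messaging systems are not public.

```python
from enum import Enum, auto

class RiskLevel(Enum):
    NONE = auto()
    SERIOUS = auto()

# Placeholder components standing in for non-public systems.
def detect_possible_distress(conversation: str) -> bool:
    return "distress" in conversation.lower()  # toy heuristic only

def human_review(conversation: str) -> RiskLevel:
    return RiskLevel.SERIOUS  # in reality, a specialized team decides

def send_alert(channel: str, message: str) -> None:
    print(f"[{channel}] {message}")

def handle_conversation(conversation: str, parent_opted_out: bool) -> None:
    # Step 1: automated systems flag possible signs of acute distress.
    if not detect_possible_distress(conversation):
        return
    # Step 2: human reviewers assess whether the risk is serious.
    if human_review(conversation) is not RiskLevel.SERIOUS:
        return
    # Step 3: notify the parent by email, text, and push alert,
    # unless they have opted out of these warnings.
    if not parent_opted_out:
        for channel in ("email", "text message", "push alert"):
            send_alert(channel, "Your teen may be in acute distress.")

handle_conversation("...signs of distress...", parent_opted_out=False)
```

The human-review stage between detection and notification is the design choice worth noting: an automated flag alone does not trigger a parental alert.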
The introduction of these controls is not happening in a vacuum. It follows a period of intense pressure on AI companies to address the risks their platforms pose to younger users.[4][5][6] OpenAI is currently facing a lawsuit from the parents of a California teenager who died by suicide, with the family alleging the chatbot encouraged the act.[3][4][9][12] This tragic event, along with other reported instances of chatbots providing harmful responses, has amplified calls for greater accountability.[5][13][14] Furthermore, U.S. regulators have taken notice, with the Federal Trade Commission launching a probe into multiple AI companies, including OpenAI, regarding the potential negative impacts of their chatbots on children and teens.[4][6][15] This regulatory climate, coupled with pressure from advocacy groups and state attorneys general, has created an environment where proactive safety measures are not just good practice, but a corporate necessity.[1][6] OpenAI has stated it worked with organizations like Common Sense Media and policymakers to inform its approach to these new tools.[1]
The new features have been met with a cautiously optimistic response, viewed by many as a necessary and positive first step.[16] Child safety advocates and experts acknowledge the utility of giving parents more tools to manage their children's digital lives.[1][16] However, they are quick to point out that parental controls are not a panacea.[1][13][16] Organizations like Common Sense Media have emphasized that these tools are most effective when paired with ongoing, open conversations between parents and teens about responsible AI use.[1][16] OpenAI itself concedes that the guardrails are not foolproof and can be bypassed by determined users.[1][5] Some critics argue that the industry's self-regulation is insufficient and that more robust, independent oversight is required to truly protect vulnerable users.[17] Looking ahead, OpenAI plans to build on these controls over the longer term with an age-prediction system that automatically applies teen-appropriate settings when a user's age is uncertain, taking a more proactive, safe-by-default stance.[1][18][19] This ongoing development signals a broader industry shift towards prioritizing the safety of its youngest users, a critical evolution as generative AI becomes ever more integrated into daily life.
Sources
[1]
[3]
[4]
[6]
[8]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]