OpenAI Enhances ChatGPT’s Teen Safety and Mental Health Protections
New features improve AI protections for children and give parents more control.
OpenAI is introducing new safety features to protect teens who use ChatGPT.
Soon, parents will be able to link their accounts with their teens', set age-appropriate guidelines for the AI's responses, and control features such as chat history and memory. These changes are designed to give parents more oversight of how their teenagers use the platform.
Beyond account controls, OpenAI will add notifications that alert parents when ChatGPT detects a teen in "acute distress."
For the first time, the AI will be able to warn an adult before a minor reaches a potentially dangerous moment, a crucial step toward preventing harm.
The company is also addressing vulnerabilities in longer conversations. Acknowledging that safety precautions can degrade over extended exchanges, OpenAI plans to strengthen its mitigations so the model behaves consistently across many messages.
Some sensitive conversations will now be routed to OpenAI's reasoning models, which weigh context carefully before responding. According to internal testing, these models follow safety guidelines more reliably than the standard models.
To support these safeguards, OpenAI is expanding its advisory structure. The Expert Council on Well-Being, made up of specialists in youth development, mental health, and human-computer interaction, will advise on policy decisions, research priorities, and product design.
The council will work alongside the Global Physician Network, a group of more than 250 medical experts who provide input on safety research, model training, and interventions.
These improvements build on the protections first introduced with GPT-5 and on earlier steps to address cases in which the AI failed to recognize emotional distress or delusional thinking. OpenAI continues to refine its approach as it faces growing scrutiny over the use of AI for life advice and emotional support.
Calls for stronger safeguards intensified following the tragic death of 16-year-old Adam Raine in California. His parents filed a wrongful-death lawsuit against OpenAI, alleging that ChatGPT gave their son dangerous advice when he expressed suicidal thoughts.
Although the case has not yet been decided, it underscores the urgent need for stronger monitoring and robust safety safeguards for adolescent users.