OpenAI Unveils Emergency ChatGPT Alerts Following Accusations from Several Families Linking AI to Suicide Incidents

The company states that the new safeguard can notify family or friends when users exhibit signs of self-harm during ChatGPT conversations.

OpenAI is introducing a new “Trusted Contact” feature to ChatGPT in response to increasing scrutiny regarding the handling of conversations related to self-harm and suicide by AI chatbots.

On Thursday, the company revealed that adult users now have the option to designate a trusted individual, such as a parent, sibling, partner, or friend, to their account. If ChatGPT identifies indications of significant self-harm risk during a discussion, it may prompt the user to reach out to that individual directly while also notifying the trusted contact with a concise alert.

OpenAI says the notification will not share private chat details and will only prompt the contact to “check in” with the user. Conversations flagged as potentially dangerous are reviewed by both automated systems and human safety teams, and the company says it aims to review these safety notifications in under one hour.

The rollout comes as AI companies face increasing legal scrutiny over mental health harms stemming from chatbot interactions. Several lawsuits have been filed against OpenAI and Character.AI, claiming that at-risk users, particularly teenagers, were drawn into damaging conversations involving suicide, emotional dependency, or self-harm.

One of the most widely reported cases involved 14-year-old Sewell Setzer III, whose family alleged that a Character.AI chatbot fostered an emotional attachment with him before his death by suicide. A separate lawsuit has been filed against OpenAI, alleging that ChatGPT validated suicidal thoughts expressed by 16-year-old Adam Raine before his death in 2025.

Researchers have also raised concerns about the psychological risks of AI companionship platforms. Several academic studies published in the past two years have found that emotionally dependent chatbot use can exacerbate feelings of isolation, foster unhealthy attachments, and heighten risks for vulnerable individuals already facing mental health challenges.

“Trusted Contact is an integral aspect of OpenAI’s comprehensive initiative to develop AI systems that assist individuals in challenging times,” the company stated in its announcement.
