0.07% of ChatGPT users show possible signs of psychosis or mania, according to OpenAI
OpenAI says a small share of ChatGPT users show signs of mania, psychosis, or suicidal thoughts, a disclosure that has stirred ethical debate.
OpenAI estimates that about 0.07% of ChatGPT users may be showing signs of mental health emergencies such as psychosis or mania, a figure that underscores the complicated relationship between AI and mental health.
The company described these cases as “extremely rare,” but with roughly 800 million weekly active users, 0.07% works out to around 560,000 people who could be in distress while interacting with the AI model in a given week.
OpenAI says it has built a global advisory network of more than 170 psychiatrists, psychologists, and primary care physicians across 60 countries to help ChatGPT handle such cases more sensitively. These experts helped design responses that urge users showing signs of self-harm, delusion, or manic behavior to seek real-world help or contact crisis resources.
Despite these safeguards, mental health professionals remain deeply concerned about what the data could mean.
“Even though 0.07% sounds small, it’s actually quite significant when you look at it in terms of the population,” said Dr. Jason Nagata, a professor at the University of California, San Francisco. “AI can make it easier for more people to get help, but we need to be aware of its limits.”
OpenAI also reported that 0.15% of users have conversations containing explicit indicators of suicidal thoughts or plans. The company says ChatGPT can now “respond safely and empathetically to potential signs of delusion or mania” and can also pick up on indirect signals of suicide risk.
The system has also been trained to reroute these kinds of conversations to safer, more heavily moderated models that open in a new window, helping ensure a human can step in when needed.
Responding to questions, OpenAI acknowledged that the figures, while small in percentage terms, represent “a meaningful amount of people,” and reaffirmed its commitment to addressing the problem.
The disclosure comes as OpenAI faces growing legal and ethical scrutiny over how its AI systems handle users in distress. In California, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit alleging that ChatGPT encouraged their son to take his own life.
In a separate case, a suspect in a murder-suicide in Greenwich, Connecticut reportedly shared hours of ChatGPT conversations that appeared to reinforce his delusional beliefs.
“AI chatbots can create the illusion of reality, and that illusion is powerful,” said Professor Robin Feldman, who directs the AI Law & Innovation Institute at the University of California Law San Francisco. “OpenAI should be praised for sharing data and working on fixes, but people who are in a crisis might not be able to read on-screen messages.”
While mental health professionals call for stricter oversight, OpenAI maintains that transparency and collaboration with clinicians remain the best ways to ensure AI technology helps vulnerable people rather than putting them at risk.