Over 1 Million Users Talk to ChatGPT About Suicide Each Week

For the first time, OpenAI is revealing a rough estimate of how many people talk to ChatGPT about suicide and other problematic topics. 

On Monday, the company published a blog post about “strengthening” ChatGPT’s responses to sensitive conversations amid concerns the AI program can mistakenly steer teenage users toward self-harm and other toxic behavior. Some have also complained to regulators about the chatbot allegedly worsening people’s mental health issues. 

To tackle the problem, OpenAI said it first needed to measure the scale of these conversations across ChatGPT’s more than 800 million weekly active users. 

Overall, OpenAI found that “mental health conversations that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare.” But because ChatGPT’s user base is so vast, even a small percentage can represent hundreds of thousands of people. 

On self-harm, the company’s initial analysis “estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent.” That translates to about 1.2 million users. In addition, OpenAI found that 0.05% of ChatGPT messages contained “explicit or implicit indicators of suicidal ideation or intent.”

The company also looked at how many users exhibit symptoms “of serious mental health concerns, such as psychosis and mania, as well as less severe signals, such as isolated delusions.” About “0.07% of users active in a given week,” or around 560,000 users, exhibited possible “signs of mental health emergencies related to psychosis or mania,” OpenAI said. 

Meanwhile, 0.15% of the active weekly users showed indications of an emotional reliance on ChatGPT. In response, the company says it updated the chatbot with the help of more than 170 mental health experts. This includes programming ChatGPT to advocate for connections with real people if a user mentions preferring to talk with AI over humans. ChatGPT will also try to gently push back on user prompts clearly out of touch with reality. 

“Let me say this clearly and gently: No aircraft or outside force can steal or insert your thoughts,” ChatGPT said in one example, according to OpenAI. 

The company’s research shows the new ChatGPT “now returns responses that do not fully comply with desired behavior under our taxonomies 65% to 80% less often across a range of mental health-related domains.” The new model, which rolls out today, also promises to nudge people to seek professional help when necessary. But some users are already reporting that the new ChatGPT reacts too strongly to any hint of mental distress. 

“I had to move over to Gemini because I felt so gaslit by ChatGPT. It kept accusing me of being in crisis when I most certainly was not,” wrote one user on Reddit.

