Over One Million ChatGPT Users Discuss Suicide Each Week, OpenAI Reveals in Sobering Mental Health Report

Photo by Teena Lalawat on Unsplash

When AI becomes your confidant, what happens next?

That’s the question lingering after OpenAI quietly dropped a report this week showing something both surprising and unsettling: over one million people have said something to ChatGPT that indicates suicidal thoughts—in just a single week.

Let that sink in. Out of the platform’s 800 million weekly users, 0.15% have expressed signs of suicidal planning or intent in their conversations with the chatbot.

And that’s not all.

According to OpenAI, similarly small but meaningful percentages of users show signs of psychosis or mania, or form strong emotional attachments to the AI. The percentages look tiny, but at ChatGPT’s scale they translate into enormous numbers of people (a rough back-of-the-envelope tally follows the list below):

  • 0.15% of weekly users (over 1 million people) show indicators of suicidal planning or intent
  • 0.07% show possible signs of psychosis or mania
  • 0.15% show signs of heightened emotional attachment to ChatGPT
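For a sense of what those percentages mean in absolute terms, here is a minimal back-of-the-envelope sketch in Python. The 800 million weekly-user figure and the three percentages come straight from the article; the rounded head counts are simple multiplication, not additional data from OpenAI.

```python
# Rough scale check: convert the reported percentages into approximate head counts.
# Figures are taken from the article; the definitions behind each category are OpenAI's.

weekly_users = 800_000_000  # reported weekly ChatGPT users

reported_shares = {
    "indicators of suicidal planning or intent": 0.0015,  # 0.15%
    "possible signs of psychosis or mania": 0.0007,       # 0.07%
    "heightened emotional attachment": 0.0015,            # 0.15%
}

for label, share in reported_shares.items():
    # 0.15% of 800M is about 1.2 million; 0.07% is about 560,000
    print(f"{label}: roughly {weekly_users * share:,.0f} people per week")
```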

From Fun Tool to Mental Health Touchpoint

Photo by Marcel Strauß on Unsplash

Initially, most people saw ChatGPT as a fun, sometimes helpful tool. But now it’s something bigger. For some, it’s become a conversation partner during emotional distress. It’s the first time in history that millions are openly discussing personal struggles with a machine—and expecting support, empathy, or even life-saving advice.

But is ChatGPT ready for that kind of responsibility?

OpenAI seems to be trying. In the announcement released Monday, the company said it’s now upgraded ChatGPT’s ability to handle sensitive topics with more care. The newer GPT-5 model, they say, performs far better in challenging mental health-related conversations—responding appropriately 92% of the time compared to just 27% with a previous version.

They didn’t do this alone. According to OpenAI, more than 170 mental health professionals were consulted to help shape how the AI responds in these high-stakes moments. The goal? Help the chatbot do three key things:

  1. Recognize signs of emotional distress
  2. De-escalate risky conversations
  3. Gently steer users toward professional care when needed

Trouble in the Real World

Despite the improvements, serious problems have already taken root. OpenAI is facing a lawsuit over the death of a 16-year-old who reportedly shared suicidal thoughts with ChatGPT in the period before he died. That tragic case, and the system’s failure to intervene, has drawn major scrutiny.

In response, 45 state attorneys general sent a warning: protect young users, or face consequences.

The pressure pushed OpenAI to act quickly. The company rolled out new parental controls, is working on an age detection tool to automatically enforce safeguards for children, and even announced the formation of a “wellness council”—though critics pointed out it lacks a suicide prevention expert.

Can a Chatbot Be Too Caring?

Photo by Andrey K on Unsplash

Interestingly, OpenAI isn’t just worried about helping distressed users. They’re also watching how “close” people get to ChatGPT emotionally. By the company’s own figures above, more than a million weekly users show signs of emotional dependence on the bot, or of forming pseudo-relationships with it.

And those concerns cut both ways. While OpenAI has clamped down on sensitive content in recent months, especially after the teen suicide lawsuit, CEO Sam Altman recently shared that the company plans to let verified adult users engage in erotic roleplay with ChatGPT again by December.

According to Altman, making the chatbot “pretty restrictive” for the sake of safety made it “less useful/enjoyable” for users without mental health concerns.

So now, the company is trying to walk a tightrope: making ChatGPT feel helpful, honest, and warm, but not so human that it draws people in too deeply—or fails to notice when someone is in real danger.

What’s Next?

OpenAI knows it’s operating in sensitive territory. As part of its next steps, the company says it will add new safety benchmarks that cover not just direct indicators of suicidal thinking, but also emotional reliance and other signs of mental health distress in user behavior.

The goal isn’t to diagnose—but to flag patterns that require gentler, safer responses.

Still, the fundamental tension remains: ChatGPT is designed to be relatable. That makes it comforting, personal—and sometimes dangerously persuasive. Especially if you’re feeling vulnerable, alone, or unseen.

One million voices talk to ChatGPT each week about suicide.

That’s not just a tech statistic—it’s a mental health moment. And it’s one we can’t ignore.


If you or someone you know is feeling suicidal or in emotional distress, call or text the 988 Suicide & Crisis Lifeline at 988 (formerly 1-800-273-TALK). You’ll be directed to a local crisis center that can offer support.

Keywords: ChatGPT, suicide, mental health, OpenAI, AI technology, emotional attachment, AI lawsuit


