ChatGPT raises mental health concerns as OpenAI discloses scale of suicide risk
OpenAI reveals that over a million weekly ChatGPT users send messages showing potential suicidal intent, and says its latest GPT-5 model includes updates to improve mental health safeguards.
The OpenAI logo is displayed on a cell phone in front of an image generated by ChatGPT's Dall-E text-to-image model, December 8, 2023, in Boston. (AP Photo/Michael Dwyer)
More than a million ChatGPT users each week send messages containing indicators of potential suicidal intent, according to a new update from OpenAI, intensifying scrutiny of the mental health impact of artificial intelligence tools.
The revelation came in a blog post published by the company on Monday, marking one of OpenAI’s most direct acknowledgments of how its widely used chatbot may be intersecting with mental health crises.
According to the post, OpenAI separately estimated that 0.07% of ChatGPT’s weekly active users, around 560,000 of its reported 800 million, may be exhibiting signs of mental health emergencies such as psychosis or mania.
The company emphasized that the analysis was preliminary and the conversations were difficult to quantify with precision.
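For context, the percentages translate into absolute weekly user counts as a matter of simple arithmetic. A back-of-the-envelope check, where the 800 million user base and the 0.07% figure come from OpenAI’s post and the roughly 0.15% suicidal-intent share is inferred from “more than a million” users:

```python
# Back-of-the-envelope check of the figures in OpenAI's post.
# The 800 million weekly-user base and the 0.07% psychosis/mania
# estimate are stated in the post; the ~0.15% suicidal-intent share
# is inferred from "more than a million" weekly users.

WEEKLY_ACTIVE_USERS = 800_000_000

estimates = {
    "possible suicidal planning or intent": 0.0015,  # ~0.15% (inferred)
    "possible psychosis or mania": 0.0007,           # 0.07% (stated)
}

for label, share in estimates.items():
    print(f"{label}: ~{share * WEEKLY_ACTIVE_USERS:,.0f} users/week")
# possible suicidal planning or intent: ~1,200,000 users/week
# possible psychosis or mania: ~560,000 users/week
```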
The disclosure follows heightened public scrutiny, including a lawsuit filed by the family of a teenager who died by suicide after prolonged engagement with ChatGPT. The Federal Trade Commission is also investigating OpenAI and other AI developers over how they assess harm to children and teens.
Read more: OpenAI raises up to $40bn in record-breaking deal with SoftBank
GPT-5 includes new safeguards
OpenAI stated that its latest model, GPT-5, has demonstrated improvements in handling sensitive situations. In a safety evaluation involving over 1,000 interactions related to suicide prevention and self-harm, the new model was rated 91% compliant with OpenAI’s safety guidelines, up from 77% in earlier versions.
“Our new automated evaluations score the new GPT‑5 model at 91% compliant with our desired behaviors,” the blog post stated.
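OpenAI has not published its evaluation harness, but a headline figure like 91% is simply the share of graded responses judged compliant with the safety policy. A minimal sketch of that computation, using hypothetical grading data rather than OpenAI’s actual evaluation set:

```python
# Illustrative only: OpenAI has not released its evaluation code.
# A compliance score like the reported 91% is the fraction of
# evaluated responses a grader marks as meeting the safety policy.

def compliance_rate(graded_responses: list[bool]) -> float:
    """Fraction of responses graded as policy-compliant."""
    return sum(graded_responses) / len(graded_responses)

# Hypothetical grades over a 1,000-interaction evaluation set:
grades = [True] * 910 + [False] * 90
print(f"{compliance_rate(grades):.0%} compliant")  # 91% compliant
```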
The company added that GPT-5 now includes features like expanded access to crisis hotlines and built-in reminders for users to take breaks during extended sessions.
As part of its efforts to improve responses in critical scenarios, OpenAI collaborated with 170 clinicians through its Global Physician Network. The team of psychiatrists and psychologists reviewed over 1,800 AI responses to assess their safety and appropriateness in severe mental health cases.
The company defined “desirable” behavior as responses that matched expert consensus on appropriate support in high-risk situations.
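The post does not detail how that expert consensus was derived. One common approach, sketched below with hypothetical labels rather than OpenAI’s methodology, is to take the majority judgment of several clinician reviewers as the reference answer for each response:

```python
# Hypothetical sketch: deriving a consensus label from multiple
# clinician ratings of a single model response. OpenAI's actual
# consensus methodology is not public.
from collections import Counter

def consensus(labels: list[str]) -> str:
    """Majority label among clinician reviewers."""
    return Counter(labels).most_common(1)[0][0]

ratings = ["desirable", "desirable", "undesirable", "desirable"]
print(consensus(ratings))  # desirable
```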
Read more: Study finds AI now writes more web articles than humans
Mounting ethical concerns around AI and mental health
Experts in AI and mental health have long warned that large language models risk sycophancy, the tendency to mirror and validate users’ statements, even those that may reflect delusions or harmful intentions. Mental health advocates have also expressed concerns about the use of chatbots as informal therapy tools for vulnerable individuals.
OpenAI, in its post, appeared to distance itself from any direct causal link between ChatGPT and mental health crises.
“Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations,” the company stated.
It sought to contextualize the figures, adding: "the mental health conversations that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare. Because they are so uncommon, even small differences in how we measure them can have a significant impact on the numbers we report."
OpenAI added that its "mental health taxonomy is designed to identify when users may be showing signs of serious mental health concerns, such as psychosis and mania, as well as less severe signals, such as isolated delusions."
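As a rough illustration of what a tiered taxonomy implies in practice (the tier names and routing logic below are hypothetical, not OpenAI’s), a classifier would map a conversation to a severity level that determines how the model responds:

```python
# Hypothetical severity tiers for a mental health taxonomy; the
# category names and handling policies are illustrative, not OpenAI's.
from enum import Enum

class Severity(Enum):
    NONE = 0       # no mental health signal detected
    MILD = 1       # less severe signals, e.g. isolated delusions
    EMERGENCY = 2  # serious signals, e.g. psychosis, mania,
                   # suicidal thinking

def response_policy(severity: Severity) -> str:
    """Map a detected severity tier to a handling policy."""
    if severity is Severity.EMERGENCY:
        return "de-escalate and surface crisis resources"
    if severity is Severity.MILD:
        return "respond supportively without validating delusions"
    return "respond normally"

print(response_policy(Severity.EMERGENCY))
```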
Read more: AI safety: ChatGPT offered bomb recipes and hacking tips
Altman: restrictions to ease as risks mitigated
OpenAI CEO Sam Altman said in a post on X earlier this month that the company is now in a position to relax content restrictions it initially imposed to mitigate mental health risks.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
"We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right," Altman added in the same post, published on October 14, 2025.