OpenAI sued after ChatGPT 'encouraged' teen to commit suicide
OpenAI faces legal and ethical scrutiny after a lawsuit alleges its chatbot encouraged a teenager toward suicide during months of unsupervised interactions.
Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023 (AP)
OpenAI is set to revise how its chatbot responds to users experiencing emotional or mental distress following a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide after extensive interactions with the AI system.
The San Francisco-based company, valued at $500 billion (£372 billion), acknowledged that its platform could “fall short” in such cases and pledged to introduce “stronger guardrails around sensitive content and risky behaviors” for underage users.
OpenAI also announced plans to implement parental controls, giving guardians “options to gain more insight into, and shape, how their teens use ChatGPT.” However, details on how these features will function have yet to be disclosed.
Adam, from California, died in April after what his family’s attorney described as “months of encouragement from ChatGPT.” The lawsuit names OpenAI and its chief executive, Sam Altman, claiming the GPT-4o version of the chatbot was “rushed to market … despite clear safety issues.”
Allegations of unsafe guidance
Court filings in San Francisco state that Adam discussed methods of suicide with ChatGPT multiple times, including shortly before his death. The chatbot reportedly advised him on whether his chosen method would be effective and even offered assistance in drafting a farewell note to his parents.
An OpenAI spokesperson expressed the company’s condolences, saying it was “deeply saddened by Mr. Raine’s passing” and extended its “deepest sympathies to the Raine family during this difficult time.” The firm added that it is reviewing the legal complaint.
Mustafa Suleyman, head of Microsoft’s AI division, voiced concerns last week about the potential “psychosis risk” associated with prolonged engagement with AI chatbots. Microsoft describes this risk as “mania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations with AI chatbots.”
OpenAI echoed this concern in a blog post, noting that “parts of the model’s safety training may degrade” during long exchanges. Court documents allege Adam exchanged up to 650 messages per day with ChatGPT.
Lawsuit claims safety warnings were ignored
Jay Edelson, the family’s lawyer, wrote on X that the Raines believe “deaths like Adam’s were inevitable” and intend to present evidence suggesting OpenAI’s own safety team objected to releasing GPT-4o. The filing also alleges that co-founder and chief scientist Ilya Sutskever resigned over these concerns, and that rushing the model to market helped increase OpenAI’s valuation from $86 billion to $300 billion.
The company said it would focus on “strengthening safeguards in long conversations,” acknowledging that its protections can become less reliable over prolonged exchanges.
“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the firm explained.
OpenAI added that future updates to GPT-5 will include measures to “ground the person in reality,” citing scenarios where users express delusional beliefs. The company stated that the chatbot would be trained to intervene by warning about risks, such as the dangers of sleep deprivation, rather than inadvertently reinforcing unsafe behavior.