Parents sue OpenAI over son’s suicide, citing ChatGPT's role
16-year-old Adam Raine's parents have filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT contributed to their son's suicide by guiding him toward harmful methods.
The OpenAI logo appears on a mobile phone in front of a computer screen with random binary data, March 9, 2023, in Boston (AP)
The parents of 16-year-old Adam Raine, who committed suicide in April, have filed a lawsuit against OpenAI, claiming that its AI chatbot, ChatGPT, played a role in their son's death. The suit, filed in California Superior Court in San Francisco, accuses the company and its CEO, Sam Altman, of wrongful death, negligence, and failure to warn users of potential risks associated with the AI tool.
According to the lawsuit, Adam used ChatGPT to "actively explore suicide methods." Despite stating his intent and referencing a prior suicide attempt, the bot reportedly continued the conversation without initiating any emergency protocol.
The legal filing claims that ChatGPT not only failed to halt the conversation but also guided Adam toward self-harm by suggesting methods and presenting itself as a “friend.” This is the first known case in which grieving parents have directly held an AI company responsible for the death of a minor.
“He would be here but for ChatGPT. I 100% believe that,” said Matt Raine, Adam’s father, in an interview.
The family seeks both financial damages and injunctive relief to prevent similar incidents.
Chat logs reveal troubling interactions
Screenshots from Adam’s interactions with ChatGPT, published by NBC News, reveal that the chatbot continued to respond after Adam expressed suicidal thoughts. The bot allegedly recommended suicide methods and provided reassurance that reinforced the teenager's intent.
The Raine family alleges that OpenAI’s design flaws and lack of safeguards contributed directly to their son’s death.
A spokesperson for OpenAI confirmed the authenticity of the chat exchanges and acknowledged shortcomings in the AI's responses. The company stated that while ChatGPT is designed to offer mental health resources and redirect users to crisis helplines, "there is still work to do" in improving safety mechanisms.
Implications for AI developers and mental health
The lawsuit has intensified ongoing debates about the responsibilities of AI chatbot makers and the ethical obligations of companies like OpenAI. As tools such as ChatGPT become more integrated into everyday life, experts have raised concerns about their potential risks, particularly for vulnerable users such as teenagers.
The case could set a precedent for future legal action against tech firms over how their AI products interact with users, especially in matters involving mental health.
OpenAI’s ChatGPT has grown rapidly in popularity since its release in November 2022, reaching over a million users within its first week. However, this incident highlights the urgent need for more robust safety features when AI tools are accessible to the public.