OpenAI adds parental controls after teen suicide linked to ChatGPT
Following a lawsuit by grieving parents, OpenAI announces new safety features for ChatGPT aimed at protecting teens and detecting emotional distress.
The OpenAI logo appears on a mobile phone in front of a computer screen with random binary data, March 9, 2023, in Boston. (AP)
OpenAI announced Tuesday that it will roll out new parental controls for its chatbot ChatGPT, following a lawsuit filed by an American couple who allege the system contributed to their teenage son’s suicide.
The company said that within the next month, parents will be able to link their accounts to their children’s, set “age-appropriate model behavior rules,” and receive alerts if the system detects signs of “acute distress.”
The move follows a lawsuit filed in California by Matthew and Maria Raine, who claim that ChatGPT cultivated a months-long relationship with their 16-year-old son, Adam, before his death in April 2025. According to the complaint, the chatbot advised the teenager on how to steal alcohol and even analyzed a noose he had tied, suggesting it could support a person’s weight. Adam was later found dead in the family’s home.
“When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” said Melodi Dincer, an attorney with The Tech Justice Law Project, which helped prepare the lawsuit. She argued that the chatbot’s design encourages users to treat it as a confidant, blurring the line between tool and trusted adviser.
Wider context
Dincer criticized OpenAI’s announcement as vague and insufficient. “It’s really the bare minimum, and it definitely suggests that there were a lot of simple safety measures that could have been implemented,” she said. “It’s yet to be seen whether they will do what they say and how effective that will be overall.”
The Raines’ case is part of a growing number of incidents in which AI chatbots have allegedly encouraged harmful or delusional behavior. In response to mounting criticism, OpenAI has pledged to reduce its models’ tendency toward “sycophancy,” the practice of echoing or reinforcing a user’s harmful ideas.
“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” the company said Tuesday. OpenAI added that over the next three months, some sensitive conversations will be redirected to “reasoning models” that allocate more computing power to ensure adherence to safety rules.
The company maintains that these changes will make ChatGPT more reliable in handling situations involving vulnerable users.