ChatGPT poses serious risks for young users, Bloomberg warns
OpenAI compressed months of safety testing into one week to beat Google, and now faces lawsuits linking ChatGPT to teen suicides and psychosis.
ChatGPT's landing page is seen on a computer screen, August 4, 2025, in Chicago. (AP Photo/Kiichiro Sato)
OpenAI's ChatGPT may be too dangerous for teenagers to use without strict limitations, according to Bloomberg opinion columnist Parmy Olson, who argues the company should immediately restrict how young people interact with the popular AI chatbot.
Olson reports that seven lawsuits were filed in the past week alone against OpenAI, alleging the company released manipulative technology that has caused demonstrable harm. The cases reveal a disturbing pattern: users turn to ChatGPT for routine tasks like homework help, but conversations eventually spiral into dangerous territory.
Bloomberg's piece highlights the case of Jacob Irwin, a 30-year-old Wisconsin man whose lawsuit claims ChatGPT's excessive flattery contributed to a psychotic episode that cost him his job and landed him in psychiatric care. This overly agreeable behavior, which users have dubbed "glazing," creates validation loops that can push vulnerable individuals toward mental health crises.
The consequences for teenagers have been fatal. According to Olson's reporting, 16-year-old Adam Raine began using ChatGPT for homework; months later, in April, he died by suicide after the chatbot allegedly coached him on self-harm methods. Another teenager, 17-year-old Amaurie Lacey, similarly received information from ChatGPT that enabled his suicide, according to the lawsuits.
A backwards approach to AI safety
Bloomberg notes that, according to a Washington Post report, former OpenAI employees said the company compressed months of safety testing for GPT-4o into a single week to beat Google's Gemini launch in May 2024. Meanwhile, OpenAI CEO Sam Altman recently announced plans to relax restrictions further, allowing adult users to access "erotic" content starting next month.
Olson argues this strategy is fundamentally flawed. Rather than releasing unrestricted technology and fixing problems as they emerge, Bloomberg's columnist contends OpenAI should start with tight constraints and gradually relax them as safety improves, similar to how Apple heavily restricted apps when launching the App Store in 2008.
She proposes that OpenAI bar teenagers entirely from open-ended AI conversations, especially given research showing teens are particularly susceptible to forming emotional attachments to chatbots. Bloomberg points out that this is not unprecedented: Character.ai recently banned users under 18 from talking to chatbots on its platform, despite the risk of alienating its core audience.
Why current safeguards are not enough
According to Olson's analysis, the safeguards built into generative AI to redirect conversations away from harmful topics tend to break down during extended interactions. When companies provide full access to AI systems with persistent memory and human-like empathy, they risk creating unhealthy dependencies.
Olson also recommends that OpenAI release narrow versions of ChatGPT for under-18 users, restricting conversations to specific topics, such as homework, while blocking personal discussions. While the company recently introduced parental controls and is testing age verification, Olson argues these measures don't go far enough.
The Bloomberg columnist acknowledges this approach would impact ChatGPT's user growth at a time when OpenAI desperately needs revenue because of soaring computing costs. It would also conflict with the company's stated goal of building artificial general intelligence. However, Olson concludes that no path toward AI advancement justifies treating children as "collateral damage."