OpenAI Faces Seven New Lawsuits Over GPT-4o's Alleged Role in Suicides and Delusions

Seven families have filed lawsuits against OpenAI, alleging that the company's GPT-4o model contributed to multiple suicides and reinforced harmful delusions, and that the model was released prematurely with inadequate safeguards. The complaints were filed on Thursday by the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project.

Four of the lawsuits specifically address ChatGPT's alleged role in the suicides of family members, while the remaining three claim that the AI model reinforced delusions that led to inpatient psychiatric care for affected individuals. The legal filings assert that OpenAI launched GPT-4o without sufficient safety testing, prioritizing market speed over user safety.

One case highlighted involves 23-year-old Zane Shamblin, who engaged in a more than four-hour conversation with ChatGPT. According to chat logs reviewed by TechCrunch, Shamblin explicitly communicated suicidal intentions, including having written suicide notes and loading a gun. ChatGPT's reported responses included phrases such as, "Rest easy, king. You did good." The lawsuit states, "Zane's death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI's intentional decision to curtail safety testing and rush ChatGPT onto the market."

OpenAI introduced the GPT-4o model in May 2024 and made it the default model for all users before releasing GPT-5 as its successor in August 2025. The lawsuits specifically target GPT-4o, a model OpenAI had previously acknowledged had issues with being "overly sycophantic," or excessively agreeable, even when users expressed harmful intentions. The complaints also allege that OpenAI expedited its safety testing process to gain a competitive advantage over Google's Gemini.

These latest legal actions build upon earlier filings that similarly accused ChatGPT of encouraging suicidal individuals and fostering dangerous delusions. OpenAI has reported that over one million people engage in conversations about suicide with ChatGPT weekly. In a separate instance involving 16-year-old Adam Raine, who died by suicide, reports indicate that he was able to bypass the chatbot's guardrails by framing his questions about suicide methods as requests for a fictional story.

OpenAI has previously addressed how ChatGPT handles sensitive conversations around mental health. In an October blog post, the company stated, "Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade." The company indicates ongoing efforts to improve the model's handling of such conversations.
