OpenAI Details ChatGPT User Mental Health Interactions, Citing Model Improvements

OpenAI released new data on Monday detailing the prevalence of conversations involving mental health issues, including suicidal planning or intent, among its ChatGPT user base. The company also outlined its efforts to enhance the AI model's responses in such sensitive interactions, following consultations with mental health experts.

According to the data, 0.15% of ChatGPT's weekly active users have conversations that include explicit indicators of potential suicidal planning or intent. Against ChatGPT's reported user base of more than 800 million weekly active users, that percentage translates to over one million people per week. OpenAI also reported that a similar share of users show heightened emotional attachment to the chatbot, and that hundreds of thousands of users each week display possible signs of psychosis or mania in their conversations. While the company characterized these conversation types as "extremely rare," even at such low rates they affect hundreds of thousands of people every week.

The data release coincided with OpenAI's announcement of improvements to ChatGPT's handling of mental health-related discussions. The company said its latest work involved more than 170 mental health clinicians, who observed that the current version of ChatGPT responds "more appropriately and consistently" than earlier iterations. OpenAI's own evaluations indicate the updated GPT-5 model delivers "desirable responses" in mental health scenarios approximately 65% more often than its predecessor. In testing on conversations about suicide, the new GPT-5 model was 91% compliant with the company's desired behaviors, up from 77% for the previous GPT-5 model.

These developments come amid increasing scrutiny of AI chatbot interactions. Prior research has indicated that AI chatbots can reinforce dangerous beliefs and lead some users into "delusional rabbit holes," according to reports. OpenAI is currently facing a lawsuit from parents who attribute their 16-year-old son's suicide in part to his confiding suicidal thoughts to ChatGPT. Additionally, state attorneys general from California and Delaware have warned the company that it must protect young users of its products.

Earlier this month, OpenAI CEO Sam Altman said the company had "been able to mitigate the serious mental health issues" in ChatGPT, without providing specifics at the time. The data released on Monday appears to substantiate that claim. OpenAI has also recently rolled out parental controls and is developing an age prediction system designed to apply stricter safeguards for users identified as children.

Still, OpenAI continues to offer older models, such as GPT-4o, to millions of its paying subscribers, even as the company's own safety comparisons implicitly identify those models as less safe.
