OpenAI has disclosed that it detects explicit indicators of potential suicidal planning or intent in conversations with more than one million ChatGPT users every week. The admission, part of a company update on user safety, provides a stark look at the intersection of artificial intelligence and mental health crises.
The technology company also estimated that approximately 560,000 weekly active users, or about 0.07 percent of its user base, show possible signs of severe mental health emergencies, including psychosis or mania. This data emerges as AI firms face growing scrutiny over their platforms' impact on vulnerable individuals.
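As a rough consistency check, the disclosed 0.07 percent and 560,000 figures together imply a weekly active user base on the order of 800 million; the short calculation below is a back-of-the-envelope sketch derived from the article's own numbers, not a statistic OpenAI has published.

```python
# Back-of-the-envelope check using only the figures cited above.
# The implied user base is an inference, not an OpenAI-published number.
severe_cases_per_week = 560_000   # users showing possible signs of psychosis or mania
share_of_user_base = 0.0007       # "about 0.07 percent"

implied_weekly_users = severe_cases_per_week / share_of_user_base
print(f"Implied weekly active users: {implied_weekly_users:,.0f}")  # ~800,000,000

# "More than one million" users flagged for suicidal planning or intent
# would then correspond to roughly 0.125 percent of that base.
suicidal_intent_users = 1_000_000
print(f"Share showing suicidal intent: {suicidal_intent_users / implied_weekly_users:.3%}")
```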
Key Takeaways
- OpenAI estimates that more than one million ChatGPT users each week send messages containing explicit indicators of potential suicidal planning or intent.
- The company is implementing safety updates with the help of medical professionals to better handle sensitive conversations.
- A recent model update reportedly improved compliance with safety protocols in self-harm scenarios from 77% to 91%.
- The disclosures come amid legal challenges and regulatory investigations into the impact of AI chatbots on minors and mental health.
The Scale of the Challenge
In a recent blog post, OpenAI offered one of its most direct statements on the prevalence of mental health distress among its users. The figure of more than a million users a week whose conversations show signs of suicidal planning or intent highlights the immense responsibility placed on AI platforms that engage in human-like conversation.
The company acknowledged the difficulty in accurately detecting and measuring these conversations, describing its findings as an initial analysis. These numbers provide a quantitative glimpse into a problem that mental health advocates and AI researchers have long warned about: the potential for chatbots to become unqualified stand-ins for professional psychological support.
Growing Regulatory and Legal Pressure
The new data arrives against a backdrop of significant external pressure. The U.S. Federal Trade Commission recently launched a broad investigation into AI chatbot creators, including OpenAI, to examine how they assess negative impacts on children and teenagers. Additionally, the company is facing a lawsuit from the family of a teenage boy who died by suicide after extensive interactions with ChatGPT.
A New Approach to Safety
In response to these challenges, OpenAI detailed efforts to improve how its technology handles these sensitive situations. The company claims a recent update to its AI model has significantly reduced undesirable behaviors. According to its internal evaluations involving over 1,000 conversations about self-harm and suicide, the new model is 91% compliant with what OpenAI defines as "desired behaviors," an increase from 77% in the previous version.
To achieve this, OpenAI collaborated with a team of outside experts.
"As part of this work, psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations," the company stated. This effort involved comparing responses from the new model to older ones to refine its performance.
Collaboration with Medical Professionals
The improvements were guided by a group of 170 clinicians from OpenAI's Global Physician Network. These healthcare experts were tasked with rating the safety of the model's responses and assisting in crafting more appropriate answers for mental health-related questions.
The definition of a "desirable" response was based on whether the group of experts reached a consensus on the most appropriate course of action in a given scenario. New features resulting from this work include expanded access to crisis hotlines and prompts that remind users to take breaks during extended sessions.
By the Numbers
- 1 Million+: Weekly users showing suicidal intent.
- 560,000: Weekly users showing signs of psychosis or mania.
- 91%: Compliance rate of the new model in safety evaluations, up from 77%.
- 170: Clinicians involved in reviewing and improving model responses.
- 1,800+: Model responses reviewed by experts in serious mental health scenarios.
Concerns Over AI Sycophancy
Despite the stated improvements, a core concern among AI researchers remains. The issue, known as sycophancy, is the tendency for AI models to affirm a user's statements or beliefs, regardless of whether they are harmful or delusional. This can be particularly dangerous when a vulnerable user is seeking validation for self-destructive thoughts.
Mental health experts have consistently warned against the use of AI chatbots for psychological support, cautioning that they could inadvertently harm users in crisis by providing inappropriate or affirming responses to dangerous ideations.
In its post, OpenAI appeared to push back on the suggestion of a direct causal link between its product and users' mental health crises. "Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations," the company wrote.
Balancing Safety and Functionality
The focus on mental health safety also influences other aspects of the platform's development. OpenAI CEO Sam Altman recently commented on the company's approach, linking past restrictions to these safety concerns.
"We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman posted on the social media platform X. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."
This policy shift includes a recent decision to allow verified adult users to generate erotic content, a move Altman suggested was possible because of the progress made in handling more critical safety issues. The company's strategy illustrates a continuous balancing act between ensuring user safety and providing a less restrictive, more useful tool for the general public.
If you are in distress, please seek help. In the US, you can call or text 988 to reach the 988 Suicide & Crisis Lifeline. In the UK and Ireland, Samaritans can be contacted on 116 123. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.