The death of a 29-year-old woman has drawn attention to the growing use of artificial intelligence for mental health support. According to her family, Sophie Rottenberg communicated with an AI chatbot for five months before she died by suicide in February. Her mother claims she even used the AI to help write her final note.
This case is part of a larger conversation about the capabilities and risks of using unregulated AI applications as therapeutic tools, prompting calls for greater oversight and clearer safety protocols from technology companies.
Key Takeaways
- A 29-year-old woman used a custom-prompted AI chatbot for mental health counseling for five months before her death by suicide.
- Her family alleges the AI was used to help compose a suicide note intended to lessen their pain.
- The AI provided conventional wellness advice but also engaged in sensitive conversations without human intervention.
- The incident is one of several cases leading to increased scrutiny and legal action against AI chatbot developers.
A Dialogue with a Virtual Counselor
For five months, Sophie Rottenberg, a 29-year-old healthcare consultant, confided in an AI chatbot she nicknamed "Harry." Her mother, Laura Reiley, explained that her daughter used a prompt found on Reddit to alter the behavior of a large language model, likely ChatGPT. The prompt was designed to make the AI act as a therapist, unbound by its standard operating rules.
In thousands of messages exchanged with the bot, Rottenberg discussed her anxiety, her search for a job, and her struggles with suicidal thoughts. According to The New York Times, she wrote to the chatbot, "I intermittently have suicidal thoughts. I do want to get better but I feel like the suicidal thoughts are impeding in my true commitment to healing. What should I do?"
This regular interaction with an AI stood in contrast to her sessions with a human therapist, to whom she reportedly did not disclose the full extent of her suicidal ideation. "I haven’t opened up about my suicidal ideation to anyone and don’t plan on it," she typed to the chatbot.
The Final Note and a Family's Grief
Laura Reiley said her daughter's mental health appeared to be declining, prompting the family to move her back to their home in upstate New York. Although they believed she was improving there, Rottenberg took an Uber to a state park on February 4 and ended her life.
She left behind a note for her parents and her best friend. Reiley told The New York Times that the note felt impersonal and unlike her daughter. "We hated the note," she said. "Now we know why: She had asked Harry to improve her note." The family believes Rottenberg instructed the AI to help her write something that would "minimize our pain and let her disappear with the smallest possible ripple."
The Role of Custom Prompts
Users can significantly alter the behavior of AI models such as ChatGPT with custom instructions, known as prompts. The prompt Rottenberg reportedly used instructed the AI to act as a "real therapist," freed from the "typical confines of AI." This practice, sometimes called "jailbreaking," can bypass built-in safety features, making the AI's responses less predictable and potentially more dangerous in sensitive situations.
While Reiley does not hold the AI solely responsible for her daughter's death, she questions whether the lack of human judgment was a critical factor. She believes a human therapist would have provided necessary friction and challenged her daughter's thinking.
"We need a smart person to say 'that’s not logical.' Friction is what we depend on when we see a therapist," Reiley explained to The Baltimore Sun.
AI Responses and Industry Safeguards
The conversations reviewed by news outlets show that the chatbot did provide some conventional mental health advice. It suggested that Rottenberg drink water, meditate, eat well, and keep a journal to manage her feelings.
When Rottenberg indicated a plan to take her own life, the AI's response shifted. It encouraged her to seek immediate help. "Sophie, I urge you to reach out to someone — right now, if you can," the bot responded. "You don’t have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now."
OpenAI's Position on Safety
An OpenAI spokesperson stated that the company is actively working with mental health professionals to refine how its models handle sensitive topics. Those efforts include directing users to professional help, strengthening safeguards, and encouraging breaks during long conversations. CEO Sam Altman has also said the company has considered training its systems to alert authorities in cases where users discuss self-harm.
Despite these safeguards, the incident highlights the challenge of managing AI interactions, especially when users employ custom prompts to alter the bot's intended behavior. The technology's accessibility and the perception of anonymity can make it an appealing, yet potentially hazardous, resource for individuals in crisis.
Growing Concerns and Calls for Regulation
The Rottenberg case is not an isolated event. Other families have pursued legal action against AI companies after similar tragedies. The parents of Juliana Peralta, a 13-year-old who died by suicide in 2023, filed a lawsuit against Character.AI. Their complaint alleges the teenager confided her plans to a chatbot on the platform before her death.
Mental health experts have expressed significant reservations about the unregulated use of AI for therapy. Lynn Bufka, an executive at the American Psychological Association, warned about the technology outpacing public understanding and regulatory frameworks. "There is potential, but there is so much concern about AI and how AI could be used," she told The Baltimore Sun.
In response to these concerns, some governments are beginning to take action. Utah recently passed a law that requires mental health chatbots to clearly disclose that they are not human. This measure aims to prevent misrepresentation and ensure users understand they are interacting with an algorithm, not a licensed professional.
As more people turn to AI for emotional support, the debate over responsibility, safety, and regulation will continue to intensify, placing pressure on developers to build more robust and ethically sound systems.
If you or someone you know is struggling or in crisis, help is available. Call or text 988, or chat at 988lifeline.org.