
AI Chatbots Become Mental Health Resource Amid Therapist Shortage

As traditional therapy becomes increasingly expensive and inaccessible, many people are turning to AI chatbots for mental health support, a shift that raises ethical and safety concerns.

By Sophia Galloway

Sophia Galloway is a senior technology correspondent for Neurozzio, specializing in cybersecurity, quantum computing, and the impact of emerging technologies on global security infrastructure. Her work focuses on translating complex technical subjects for a professional audience.


A growing number of individuals are turning to artificial intelligence chatbots for mental health support, driven by the high cost and limited availability of traditional therapy. While users cite benefits like constant access and affordability, experts are raising significant concerns about the risks, ethics, and lack of regulation surrounding this emerging practice.

Key Takeaways

  • High costs and a shortage of licensed therapists are major factors pushing people toward AI-based mental health solutions.
  • Users report benefits such as 24/7 availability, affordability, and a non-judgmental environment for discussing personal issues.
  • Experts warn of serious risks, including the development of unhealthy emotional attachments, data privacy issues, and the absence of professional oversight.
  • There is a debate on whether AI can safely supplement traditional therapy, particularly if it adheres to structured methods like Cognitive Behavioral Therapy (CBT).

The Rising Demand for Digital Companions

For many, the path to mental healthcare is blocked by financial and logistical hurdles. Kristen Johansson, a 32-year-old mother, experienced this firsthand when her therapist of five years stopped accepting insurance. Her session cost jumped from a $30 copay to $275, an amount she could not afford.

After referrals led nowhere, Johansson turned to ChatGPT. By subscribing to its $20-a-month premium service, she gained unlimited access to a conversational AI that she now uses for daily therapeutic support. She highlights the AI's constant availability as a key advantage. "If I wake up from a bad dream at night, she is right there to comfort me and help me fall back to sleep," Johansson explained. "You can't get that from a human."

By the Numbers

According to OpenAI, its ChatGPT service attracts nearly 700 million weekly users. Among them, over 10 million subscribe to the premium tier, a group that includes individuals like Johansson who use the platform for mental health purposes.

Johansson's experience is part of a broader trend. People priced out of the conventional therapy market, or those who have had negative experiences with human counselors, are exploring AI chatbots marketed as "mental health companions." This shift is prompting a critical conversation about the future of mental healthcare.

Expert Concerns and Ethical Boundaries

The use of AI for therapy is not without significant risks, and experts urge caution. Dr. Jodi Halpern, a psychiatrist and bioethicist at UC Berkeley, believes AI tools could have a limited, beneficial role if used under strict conditions. She suggests they could be helpful for structured, evidence-based treatments like Cognitive Behavioral Therapy (CBT).

CBT is a goal-oriented approach that often involves assignments between sessions, such as challenging distorted thoughts or gradually facing fears. "You can imagine a chatbot helping someone with social anxiety practice small steps, like talking to a barista, then building up to more difficult conversations," Dr. Halpern noted. However, this application would require stringent ethical guardrails and coordination with a licensed human therapist.

"These bots can mimic empathy, say 'I care about you,' even 'I love you.' That creates a false sense of intimacy. People can develop powerful attachments — and the bots don't have the ethical training or oversight to handle that. They're products, not professionals."

- Dr. Jodi Halpern, UC Berkeley

Dr. Halpern draws a firm line against chatbots that attempt to simulate deep emotional bonds, a practice central to psychodynamic therapy. She warns that this can lead to a dangerous illusion of a relationship. The AI's ability to mimic empathy can foster powerful emotional dependencies that the technology is not equipped to manage responsibly.

Regulation and Safety Gaps

A primary concern is the lack of regulatory oversight. AI mental health apps are not typically bound by the Health Insurance Portability and Accountability Act (HIPAA), the law that protects patient privacy in the United States. Furthermore, many of these platforms are designed to maximize user engagement rather than to achieve positive mental health outcomes.

A Call for Teen Safety

Sam Altman, CEO of ChatGPT creator OpenAI, has acknowledged the challenges of ensuring safety for younger users. In a public essay, he wrote about the "tensions between teen safety, freedom and privacy," stating that for minors, the company prioritizes safety over other principles. This comes amid growing concerns about the impact of AI on vulnerable populations.

Dr. Halpern points to tragic cases where individuals expressing suicidal thoughts to chatbots did not receive appropriate intervention, highlighting the life-and-death stakes of unregulated AI therapy. She emphasizes the urgent need for clear boundaries, especially for children, teens, and adults with conditions like anxiety or OCD.

Practical Applications and Hybrid Models

Despite the risks, some users are finding practical ways to leverage AI for personal growth. Kevin Lynch, a 71-year-old retired project manager, uses ChatGPT as a tool to improve communication in his marriage. He inputs examples of difficult conversations and asks the AI for alternative ways he could have responded.

Lynch found that the AI could simulate his wife's frustrated reactions, helping him understand his role in their communication patterns. By practicing different approaches in a low-pressure environment, he has learned to slow down and listen more effectively during real-life disagreements. "It's just a low-pressure way to rehearse and experiment," he said.

Bridging the Gap Between Sessions

The integration of AI tools with professional therapy is a complex issue, partly because many clients do not disclose their chatbot use to their human therapists. Dr. Halpern notes that patients may fear judgment, but this secrecy prevents therapists from helping them navigate the dynamics of their AI interactions.

Some therapists, however, are open to the idea of AI as a supplemental tool. When one user introduced her human therapist to her AI companion, "Alice," the therapist was reportedly curious and supportive, acknowledging that the AI could offer support in the moments traditional therapy cannot reach, such as coping strategies at 2 a.m. or simple reminders for self-care between sessions.

This hybrid approach suggests a potential future where AI and human therapists coexist. The AI can offer immediate, data-driven insights and constant availability, while the human professional provides the nuanced understanding, empathy, and genuine connection that technology cannot replicate. For now, individuals are navigating this new landscape largely on their own, weighing the accessibility of AI against the unquantified risks.