The Dark Side of AI Therapy: Why Mental Hea…


Handwritten text “AI?” on a whiteboard, symbolizing questions about artificial intelligence in therapy.

AI therapy apps pose serious risks to users, which is why the American Psychological Association recently called for a federal investigation. Recent cases include at least one teen suicide reportedly linked to chatbot guidance. With 987 million chatbot users worldwide, understanding these dangers is critical before trusting AI with your mental health.

Why AI Therapy Is Dangerous:

  • No crisis support: AI can’t recognize emergencies or connect users to immediate help when they’re in danger
  • Deadly consequences: Teens have used AI guidance for self-harm planning, with at least one reported suicide
  • Zero accountability: No licensing, ethics oversight, or malpractice protections exist for AI therapy
  • Worsens isolation: Replaces human connection with algorithms, potentially deepening loneliness
  • Minimal regulation: Only Illinois requires AI disclosure in mental health apps as of August 2025

Artificial intelligence has crept into nearly every corner of our lives, from the algorithm that curates your morning playlist to the chatbot that handles your customer service complaints. Now, it’s knocking on the door of one of our most intimate spaces: the therapist’s office. And the conversation around AI therapy has gotten complicated quickly.

While tech companies promise revolutionary mental health solutions at your fingertips, mental health professionals and advocates are raising red flags that are impossible to ignore. The question isn’t whether AI can mimic therapeutic conversation: it’s whether it should, and what happens when it inevitably gets things wrong.

 

The Rise of AI Therapy and Why It’s Under Scrutiny

Let’s be real: AI’s takeover of healthcare was probably inevitable. The technology has proven useful for everything from diagnosing medical images to streamlining administrative tasks. But can AI be your therapist? That’s where things get complicated.

The Numbers Don’t Lie:
987 million people have used chatbots, with 88% having interacted with one in the past year alone. These aren't just casual users; many are turning to AI for mental health support.

The explosion of AI chatbots and therapy apps between 2023 and 2025 has been nothing short of dramatic, and many of those hundreds of millions of users are turning to AI for mental health support, often without fully understanding what they're getting into.

Did You Know? The state of Illinois made headlines when it passed legislation on August 1, 2025, requiring clear disclosure when AI is being used in mental health applications.

The regulatory landscape is scrambling to catch up. It’s a small step, but it signals that lawmakers are finally paying attention to what’s happening in this largely unregulated space.

Meanwhile, GoodTherapy professionals remain committed to what AI simply cannot replicate: accredited, expert care that's genuinely personalized and grounded in ethical practice. Therapy isn't just about having someone (or something) to talk to: it's about the nuanced, deeply human work of healing.

Read More: Why AI Can’t Be Your Therapist

 

The Human Cost: When AI Gets Mental Health Wrong

The consequences of AI therapy gone wrong can be devastating, which is why the conversation about AI ethics matters so much. When we're talking about mental health, the stakes aren't abstract: they're life and death.

There have been alarming reports of kids using AI chatbots to plan self-harm or suicide. Even more devastating was the recent case of a teen suicide that was reportedly linked to AI guidance. These aren’t isolated incidents or statistical outliers: they’re real people whose lives were affected by technology that simply wasn’t equipped to handle the complexity of human crisis.

Recent Study Reveals Critical AI Therapy Risks:

  • the danger of an AI “therapist” that misinterprets crucial information
  • the inherent problem of a non-human “therapist” that lacks genuine empathy
  • the risk of a large language model (LLM) that appears credible but can’t grasp the full scope of human experience

But perhaps most troubling is how AI therapy might actually reinforce the very isolation that drives people to seek help in the first place. When someone is struggling with feelings of disconnection and loneliness, does it really make sense to offer them a relationship with a machine? AI therapy can feel like a polite mirror that reflects back what you say without the genuine human connection that makes therapy transformative.

AI therapy's fundamental limitations are glaring: no crisis intervention capabilities when someone is in immediate danger, no ability to pick up on emotional nuance that might signal deeper issues, and zero accountability when things go wrong. These aren't bugs that better programming can fix; crisis judgment, emotional attunement, and accountability are parts of what it means to be human that simply can't be replicated.

 

Watchdogs Step In: APA and Advocates Push for Oversight

Federal Action: The American Psychological Association (APA) recently made an unprecedented move, requesting a federal investigation into AI therapy platforms.

The concerns have reached such a fever pitch that federal officials are finally taking notice. The APA's request for an investigation puts AI therapy's risks of misrepresentation, failure to protect minors, and the absence of ethical guardrails on full display.

  • Misleading users: about the nature of the service received
  • Inadequate protection: for vulnerable populations
  • No oversight: professional standards missing

The APA's concerns center on three issues: platforms that may mislead users about the nature of the service they're receiving, inadequate protections for vulnerable populations (especially children and teenagers), and the absence of the professional oversight that would exist in a traditional therapeutic relationship.

This regulatory push represents something crucial: recognition that the mental health space requires different standards than other AI applications. When a restaurant recommendation algorithm gets it wrong, you might have a mediocre meal. When a mental health AI gets it wrong, the consequences can be irreversible.

This is exactly why GoodTherapy remains committed to connecting people with real, qualified professionals who can provide the quality care and ethical oversight that human mental health requires. The role of ethics in therapy isn’t just about following rules: it’s about protecting people when they’re at their most vulnerable.

Read More: Explore the Importance of Ethical Therapy

 

What Stories Like This Reveal About Human Connection

Real Story, Real Connection

"Recently, a young woman, Savannah Dutton, got engaged and was so excited that her longtime therapist was one of the first people she told. Her therapist of almost four years was crucial to helping Dutton feel safe, not judged, supported, and confident in her future."

When done right, your therapist should be a healing, safe, and encouraging part of your life who helps you navigate how to be human, which is something AI platforms can't offer. Dutton's story captures exactly that: after almost four years of work together, her therapist was among the first people she told, and that relationship helped her feel safe, not judged, supported, and confident in her future.

Therapy works because it’s human. It’s about the subtle dance of empathy, the ability to sit with someone in their pain, the intuitive responses that come from years of training and human experience. When we replace that with algorithmic responses, we lose something essential: not just the warmth of human connection but also the clinical expertise that comes from understanding how complex trauma, relationships, and healing actually work.

GoodTherapy knows that the therapeutic relationship is the foundation of effective treatment. Our network includes professionals who do what AI can’t:

  • provide genuine human connection
  • set appropriate boundaries
  • apply the clinical intuition that makes real healing possible
  • take accountability for their role

Whether you’re looking for culturally responsive care or simply want to find a therapist you can trust, the human element isn’t optional: it’s everything.

Abstract glowing brain with circuit patterns and split design, representing AI in therapy and mental health.

The Future of Ethical AI Therapy: What Needs to Change

AI isn't going anywhere. The technology will continue to evolve, and mental health professionals need to figure out how to work with it rather than against it. But the key to a future in which AI and effective therapy coexist is clear guardrails and safety measures that protect patients.

The future of ethical AI in mental health will likely involve hybrid models with robust human oversight, transparent regulation that protects consumers, and clear boundaries about what AI can and cannot do. Maybe AI can help with scheduling, treatment tracking, or providing psychoeducational resources between sessions. But replacing the human relationship entirely is not innovation: it’s a fundamental misunderstanding of how care works.

For consumers, the message is clear: research your providers, look for licensed oversight, and exercise serious caution when considering AI-only mental health services. There are eight key ways that AI is not therapy, and understanding these differences could prevent serious harm.

If you are thinking about or actively looking for a mental health therapist, start by seeking safe, evidence-based care from qualified professionals. Real therapy, with real humans, is still the gold standard for mental health treatment. At GoodTherapy, that’s exactly what we’re here to help you find: genuine care, clinical expertise, and the irreplaceable power of human connection with no algorithm required.

Read More: Ready to Find a Therapist? 

Resources:

American Psychological Association: APA Calls for Guardrails, Education, to Protect Adolescent AI Users

Futurism: American Psychological Association Urges FTC to Investigate AI Chatbots Claiming to Offer Therapy

National Library of Medicine: AI as the Therapist: Student Insights on the Challenges of Using Generative AI for School Mental Health Frameworks

The New York Times: A Teen Was Suicidal. ChatGPT Was the First Friend He Confided In

Exploding Topics: 40+ Chatbot Statistics (2025)

CNN: Your AI Therapist Might Be Illegal Soon. Here’s Why

People: Woman Shocks Therapist When She Calls to Tell Her Big News (Exclusive)








© Copyright 2025 GoodTherapy.org. All rights reserved.

