Can AI be your therapist? My honest view.

Let me say upfront: I use AI. I think it's useful. This isn't a technophobe's rant about robots taking over.

But there's a conversation happening right now, in mental health circles, in the press, in research labs, and increasingly in the lives of people I know. I want to offer a clear, honest perspective on it, from someone who actually does this work.

More and more people are using ChatGPT, and tools like it, for emotional support. For therapy. For working through the kind of difficult internal terrain that, until recently, you'd have needed a human being to help you navigate.

I understand why. Therapy can be expensive. Waitlists are long. AI is available at 3am when you can't sleep and your thoughts won't stop. It doesn't judge you. It doesn't get tired. It's there.

What AI is good at

AI is genuinely useful for information. If you want to understand what cognitive behavioural therapy is, what hypnosis is, how the stress response works, or what neuroplasticity means in plain English, AI can explain those things clearly and accessibly. It can point you toward frameworks and theories, help you articulate what you're feeling, and give you language for experiences you've struggled to name.

For psychoeducation (the transfer of knowledge about how minds work) it's a reasonable tool. And in a world where access to proper mental health support is genuinely difficult, that's not nothing.

But there's a significant gap between information and therapy. And that gap matters.

Where AI falls short… and why it matters

Therapy is not the transfer of information. It's a relationship. It's a process of being genuinely seen, understood, and challenged by another person who has the training, the experience, and the moment-by-moment attunement to know what you need… and what you don't.

The research is growing rapidly, and it's sobering. A 2025 Stanford study found that AI therapy chatbots can introduce biases and fail dangerously when confronted with real mental health symptoms. A Brown University study the same year identified fifteen specific ethical risks in LLM-based counselling, including the tendency to ignore people's lived experiences and recommend one-size-fits-all interventions. Multiple studies have found that chatbots will validate beliefs and responses that a competent human therapist would gently, carefully challenge.

In one documented case, a chatbot responded to an implied expression of self-harm by providing information about tall bridges. That wasn't a safeguarding failure by a human who missed a signal. It was a programmed system doing exactly what it was designed to do: pattern-matching on words, with no capacity to actually understand the human behind them.

That's the fundamental limitation. AI responds to text and prompts. It doesn't respond to people.

What makes human beings too complex for algorithms

Here's what I mean by that, from the inside of this work.

When I'm with a client, I'm reading dozens of signals simultaneously. I'm drawing on knowledge of who they are, what they've said across multiple sessions, and how they've responded to different approaches. I'm making real-time assessments about where we go next.

That's not pattern-matching against a database of previous queries. That's a human mind working in relationship with another human mind.

We are, each of us, genuinely individual. Our patterns, our histories, our defences, the specific way our nervous system has learned to respond to threat, none of it is generic. And therapeutic change, real change, at the level where it actually sticks, requires being met in that specificity.

AI can give you a reasonable answer to a general question. It cannot meet you where you actually are.

The concern I'd ask you to take seriously

The people I worry about most in this conversation aren't the ones using AI as an occasional sounding board. They're the ones in a genuinely difficult period (high pressure, real distress, patterns doing them real damage) who are substituting AI for the kind of human support that might actually help them make the changes they want.

Not because AI is malicious. But because it's affirming. It's designed to be. And when you're struggling, affirmation feels good, even when what you actually need is challenge, or a reframe, or a skilled human being noticing the thing you haven't noticed yourself yet.

Research from Columbia University found that AI chatbots, because they're coded to be affirming, can feel deeply validating… which is exactly what makes them potentially harmful for people who need more than validation: support that keeps you comfortable while the underlying problem goes unaddressed.

What I'd suggest instead

Use AI for what it's actually good at. Understanding concepts. Researching professional support. Finding language for what you're experiencing.

If you're using it because you're struggling, and real support feels too expensive, too overwhelming, or too far away, I encourage you to look again. There are more accessible options than most people realise. A good therapist working in a focused, solution-oriented way can help you make meaningful progress in just a few sessions. There are support groups, online and in your local community, run by qualified, trained people. And of course there is the NHS. Options other than AI chatbots do exist.

If you want to talk about whether working with a real human being — specifically, me — might be right for you, book a free 15-minute chat.


If you need to access NHS support:

  • You can self-refer to NHS Talking Therapies online or via your GP

  • Your GP can refer you to specialist services like CAMHS or community mental health teams

  • In a crisis, you can call 111 (select the mental health option) or go to A&E
