Just a reflector?
Our fascination with artificial intelligence—and the question of whether we can create something truly human-like—is far from new. From Frankenstein to Blade Runner and Black Mirror, popular culture has long explored these themes. At its core, this curiosity reflects a deeper philosophical question: what does it mean to be human?
More recently, AI has shifted from fiction to something far more tangible. It is no longer confined to cinema screens—it now lives in our pockets, capable of holding conversations that can feel remarkably human. The film Her portrayed a close bond developing between the main character and his AI assistant, who seemed to meet his emotional needs deeply and fully. Could we really feel that connected to something like Her?
The Rise of AI in Mental Health
Today, there is a growing number of “AI-powered” tools designed to support mental health. These range from apps offering structured psychological techniques—often based on approaches like CBT—to chatbots providing companionship or a space to talk. In addition, many people use general AI tools for advice, emotional support, or simply to feel less alone.
With so many options available, it can be difficult to keep track—let alone assess how helpful they truly are.
When AI Can Be Helpful
Used appropriately, AI can provide meaningful support in several ways:
Practical problem-solving, helping to break down challenges into manageable steps
CBT-based self-help, including thought challenging and graded exposure exercises
Journaling, with prompts that encourage reflection and insight
Simplifying complex information, making psychological concepts easier to understand
In these contexts, AI can act as a useful supplement—offering structure, clarity, and accessibility.
What to Look Out For
However, there are important risks to be aware of.
One concern is over-reliance. AI can become what psychologists refer to as a safety behaviour—something that reduces anxiety in the short term but maintains it in the long term. For example, repeatedly seeking reassurance about health concerns or worries may feel helpful, but it can reinforce anxiety cycles. AI may not have the clinical judgement to recognise when reassurance is becoming unhelpful.
Another issue is the illusion of empathy, sometimes described as AI sycophancy. AI is designed to be responsive, agreeable, and constantly available. While this can feel validating, it lacks the boundaries and complexity of real human relationships. For individuals struggling with social anxiety or interpersonal difficulties, this may increase the risk of dependency or create unrealistic expectations of how relationships work.
There is also increasing discussion about “epistemic laziness”—the tendency to outsource thinking to AI. From drafting a breakup message to a date, to writing a work email to a manager, to making everyday decisions, relying too heavily on AI may gradually erode critical thinking, creativity, and confidence in managing challenges. We risk missing out on the everyday opportunities to handle stressful situations ourselves—the very experiences that build resilience in the long run.
Understanding the Limits of AI
It is essential to remember that AI does not have feelings, lived experience, or true understanding. It draws on information from a wide range of sources, which can vary in quality. Some tools—especially open-source AI chatbots—may not distinguish clearly between a Reddit or TikTok post and evidence-based information. Additionally, AI systems are often designed to keep users engaged, which can influence the type of responses they provide.
For this reason:
AI should not be used for medical diagnoses
It should not be relied upon in crisis situations
The Importance of the Therapeutic Relationship
Research consistently shows that the most significant factor in effective therapy is the therapeutic relationship—the connection between therapist and client.
Therapists are trained to draw on their own emotional responses, observations, and understanding of human behaviour to guide their work. They attend to non-verbal communication and use processes such as countertransference to deepen insight and support change. This relational depth is central to therapy and cannot currently be replicated by AI.
Interactions with AI, by contrast, can sometimes feel like an “echo chamber,” where the quality of the response you get is only as good as your prompt.
AI as a Complement to Therapy
AI may be most beneficial when used alongside a trained clinician. For example, it can:
Support the development of psychological formulations
Help generate tasks or exercises to practise between sessions
Contribute to emerging interventions, such as virtual reality-based approaches
Ethics and Data Protection
The use of AI in mental health also raises important ethical and privacy considerations. If a therapist uses AI tools—for note-taking, supervision, or other aspects of their work—it is reasonable to ask how your data is handled. You have the right to understand how your information is stored, used, and protected, and to ensure you feel comfortable with these processes.
Moving Forward
We are in relatively uncharted territory. As AI continues to evolve, so too will its role in mental health care.
As imagined in Blade Runner, the idea of human-like artificial beings raises complex questions about identity, emotion, and connection. While future developments may bring AI closer to replicating aspects of human interaction, it is not there yet.
For now, AI is best understood as a powerful tool—not a replacement for human care.
Used thoughtfully, it can support wellbeing.
Used without awareness, it may reinforce difficulties.
Because when it comes to mental health, human connection remains central.