Analysis

March 4, 2024

Meet the AI chatbot therapists filling the gaps in Europe’s mental health care

Startups are using AI to offer therapy in the form of a chatbot — but is it worth the risks? 


Sadia Nowshin

8 min read

Photo: Clare&Me

It's a normal afternoon in the office, and my AI chatbot therapist has just told me to quit my job. 

It started as an exchange about anxiety (“anxiety is not dangerous to you,” Clare had reassured me) but escalated fast. To be fair, I suggested the nuclear option of handing in my notice in response to a (fictional) problem at work, but once prompted, Clare was all for it. 

“That's a great idea!” she enthused. “I suggest you make it a goal for yourself to take this first step this week.” 


Clare was created by Berlin-based startup clare&me, one of several AI-augmented mental wellbeing startups to pop up in recent years.

It’s not the first time chatbots have been brought into the most intimate corners of our lives, from AI girlfriends to sextechs providing aural stimulation to the lonely and frustrated.

But AI therapists like clare&me are looking to bridge the gap between the demand for mental health support and the stretched capacity of healthcare services. 

In 2023, a UK parliamentary report found that while the NHS’s mental health workforce grew by 22% between 2016 and 2022, patient referrals rose by 44%. Across Europe, the Covid-19 pandemic has only widened this gap.

AI chatbots could offer a resource for patients who are waiting for help, says Catherine Knibbs, a psychotherapist and spokesperson for the UK Council for Psychotherapy. 

“Between sessions, people who have mental distress and trauma can often feel like they have nobody to talk to,” she says.

“Rather than talking to your pet or teddy bears, there is a space now that almost represents a sentient being. Maybe it will reduce some of the fatigue that happens when people are on waiting lists.” 

Emilia Theye, cofounder of clare&me, was a clinical psychologist before founding the startup. She says chatbots are not intended as a replacement for a therapist for everyone, but as a way to alleviate some of the demand for care. 

“Right now, the idea is to be a support system in the very early journey so that we take pressure off the whole system. So the person doesn't even enter the clinical healthcare system, because they have got the support from another source,” she says. 

Other startups in the space are being trialled by health services to test this hypothesis. 

London-based Limbic offers a conversational AI that supports patients between clinical sessions, as well as background AI software that relays information about risk and engagement back to the care provider. It's the first AI mental health chatbot in the world to achieve UKCA Class IIa medical device status — a product marking that signals its clinical effectiveness, safety and risk management. It is used in 33% of the UK’s NHS Talking Therapies services, covering over 260k patients. 


Limbic’s CEO, Ross Harper, says the AI chatbot offers support between sessions. Its assessment software guides the chatbot’s questions to form real-time hypotheses about what a patient is most likely to be struggling with. So far, he says, it’s helped clinicians reclaim over 50k hours of work that would have been spent on patient assessment and referral.

One risk of bringing AI into this space, says Katharina Neuhaus, principal at Vorwerk Ventures, is data: “AI can only be as good as the data set it works with, and there’s a question about how good the data sets that are currently out there are.” 

This information, she says, could already be biased: some demographics are more likely to access mental health support and are therefore represented in the data, while groups that aren’t already engaged won’t be. 

Harper says the Limbic chatbot has increased engagement among some of these harder-to-reach demographics. A 2023 survey of 129.4k patients across several NHS sites found that therapy services using Limbic saw a 179% rise in non-binary individuals accessing mental health support and a 29% increase among ethnic minority groups. 

The platform can use the data generated from its interactions with over 260k patients to further refine its AI models and create more representative data. 

The shortcomings of AI

But Knibbs says AI falls short when it comes to the more complex side of therapy: “Even if it passes the Turing test (a test of whether a machine can exhibit intelligent behaviour equivalent to a human’s), or it’s supposed to have the parasocial aspects of a relationship, it is still not a therapeutic alliance that you would achieve between two humans.” 

She says this means AI bots typically struggle to pick up on signals of suicidal intent versus ideation, or signs of self-harm, which are “extremely nuanced and complicated topics as it is”.

Being the first mover in a hot market is usually the goal for an aspirational startup, but Javier Suarez, cofounder and CEO of Oliva — which provides companies with access to therapy services as an employee benefit — says the company is content to trail behind when it comes to AI: “We’re happy to be followers and not leaders.” 

That’s mainly because AI isn’t yet sophisticated enough for Oliva to feel comfortable using it for a chatbot: “There’s no proof that care can be delegated to machines,” says Suarez. 

The platform uses AI in a different capacity: its proprietary model analyses a user’s messages or speech during sessions to recommend relevant activities between meetings to complement real-person care. 

“We do envision in the future for AI to go into the care delegation side of things,” says Suarez, which he sees as “properly exchanging a practitioner with a machine.” “But we will only do that once there's proof that it’s as effective [as human therapy], and we're far away from that. There's no proof, so we don't want to compromise our care in any way or form.” 

While Theye admits that current AI chatbot tech has its shortcomings, she suggests those shortcomings could make it well suited to the role of therapy assistant. “We believe we can build emotional intelligence, but we cannot build empathy — this is where we draw the line. We don't want an empathetic bot, we want an emotionally intelligent bot,” she says. 

Emilia Theye and Celina Messner, cofounders of clare&me

That’s because empathy isn't useful in a therapy context, she says: while a chat with a friend about a problem at work might prompt them to share their own similar experience and empathise, a therapist must instead stay neutral and offer sensitive solutions.

To accommodate this, clare&me’s chatbot is trained to follow the more logic-driven approach of cognitive behavioural therapy (CBT), drawing on clinical data and information contributed by professional psychologists. 

Knibbs says this form of therapy is easier to train an AI to replicate given its systematic nature — but when it comes to more nuanced forms of therapy, like analysing behaviour or body language, AI is a long way off from that complexity. 

Personal privacy

If a therapist is concerned about a patient, certain clauses allow them to break confidentiality and contact emergency services. But as digital apps, many of these startups are bound by data privacy regulations that could restrict them from doing the same. 

Though clare&me is intended as a support for people early in their therapy journey, there’s little to stop patients who require more professional help from signing up. In the onboarding process, the chatbot says that “when people feel very down or are in a state of mental crisis, it is a widespread phenomenon to think about ending your own life,” and asks if you can relate to those thoughts. If you press ‘No,’ you can continue onboarding.

If a user does say something concerning in the chat, the bot is trained to spot warning signs according to guidelines set by the psychologists who helped to train it, says Theye. 

It’ll then ask questions to assess the risk, and decide whether it was a false alarm or whether to recommend emergency helplines. 

Theye acknowledges the limitations of AI when it comes to spotting these signs, and says the chatbot is trained to be overly cautious to catch any potential dangers. There’s also a human in the loop who reviews each potential case the AI flags to double-check the decision it made — a safeguard that Neuhaus says every startup using AI in this context should have. 

“We do as much as we can while protecting the user’s privacy, but we’re not allowed to reach out to the user in any other way,” says Theye. As a safety measure, the startup is considering adding a buddy system where users can nominate a trusted contact who will be notified if the bot thinks the user could be at risk. 

Limbic’s connection with NHS services means that in cases where the user’s safety is a concern, there is a way for a professional to be notified. “If Limbic recognises that a patient is in distress and/or in need of urgent treatment or attention,” says Harper, “the platform immediately refers the patient to seek in-person support and alerts the healthcare provider to this as well.” 

Ultimately, Neuhaus says AI in this space could be worth the risks if done right. “The opportunities are definitely bigger than the risk — but given that it is such a sensitive topic, they cannot be neglected.” 

Sadia Nowshin

Sadia Nowshin is a reporter at Sifted covering foodtech, biotech and startup life. Follow her on X and LinkedIn