Analysis

June 11, 2024

European neobanks take a gamble on AI chatbots - will it pay off?

Europe's neobanks initially exercised caution with generative AI — but a small group of challenger banks are now willing to take the leap


Tom Matsuda

5 min read

Lunar CTO Kåre Kjelstrøm

In the 18 months since ChatGPT’s debut, Europe’s biggest neobanks have largely shied away from incorporating generative AI into consumer-facing products. 

In a March interview with Financial News, Monzo’s chief operating officer Sujata Bhatia spoke of AI’s potential to assist in fraud prevention and make internal processes smoother but notably left out any mention of a consumer-facing chatbot product.

Researchers are concerned that often-unreliable GenAI chatbots could give users unpredictable advice, warning that any product put in front of consumers must be tightly contained. Some challenger banks are, however, seeing that the rewards might outweigh the risks.  


In February, Klarna kicked off fintech’s GenAI love affair by launching its customer service AI assistant, which the Swedish buy now, pay later company claims can do the work of 700 customer service agents. 

Alongside run-of-the-mill customer service queries like refunds and returns, the chatbot also advises on spending limits and outstanding payment schedules — requests that some might argue verge into the territory of financial wellbeing. 

Digital banks are starting to get in on the action. In recent months, Dutch neobank Bunq and Denmark’s Lunar unveiled AI-powered financial assistants that can give insights into customer spending habits and breakdowns of holiday spending.  

While the assistant is currently in beta, Lunar’s goal for it is far more ambitious than just tallying up the cost of a trip to Mallorca. 

“It’s just the beginning of a journey towards providing a bespoke banking experience,” says Lunar CTO Kåre Kjelstrøm. 

So will fintech’s foray into consumer-facing AI be good for users, bringing with it more personalised experiences and products, or is the technology simply still too risky to tell everyday consumers what to do with their money?

Don’t give investment advice

“If you are building a chatbot for consumers, I would keep it very contained,” says Ram Gopal, a professor at the Warwick Business School’s fintech research centre, adding that GenAI tools are still prone to hallucination (inventing facts) and can often give inconsistent answers to the same question. 

“I’d make sure that the scope of what you’re providing is tight enough so not too many things go wrong.” 

Kjelstrøm wants Lunar’s chatbot to be able to handle sensitive requests like guiding a user to change a PIN code, examining spending habits to create automated savings accounts and providing information on the day’s top trading stocks. 

The company is using OpenAI’s API to power its AI but says that it has fine-tuned the model on proprietary data, which Kjelstrøm says will make the technology more reliable than off-the-shelf chatbots like ChatGPT.


“This is the key element,” says Kjelstrøm. “We can give the bot a lot of context about who it’s talking to right now.” 

Still, the bank is holding off from allowing the chatbot to give investment advice until the company can be more sure of its accuracy. 

“If we start having an AI that gives investment advice, we might get into a legal situation if it misinforms,” says Kjelstrøm. “So before we roll something out like that, we want to make sure that it really works and does better than a human.” 

B2C versus B2B

Anna Storåkers, cofounder of Swedish investment firm Yanno Capital, also argues that it’s tricky to build a financial chatbot for the consumer market in a responsible way, citing the risk of inaccuracy. Instead, she prefers to invest in AI startups geared towards the B2B market, which she deems safer. 

“From a more B2B perspective, it’s more about finding and identifying already existing information in order to streamline the work process and deliver the data,” she says. “In direct-to-consumer, if you’re generating a judgement on something and starting to give advice, it’s a very different thing.” 

Yanno recently invested in Grasp, an AI assistant for investment bankers, as part of the startup’s $1.9m funding round. Founded in 2020 by former McKinsey consultants, Grasp’s chatbot scans press releases, company websites and news articles to identify M&A opportunities in a specific region or industry. 

CEO and cofounder Richard Karlsson says hallucination is less likely because its model is geared toward a specific niche, making it easier to put in guardrails. 

“We build our AI systems to only do that workflow for that use case and if you try to do something else it just won’t work,” he says. “If you ask it for something completely different, it will just tell you it can’t do it.” 

Grasp’s clients are investment bankers, private equity firms and financial advisors. This means it can rely on its customers’ professional experience in making financial decisions to mitigate risk. 

“What we are building is something that helps finance workers do their work more efficiently but they are ultimately doing the analysis on top of the data as well,” he says.

Making it work

That’s not to say that fintechs won’t make AI work for consumers, and some have been doing so since long before the rise of ChatGPT. 

Saving app Plum — which was founded in 2016 — uses automation for its investment and savings tools. AI is also utilised in its financial crime prevention and customer service processes.

The company is looking to get into the GenAI game and is developing its first user-facing GenAI product, its head of data Joao Cunha tells Sifted, although Plum says it’s too early to give details.

Lunar is pushing ahead with further improvements to its AI assistant, planning to supplement it with data from the company’s own internal large language model Lunar Mind to improve its performance.

Warwick’s Gopal argues that it is inherently safer for tech companies to build GenAI products for employees, rather than customers, as they are more likely to double-check the information that a chatbot spits out.

But, given the huge amount of VC excitement around the technology, consumers should steel themselves for a whole new suite of AI-powered products landing on their doorsteps before long, and take the right precautions with any automated advice they’re given.

Tom Matsuda

Tom Matsuda is a fintech reporter at Sifted. Find him on Twitter and LinkedIn