Analysis

December 20, 2023

Can AI replace your toxic boss, and should it?

AI is increasingly being used to help manage employees in the workplace. Can it be better than human bosses?


Tim Smith

4 min read

If you’ve been paying attention, you’ll know by now that in every field from warehouse work to financial services, AI is coming for our jobs.

But if there is one role most of us probably won’t mind being made obsolete, it is the most loathed figure in any workplace: the toxic boss. You know the type: the manager who blocks junior staff from progressing and piles on more work without regard for people’s capacity to get it done.

Now companies are selling AI-powered products to take the pettiness out of professional assessment, and to try to help employees when work stress gets too much.

“Individuals have their favourites, and then [they] have the people they get on less well with. Favourites typically progress better than people they get on less well [with],” says Jonathan Passmore, professor of coaching and behaviour change at Henley Business School.

Evidence-based management

Passmore says one key way AI can make employee management more objective is by removing interpersonal bias from performance decisions and grounding them in data instead.

“You look at boards — they often appoint people who are just like them, so you have less diversity on those boards than you might otherwise expect, if people were simply recruiting on the basis of criteria or talent,” he says.

“Evidence-based feedback frequently is lacking. And there is the potential as we move forward that AI-powered tools could help us to do that… Particularly being integrated with much of the workplace office software that we use each and every day.”

Germany-based Retorio has built an AI-powered platform to assess the performance of customer-facing employees, letting them train in simulated conversations and scoring their performance against criteria like “openness” and “agreeableness”.
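As an illustration only (Retorio hasn’t published its scoring internals; the trait list, keyword markers and `score_trait` helper below are assumptions), a trait-based assessment of a simulated conversation might look something like this sketch in Python:

```python
# Hypothetical sketch of trait-based scoring of a simulated conversation.
# The traits, markers and score_trait() are illustrative stand-ins,
# not Retorio's published method.

TRAITS = ["openness", "agreeableness"]

def score_trait(transcript: str, trait: str) -> float:
    """Stand-in for a trained model that rates a transcript 0..1 on a trait.
    Here a trivial keyword heuristic, purely for demonstration."""
    markers = {
        "openness": ["what if", "let's try", "good idea"],
        "agreeableness": ["thank you", "happy to", "of course"],
    }
    hits = sum(transcript.lower().count(m) for m in markers[trait])
    return min(1.0, hits / 3)

def assess(transcript: str) -> dict[str, float]:
    """Score one simulated customer conversation on every trait."""
    return {trait: score_trait(transcript, trait) for trait in TRAITS}

print(assess("Of course, happy to help. Thank you for your patience."))
# -> {'openness': 0.0, 'agreeableness': 1.0}
```

In practice the scoring model would be trained rather than keyword-based, but the shape is the same: a transcript goes in, a per-trait score comes out, and the same rubric is applied to every employee.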

Cofounder Christoph Hohenberger says that Retorio’s system also allows staff to find information that they need to improve in their role, without always having to ask a human boss, who might be busy or unwilling to help.

“There are a lot of people that might not always get the answer they need from their manager, and then they get stuck and they don't know what to do,” he says. “Retorio gives people more flexibility to acquire that knowledge.”

Burnout

AI isn’t only being used for professional performance assessment and development; it is also being designed to step in when things go wrong.

London-based BobbyChat is a startup using the kind of large language model (LLM) technology that powers apps like ChatGPT to build a tool that, it says, can help workers deal with burnout better than an untrained or uninterested boss could.

Users of the app, which is currently in beta but will eventually run on a subscription model, can talk to an AI via a messaging interface. Cofounder Antonia Cresswell says the AI doesn’t give advice, but asks questions to help people find solutions for themselves.

“Burnout is devastating. Some people can take around three years off work if they’ve experienced burnout; it can be really hard to recover,” she says.

“Around 75% of people require some kind of mental health support, but they don't necessarily reach a threshold for formal therapy. The goal was to also help that group of people who otherwise don't really have any options.”

Responsible use

BobbyChat is treading a fine line here. Burnout, Cresswell explains, is not an officially recognised medical condition in the UK, meaning the AI tool does not need to be approved as a “medical device” and the company doesn’t have to go through a stringent safety-check process.

Cresswell says this allows BobbyChat to iterate faster than if it were a licensed medical product, but it also means the AI can’t be used to signpost people whose burnout might be connected to other mental health issues.

“If we started putting in detectors for anxiety and depression, technically that would be diagnosing, and that would potentially make us a medical device. So we don't do this,” she says.

Many argue that using generative AI, which is known to sometimes behave erratically and “hallucinate” (give incorrect information), to deal with complex emotional issues isn’t a good idea.

“If you don't have a human working at this very personal intimate level — particularly in areas such as therapy or individuals who are vulnerable — there are potential dangers around that,” says Passmore, pointing to the example of a case in Belgium where a man died by suicide after using a chatbot therapy app.

Cresswell says that BobbyChat avoids many of the issues that come with general-purpose language models by incorporating an AI approach called “decision trees”, which makes the bot’s answers more predictable.
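To make the idea concrete: in a decision-tree setup, every question the bot can ask is authored in advance, and the system’s only job is to route the user to the next node, so it never free-generates advice. Below is a minimal, hypothetical sketch of that pattern in Python; the `Node` structure and keyword router are illustrative assumptions, not BobbyChat’s actual implementation.

```python
# Minimal sketch of a decision-tree-guided chat flow. The tree, keywords
# and routing logic are hypothetical; BobbyChat's internals aren't public.

from dataclasses import dataclass, field

@dataclass
class Node:
    question: str                                  # pre-written reflective question
    branches: dict = field(default_factory=dict)   # keyword -> next Node

# Every reply the bot can give is authored in advance, so the system
# never free-generates advice.
root = Node(
    "What part of work feels heaviest right now?",
    {
        "workload": Node("If you could hand one task back, which would it be?"),
        "manager": Node("What would you want your manager to understand?"),
    },
)

def step(node: Node, user_text: str) -> Node:
    """Route the user's message to a child node by keyword matching.
    (A production system might use an LLM as the classifier here.)"""
    lowered = user_text.lower()
    for keyword, child in node.branches.items():
        if keyword in lowered:
            return child
    return node  # no match: stay on the current question

# Usage: the bot only ever asks questions drawn from the tree.
node = root
print(node.question)
node = step(node, "My workload is out of control")
print(node.question)
```

The trade-off is built in: a bot constrained this way can never say something it wasn’t given, which limits the harm a hallucinating model could do, but also caps how far the conversation can go.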

Some will say that handing career development and workplace mental wellbeing support over to AI risks turning our offices into an Orwellian surveillance state where the machine knows, and controls, too much about our lives.

But given that our workplaces are already full of power-hungry leaders who commonly treat their juniors without a human touch, who’s to say the AI would do a worse job?

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.