Analysis

March 16, 2022

The government algorithms making life and death decisions — and getting them wrong

Algorithms are being used by governments to make key decisions on benefit payments and immigration, often unfairly


Tim Smith

6 min read

Around three years ago, Rick Burgess and other members of the Greater Manchester Coalition for Disabled People (GMCDP) began noticing a change in the way that UK authorities were investigating their benefits payments.

“There were weird patterns in how some people seemed to get treated. And there certainly seemed to be people that got subjected to repeated scrutiny,” says Burgess.

Then, in February 2021, an investigation by charity Privacy International found that the UK Department for Work and Pensions (DWP) — the wing of the British government that handles benefits payments — was using an algorithm to filter people suspected of benefit fraud.


Burgess says that a high proportion of disabled people from the GMCDP have been investigated for benefit fraud, even though fraudulent claims for disability benefits are rare.

“Fraud in disability benefits is 0.5%, that’s one in 200. It's not a systemic problem. The fact that people even associate the word fraud with benefits is a political manipulation of consciousness,” he tells Sifted.

It’s a cautionary tale for any tech company using AI, especially at a time when more investors are pushing for it. With more and more startups claiming AI capabilities, leaders who aren’t on top of biases in their data risk losing the trust of their customers and stakeholders — or, in more extreme cases like the one Burgess describes, causing distress and harm.

Rick Burgess, Greater Manchester Coalition for Disabled People

'There appear to have been suicides'

Often socially endemic biases are at the root of problems in AI, such as racial discrimination documented in facial recognition technology. 

Burgess believes that a culture of discrimination towards disabled people is likely to be reflected in the way the algorithm makes decisions, which might explain why so many disabled people he knows have been investigated.

The impact of these misplaced investigations into alleged benefit fraud is huge. The process can be long and involve distressing interrogations, Burgess says. 

“It's super stressful. And if this is someone who suffers mental health distress, this can be absolutely devastating. It's not great for anyone, but if you're particularly susceptible to paranoia and anxiety, particularly, this can absolutely be the final straw,” he says. “The worst instances are of course where the stress has been too much for people. There appear to have been suicides.”

Burgess pointed Sifted to a blog post written by Nila Gupta, a disability activist and journalist who died by suicide in June 2021. The post is worth reading in full, but in it she describes the "traumatic and violent and all-encompassing" experience of dealing with the DWP to get access to disability benefit payments.

Last month, Foxglove, a London-based non-profit that advocates for “tech justice”, worked alongside the GMCDP to submit a formal “pre-action” legal letter to the DWP, which demands that it “come clean about exactly how its algorithm works and provide clear evidence that it does not unfairly discriminate against disabled people”.

In response to claims that the stressful nature of the investigation process could have contributed to deaths by suicide, and to the allegation that algorithmic bias might unfairly result in disabled people being targeted for fraud, the DWP said it "will be responding to the letter written by the representatives of the Greater Manchester Coalition of Disabled People in due course”.


Recreating biases

This isn’t the first government algorithm that Foxglove has fought against. In 2020 it worked with the Joint Council for the Welfare of Immigrants to launch a legal challenge against a UK government algorithm that was being used to filter visa applications.

In August 2020 the government stopped using the algorithm following claims that the technology “discriminated on the basis of nationality — by design”.

Foxglove director Martha Dark tells Sifted that visa applicants from certain countries were automatically blocked by the system.

“There was a list of secret nationalities and if you held one of those nationalities, you automatically got your visa application denied,” she says.

Martha Dark, Foxglove director

The fact that a whole set of nationalities were automatically denied visas is a clear example of pre-existing human biases being written into an algorithm, but Dark also describes how unfair or prejudiced decisions can become entrenched in a machine learning “feedback loop”.

“One algorithmic decision process will inform future decision processes. So you're sort of in a doom loop where if the algorithm has made a decision about someone based on a set of data once, it will make that decision again,” she explains.
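To make that “doom loop” concrete, here is a minimal, purely illustrative Python sketch — a toy simulation, not based on the DWP, Home Office or any real system. A model starts with a tiny skew against one group; because the cases it flags become the data it is “retrained” on, the skew grows with every round.

```python
# Toy feedback-loop simulation (hypothetical, for illustration only).
# Group "A" starts out only slightly more likely to be flagged, but the
# next round's model is adjusted using the flagged cases alone, so the
# over-representation of group A in that data compounds over time.
import random

random.seed(0)

BASE_RATE = 0.10        # flag rate an unbiased model would apply to everyone
bias_against_a = 0.02   # small initial skew against group "A"

for round_no in range(1, 6):
    flagged = {"A": 0, "B": 0}
    for _ in range(10_000):
        group = random.choice(["A", "B"])
        rate = BASE_RATE + (bias_against_a if group == "A" else 0.0)
        if random.random() < rate:
            flagged[group] += 1

    # "Retraining" step: the new model's skew grows in proportion to how
    # much group A dominated the previous round's flagged cases.
    share_a = flagged["A"] / (flagged["A"] + flagged["B"])
    bias_against_a += 0.2 * (share_a - 0.5)

    print(f"round {round_no}: flagged A={flagged['A']:4d}  B={flagged['B']:4d}  "
          f"learned skew={bias_against_a:.3f}")
```

Run it and the gap between the two groups widens every round, even though the underlying population never changes — which is the pattern Dark is describing.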

While the UK’s Home Office did stop using the algorithm, it denied that it represented “speedy boarding for white people”, as Foxglove described it.

"We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure," a spokesperson said at the time.

Impacting the most vulnerable

And this isn’t just a UK issue. In October 2021 digital rights organisation Eticas Foundation launched the Observatory of Algorithms with Social Impact (OASI), a public registry of algorithms used by governments from all over the world.

Some European examples include an algorithm used by the Polish Ministry of Justice to match judges to court cases, one that allocates social benefits in Sweden and one that determines which services unemployed people in Spain can access.

Gemma Galdón-Clavell, founder of the Eticas Foundation, says: “What we are finding is that those systems often fail, and so they make bad decisions. They're deciding to not give unemployment benefits to someone who is eligible for them. They’re deciding to not assign the right risk to a woman, who then six months later ends up being killed by her partner.

"When bad decision making is in social services, that means that people that are entitled to things don't get those things, or people that should be protected are not being protected.”

Gemma Galdón-Clavell, founder of the Eticas Foundation

Galdón-Clavell adds that, outside of the UK and US, very little is being done to hold automated decision-making in government to account, partly due to the lack of transparency around how algorithms are being used and what data they’re processing.

She also believes that, while automated decision-making might be brought in to make time and efficiency savings, the real effect is often the opposite: “What we see is inefficiency after inefficiency after inefficiency, up to the point where many of the systems that we've looked at end up not being used… The investment in AI is just a massive waste of money.”

Algorithms like these tend to affect the most vulnerable in our society. Whether it’s those in need of social security payments, or migrants trying to find a better life in a new country, it doesn’t tend to be the people in power who rely on these kinds of decisions. When those decisions are being automated, the public has, at the very least, the right to know how that automation works.

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.