Opinion

October 8, 2021

Europe wants to champion human rights. So why doesn’t it police biased AI in recruiting?

EU jobseekers are navigating AI algorithms blind. They shouldn’t have to.


Nakeema Stefflbauer

EU policymakers say they want to chart a “third way” for AI regulation, one that respects human rights like privacy. But the proposed AI regulation, and regulators at all levels of government in the region, are ignoring one key factor: race.

Nowhere is that clearer than in recruitment. While the EU aims to define its digital future with the Digital Services Act, job seekers all over Europe are being left at the mercy of biased algorithms on digital jobs platforms. The existence of these biases is well documented, though LinkedIn, in particular, has taken steps to counter the allegations. And we aren’t just talking about US job platforms, but also homegrown ones that link to the giants and avail themselves of the same algorithmic filtering mechanisms.

This is important from a human rights and equality standpoint alone: no one should be denied equal opportunity in employment because of their race. But it is also economically important. We know diversity is a driver of innovation and growth, two things that Europe desperately needs to compete globally.

Algorithms are a kind of computer shorthand used to identify, reproduce and predict patterns in data, like a child learning to speak the language of adults around it. Sound useful? It is, especially for employers looking to cut through tons of résumés to get to the “good ones” quickly. 
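
To see how that pattern-learning plays out in screening, here is a deliberately simplified sketch in Python. Every name, field and value is invented for illustration, and no real platform’s code looks exactly like this; but the core logic, rewarding resemblance to past hires, is the idea at stake:

```python
# A deliberately simplified "resume screener" that learns the majority
# pattern among past hires and scores new applicants by resemblance.
# All names, fields and values here are invented for illustration.

from collections import Counter

past_hires = [
    {"university": "TU Munich", "zip_code": "80331", "gap_years": 0},
    {"university": "TU Munich", "zip_code": "80333", "gap_years": 0},
    {"university": "RWTH Aachen", "zip_code": "52062", "gap_years": 0},
]

def majority_profile(hires):
    """Find the most common value of each attribute among past hires."""
    return {
        key: Counter(h[key] for h in hires).most_common(1)[0][0]
        for key in hires[0]
    }

def score(applicant, profile):
    """Count how many attributes match the majority pattern."""
    return sum(applicant[key] == profile[key] for key in profile)

profile = majority_profile(past_hires)

applicants = [
    {"university": "TU Munich", "zip_code": "80331", "gap_years": 0},
    {"university": "Cairo University", "zip_code": "11765", "gap_years": 2},
]

for applicant in applicants:
    print(applicant["university"], "->", score(applicant, profile))
# TU Munich -> 3
# Cairo University -> 0
# The second applicant loses points purely for differing from past
# hires; nothing in this "model" measures ability to do the job.
```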

Not so nifty, however, is how algorithms treat outliers: people whose data differs from that of the majority of applicants, or from the majority of already-hired employees, especially people from non-white ethnicities. That data might be as simple as your name, according to a 2017 Stanford University study. The study analysed billions of words on the internet and found that embeddings for names associated with being white were similar to those for positive words, while names associated with non-white ethnicities were more similar to negative words. “Bias was baked into the words.”
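
Here is a toy illustration of that embedding effect. These two-dimensional vectors and name associations are invented; real studies derive them from models trained on billions of words of web text:

```python
# Toy illustration of the embedding effect: words that appear in similar
# contexts end up close together in vector space. These two-dimensional
# vectors are invented; real embeddings have hundreds of dimensions.

import math

def cosine(u, v):
    """Cosine similarity: near 1.0 means 'points the same way'."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

embeddings = {
    "pleasant":   [0.9, 0.1],
    "unpleasant": [0.1, 0.9],
    # Invented name vectors standing in for what a model absorbs from
    # the contexts in which each name appears online.
    "Emily":      [0.8, 0.2],
    "Lakisha":    [0.2, 0.8],
}

for name in ("Emily", "Lakisha"):
    print(
        name,
        "pleasant:", round(cosine(embeddings[name], embeddings["pleasant"]), 2),
        "unpleasant:", round(cosine(embeddings[name], embeddings["unpleasant"]), 2),
    )
# Emily pleasant: 0.99 unpleasant: 0.35
# Lakisha pleasant: 0.35 unpleasant: 0.99
# A filter built on such embeddings inherits these associations without
# ever being told anyone's race.
```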

The race-and-ethnic-name association problem is an ongoing contradiction in European recruitment. It is illegal in countries like France and Germany to collect data on racial or ethnic identity, yet the standard European CV includes an applicant’s name and photo, which can expose you to bias and create a ‘gotcha’ in the algorithmic sorting of your credentials. One founder in Germany even took to LinkedIn to share his frustration with Europe’s old-fashioned, human recruitment bias.

Now consider the impact of these human biases at scale, when computer algorithms embed and associate meaning with the data they assess. Algorithms don’t, technically, create biases, but they aggregate data in ways that can segregate you from candidates whose data fits the majority pattern. Once gender identity, the dates of your degrees, your zip code or a photographic classification of race is collected, it’s easy to see how being deemed too old, too female, too ethnic or too foreign on Europe’s algorithm-driven jobs platforms becomes a recipe for discrimination.
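
And here is a hypothetical sketch of how a proxy like a zip code can carry that signal even when no protected attribute is ever collected. All of the data is made up:

```python
# Hypothetical sketch of proxy discrimination: no protected attribute is
# ever collected, yet a correlated field (an invented zip code) carries
# the same signal. All data here is made up.

training = [
    # (zip_code, was_hired): in this invented hiring history, the
    # decisions happened to track neighbourhood.
    ("10115", True), ("10115", True), ("10117", True),
    ("12043", False), ("12043", False), ("12045", False),
]

def hire_rate_by_zip(rows):
    """Historical hire rate per zip code."""
    counts = {}
    for zip_code, hired in rows:
        hired_so_far, total = counts.get(zip_code, (0, 0))
        counts[zip_code] = (hired_so_far + hired, total + 1)
    return {z: hired / total for z, (hired, total) in counts.items()}

print(hire_rate_by_zip(training))
# {'10115': 1.0, '10117': 1.0, '12043': 0.0, '12045': 0.0}
# A model "predicting" hireability from this data simply replays the old
# pattern: applicants from the wrong zip code never make the shortlist.
```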

Avoiding the collection of ethnic and racial data is not the answer: large-scale studies, like one conducted in France in 2018 and 2019, show that banning photos and addresses from job sites does not protect minority job seekers from racial discrimination. Nor is it practical to ask job seekers who aren’t straight, white, cisgender men to hide their identities online. The worst part of deploying these algorithms at scale in the EU? Beyond the hype about more AI as the solution to algorithmic bias, every new “AI” solution further normalizes the automated classification of human beings.

It’s a man’s world on the internet. A young, well-educated white man’s world, to be specific, complete with algorithms that track and isolate people without those characteristics. (The recently evaluated “ethnicity estimator” that VisionLabs has rolled out in Europe is another example of this.)

If the Digital Services Act is to do anything meaningful for European residents, it should require digital employment and networking platforms to make their algorithms transparent. AI algorithms have been shown to encode social, cultural and other biases, yet platforms active in the EU continue to deploy them at scale. And when algorithmic bias is discovered, the algorithms in question should be prohibited from use in the EU.

You would think Europe’s past had given us enough classification by race, gender, age and sexual orientation. Shouldn’t there be a better way to understand job seekers’ diverse identities than algorithms that tell us how closely they resemble young, straight, cisgender white men?

Nakeema Stefflbauer is division director of digital client services at ERGO and the founder of FrauenLoop, a nonprofit computer programming school for women in Germany.