• What does it mean when a recruitment AI shows prejudice? Amazon has scrapped its recruiter robot after it taught itself to make sexist hiring choices.
• The ASA upholds a complaint against a misleading job advert. Are your listings compliant?
Amazon ditches prejudiced recruitment AI
In a time of equality, diversity, and worker rights, artificial intelligence is often hailed as a great solver of disputes over fairness. The recruitment industry can barely go a month without a new think-piece tackling unconscious bias and unfair hiring practices. These articles usually resolve with the idea that unconscious biases are impossible to avoid – and that recruitment AI will herald a new era of equality at work.
But what happens when your robot recruiter is an obvious sexist?
This was the awkward position global retail giant Amazon found themselves in when they began developing a recruitment AI back in 2015.
The project – praised as a "holy grail" of recruitment – aimed to sift resumes and produce an in-house shortlist of top candidates. One commentator described the aim as feeding in high volumes of resumes; "it will spit out the top five. We'll hire those."
Instead, the recruitment software taught itself to screen out candidates who shared traits with previous, unsuccessful applicants. The machine then learned from its own results, reinforcing its pre-existing rules as time went on. Uncomfortably, these rules often hinged on traits that singled out women. Eventually, candidates who listed attendance at women-only universities, or activities like "women's soccer", were being removed from shortlists.
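To make that failure mode concrete, here is a minimal, purely illustrative sketch. It is not Amazon's code: the handful of resumes and outcomes below are invented, and it assumes scikit-learn is available. It simply shows how a screener trained on skewed historical decisions can attach a negative weight to a word like "women's".

```python
# Hypothetical illustration only: invented mini-dataset, not Amazon's data or model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes with their past outcomes: 1 = hired, 0 = rejected.
resumes = [
    "software engineer java chess club captain",            # hired
    "software engineer python rugby team",                  # hired
    "software engineer java hiking marathon",               # hired
    "software engineer python women's chess club captain",  # rejected
    "software engineer java women's soccer team",           # rejected
    "software engineer python women's debating society",    # rejected
]
outcomes = [1, 1, 1, 0, 0, 0]

# Turn each resume into word counts and fit a simple linear classifier.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, outcomes)

# Inspect what was learned: "women" ends up with a negative weight purely
# because it correlates with past rejections in this skewed history.
for word, weight in sorted(zip(vectoriser.get_feature_names_out(),
                               model.coef_[0]), key=lambda pair: pair[1]):
    print(f"{word:>10}  {weight:+.2f}")
```

The point is not the specific numbers but the mechanism: nothing in the code mentions gender, yet the proxy term inherits the bias of the historical decisions the model was trained on.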
When Amazon discovered this, they ended the project. A spokesperson said that the recruitment automation software was “never used by Amazon recruiters to evaluate candidates."
A problem with programming
The Amazon process also failed to weight individual traits and qualifications. This meant that wholly unsuitable candidates were often shortlisted simply because they shared extracurricular activities with previous success stories.
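A small, hypothetical sketch (invented traits and candidates, not Amazon's process) shows why unweighted matching goes wrong: when every shared trait counts equally, hobbies can outscore core qualifications.

```python
# Hypothetical illustration: unweighted trait matching against past hires.
PAST_HIRES = [
    {"python", "cs_degree", "5_years_experience", "rowing", "chess"},
    {"java", "cs_degree", "3_years_experience", "rowing", "marathon"},
]

def naive_score(candidate):
    """Count traits shared with past hires -- every match counts the same."""
    return sum(len(candidate & hire) for hire in PAST_HIRES)

qualified   = {"python", "cs_degree", "4_years_experience"}
unqualified = {"rowing", "chess", "marathon", "barista"}

print(naive_score(qualified))    # 3 -- python, plus cs_degree matched twice
print(naive_score(unqualified))  # 4 -- hobbies alone beat the qualified candidate
```

A workable scorer would need to weight must-have qualifications far above incidental overlaps, which is exactly the judgement a human recruiter applies without thinking.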
After years of being told we'd all be replaced by machines, human recruiters might welcome the Amazon revelations. But the episode does not expose any inherent flaw in the concept of automated recruitment software.
The principle of "garbage in, garbage out" – flawed inputs, whether data or rules, produce flawed results – is fundamental to machine learning. The algorithms informing the AI decision-making were producing prejudiced decisions: a sort of bias-by-design, if you like. The machine operated to rules that a human had provided. When those rules contained flaws – or were not thoroughly tested – it generated errors, and then used those errors as the basis for its learning over time.
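That feedback loop is easier to see in a toy simulation. The sketch below is hypothetical – invented numbers and a deliberately crude scoring rule, not Amazon's system – but it shows how a screener that scores candidates by resemblance to its own past picks, and then retrains on its own shortlists, can turn a small initial skew into outright exclusion.

```python
import random

random.seed(0)

# Hypothetical toy simulation of a self-reinforcing screening loop.
# "A" and "B" candidates are equally capable; the only difference is a
# slight skew in the history of past picks the screener learns from.
history = ["A"] * 55 + ["B"] * 45          # past picks: a small initial skew

def score(group):
    """Crude learned rule: how common is this group among past picks?"""
    return history.count(group) / len(history)

for round_no in range(1, 6):
    applicants = ["A"] * 50 + ["B"] * 50   # each new intake is perfectly balanced
    random.shuffle(applicants)

    # Shortlist the 20 best-scoring applicants, then learn from that shortlist.
    shortlist = sorted(applicants, key=score, reverse=True)[:20]
    history.extend(shortlist)

    print(f"round {round_no}: B candidates shortlisted = {shortlist.count('B')}, "
          f"B share of training history = {history.count('B') / len(history):.0%}")
```

In the first round the skew is mild; a few rounds later, the model's own shortlists have made its training history so lopsided that the disadvantaged group is never picked at all. Flawed rules feed flawed data back into themselves.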
Teaching a machine to match prospects to previous success stories is simple. Teaching it to learn which factors are significant and which are superficial is proving a more arduous task. Nihar Shah, computer science lecturer at Carnegie Mellon University, says the self-teaching machine is still a pipe dream – for now. "How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable – that's still quite far off."
But the Amazon incident provides us with two important market insights. Firstly: human oversight remains indispensable in today's recruitment processes. Secondly: big business is already investing time and money in finding ways to replace traditional recruiters.
