
Despite significant advances, is AI actually exacerbating recruitment bias against women?

Written by Iain Flinn

Helping business leaders in the enterprise software, Cloud/SaaS/PaaS and emerging technology sectors to identify talent and build high performing teams across EMEA.


In December 2019, the leading human resources publication Personnel Today published an article with the headline, “Recruitment algorithms are ‘infected with biases’.” It argued that while efforts to eliminate bias in the hiring process are to be welcomed, unintentional algorithmic bias is often unavoidable and can even exacerbate the very problems the technology is attempting to eradicate. So, can AI ever truly support employers and candidates and make hiring fairer?

In 2018, Amazon made headlines when it found that its CV-parsing software had a problem: it didn’t like women. The company, which now employs over 1 million people worldwide, was understandably looking to automate much of its recruiting process and give hiring managers a manageable shortlist of candidates to interview. But a flaw in its model meant that candidates for software developer and other technical positions were not being rated in a gender-neutral way.

The problem lay in the data the algorithm had been trained on a decade earlier. At that time, applications for technical roles came largely from men, and so the model learned to treat the ‘typical’ applicant as male. For instance, it favoured words such as ‘executed’ and ‘captured’, which were commonplace on the CVs of male engineers but rarely, if ever, appeared on those of their female peers. And so male candidates were consistently rated more highly.
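
To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a scorer inherits bias from its training data. The tiny ‘historical’ CV set, the word-frequency weighting and the sample CVs are all assumptions for illustration; Amazon’s actual model was far more sophisticated, but the failure mode is the same: vocabulary common in a male-dominated applicant pool ends up carrying the weight.

```python
# A minimal, hypothetical sketch of how a CV-scoring model inherits bias
# from its training data. The "historical" CVs stand in for a decade of
# applications drawn mostly from men; this is NOT Amazon's actual system.
from collections import Counter

historical_cvs = [
    "executed migration project and captured key performance metrics",
    "executed rollout, captured requirements, led deployment",
    "captured stakeholder needs and executed delivery plan",
]

# "Training": weight each word by how often it appears in past successful CVs.
weights = Counter(word for cv in historical_cvs for word in cv.split())

def score(cv_text: str) -> int:
    """Score a CV by summing the learned weights of its words."""
    return sum(weights[word] for word in cv_text.split())

# Two equally qualified candidates described in different vocabularies:
cv_a = "executed platform build and captured test coverage goals"
cv_b = "coordinated platform build and improved test coverage goals"

print(score(cv_a))  # higher: echoes the historical (male-dominated) wording
print(score(cv_b))  # lower: same substance, different vocabulary
```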

It may have been a failed experiment for Amazon, but the case served as a stark reminder that it is not the technology that produces the bias; it is the humans who program it and the data they feed it. Amazon was using AI to make hiring decisions based on data that was out of date and failed to take into account how recruitment trends and practices shift over time.

Indeed, at the time Amazon was developing its hiring platform, the issue of diversity and inclusion barely registered along the corridors of HR departments. By the early 2010s that had begun to change, and today, especially in the wake of last summer’s Black Lives Matter protests, the need for employers to address any underlying biases that might exist, both unconscious and conscious, has never been greater.

The technology simply isn’t ready yet to make hiring decisions on its own and cannot be seen as a replacement for human recruiters. But for those employers who often receive dozens if not hundreds of applications for a single role, can AI still support their hiring efforts?

An article in IEEE Spectrum reasoned that “AI-powered recruiting tools can in fact play a positive role in mitigating human bias and help make the hiring process fairer.” It is right, but the change won’t be easy or quick. Fine-tuning the language used in job postings to attract as diverse a range of applicants as possible needs to be matched by the way the technology learns to extract certain words, terms and expressions: it must determine an individual’s level of expertise and how that relates to the specific requirements of the advertised role, rather than simply pattern-matching on surface features.

For example, matching on the type of education required, such as a degree programme keyword, rather than on the name of a university. Or using software such as Textio’s ‘Tone Meter’, which helps hiring managers write job postings in more inclusive language, steering them away from phrases such as ‘rock star’, ‘ninja’ or ‘high-performer’ that have been found to appeal mainly to men.
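
A rough sketch of both techniques follows. The keyword and phrase lists are invented for illustration, and this is not how Textio’s product works internally; it simply shows the kind of check involved.

```python
# A hypothetical sketch of the two techniques above: matching on programme
# keywords rather than institution names, and flagging phrases known to
# deter female applicants. The word lists here are illustrative only.
PROGRAMME_KEYWORDS = {"computer science", "software engineering", "mathematics"}
EXCLUSIONARY_PHRASES = {"rock star", "ninja", "high-performer"}

def education_matches(cv_text: str) -> bool:
    """Check for a required programme keyword, ignoring where it was studied."""
    text = cv_text.lower()
    return any(keyword in text for keyword in PROGRAMME_KEYWORDS)

def flag_exclusionary_language(posting: str) -> list[str]:
    """Return phrases in a job posting that tend to appeal mainly to men."""
    text = posting.lower()
    return [phrase for phrase in EXCLUSIONARY_PHRASES if phrase in text]

posting = "We need a rock star engineer with a computer science background."
print(flag_exclusionary_language(posting))        # ['rock star']
print(education_matches("BSc Computer Science"))  # True
```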

Right now, AI works well at helping busy hiring managers automate repetitive or bulk processes, such as sourcing and screening applicants and scheduling calls, and at neutralising certain human biases (such as the name of the university attended). But as we saw in the Amazon example, when the algorithms powering the process are flawed, biases will persist, because humans directly influence how machines learn.
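
As one illustration of that bias-neutralising role, screening tooling can redact bias-prone fields before a human ever sees a candidate record. The record format and field names below are assumptions made for the sake of the example.

```python
# A hedged sketch of one way tooling can neutralise a human bias:
# stripping fields that invite it (e.g. university name) from a candidate
# record before human review. Field names are assumed for illustration.
def redact_for_screening(candidate: dict) -> dict:
    """Drop bias-prone fields from a candidate record before review."""
    biased_fields = {"name", "university", "photo_url"}
    return {k: v for k, v in candidate.items() if k not in biased_fields}

candidate = {
    "name": "A. Example",
    "university": "Example University",
    "degree": "BSc Software Engineering",
    "years_experience": 6,
}
print(redact_for_screening(candidate))
# {'degree': 'BSc Software Engineering', 'years_experience': 6}
```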

Both humans and machines have their flaws. However, while tech is getting better at helping hiring managers to create equitable workforces, it still needs humans in the loop to evaluate the decisions being made by AI and override them if necessary.

According to an article published in Harvard Business Review: “The HR departments that realise that science and data, and not intuition or instinct, should be the basis for decisions will attract and retain the best talent.” We agree, and would add that while we advise against allowing hiring decisions to be made by AI alone, because human oversight will always be needed, the overall decision-making process can be enhanced by it. But this is on the proviso that those making the final decisions learn to better understand and interpret the data that AI makes available to them.
