Here’s how companies are using machine learning and data-led assessment to mitigate the influence of bias in the recruitment process and improve hiring outcomes.
How many of us have made the automatic assumption that our ‘nurse’ would be a woman, or been surprised to learn that a colleague’s husband is a stay-at-home dad?
As much as we’d like to think of ourselves as rational, objective decision-makers, the preconceptions and preferences with which we are all born (and raised) result in unconscious personal biases that are nearly impossible to erase. This, of course, is not to say that we are all maliciously prejudicial individuals, only that our decision-making matrices are so complex and convoluted that it is difficult to recognise when certain factors are having undue influence on our choices.
In a business context, unconscious biases can be very costly, especially when it comes to hiring new employees. It is substantially more difficult to determine whether an applicant is a right-fit hire when they’re judged according to personal preferences instead of objective measurements, leading to an unfair, inconsistent process. Furthermore, biased hiring practices are likely to lead to higher employee turnover and potentially damage a company’s employer brand.
Unfortunately, this problem is neither uncommon nor financially insignificant. According to our research, when multiple reviewers score a single candidate on a five-point scale, they disagree on scores 95% of the time and strongly disagree (by two points or more) 49% of the time. Such inconsistencies open the door to a bad hire, a misstep that can often cost a company upwards of £50,000.
Organisations have attempted to remove unwanted personal preferences from their hiring operations using everything from blind CV reviews and automated screening processes to online aptitude tests and organisation-wide unconscious bias training programmes. Some of these offsets work better than others, but none offers a comprehensive solution capable of preventing potentially qualified candidates from falling through the cracks.
In recent years, machine learning and AI-driven technologies have emerged as potential solutions to this predicament. By leveraging powerful data analytics and predictive technologies, machine learning tools keep preconceptions and personal preferences at bay, effectively preventing biases from skewing applicant evaluation and influencing critical hiring decisions.
That being said, organisations should not assume that simply switching to algorithmic recruitment will necessarily solve the problem. In truth, without the proper design and implementation, many machine learning systems end up absorbing and replicating the biases of the individuals who coded them. As David Oppenheimer, a professor of discrimination law at the University of California Berkeley, explains, “Even if [algorithms] are not designed with the intent of discriminating against [specific] groups, if they reproduce social preferences even in a completely rational way, they also reproduce those forms of discrimination.”
As such, the most effective – and reliably bias-counteracting – deployments of data-led recruitment technologies involve collaboration between man and machine. Algorithms are incredibly adept at combing through massive piles of reviewer data, enabling them to pick up on subtle patterns that a human audit would likely overlook.
In short, the best machine learning tools measure every decision that is made about each candidate against a pre-established baseline. When a machine learning platform receives the data from these multiple evaluations, it analyses each reviewer’s scores and identifies their “scoring behaviour”: whether they tend to give higher scores to a given gender, have a propensity for scoring every candidate as “average”, look more favourably upon candidates from a certain type of university, and so on.
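To make the idea concrete, here is a minimal sketch in Python of how a reviewer's scoring profile might be computed. The data, field names, and the `scoring_profile` function are all hypothetical illustrations, not taken from any real platform.

```python
from statistics import mean, stdev

# Hypothetical reviewer scores on a five-point scale, tagged with a
# candidate attribute we want to check for skew. Purely illustrative data.
reviews = [
    {"reviewer": "A", "candidate_gender": "F", "score": 4},
    {"reviewer": "A", "candidate_gender": "M", "score": 2},
    {"reviewer": "A", "candidate_gender": "F", "score": 5},
    {"reviewer": "A", "candidate_gender": "M", "score": 3},
    {"reviewer": "B", "candidate_gender": "F", "score": 3},
    {"reviewer": "B", "candidate_gender": "M", "score": 3},
]

def scoring_profile(reviews, reviewer):
    """Summarise one reviewer's scoring behaviour."""
    mine = [r for r in reviews if r["reviewer"] == reviewer]
    scores = [r["score"] for r in mine]
    by_gender = {}
    for r in mine:
        by_gender.setdefault(r["candidate_gender"], []).append(r["score"])
    return {
        "mean": mean(scores),
        # A very low spread suggests a tendency to rate everyone "average".
        "spread": stdev(scores) if len(scores) > 1 else 0.0,
        # A large gap between groups may indicate a gender skew worth auditing.
        "gender_gap": abs(
            mean(by_gender.get("F", [0])) - mean(by_gender.get("M", [0]))
        ),
    }

profile = scoring_profile(reviews, "A")
```

On this toy data, reviewer A averages two points higher for one gender than the other, exactly the kind of pattern a platform would surface for a human to investigate rather than act on automatically.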
Once all of this data is centralised, the platform identifies and points out anomalies in reviewer decision-making, which could be an indication of bias or subjectivity. If a candidate receives consistently good or consistently bad marks from every reviewer, an organisation can proceed with the appropriate next steps confident that human bias has been neutralised. If, however, a candidate receives generally positive marks but is eliminated from contention on account of a seemingly aberrant evaluation or two, the machine learning platform will automatically reintroduce the candidate into the applicant pool and alert the hiring manager to a potential issue with the baseline measurement or the process itself.
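The flag-and-reintroduce logic described above could be sketched roughly as follows. The two-point threshold mirrors the "strongly disagree" margin mentioned earlier; the function names, scores, and pass mark are illustrative assumptions, not any vendor's actual implementation.

```python
from statistics import mean

# Hypothetical scores for one candidate from four reviewers (five-point scale).
scores = {"A": 4, "B": 4, "C": 4, "D": 1}

def flag_outliers(scores, threshold=2):
    """Flag reviewers whose score deviates from the other reviewers'
    average by the threshold or more (a possible sign of bias)."""
    flagged = []
    for reviewer, score in scores.items():
        others = [s for r, s in scores.items() if r != reviewer]
        if abs(score - mean(others)) >= threshold:
            flagged.append(reviewer)
    return flagged

def should_reinstate(scores, flagged, pass_mark=3):
    """Reintroduce the candidate if the consensus of the
    non-flagged evaluations is positive."""
    clean = [s for r, s in scores.items() if r not in flagged]
    return mean(clean) >= pass_mark

flagged = flag_outliers(scores)
reinstate = should_reinstate(scores, flagged)
```

Here reviewer D's score of 1 sits three points below the others' average of 4, so the evaluation is flagged and the candidate stays in the pool, with the anomaly surfaced to the hiring manager rather than silently discarded.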
77% of companies agree that bias reduction is a key strategic business goal, and 35% already use some form of big data analytics to reduce bias in their recruiting and hiring processes. This is unsurprising when you consider that for every 1% increase in diversity and inclusion, overall sales grow by an additional 0.6%. As such, one would be hard-pressed to dispute that machine learning technologies will play an ever-increasing role in modern HR.
If you'd like to learn more about reviewer decision-making and data-driven assessment, download our recent research, How Consistent Are Interviewers When Rating Job Applicants?