In the world of recruitment, machine learning is going to create efficiency, add value to the candidate experience and help companies select the best people. But we still prefer to trust human judgement over algorithms, a tendency known as 'algorithm aversion'; it's in our nature.
Nearly twenty years ago, online assessments arrived on the HR scene. At the time I had recently started work in occupational testing, where a key challenge was to champion this new way of hiring in the face of resistance from leaders.
What happens if people cheat? What if it doesn't work? How will we ensure exam conditions? How can these tests possibly predict performance in a role? Despite these (often exaggerated) concerns, the efficiency argument won out and companies started to adopt online assessment.
We now face a similar challenge with machine learning.
Plenty of academic papers (including this one in The Journal of Experimental Psychology) explain why. In our context, it translates into concerns about bias, trust in machine-made decisions and the candidate experience.
Here I'd like to demonstrate why there's nothing to fear from machine-made decisions by answering some of the questions I hear when I'm talking about new developments like LaunchPad Predict, an application of machine learning that predicts right-fit hires.
1. Can machine learning perpetuate discriminatory hiring?

No, as long as you build the model correctly.
Whether algorithms can learn ‘bad habits’ and inadvertently perpetuate discriminatory hiring is a common concern. Humans are inconsistent; recruiters’ decisions can be affected by unconscious bias, tiredness, stress and even hunger.
Unlike a human's, though, an algorithm's bias can be identified and stripped out.
We take a supervised learning approach, with psychologists and data scientists building a model on what underpins good performance in a job (we call it construct validity), not on what recruiters say or do. This enables the model to predict which applicants are likely to be a strong or poor fit for a role.
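To make the idea concrete, here is a minimal sketch of that supervised-learning setup. All feature names, data and the choice of model are hypothetical illustrations, not LaunchPad Predict's actual method: the point is that the labels come from measured job performance, not from recruiter decisions, and protected attributes are simply never given to the model.

```python
# Illustrative sketch only: supervised learning on job-relevant constructs.
from sklearn.linear_model import LogisticRegression

# Each row scores a past hire on hypothetical job-relevant constructs
# (problem solving, communication, conscientiousness). Protected attributes
# such as gender or ethnicity are never included as features.
X = [
    [4.1, 3.8, 4.5],
    [2.0, 2.5, 1.9],
    [3.9, 4.2, 4.0],
    [1.8, 2.1, 2.4],
]
# Labels reflect measured performance in the role ("construct validity"),
# not which candidates recruiters happened to like.
y = [1, 0, 1, 0]  # 1 = performed well, 0 = did not

model = LogisticRegression().fit(X, y)

# Score a new applicant on the same constructs.
applicant = [[3.7, 4.0, 4.3]]
fit_class = model.predict(applicant)[0]
fit_probability = model.predict_proba(applicant)[0][1]
print(fit_class, round(fit_probability, 2))
```

Because the model only ever sees job-relevant constructs and performance outcomes, there is nothing for it to learn from a recruiter's tiredness, stress or unconscious bias.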
2. Is ‘algorithm aversion’ limiting the effectiveness of recruitment tech?
At the moment, yes it is. People are still apt to dismiss decisions made by a machine even when they are shown evidence, as this recent piece in The Quarterly Journal of Economics shows.
Charismatic CEOs "don't like delegating critical business decisions to smart algorithms. Who wants clever code bossing them around?" More broadly, we all like to think we make the right decisions, so getting people to use AI is a huge challenge. People tend to focus on evidence that supports their own viewpoint and ignore contradictory information; it's in our nature.
Something that helps is giving people the reason for a decision, even if they don't understand the algorithm itself. At LaunchPad, the system produces an output that explains why someone is, is not, or may be a right-fit, which helps build trust.
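As a sketch of how a reason can accompany a prediction, a simple linear model lets you report which construct pushed the decision hardest. Again, the feature names and approach here are hypothetical illustrations, not LaunchPad's actual explanation method.

```python
# Illustrative sketch only: attaching a plain-language reason to a decision.
from sklearn.linear_model import LogisticRegression

features = ["problem solving", "communication", "conscientiousness"]
X = [[4.1, 3.8, 4.5], [2.0, 2.5, 1.9], [3.9, 4.2, 4.0], [1.8, 2.1, 2.4]]
y = [1, 0, 1, 0]  # 1 = performed well in role
model = LogisticRegression().fit(X, y)

def explain(scores):
    """Return the prediction plus the feature that contributed most,
    phrased so a recruiter or candidate can read it."""
    contributions = {
        name: coef * value
        for name, coef, value in zip(features, model.coef_[0], scores)
    }
    top = max(contributions, key=contributions.get)
    verdict = "right-fit" if model.predict([scores])[0] == 1 else "not right-fit"
    return f"Predicted {verdict}; strongest signal: {top}."

message = explain([3.7, 4.0, 4.3])
print(message)
```

Even this crude reason code illustrates the principle: a user who can see *why* a decision was made is far more willing to trust it than one shown only a verdict.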
3. Does machine learning de-personalise the candidate experience?
Quite the opposite: it frees recruiters to spend more time with right-fit candidates. It's true that there is less human contact in the early stages, but candidates often want convenience: to record their responses anytime and anywhere, and to progress quickly to the next stage.
There are benefits for candidates too, such as removing bias from the hiring process, and candidates can ask for feedback or even a human review. If they wish, candidates can opt out of automated decision-making, which could be especially important for those with disabilities. And under GDPR we need candidates' consent to use machine learning in recruitment, so the process is very transparent.
Automation and machine learning are where the future of recruitment is heading. Organisations already recognise the efficiency, consistency and cost benefits. The challenge is helping people understand that they are improving decision-making too.
Read more about how LaunchPad Predict uses machine learning to predict right-fit hires.