Ways In Which Big Data And AI Automate Recruitment Bias Audits

At any given time, a job opening on LinkedIn receives over 250 applicants. Unilever gets around 1.8 million applications a year for a mere 30,000 positions, and screening that trove of data is no small task. This is where the company deploys AI and big data in HR: applicants complete a series of tests that trace behavioral traits, and a list of successful candidates is then passed on to human recruiters. Remarkably, Unilever ends up hiring around 50% of those candidates. Artificial Intelligence (AI) has proven its mettle on countless occasions, making it a viable option in the recruitment process.

However, both good and bad outcomes have been well documented. AI can be biased along many dimensions, including gender, sexuality, disability, and race. After all, an AI is only as good as the trove of training data it is fed, and that is where recruitment bias comes into play. Amazon's recruitment AI is one of the most famous examples: after years of research, the biased outcomes it produced led to its shutdown.

The adoption of big data and AI has undoubtedly transformed the hiring industry, with more than 47 Fortune 500 companies using AI for recruitment purposes. However, recruitment bias hasn't been eliminated yet.

Definition Of Bias

An AI model is only as good as the dataset fed into it. If the data used to train the AI has a human origin, the results will mirror human judgment; if that data is biased, the outcomes will reflect the same bias. The bias can be as obvious as black and white, or it can be unconscious, something outside our control that humans nonetheless exhibit involuntarily almost all the time.

Rundown of Recruitment Biases

E-commerce giant Amazon had to scrap an AI- and ML-based recruiting algorithm that favored male candidates over female ones. Because men dominate the tech world, the training data reflected that imbalance: the system rewarded masculine wording in resumes while downgrading anything associated with women. After years of work under the hood, Amazon shut the algorithm down in 2017.

It's not just Amazon; Google, too, has had gender-bias issues with its translation features. Another instance where an AI algorithm failed was in criminal sentencing, where black defendants were assigned higher risk scores than their white counterparts because the model had learned to treat black defendants as more likely to commit crimes than white defendants.

Built-In Human Biases

Even with AI and big data in the recruitment process, humans remain involved at several touchpoints, such as writing the job descriptions and selecting the final candidate for a post. These touchpoints can introduce conscious or unconscious biases, which the AI then learns and reflects in its own decisions.

Simply picking the best candidates for a role isn't enough, because the AI continues to learn from your choices and makes recommendations based on that input. The next batch of recommendations is shaped by the decisions made in the first round, and the loop continues indefinitely. Re-evaluating the process and using psychometric assessments of candidates, among other strategies, can diminish the impact of personal preferences and human biases. Auditing the AI from time to time, as sketched below, helps surface and resolve any bias that has crept in.
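One common form such an audit takes is the "four-fifths rule" check used in US employment practice: compare each group's selection rate against the best-treated group's rate and flag ratios below 0.8. Here is a minimal sketch in Python, assuming a hypothetical log of screening decisions with a `gender` column and a `selected` flag:

```python
import pandas as pd

# Hypothetical screening log: one row per applicant, with the group
# attribute being audited and the AI's screening decision.
decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "F", "M", "M"],
    "selected": [0,    1,   1,   1,   0,   0,   1,   1,   1,   0],
})

# Selection rate per group: the share of each group the AI passed through.
rates = decisions.groupby("gender")["selected"].mean()

# Adverse-impact ratio: each group's rate versus the best-treated group.
# The four-fifths rule flags ratios below 0.8 as potential adverse impact.
impact_ratio = rates / rates.max()

for group, ratio in impact_ratio.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} [{flag}]")
```

Run on real decision logs rather than this toy table, a check like this makes the periodic audit concrete instead of a vague intention.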

The proper amalgamation of technology and human judgment can diminish these biases while still building a talented workforce for any organization.

Assessing an Entire Pipeline of Candidates Instead of Limiting It

Many companies worldwide admit that they pull only a small portion of the total applicant pool for review, which is arguably a flawed practice. Analyzing the entire pipeline of applications, with policies and tools built to review it at scale, is far more effective. Technologists can also apply AI fairness techniques to rule out bias when candidates are selected.
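Auditing the whole pipeline means measuring pass-through rates at every stage, not just the final hire. A minimal sketch, again on a hypothetical applicant log where `stage_reached` records how far each candidate progressed:

```python
import pandas as pd

# Hypothetical applicant log: the furthest stage each candidate reached.
stages = ["applied", "screened", "interviewed", "offered"]
applicants = pd.DataFrame({
    "group":         ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "stage_reached": ["offered", "screened", "applied", "interviewed",
                      "applied", "applied", "screened", "screened", "offered"],
})

# Map stage names to their position in the funnel.
order = {stage: i for i, stage in enumerate(stages)}
applicants["stage_idx"] = applicants["stage_reached"].map(order)

# For each group, count how many candidates survived to each stage.
for group, sub in applicants.groupby("group"):
    total = len(sub)
    print(f"Group {group}:")
    for i, stage in enumerate(stages):
        surviving = (sub["stage_idx"] >= i).sum()
        print(f"  {stage:<12} {surviving}/{total} ({surviving / total:.0%})")
```

A breakdown like this shows exactly which stage of the funnel is shedding candidates from a given group, which a review of finalists alone can never reveal.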

California's Assembly Concurrent Resolution 125 urges companies to use unbiased AI applications in hiring to build a diverse workforce. It adds that indicators of class, gender, and race should be removed from resumes to produce more reliable outcomes.

Addressing AI Bias

According to one report, certain techniques can help mitigate the risk of AI bias during recruitment. The first is candidate masking, in which personal characteristics such as age, gender, and ethnicity are hidden from the model to prevent bias. Hiring managers can also run their AI predictive tools both with and without masking and compare the outcomes to check whether deserving candidates are treated the same way.
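A minimal sketch of that comparison, assuming hypothetical candidate records and a `score_candidate` function standing in for whatever predictive tool the team uses:

```python
import copy

# Fields to hide from the model; which attributes count as protected
# is a policy decision for the hiring team.
PROTECTED_FIELDS = ("name", "age", "gender", "ethnicity")

def mask_candidate(candidate: dict) -> dict:
    """Return a copy of the record with protected fields redacted."""
    masked = copy.deepcopy(candidate)
    for field in PROTECTED_FIELDS:
        masked.pop(field, None)
    return masked

def audit_masking(candidates, score_candidate, tolerance=0.05):
    """Flag candidates whose score shifts once protected fields are hidden.

    `score_candidate` is a stand-in for the vendor's predictive tool; a
    large gap between the two scores suggests the model is leaning on
    protected attributes rather than on skills.
    """
    flagged = []
    for candidate in candidates:
        raw = score_candidate(candidate)
        masked = score_candidate(mask_candidate(candidate))
        if abs(raw - masked) > tolerance:
            flagged.append((candidate.get("id"), raw, masked))
    return flagged
```

If the flagged list is non-empty, the model's rankings depend on information it should never have been using in the first place.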

IBM is working on automated bias-detection algorithms intended to keep AI bias from compounding during screening, grading, and the rest of the recruitment process. Ultimately, it takes more than AI alone to analyze candidates and find the right person for a position; such systems work best alongside manual review.
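One concrete example already available is IBM's open-source AI Fairness 360 (AIF360) toolkit, which computes bias metrics over labeled datasets. A minimal sketch, assuming a small numeric screening table with illustrative column names, where `gender` marks the privileged group with 1 and `hired` is the screening outcome:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# AIF360 expects a fully numeric DataFrame; here gender=1 marks the
# privileged group and hired=1 a favorable screening outcome.
df = pd.DataFrame({
    "gender": [1, 1, 1, 0, 0, 0, 1, 0],
    "hired":  [1, 1, 0, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 is parity).
# Statistical parity difference: the gap between those rates (0 is parity).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

On this toy table, the privileged group is hired at three times the rate of the unprivileged group, which both metrics would surface immediately.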

Neutralizing AI Bias

There are a few approaches to neutralizing AI bias. Here are three considerations worth knowing.

First, hiring managers should look for a proven AI approach that has worked well elsewhere and deploy it in their own workplace. Unilever uses games to track behavioral traits; other approaches similarly gather insight on candidates, their talents, and their behavior rather than relying on the few words of a resume.

Second, even AI has its limitations, and understanding them pays off: it helps you recognize bias in the models, the training data, or the overall approach, and mitigate it before it does any damage, applying debiasing tools where needed. Finally, the input data fed into the AI has to be refined. It should be structured so that the model has no demographic or gender-related data points to lean on; instead, it can be trained to scan for the skills a specific job profile requires and filter the resumes of eligible candidates accordingly, as sketched below.
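A minimal sketch of that refinement step, with hypothetical column names: drop the demographic columns outright so only skill-related features reach the model.

```python
import pandas as pd

# Hypothetical resume table: demographic columns sit alongside
# the skill signals the model should actually learn from.
resumes = pd.DataFrame({
    "candidate_id":     [101, 102, 103],
    "gender":           ["F", "M", "F"],
    "age":              [29, 41, 35],
    "zip_code":         ["94107", "10001", "60601"],
    "years_experience": [5, 12, 8],
    "skills_match":     [0.82, 0.64, 0.91],
})

# Columns that encode demographics directly or by proxy (zip code can
# stand in for race or class); which ones to drop is a policy decision.
DEMOGRAPHIC_COLUMNS = ["gender", "age", "zip_code"]

training_features = resumes.drop(columns=DEMOGRAPHIC_COLUMNS).set_index("candidate_id")
print(training_features)
# The model now trains only on years_experience and skills_match.
```

Note that dropping obvious columns is a starting point, not a guarantee: proxies can linger in free-text resumes, which is why the masking and audit checks above still matter.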

AI biases are ultimately a product of the flawed datasets fed into the system, in which the AI finds patterns and connects the dots to produce outcomes. Neutralizing AI bias is crucial to improving those outcomes, and it is achievable through the assessments, debiasing tools, and carefully prepared, unbiased big data in HR described above, among other strategies.
