The unique characteristic of our AI that helps reduce bias

You have probably heard the terms ‘Machine Learning’ and ‘Artificial Intelligence’ thrown around in recent years. Perhaps you have been left intimidated by complicated literature describing the concept of using computers to make decisions. However, we’re here to dispel those hesitations.

Machine learning may be complicated, but it’s not as hard to explain as you might imagine. IBM describes machine learning as a branch of artificial intelligence where we use data and algorithms to train AI to think the way we do, all with the purpose of improving accuracy. Simply put, Machine Learning Engineers, like our fantastic team here at Knockri, train models by feeding them data and algorithms to produce results that imitate human decisions as accurately as possible.
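To make that concrete, here is a minimal, hypothetical sketch of what “feeding a model data and an algorithm” looks like in practice. The dataset, model, and numbers below are purely illustrative and are not part of Knockri’s actual pipeline.

```python
# Illustrative only: a toy supervised-learning example, not Knockri's pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "historical decisions": feature vectors plus a yes/no label.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)

# Hold out examples the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "Feeding it data and an algorithm": fit the model to the training examples.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Accuracy on the held-out examples measures how well the model
# imitates the decisions it was trained on.
print(accuracy_score(y_test, model.predict(X_test)))
```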

However, the most complicated part of this process is making sure that your own human biases don’t bleed into the AI you are teaching. 

Bias in Machine Learning 

We all have bias; it comes with being human. Especially in the recruiting process, unconscious bias may rear its ugly head, causing us to make decisions about a person not based on their skills or qualifications, but rather on how they present themselves. I’m sure we have all been told to make the best first impression during an interview.

AI can easily pick up these biases if we aren’t careful. If we accidentally teach it the same unconscious biases we suffer from, it will replicate them, resulting in greater disparities and bias in the recruiting process.

You might have read about Amazon scrapping its AI recruitment tool in 2018 because it demonstrated bias against women. The tool had been trained on past applicants’ resumes to recognize specific skill sets and patterns, but most of those applicants were male. This taught the AI to favor male candidates over female ones, penalizing candidates whose resumes mentioned women-specific signals, such as attending a women’s college or participating in women’s organizations.
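A tiny, hypothetical simulation shows how this happens: when historical hiring decisions were skewed against a group, a model trained on those decisions learns to penalize any feature that acts as a proxy for that group. The variable names and numbers below are made up for illustration and have nothing to do with Amazon’s actual system or ours.

```python
# Hypothetical illustration of how a skewed hiring history becomes a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(size=n)                   # the signal we actually want to reward
womens_signal = rng.integers(0, 2, size=n)   # e.g. the resume mentions "women's ..."

# Skewed history: candidates with that signal were hired less often,
# regardless of skill.
hired = (skill - 1.5 * womens_signal + rng.normal(scale=0.5, size=n)) > 0

features = np.column_stack([skill, womens_signal])
model = LogisticRegression().fit(features, hired)

# The model learns a strong negative weight on the proxy feature:
# the historical bias is now baked into its predictions.
print(dict(zip(["skill", "womens_signal"], model.coef_[0])))
```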

Training Our Models 

We recognize that training these models requires eliminating as much bias as possible. We avoid training on human interview ratings or performance metrics, and instead teach the models to identify foundational skills and behaviors in a candidate’s interview responses.

Using an objective annotation process, we’ve brought together machine learning and I/O psychology to teach our models to identify behaviors in candidate transcripts and score candidates in a way that doesn’t consider race, ethnicity, gender, age, disability, or other characteristics that can trigger bias in human evaluators. By doing this, we are able to fully automate our structured behavioral interviews to help you find the best candidate.
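As a rough, hypothetical sketch of that general idea (the column names, numbers, and model below are invented for illustration and are not Knockri’s actual scoring system), the key point is that demographic attributes are never presented to the model as features; only annotated behaviors are.

```python
# Hypothetical sketch: score interview responses only on annotated behaviors,
# never on demographic attributes. Not Knockri's actual model or data.
import pandas as pd
from sklearn.linear_model import LinearRegression

interviews = pd.DataFrame({
    # Behaviors annotated in the transcript (made-up column names).
    "gives_concrete_example":  [1, 1, 0, 1, 0],
    "describes_collaboration": [1, 0, 1, 1, 0],
    "explains_tradeoffs":      [0, 1, 1, 1, 1],
    # Demographic attribute stored for auditing, never used for scoring.
    "gender":                  ["f", "m", "f", "m", "f"],
    # Target: a skill score produced by the structured annotation process.
    "skill_score":             [0.7, 0.6, 0.5, 0.9, 0.3],
})

EXCLUDED = ["gender", "skill_score"]          # protected attribute + target
behaviors = interviews.drop(columns=EXCLUDED)

model = LinearRegression().fit(behaviors, interviews["skill_score"])
print(model.predict(behaviors))               # scores depend only on the behaviors
```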

We wish we could tell you more about our product here, but we need to leave some things open to discussion. If you’d like to know more about how we are using machine learning to eliminate bias in hiring, feel free to set up a meeting with us below. 

Book a meeting with an HR solutions consultant