Whether consciously or unconsciously, everybody is influenced by some degree of bias. When it comes to hiring and recruiting, that bias can become a big problem. Finding the right person for a job — someone who matches your team’s values, has the right qualifications, and wants the position — can be very difficult.
On average, every corporate job opening receives about 250 resumes; that is a significant number of people to sort through and compare. Add unconscious bias into the mix, and qualified candidates may be overlooked while companies miss out on diverse talent, which research has shown makes companies more innovative. From the WSJ:
“These biases stem from our preference for people who are similar to us, provide a feeling of safety, or feel familiar. Indeed, research has shown that men and women alike start to treat minorities differently within milliseconds of seeing them. Our brains automatically carve the world into in-group and out-group members and apply stereotypes within the blink of an eye.”
Bias affects every part of the hiring process.
Education vs. Experience: Indeed found that 37% of managers who attended a top school said they like to hire candidates from highly regarded universities, compared to just 6% of managers who didn’t attend a top school. Meanwhile, 41% of managers who didn’t graduate from a top-ranked college said they consider a candidate’s experience more important when making hiring decisions; just 11% of managers who attended a prestigious school said the same.
How Genders Apply: Certain words in a job description draw male applicants instead of female applicants, but using more female-oriented or gender-neutral words does not discourage male applicants. Interestingly, candidates even measure their own skill sets differently. According to an internal Hewlett-Packard survey, women tend to apply to a job only when they meet 100% of the stated qualifications, while men will apply when their experience is a 60% match with the requirements.
Minority Report: In 2015, one big tech company began incentivizing recruiters by giving them extra credit for diversity hires. Recruiters reached out to engineering schools in African countries, introduced interns from a wide variety of backgrounds, and built a mentor-buddy program into interviews. Unfortunately, final hiring decisions still relied on traditional metrics like the prestige of a candidate’s alma mater, experience at top tech firms, or employee referrals.
This is where AI can change the game. Let me first qualify by saying AI is not perfect; at least, not yet. In theory, much of the above wouldn’t happen with a “machine,” but humans still build the algorithms and create the sorting and matching systems, and as we’ve discovered, humans are biased.
Take, for example, the use of algorithms and AI in the U.S. court system. Software assigns risk scores to defendants after an arrest to estimate how likely they are to commit crimes in the future. Under this program, black defendants were 77% more likely than white defendants to be flagged as at higher risk of committing a future violent crime. These scores are given to judges before sentencing.
These are real, life-changing consequences of AI technology. Because humans are biased, the software can learn that bias from us, as the problems in our courts show. Of course, this is only a risk if we aren’t vigilant; the example simply illustrates how dangerous AI can be when our own human bias is embedded in its code. And as the earlier example of the large tech company shows, when bias enters even the final stage of the process, it can render the entire process pointless.
Fortunately, many HR tech companies are learning to be more careful. In fact, several AI programs in the HR field are actively working to eliminate the kind of bias that plagues current hiring practices. Unlike humans, AI programs can look objectively at large quantities of data and determine which candidates are best suited for a job. But they can also pick up on a correlation that actually promotes bias:
“Say an AI/machine learning model could accurately pick up on a statistical relationship between college football quarterbacks and high performance in sales roles. That relationship could be predicated on entirely meritocratic causative factors: the quarterback’s role demands a combination of mental skill, decision making and influencing skills to lead a team, which translates well into high-pressure sales roles. Of course, an unintended consequence of that is excluding female applicants, by virtue of the fact that only men can be college football quarterbacks. The algorithm would then be favoring a trait which it would never find in women, thus disadvantaging them.” – Aman Alexander, product management director, CEB
Algorithms that power matching, screening, assessments and chatbots can be programmed to ignore gender, age, and ethnicity, eliminating much of the preliminary bias we’ve seen in the past. For example, we know that black, Hispanic, and other ethnic-sounding names receive fewer callbacks from hiring managers than white-sounding names. We also know that gender-neutral names receive more callbacks and interview requests than female-sounding names.
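To make the idea concrete, here is a minimal sketch of “blind” pre-screening in Python: protected attributes are stripped from each candidate record before it reaches any scoring logic. The field names and the toy skills-overlap score are illustrative assumptions, not any vendor’s actual system.

```python
# Illustrative blind-screening sketch: protected attributes never reach
# the scoring step. Field names and scoring rule are hypothetical.

PROTECTED_FIELDS = {"name", "gender", "age", "ethnicity", "photo_url"}

def redact(candidate: dict) -> dict:
    """Return a copy of the record with protected attributes removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

def score(candidate: dict, required_skills: set) -> float:
    """Toy score: the fraction of required skills the candidate lists."""
    skills = set(candidate.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

applicant = {
    "name": "Jordan Smith",
    "gender": "F",
    "age": 29,
    "skills": ["python", "sql", "negotiation"],
}

blind = redact(applicant)
print(score(blind, {"python", "sql"}))  # scored on skills alone -> 1.0
```

The point of the sketch is the ordering: redaction happens before scoring, so the model never sees the attributes it is supposed to ignore.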
Once we learn to develop AI systems properly, they may be the only way to hire completely free of bias. While we’re not quite there yet, current AI technologies are already making great strides forward.
One of the best features of both humans and AI is the ability to learn from past mistakes. When any form of bias is found in our technologies, we can re-code, re-program, and re-train to bring the AI even closer to perfect. Research firms predict the AI market will reach $5.05 billion by 2020: perfection may not be here yet, but it is coming fast.
There are a few things we can do to speed up the process. From the beginning, we need to train AI to ask the right questions. When we build auto-responder questions for our AI to ask candidates, those questions need to be rigorously reviewed for bias and loaded language patterns. We also need to train our hiring managers to reduce microaggressions.
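One simple way to review question wording at scale is to flag words known to be gender-coded. The sketch below uses tiny illustrative word lists inspired by published research on gendered job-ad language; a real review process would rely on a vetted lexicon and a human reviewer.

```python
# Illustrative checker that flags gender-coded wording in interview
# questions or job descriptions. The word lists are small samples for
# demonstration, not a complete or validated lexicon.

MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "interpersonal"}

def flag_coded_words(text: str) -> dict:
    """Return any masculine- or feminine-coded words found in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

question = "Are you a competitive, aggressive self-starter?"
print(flag_coded_words(question))
# {'masculine': ['aggressive', 'competitive'], 'feminine': []}
```

A check like this won’t catch every problem, but running every scripted question through it before deployment makes the review systematic rather than ad hoc.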
Using AI to focus on skills is a step in the right direction. A study from the Clayman Institute for Gender Research at Stanford found that the proportion of female musicians in orchestras rose from 5% in the 1970s to 25%, a shift that happened when judges began auditioning musicians behind screens so they could not see the performers. While not every interview can be blind, allowing AI to match, screen, and assess candidates can carry the most highly skilled applicants further in the process than ever before.
Ideally, anything you put into an AI’s dataset should be reviewed not only by your own team but by other teams as well. That way, as diverse a group as possible will have scrutinized any algorithms that end up in the final program.
Once the questions and algorithms are deployed, AI can begin the work of eliminating bias from our processes. AI recruiting chatbots like Karen.ai are scripted to ask the same questions of every candidate regardless of differences in background, gender, age, or ability. A human could follow a similar script exactly, but still betray bias in tone or body language; a computer program has no such problem. The same goes for candidate experience. AI will follow a process and protocol for every applicant no matter their background or name, ensuring each candidate receives the same experience every time.
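The consistency claim is easy to see in code: a scripted bot asks the same questions, in the same order, with no per-candidate branching. The questions below are invented placeholders, not Karen.ai’s actual script.

```python
# Illustrative scripted interview: consistency by construction, because
# the question sequence is fixed and never branches on who is answering.

SCRIPT = [
    "What interests you about this role?",
    "Which of the listed skills have you used in the past year?",
    "What is your earliest available start date?",
]

def run_interview(answer_fn):
    """Ask every scripted question in order; record (question, answer) pairs."""
    return [(q, answer_fn(q)) for q in SCRIPT]

# Two different candidates see an identical sequence of questions.
transcript_a = run_interview(lambda q: "answer from candidate A")
transcript_b = run_interview(lambda q: "answer from candidate B")
assert [q for q, _ in transcript_a] == [q for q, _ in transcript_b]
```

The asserted equality is the whole point: whatever differs between candidates, the questions they were asked cannot.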
Gartner predicts that by 2020, 85% of interactions between customers and corporations will be handled without an actual human. As we hand over larger responsibilities to computers and AI, the need to eliminate bias from our hiring and recruiting process becomes more important than ever. Someday soon, AI programs in HR may be rejecting applications, completing preliminary interviews of candidates, and handling onboarding and offboarding processes. If we are vigilant, the use of AI in HR could be a great benefit to businesses and candidates alike.