Will A Robot Be Interviewing You For Your Next Job?


First it’s Siri doing your kid’s research assignment. Then Alexa takes over basic household functions. And now a robot may be conducting your job interview. That’s right—portions of corporate America are now using artificial intelligence (“AI”) to conduct interviews of job applicants. How does this work, what are the risks and has there been a legislative response? And how would the Luddites respond to this?

How Does an AI Job Interview Work?

How exactly AI interviews work remains something of a mystery. Driven by an interest in making the interview process more efficient, employers deploy a virtual interviewer/robot that asks a standard set of questions (pre-designed rather than spontaneous follow-up questions that probe into answers) and evaluates candidates’ responses, including physical reactions and the cluster of words used. The physical reactions under scrutiny could include microexpressions—which, unlike regular facial expressions, are difficult to hide or manipulate—as well as other forms of body language, vocal tone and volume, and response time. (Don’t panic just yet…it doesn’t appear that the algorithms judge an applicant’s grooming or appearance.)
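The actual systems are proprietary, but conceptually this kind of scoring could resemble a weighted combination of extracted features. Here is a minimal, purely hypothetical sketch in Python; the feature names, keyword list and weights are all invented for illustration and are not drawn from any real product:

```python
# Hypothetical sketch of how an AI interviewer might score a response.
# Feature names, weights, and keyword lists are invented for illustration;
# real systems are proprietary and far more sophisticated.

POSITIVE_KEYWORDS = {"collaborate", "initiative", "results", "learned"}

def extract_features(transcript, response_time_sec, vocal_volume):
    """Turn one answer into a small set of numeric features."""
    words = transcript.lower().split()
    return {
        "keyword_hits": sum(w.strip(".,") in POSITIVE_KEYWORDS for w in words),
        "verbosity": len(words),
        "promptness": 1.0 / (1.0 + response_time_sec),  # faster answer -> higher
        "volume": vocal_volume,                          # normalized 0..1
    }

WEIGHTS = {"keyword_hits": 2.0, "verbosity": 0.01, "promptness": 1.5, "volume": 0.5}

def score(features):
    """Weighted sum of the extracted features."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

feats = extract_features("I collaborate well and take initiative.", 2.0, 0.6)
print(round(score(feats), 2))
```

Even this toy version hints at the article’s later concern: whatever ends up in the weights, deliberately or via machine learning, directly shapes who scores well.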

But how can the robot evaluate whether an answer shows ingenuity and creativity, business acumen, a capacity for empathy or a collegial vibe? Doesn’t it take a seasoned interviewer—much less one with a pulse!—to do that?

Well, it’s conceivable that an algorithm could detect things like enthusiasm, passion, confidence and the quality of an applicant’s vocabulary, right? And isn’t it also conceivable that an algorithm could measure a candidate’s appreciation for social cues by examining responses and expressions? 

What Risks Does It Pose?

Some fear that AI run amok could engender disparate impact discrimination claims (i.e., where a facially neutral policy or practice disproportionately and adversely affects individuals with legally protected characteristics). For example, there is a fear that since AI may involve “machine learning,” the robot may develop assumptions over time regarding the suitability of certain groups of people for employment based on historical interview results with respect to members of those groups. 
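To make “disparate impact” concrete: one common screen is the EEOC’s four-fifths (or 80%) rule of thumb, under which a selection rate for one group that is less than 80% of the highest group’s rate suggests possible adverse impact. A short sketch with made-up numbers (the rule is a screening heuristic, not a legal conclusion):

```python
# EEOC "four-fifths rule" screen for adverse impact, with illustrative numbers.
# A group's selection rate below 80% of the highest group's rate flags
# possible disparate impact; it is a rule of thumb, not a legal conclusion.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes):
    """Return True for each group whose rate falls below 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

# Hypothetical interview outcomes by group:
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(outcomes))  # group_b's 30% rate is 60% of group_a's 50%
```

The worry with machine learning is that nobody has to write a discriminatory rule for outcomes like these to emerge; the model can learn them from historical interview data.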

On this score, it has been reported that the EEOC is investigating at least two cases involving claims that algorithms unlawfully excluded certain groups of workers. It’s also noteworthy that the EEOC’s strategic enforcement plan for fiscal years 2017-2021 identifies the “increasing use of data-driven selection devices” as one of its priorities.

As a practical matter, consider also whether the expression of emotions may vary by culture. As an article in Psychology Today explains,

“Not all is straightforward when it comes to reading emotions—especially when reading emotions across cultures. Despite the universality of basic emotions, as well as the similar facial muscles and neural architecture responsible for emotional expression, people are usually more accurate when judging facial expressions from their own culture than those from others. This can be explained by the existence of idiosyncratic and culture-specific signatures of nonverbal communication. These cultural ‘accents’ influence interactions between nature (biology) and nurture (cultural contexts), which, in turn, affect the perception and interpretation of emotions.”

And as noted above, can a case be made that an algorithm is ill-equipped to judge a human being’s fitness for a job that requires the exercise of sound judgment, discretion and creativity? Perhaps vetting those qualities will be left to subsequent rounds of interviews in which humans ask the questions.

Recent Legislation

With this backdrop in mind, on May 29, 2019, the Illinois legislature passed the Artificial Intelligence Video Interview Act, which appears to be the first statute of its kind. This law requires the following:

First, the employer must notify each applicant before the interview that AI may be used to analyze the video interview and consider the applicant’s fitness for the position. The statute conspicuously fails to define AI, which is curious since there are many different types of AI, and the levels of sophistication of this technology vary considerably. Nor does the statute regulate the particular types of AI that may be used.

Second, the employer needs to provide each applicant with information before the interview explaining how the AI works and what general types of characteristics it uses to evaluate applicants. This requirement is somewhat amorphous, as it’s unclear how deep of an explanation an employer needs to provide. Can the employer simply inform a candidate that he or she will be asked questions by a machine that will score responses? Does the employer need to identify the particular factors that will be evaluated?

Third, before the interview, the employer is required to obtain the applicant’s consent. 

Fourth, an employer may not share applicant video interviews except with persons whose expertise or technology is necessary to evaluate an applicant’s fitness for a position.

Last, upon an applicant’s request, employers must, within 30 days of receiving it, delete the applicant’s video interviews and instruct anyone else who received copies of the video interview to do the same.
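Taken together, the Act’s requirements read like a per-interview compliance checklist an employer could track. A hypothetical sketch of such a record follows; the field names are invented for illustration, and this is of course not legal advice:

```python
# Hypothetical per-interview compliance tracker for the Illinois Act's
# requirements. Field names are invented for illustration; not legal advice.
from dataclasses import dataclass, field

@dataclass
class AIInterviewRecord:
    applicant_notified: bool = False      # told beforehand that AI may analyze the video
    ai_explained: bool = False            # how the AI works / characteristics it evaluates
    consent_obtained: bool = False        # obtained before the interview
    shared_with: list = field(default_factory=list)  # only persons needed to evaluate fitness
    deletion_deadline_days: int = 30      # after an applicant requests deletion

    def ready_to_interview(self):
        """The three pre-interview requirements must all be satisfied."""
        return self.applicant_notified and self.ai_explained and self.consent_obtained

rec = AIInterviewRecord(applicant_notified=True, ai_explained=True, consent_obtained=True)
print(rec.ready_to_interview())  # True
```

Note that the first three requirements gate the interview itself, while the sharing and deletion duties persist afterward, which is why they are modeled as ongoing fields rather than pre-interview checks.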

This law does not include a private right of action, which raises the question of how it will be enforced. 

Given the nascent stage of this advanced technology and its potential legal and practical implications, employers should carefully consider the benefits and risks before injecting it into the hiring process.
