Prompted by the widespread adoption of videoconferencing software during the COVID-19 pandemic, many employers have shifted toward video interviews to evaluate potential hires. Even as employers have begun requiring in-office attendance, video interviewing has persisted because it is a convenient and efficient way to evaluate applicants. Some of the video interviewing tools employers use incorporate artificial intelligence (“AI”) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interviewing tools, as well as other tech-enhanced selection procedures.
Although AI-powered video interviewing tools promise to optimize recruitment and selection efforts, they can raise a host of legal issues, including hidden biases, disparate impact, disability discrimination, and data privacy concerns.
Although no federal laws expressly regulate the use of AI in employment decisions, at a recent event entitled “Initiative on AI and Algorithmic Fairness: Disability-Focused Listening Session,” U.S. Equal Employment Opportunity Commission (“EEOC”) Chair Charlotte Burrows expressed concern about video interview AI technology, noting, for example, that such technology may inappropriately screen out individuals with speech impediments. The same concern applies to individuals with visible disabilities or disabilities that affect their movement.
Shortly thereafter, the EEOC released technical guidance on “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.”
Legislative bodies in Illinois, Maryland, and New York City have taken a more active approach, passing laws that directly regulate the use of AI-powered video interviewing and facial recognition software.