Matthew Savage Aibel Quoted in “AI: Discriminatory Data In, Discrimination Out”


Matthew Savage Aibel, Associate in the Litigation and Employment, Labor & Workforce Management practices, in the firm’s New York office, was quoted in SHRM, in “AI: Discriminatory Data In, Discrimination Out,” by Allen Smith.

Following is an excerpt:

Artificial intelligence (AI), increasingly used in recruiting, might inadvertently discriminate against women and minorities if the data fed into it is flawed. Vendors of AI may be sued, along with employers, for such discrimination, but vendors usually have contractual clauses disclaiming any liability for employment claims, leaving employers on the hook. …

Uses of AI

Employers use AI in many ways. Some AI tools recruit by sending targeted job advertisements to individuals who are not actively seeking a job. …

AI enables chatbots to respond to routine queries from candidates, check details such as availability and visa requirements, and help onboard new hires, noted Matthew Savage Aibel in law firm Epstein Becker Green’s New York City office.

“Other AI tools analyze resumes or video-recorded interview responses to either narrow the choices for a hiring manager or recommend the successful candidate,” he said. Illinois has placed restrictions on the use of artificial intelligence for video interviews, which take effect Jan. 1, 2020. …

Risk of Discrimination

Aibel explained that AI can produce bias by selecting for facially neutral characteristics that have a discriminatory impact on minorities or women. For example, studies show that people who live closer to the office are likelier to be happy at work. An AI algorithm might therefore select only resumes with ZIP codes that suggest a short commute. That filter could have a discriminatory impact on applicants who do not live in any of the nearby ZIP codes, inadvertently excluding residents of neighborhoods populated predominantly by minorities.
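The disparate-impact concern described above is commonly measured with the EEOC's "four-fifths" rule of thumb: if one group's selection rate falls below 80 percent of the highest group's rate, the practice may warrant scrutiny. A minimal sketch of how a ZIP-code resume filter could be audited against that rule (all counts and the selection numbers below are invented for illustration, not from the article):

```python
# Hypothetical illustration: audit a ZIP-code resume filter
# against the EEOC "four-fifths" rule of thumb.
# All applicant counts below are invented for the example.

def selection_rate(applicants: int, selected: int) -> float:
    """Fraction of a group's applicants who pass the filter."""
    return selected / applicants

# Invented applicant pools for two demographic groups:
group_a_applicants, group_a_selected = 200, 120  # 60% pass the ZIP filter
group_b_applicants, group_b_selected = 200, 60   # 30% pass the ZIP filter

rate_a = selection_rate(group_a_applicants, group_a_selected)
rate_b = selection_rate(group_b_applicants, group_b_selected)

# Four-fifths rule: flag the practice if the lower group's rate
# is less than 80% of the higher group's rate.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
flagged = impact_ratio < 0.8

print(f"impact ratio: {impact_ratio:.2f}, flagged: {flagged}")
# 0.30 / 0.60 = 0.50, well below 0.8, so the filter is flagged
```

The point is that the filter never looks at a protected characteristic; the disparity emerges entirely from where the selected ZIP codes happen to be.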

AI also can result in bias when a company tries to hire workers based on the profiles of its successful employees. If most or all of the high-performing employees used as examples are men, an AI algorithm evaluating what makes a person successful might unlawfully exclude women, Aibel cautioned. …

What if all the top-rated employees at the company belonged to a particular fraternity in college, and the AI program identified that from resumes or by searching the Internet? “Suddenly, the program might only suggest candidates who were also members of that fraternity, thus creating a discriminatory impact,” Aibel said.
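The fraternity scenario is a classic proxy-feature problem: a model trained on a skewed set of "top performers" can latch onto an attribute that merely tracks the training set's makeup rather than job skill. A hypothetical sketch of the kind of pre-deployment audit that could surface such a correlation (the feature names and labels below are invented for illustration):

```python
# Hypothetical illustration: a model trained on "top performer"
# profiles can learn a proxy feature (here, fraternity membership)
# that reflects who was in the training set, not who can do the job.
# All training data below is invented.

# Invented training examples: (resume features, labeled "top performer"?)
training = [
    ({"fraternity": True,  "cert": True},  True),
    ({"fraternity": True,  "cert": False}, True),
    ({"fraternity": True,  "cert": True},  True),
    ({"fraternity": False, "cert": True},  False),
    ({"fraternity": False, "cert": False}, False),
]

def feature_label_rates(data, feature):
    """Positive-label rate among examples with and without a feature."""
    with_f = [label for feats, label in data if feats[feature]]
    without_f = [label for feats, label in data if not feats[feature]]
    rate = lambda labels: sum(labels) / len(labels) if labels else 0.0
    return rate(with_f), rate(without_f)

frat_pos, frat_neg = feature_label_rates(training, "fraternity")
print(f"'top performer' rate: fraternity={frat_pos:.0%}, non-fraternity={frat_neg:.0%}")
# Every positive example happens to be a fraternity member, so a naive
# model would treat membership as highly predictive -- a proxy, not a skill.
```

An audit like this, run per feature before deployment, is one way to catch a spurious signal before the algorithm starts screening real candidates in fractions of a second.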

“The scope of people impacted expands greatly when a computer can make these decisions in fractions of a second.”