Nathaniel Glasser, Adam Forman Quoted in “Employers and Artificial Intelligence: 6 Pitfalls to Watch For”

Law360 Employment Authority

Nathaniel M. Glasser and Adam S. Forman, Members of the Firm in the Employment, Labor & Workforce Management practice, were quoted in Law360 Employment Authority, in “Employers and Artificial Intelligence: 6 Pitfalls to Watch For,” by Vin Gurrieri. (Read the full version – subscription required.)

Following is an excerpt:

With states rapidly lifting pandemic restrictions, employers looking to ramp up operations after a year of layoffs and lockdowns may be in the market for technology that speeds up the hiring process. But while artificial intelligence hiring tools may be tempting, they could leave businesses facing discrimination claims if not used carefully. 

Whether employers are looking to use an automated system developed internally or one purchased from an outside vendor, they may perceive real benefits, such as the elimination of unconscious bias from the hiring process.

But those benefits may come at a cost, namely the potential that the AI tools unlawfully screen out people based on protected characteristics like age, sex and race. That could lead to discrimination lawsuits and costly class actions if the use of a faulty algorithm is widespread, a risk that is ever more acute as businesses hire in bulk while pandemic restrictions wane.

Nathaniel Glasser, co-leader of Epstein Becker Green's artificial intelligence practice group, said employers' increasing reliance on AI tools is "consistent with the trend we were starting to see even before the pandemic."

"We expect that because the pandemic has caused employers to utilize very strict tools not just for remote work but to hire people remotely, that trend will accelerate as people go back to work and a number of employers have to quickly increase their workforces," Glasser said. ... 

Skewed Underlying Data Yielding Biased Results

In the hiring context, AI tools can help match employers with candidates who are most likely to excel in the job. Some common ways those tools are used by businesses include sorting through resumes and other job application materials based on keywords that are likely predictors of future performance or excluding people with items on their resumes that the algorithm considers indicators of a bad fit for the job. …

AI vendors who may not be knowledgeable about equal employment opportunity laws might create a biased hiring process by building such standards into their algorithm, according to Adam Forman, a member at Epstein Becker whose practice focuses on emerging issues in the workplace related to technology.

"You have an unlawful disparate impact claim because even though the selection criteria on its face is neutral — it's just based on where someone lives, not their race — it has the impact of excluding people based on their race," Forman said. "That would be unlawful and makes the entire selection process suspect."

Overlooking Disability Accommodations

For employers that do keep bias in mind when evaluating AI tools, the focus is usually on race, gender and age discrimination. But a different type of bias may fall through the cracks: disability discrimination.

Under the Americans with Disabilities Act, employers are required to make reasonable accommodations for employees or job applicants who have physical or mental disabilities. In the context of AI tools, it may be easy for employers to overlook the fact that some people, because of a physical limitation, may not be able to use the tool, Epstein Becker's Glasser said.

"Employers have to think about how the use of an algorithm might impact people with disabilities, and whether and to what extent they can offer a reasonable accommodation for an individual with a disability who might not be able to use the assessment as intended by the creator," Glasser said.

Forman added that disability inclusion "is a really big deal," since there are millions of Americans who have a mental or physical condition that would qualify as protected under the ADA.

"Some of these selection tools that require games or swiping right and left to test your personality are not accessible by people who have a disability," Forman said. "Or they are doing an assessment on how quick you answer or verbalize your answer. And if you have a learning impairment, that doesn't mean you are unqualified for the position with or without a reasonable accommodation, but you're getting screened out by the algorithm before you start. By the way, the same is true for people of different ethnicities or for whom English is not their native language."  

Narrowing the Field Too Much with 'Microtargeting'

Even before resumes are sorted and applicants screened, there are AI tools available that help employers craft their initial job advertisements, as well as "passive sourcing" technology, which lets employers proactively seek out candidates who fit particular positions, according to Forman and Glasser.  

Particularly when it comes to passive sourcing, one of the major legal risks employers face involves the practice of "microtargeting," or sending unsolicited employment ads to potential candidates whom algorithms identify as having the right background for a specific job.

The risk for employers lies in whether an algorithm identifies the best possible matches in a way that excludes a particular protected subset of applicants, like an algorithm that targets only people under 35 years old.

"The computer just follows instructions and it sends the ad to people in that demographic," Forman said. "You can imagine that if you're looking for certain skill sets or criteria, there's a risk that you may be either intentionally — disparate treatment — or unintentionally by having a statistically significant negative impact on a particular group of people — disparate impact — discriminating against swaths of individuals." …

Failing to Test Algorithms

Employers can also easily land in hot water if they simply buy or develop an AI tool and effectively plug it in without fully understanding how the algorithm actually works.

That could result in an unlawful disparate impact claim down the line if the AI tool uses a proxy that adversely affects a particular group of people. And if the use of the algorithm is widespread and affects a large number of people, it could open the door for class actions, Glasser said, calling it "a major legal consideration that employers have to be aware of [and] be prepared for whenever they are implementing [AI]."

To mitigate the risk, businesses have to test their algorithms for adverse impacts and, if those impacts exist, decide whether to keep using the tool, Glasser said.

"If so, they have to have the algorithm validated to demonstrate that there is a connection between the assessment being conducted by the algorithm and the job duties and responsibilities for which it's being used," Glasser said. "Employers that don't audit and validate their algorithm expose themselves to this initial risk."
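The adverse-impact testing Glasser describes is commonly operationalized with the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any protected group that falls below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below is a minimal, hypothetical illustration of that screen; the data, group labels, and function names are assumptions for demonstration, not any firm's actual audit tooling, and a real audit would also involve statistical significance testing and legal review.

```python
# Minimal sketch of a four-fifths-rule adverse-impact check.
# All data and names here are hypothetical illustrations.

def selection_rates(counts):
    """counts maps group -> (selected, total); returns group -> selection rate."""
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def four_fifths_check(counts, threshold=0.8):
    """Flag each group whose selection rate is below `threshold`
    (80% by default) of the highest group's rate."""
    rates = selection_rates(counts)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates screened)
counts = {
    "group_a": (60, 100),  # 60% selection rate (highest)
    "group_b": (30, 100),  # 30% rate -> ratio 0.5, below 0.8, flagged
}
flags = four_fifths_check(counts)
```

A flagged group does not by itself prove unlawful discrimination, but, as Glasser notes, it triggers the need to validate the tool's job-relatedness or stop using it.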