Adam S. Forman, Member of the Firm in the Employment, Labor & Workforce Management practice, in the firm’s Detroit and Chicago offices, was quoted in Law360 Employment Authority, in “5 Tips for Curbing Bias Risk When Using AI for Hiring,” by Anne Cullen. (Read the full version – subscription required.)
Following is an excerpt:
Artificial intelligence can help a company quickly size up a stack of resumes, but these high-tech hiring tools can also unfairly screen out women and minorities, and experts warn it’s tricky for an employer to suss out if the program they want to implement could have bias coded in.
AI hiring software generally uses algorithms to evaluate job application materials, like a video interview or resume, to predict the future job performance of a candidate. While this technology can speed up the recruitment process, it can also magnify human biases and get users in trouble under anti-discrimination laws.
These algorithms are often built on the resumes and backgrounds of job seekers who were successfully hired, and plugging in this historic data can cause a machine to mirror the potentially discriminatory human motives behind those employment decisions, such as a preference for hiring men over women.
In other words, this hiring tech will only be as unbiased as the data it relies on. …
“There’s a lot of unknown and untested areas with the use of these technologies, so there’s a lot of risk for an employer,” said Epstein Becker Green employment partner Adam S. Forman. “The algorithm itself is created by a human and therefore it’s subject to the same biases and prejudices that any assessment would have.” …
Take the Machine for a Test Drive
Another important step is to do a trial run, according to Epstein Becker’s Forman.
“What we often do with our clients is beta test first,” Forman said, during which they run a traditional hiring round, then use the algorithm on the same applications and cross-check the results. “We’ll do hiring the traditional way, and then we’ll use the algorithm, and compare,” he said.
He made clear that they wouldn’t rely on the algorithm on the first go; instead, they would analyze its recommendations against the candidates the employer picked through the typical hiring process.
“After a time, we’ll compare how those candidates actually picked are doing against what the computer said,” he said. “Especially if they were a ‘don’t select,’ but they end up doing really well, we’ll probably need to tweak the factors used in the algorithm.” …
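The cross-check Forman describes can be sketched in a few lines of code. This is an illustrative sketch only; the candidate records, field names, and performance threshold below are hypothetical and not drawn from the article.

```python
# Hypothetical beta-test data: each tuple pairs the algorithm's
# recommendation with the traditional hiring decision and, where the
# person was hired, a later performance rating.
candidates = [
    # (name, algorithm_recommendation, hired_traditionally, performance_rating)
    ("A", "select", True, 4.5),
    ("B", "don't select", True, 4.8),   # algorithm would have screened out this hire
    ("C", "select", False, None),       # not hired, so no performance data
    ("D", "don't select", False, None),
]

# Flag cases where the algorithm said "don't select" but the candidate
# was hired anyway and is performing well -- the signal Forman cites
# that the algorithm's factors may need tweaking.
THRESHOLD = 4.0  # assumed cutoff for "doing really well"
flags = [
    name
    for name, rec, hired, perf in candidates
    if rec == "don't select" and hired and perf is not None and perf >= THRESHOLD
]

print(flags)  # candidates whose real-world outcomes contradict the algorithm
```

In this toy run, candidate B would be flagged: a strong performer the algorithm would have rejected, suggesting the model's inputs deserve a second look.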
Push for an Indemnity Clause
Companies should press their vendors to sign off on an indemnity clause in which they commit to funding the employer’s defense in any lawsuit the employer might face over an AI hiring tool, experts recommended. …
But experts warned not to view this part of a contract as an infallible safety net: if litigation over a biased program blows up, they said, the bill could run up fast.
“While I think it’s an important thing to seek in your negotiation with the vendor, I would not rely on it,” said Epstein Becker’s Forman. He predicted that if a widely used software program is found to be discriminatory, that indemnity clause might not be able to cover the full extent of the blowback.
“How good is that indemnification promise going to be when a vendor is blown away after funding one class action case?” Forman said.