Theodora McCormick, Nathaniel M. Glasser, and Alexandra Nienaber, attorneys in the Litigation & Business Disputes and Employment, Labor & Workforce Management practices, co-authored the Bloomberg Law Practical Guidance, “Employment, Professional Perspective—Overlooked Risks for Employers Using AI Tools.”

Following is an excerpt:

“Artificial intelligence” (“AI”) is the buzzword in today's business world. This is unsurprising, given AI's rapid development, including broadly accessible AI tools such as generative AI platforms and large language models. But employers' use of AI, and the buzz around it, goes beyond these two technologies. For the past several years, employers have quietly used AI tools to augment or displace individual performance of certain human resource (“HR”)-related activities.

A February 2022 survey on AI use in business found that nearly one in four businesses used automation and/or AI to support HR-related activities. Collecting feedback from businesses ranging in size from two to 5,000 employees, the survey revealed that these AI-supported HR activities included recruitment and hiring, learning and development, productivity monitoring, promotion decisions, performance management, and succession planning.

The ultimate question driving most business decisions, including the use of AI tools, is, “How much will this cost/save me?” Businesses typically implement AI tools to drive organizational efficiency and cost savings.

The Risks

Despite numerous potential benefits, AI tools also come with significant legal risks. Many AI tools are automated decision systems that use computation, in whole or in part, to determine outcomes, influence decisions, inform policy implementation, or collect data.

These computational systems are only as good as their implementing instructions. A system built on flawed assumptions, biases, or instructions will produce flawed conclusions. This may lead to unfavorable impacts on a candidate or employee, which in turn can trigger liability in a discrimination case.

Disparate Treatment & Impact

Discussing the dangers of using AI in hiring decisions should start with the legal concepts known as “disparate treatment” and “disparate impact.” An employment practice that intentionally discriminates against members of a protected class constitutes “disparate treatment” in violation of Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e, et seq. (“Title VII”). A hiring algorithm constructed to favor (or disfavor) one protected group over another constitutes unlawful disparate treatment discrimination.

Title VII violations can also occur when facially neutral policies or practices adversely affect the members of a particular class the law protects. In such “disparate impact” cases, an employer discriminates against a class of people even without intentionally screening out members of any protected class.

Automated systems can unintentionally screen out a disproportionate number of protected class members. For example, if an AI tool identifies job candidates whose characteristics mirror those of the employer's star employees—all of whom happen to be men—the use of AI can disparately—and illegally—impact women.
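
Disparate impact of this kind is often flagged statistically. One longstanding screen, referenced in EEOC technical assistance on selection procedures, is the “four-fifths” rule of thumb: a selection rate for one group that is less than 80% of the rate for the most-selected group may indicate adverse impact. The Python sketch below is a minimal, purely illustrative check using that rule; the applicant and selection figures are hypothetical and are not drawn from the survey or guidance discussed in this article.

# Illustrative sketch of the "four-fifths" rule of thumb for
# screening adverse impact. All figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical outcomes produced by an automated screening tool.
men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

# Four-fifths rule: the lower selection rate should generally be at
# least 80% of the higher rate; a smaller ratio may warrant closer
# scrutiny of the tool for disparate impact.
impact_ratio = min(men_rate, women_rate) / max(men_rate, women_rate)
print(f"Impact ratio: {impact_ratio:.2f}")  # prints 0.50, below 0.80

A ratio this low would not by itself establish liability, but it is the kind of statistical signal that typically prompts a closer validation review of the tool.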

Accessibility & Disability Accommodation

AI tools also may be inaccessible to people with disabilities, which could violate the Americans with Disabilities Act of 1990, 42 U.S.C. §§ 12101-12212 (the “ADA”). The ADA prohibits an employer from discriminating based on a disability and requires employers to provide reasonable accommodations to qualified individuals with disabilities.

An AI tool may unintentionally violate the ADA if it fails to provide reasonable accommodations that allow an individual with a disability to be rated accurately and fairly by the algorithm; screens out an individual with a disability who could do the job with a reasonable accommodation; or conducts an unlawful pre-employment disability inquiry or medical examination.

Developments in Legislation and Technical Guidance

Evaluating legal risks related to the potential for disparate impact caused by AI tools can be difficult for employers due to the patchwork of legislation and technical guidance that has been proposed or passed in the last few years.

Presently, no federal laws or regulations expressly address the use of AI tools. In July 2023, Senator Bob Casey introduced the No Robot Bosses Act, S. 2419, 118th Cong. (2023-2024). Under this bill, an employer could “not rely exclusively on an automated decision system,” such as an AI system, “in making an employment-related decision with respect to a covered individual.” While that bill remains pending, federal regulatory agencies have committed to enforcing existing anti-discrimination laws, such as Title VII, which already protect employees from disparate treatment or disparate impact discrimination that may arise from an employer's use of AI tools. Additionally, federal agencies have published guidance and technical assistance documents to assist employers using AI tools. Since early 2022, the Equal Employment Opportunity Commission (“EEOC”) has released technical assistance documents related to the ADA and Title VII, recognizing that employers ultimately are responsible for any decisions made or aided by an AI tool and thus may be liable for adverse impact caused by a third-party-created tool.

There are also various state and local legislative and regulatory initiatives. Illinois's Artificial Intelligence Video Interview Act (820 ILL. COMP. STAT. 42 (2022)), Maryland's Use of Facial Recognition Services Law (MD. CODE ANN., LAB. & EMPL. § 3-717), and New York City's Automated Employment Decision Tools Law (N.Y.C. Admin. Code § 20-870 et seq.) are recent efforts by states and localities to fill the void left by federal inaction. Other jurisdictions, such as California, New York State, and Washington, D.C., intend to follow their lead.
