Bradley Merrill Thompson, Member of the Firm in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, authored an article in The Journal of Robotics, Artificial Intelligence & Law, titled “Why a Data Scientist Needs a Lawyer to Correct Algorithmic Discrimination.”
Following is an excerpt:
We all have heard the stories. A recruiting app that prefers men. Facial recognition software that cannot recognize Black women. Clinical decision support tools for evaluating kidney disease that often give doctors the wrong advice when the patient is Black. Triage software that puts White people ahead of Black and Brown people. The list is long and growing.
These are real harms to our friends and neighbors, costing them economic opportunities or causing physical injury in the case of health care. For organizations seeking to use artificial intelligence (AI)–powered tools, they also create potentially expensive legal liabilities and damage in the court of public opinion.
Local, state, and federal agencies are racing to implement regulations to address these issues. Regardless of the domain, a common thread in the current and proposed regulations is bias. Soon, the Department of Health and Human Services may require that users of algorithms in health care evaluate those algorithms for bias. New York City presently requires all AI-powered selection and hiring tools to be audited for bias, and several municipalities are currently considering similar regulations. In the European Union, big tech companies will have to conduct annual audits of their AI systems from 2024, and the upcoming AI Act will require audits of “high-risk” AI systems.
But what exactly do the antidiscrimination laws require? While the law is often unclear, taking age discrimination as just one example, some attorneys would argue that:
- Age may be considered by an airline as a “bona fide occupational qualification” for safety reasons.
- Age may not generally be considered by software companies when hiring new programmers.
- Age, in certain circumstances, may be considered affirmatively as part of a company’s comprehensive recruitment strategy to help older Americans get programming jobs, where the software company can establish that its own hiring patterns have historically disadvantaged older workers.
- Age may not be considered in scheduling access to radiological imaging.
- Age may be considered when deciding who should receive a particular transplanted organ.
- Age may not be considered when deciding who should go on a physically demanding business trip.
- Age might unconsciously be considered in promotions so long as there is no statistical evidence of disparate impact overall.
- Age may not be considered by a health insurance company in deciding whether to cover an expensive procedure.
- Age may, and really must, be considered when diagnosing macular degeneration.
- Age may not be considered in targeting advertising for certain credit services in social media.
- Age may be considered in targeting advertising for certain health care products in social media.
If your head hurts now, that is understandable.
Age, like sex, race, and dozens of other demographic factors, is a protected class in America that is supposed to be free from discrimination. But knowing that hardly provides sufficient guidance for the development of a wide range of algorithms that make, or advise on, decisions impacted by age.
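For readers wondering what the “statistical evidence of disparate impact” mentioned in the list above might look like in practice, the following is a minimal, purely illustrative sketch of a first-pass screen a data scientist might run. It assumes hypothetical hiring-tool outcomes and uses the EEOC’s “four-fifths” rule of thumb; the group definitions, numbers, and threshold are assumptions for illustration, and a real bias audit would require far more careful legal and statistical analysis.

```python
# Minimal sketch of a disparate-impact screen using the EEOC "four-fifths"
# rule of thumb. All figures below are hypothetical; this is an illustration,
# not a substitute for a legally informed bias audit.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def disparate_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return protected_rate / reference_rate

# Hypothetical outcomes from an AI-powered hiring tool, split by age group.
older_rate = selection_rate(selected=18, applicants=100)    # applicants age 40 and over
younger_rate = selection_rate(selected=30, applicants=100)  # applicants under 40

ratio = disparate_impact_ratio(older_rate, younger_rate)
print(f"Selection-rate ratio: {ratio:.2f}")

# Under the four-fifths rule of thumb, a ratio below 0.8 is often treated as
# preliminary evidence of disparate impact warranting closer review.
if ratio < 0.8:
    print("Possible disparate impact -- further legal and statistical review needed.")
else:
    print("No disparate impact flagged by this simple screen.")
```

Even a screen this simple raises the legal questions the article goes on to explore: which groups to compare, which decisions to count, and what to do when the numbers and the law point in different directions.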