We’ve all heard troubling stories involving emerging tools powered by artificial intelligence (AI), in which algorithms yield unintended, biased, or erroneous results. Here are a few examples:

  • A monitoring tool for sepsis that performs less well for patients of certain races
  • A selection app that prefers certain backgrounds, education, or experience, with no showing of job relatedness or business necessity
  • Facial recognition software that struggles with different skin tones
  • An employment screening tool that doesn’t account for accents
  • A clinical decision support tool for evaluating kidney disease that gives doctors inconsistent advice based on the patient’s race
  • Triage software that prioritizes one race over others

The list is long and growing, and companies that use these tools do so at increasing legal, operational, and public relations risk.

AI-powered tools, left unchecked, pose real but hidden risks to our friends, neighbors, and countless others, often limiting economic opportunities or, in the extreme, causing physical harm. For organizations that deploy these tools, they also create potentially expensive and disruptive legal liability, operational shortcomings that may impede success in the marketplace, and reputational damage in the court of public opinion. Yet the impact of algorithms on organizations and the populations they affect remains poorly understood and rarely measured.


This virtual briefing focuses on the legal risks, methods for finding those risks, and solutions in the form of tailored compliance programs that address AI risks specifically.

Key takeaways across labor and employment, health care and life sciences, and consumer product use cases:

  • Identifying the key laws and regulations implicated in these domains
  • Techniques for finding bias and discrimination in algorithms, including formation of multidisciplinary teams
  • Developing a holistic approach to establishing a compliance program specific to the creation and use of AI tools in these domains
  • Navigating privacy laws while seeking solutions to bias and discrimination
  • Predicting the future direction of regulation in this space


Review the full agenda.

Access the Virtual Briefing


If you have any questions, please reach out to Dionna Rinaldi or Amy Oldiges.  Members of the media, please contact Zack Zimmerman.
