Artificial intelligence (AI) and machine-learning algorithms are powerful tools that can automate or inform decision-making. At the same time, these algorithms can be highly complex and opaque, functioning as a "black box" that is inscrutable even to their operators.
To comply with an ever-growing number of statutes and regulations that impose standards on the use of AI, and to increase stakeholders' trust in these algorithms, users need to understand the basis for automated decisions and recommendations. This virtual briefing will introduce you to the relevant legal standards, as well as strategies for achieving transparency and explainability in AI, ensuring regulatory compliance, and avoiding legal liability.
Join us as we explore explainability and transparency in AI and gain insights into this technology and how it is used.
The breakout sessions will work through case studies specific to issues in labor and employment, health care and life sciences, and privacy and liability risks.
Presenters will include Vladimir Murovec, Supervising Associate, Simmons & Simmons; Michael Hind, Distinguished Research Staff Member, IBM Research AI; and Michael Zagorski, Strategic Consultant, EBG Advisors.
This virtual event is available live only and will not be recorded for later viewing.
Members of the media, please contact Piper Hall.