The lawyers of Epstein Becker Green have a long and successful history in counseling and defending businesses and providing guidance with respect to regulatory compliance, cybersecurity and privacy, and other issues related to data asset management.
More recently, our lawyers have also been advising on enterprise risk management issues related to one particular data compliance issue: the creation and use of artificial intelligence (AI) and machine learning tools. This area has become particularly challenging with the recent availability of generative AI large language models, such as ChatGPT, Microsoft Bing, and Google Bard.
Our clients, particularly those in human resources, talent acquisition, health care, and life sciences, are creating or adopting AI and machine learning applications that substantially affect the products and services they offer and how they do their work. AI has penetrated many aspects of work, including employee recruitment, selection, and evaluation; generation of transactional documents and press releases; health care diagnosis, billing, and coding support; and the creation of original publications ranging from academic papers to works of fiction and music. The effect of large language AI models cannot be overstated. Indeed, it was recently predicted that, over the next 10 years, 300 million jobs worldwide will be lost or diminished by generative AI.
The immediate needs of our clients start with the application of existing legal, regulatory, and enforcement structures to AI in such areas as employment discrimination, intellectual property and confidential business information protection, product liability, medical malpractice, HIPAA privacy and international data protection, ownership of data, and the expansive use of governmental enforcement tools, such as federal and state false claims acts. Epstein Becker Green is already providing compliance and litigation advice to our clients in these areas.
Although the body of AI-specific law is now comparatively thin, it is rapidly expanding. Congress has begun hearings, and various agencies, including the Equal Employment Opportunity Commission, the Federal Trade Commission, the Department of Health & Human Services, the Securities and Exchange Commission, the National Institute of Standards and Technology, and the Cybersecurity and Infrastructure Security Agency, are developing AI regulations and guidance. International frameworks, such as the European Union’s proposed AI Act, are also emerging.
While the opportunities for improvement and efficiencies with AI and machine learning technologies are great, the compliance challenges posed by them are even greater with respect to risk analysis, regulatory compliance, litigation defense, and post-event resilience. With its multidisciplinary capabilities and transactional, regulatory, and litigation experience of over 50 years, Epstein Becker Green is ideally suited to advise companies on how to reap the benefits of AI and machine learning technologies while minimizing their legal and business risks.
Our attorneys advise clients in all industries and of all sizes—from Fortune 100 companies to startups—about the creation and use of AI. We counsel clients on how to develop, leverage, and monetize AI and machine learning technologies and how to maintain a defensible compliance posture that stays attuned to evolving AI laws and regulations.
Health Care Services and Reimbursement
Generative AI and machine learning applications are already transforming how diseases and conditions are predicted and managed and how patients are diagnosed and treated. AI drives dramatic advances in robotic surgery, remote telehealth services, and smart hospital rooms. It also revolutionizes the administrative side of health care, particularly the billing and coding of claims to governmental and private payers. However, most commentators and enforcement agencies anticipate increasing qui tam and governmental fraud claims as a result.
In responding to this transformation, clients rely on Epstein Becker Green, a thought leader in the health care industry for 50 years, to help them develop AI-based products and steer these products through the approval processes of the Food and Drug Administration (FDA) and other federal and state agencies. As claims-payment schemes are adapted to AI-based services, we help clients navigate the new complexities of reimbursement from public and private payers. And if clients face matters alleging AI-related harm or fraud against public or private payers (whether individual or class action cases or government enforcement actions), our lawyers offer the extensive arbitral and trial experience necessary for a successful defense.
In addition, our affiliates, EBG Advisors and National Health Advisors, are advising clients on numerous AI issues—addressing business strategy, policy analysis, data analytics, regulatory compliance, privacy and data protection, performance improvement, and payment/reimbursement for services and technology.
Opportunity and Disruption in the Workplace
The use of AI in human resources and labor management decision-making is fraught with the risk of inaccuracy and bias. With increased reliance on AI at every stage of the employment process—from recruitment and hiring to evaluations, compensation, and benefit design and administration—across a wide range of industries, the opportunities presented by these new tools are matched only by their ethical and legal risks. Without proper vetting and examination of AI and machine learning tools and their underlying training data, subjective biases and unlawful discrimination might infiltrate workplace processes and decisions, exposing a business to potential liability for unintended discrimination. In addition, new and emerging regulations impose additional compliance obligations on vendors and employers using AI and machine learning tools.
Through EBG Advisors, we offer both developer and user algorithm bias testing services, as well as a full range of legal and consulting services, to help mitigate bias while preserving AI utility and effectiveness.
Data Privacy and Cybersecurity
AI and machine learning applications require access to an ever-increasing universe of data, much of it personal information. This phenomenon magnifies the risk of error, bias, and data insecurity. For example, AI-driven audio and video recording tools, as well as fingerprint, retinal, and facial scanners, used for everything from location-based access to timekeeping to wellness program participation, might violate federal and/or state laws and regulations. The collection of patient data through health-related devices and apps could give rise to violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Unauthorized use of location tracking also poses risks to women seeking reproductive services in many states. And the use of personal data from outside the United States to operate, test, train, and validate AI may raise considerations under international data protection frameworks.
Clients have long trusted Epstein Becker Green to create and administer effective compliance programs under existing laws, and AI compliance is a natural extension of those skills. We advise clients using AI on how to properly store and protect sensitive employee, patient, and personal data from unauthorized access by third parties or from exposure through a breach. We also counsel clients on complying with both privacy and cybersecurity laws and standards, nationally and internationally. In addition, clients seek our help in creating effective cybersecurity policies and vulnerability tests.
Machine Learning Software Development Risks
When creating machine learning-based software, AI and machine learning development companies face numerous legal risks. In addition to the potential for violating data privacy and security laws, perpetuating or amplifying biases, or causing harm or erroneous decisions, their software could infringe on intellectual property rights. Through attorneys with specialized technical backgrounds (including one holding a Master of Applied Data Science) and consultants from EBG Advisors, we have deep technical experience with the mechanics of supervised and unsupervised machine learning, as well as deep learning. This level of know-how helps us work with AI and machine learning development companies to ensure appropriate validation that their machine learning-based software does what the companies say it can do, in compliance with legal and regulatory requirements and intellectual property laws. We also provide informed legal and strategic advice to help these clients minimize legal risk.
- Academic and Clinical Research
- Business Torts, Competition & Trade Secrets
- Commercial and Contract Litigation
- Corporate Compliance Program Development, Implementation, and Effectiveness
- Data Breach/Cybersecurity Investigations & Litigation
- Digital Health
- Drug and Medical Device Coding, Coverage, and Payment
- Drug and Medical Device Litigation
- Employment Compliance Counseling
- Employment Litigation
- FDA Inspections and Enforcement
- Federal and State False Claims Act (Including Qui Tam)
- Federal Research Grants: Compliance, Investigations & Enforcement
- Fraud and Abuse Compliance Counseling and Defense
- Health Care
- Intellectual Property Litigation
- Life Sciences
- Litigation & Business Disputes
- Evaluated, on behalf of a major financial institution and a publishing company, an AI vendor for their employee recruitment, selection, and onboarding functions. We assisted our clients in assessing the product offerings, reviewing vendor contracts, identifying the appropriate questions to ask the vendors about their AI products, monitoring and testing those products, and evaluating whether those products would raise red flags from a legal perspective.
- Developed a comprehensive compliance plan for a health care company utilizing AI in billing and coding.
- Advised various employers regarding the consideration, adoption, and implementation of AI and predictive analytics in the workplace, particularly in HR communications and pay equity audits.
- Counseled health care providers of all types, as well as payers and pharmaceutical and device manufacturers, on regulatory requirements for their innovations in telemedicine and AI.
- Tested and monitored, on behalf of a transportation client, an AI software program that will be used to recruit and vet candidates in its HR department.
- Obtained new regulatory guidance for AI health technology companies by clarifying the FDA’s pathway to market for AI-based devices.
- Counseled numerous startup medical AI companies on marketing options and regulatory burdens.
- Assisted a global client in the fast-food industry with implementing an AI chatbot for scheduling interviews and onboarding new employees.
- Advised several companies on risk analysis of AI selection tools.
- Counseled multiple AI vendors on bias audits and compliance with New York City’s AI law.
- Advised a large staffing agency using its own AI tool for sourcing candidates. We analyzed whether the tool is subject to the New York City AI law and are in the process of evaluating whether the tool is compliant with other employment laws.
- Worked with an employer using AI to assess the performance of its workers.