Adam S. Forman, Frances M. Green, and Nathaniel M. Glasser, attorneys in the Employment, Labor & Workforce Management practice, were quoted in The Wall Street Journal, in “Want to Use AI for Talent Management? Here’s What to Consider,” by Isobel Markham.
Following is an excerpt:
Generative AI is expected to significantly impact talent acquisition, with global investment in talent management applications projected to be worth about $1.7 billion by 2032. More than 50% of this expected value is related to recruiting and hiring technologies, and while the technology is helping hiring managers to scale talent outreach and engagement, new generative AI use cases for job seekers are emerging, including automated interview coaching.
There are also new applications within employee training and development, including using AI to create personalized learning experiences. For example, virtual reality/augmented reality can simulate experiences and capture reactions in real-time to deliver learning tailored to the individual. AI models can also provide supervisors with tailored coaching recommendations to address an employee’s skills gaps.
However, as AI use grows, so do the potential risks.
“AI models can reflect historical processes and reinforce legacies of bias, which can be unintentional. This can be exacerbated by applying a model or tool that was designed for a different candidate pool or employee population,” says Don Williams, a managing director with Deloitte Transactions and Business Analytics LLP.
“The use of AI to analyze an employee’s skill, experience, and performance to identify appropriate career trajectories is becoming more common,” says Frances Green, of counsel at employment law specialist Epstein Becker & Green PC. “Although AI can offer significant potential to enhance performance management, these tools should not replace human judgment.”
There is also legislation—both proposed and in force—that organizations must assess to determine how it might affect their talent management processes.
“We are continuing to see state legislation that requires ongoing algorithmic auditing and monitoring of AI tools and systems to promote nonbiased output, as well as clear communication to employees and job seekers about the system,” says Green.
For employers looking to use generative AI in talent processes, seeking advice from legal and HR teams can be helpful. Further, leaders who are looking to embed AI into their talent management lifecycle may want to explore the following considerations.
Regulations Are Disparate
Today, there is no uniform set of federal guidelines or regulations related to the use of AI specific to talent management.
“We think it’s more likely that the patchwork of legislation and regulation at the state and local levels will spread,” says Adam Forman, an attorney with Epstein Becker & Green.
“While various administrative agencies have been following the incorporation of AI into employment processes, in 2023, New York City Local Law 144 became one of the first AI-specific laws that required compliance by employers,” explains Nathaniel Glasser, co-leader of Epstein Becker Green’s AI practice group.
More states have followed New York City’s lead, including Colorado and Illinois. There is also proposed legislation in other jurisdictions, including Texas, California, New York State, Massachusetts, and Virginia.
“Each of those laws has its own compliance obligations that employers have to keep in mind when they use these tools for hiring,” Glasser adds. “For example, whereas the New York City law requires a bias audit, the Colorado law reflects the approach that we’ve seen in the European Union and its EU AI Act, which takes a risk-based approach and requires governance programs and impact assessments to be conducted for high-risk AI systems. These obligations apply not just to the deployer/employer but also to the developer of the tools.”
The National Conference of State Legislatures notes that 45 states (as well as Puerto Rico, the Virgin Islands, and Washington, D.C.) have introduced AI bills as of September 2024. Further, 31 of those states have adopted resolutions or enacted legislation regarding AI.
AI Can’t Provide Empathic Coaching
AI tools are being used to collect and collate measurable performance metrics that may then provide a basis for promotion and career-path decisions. While this can be efficient from an administrative perspective, there can be a tradeoff: the loss of the empathic coaching, counseling, and development conversations that typically make up a meaningful performance review.
A responsible management approach balances the information gleaned from AI with human managerial expertise, Green explains. This would create a more holistic understanding of an employee’s career development, and one less fraught with potential liability.
“The adoption of AI in general brings challenges of not just explaining certain decisions but also ensuring that employees get the timely and relevant feedback they need to reflect and grow,” Green says.
Putting Physical Safety on the Agenda
The use of AI in the workplace goes far beyond hiring decisions and performance evaluations; for many organizations, particularly those in industries such as manufacturing, one of its most valuable applications is in automating various processes and procedures. Where that automation sets something in motion in the physical world, it immediately becomes a workplace safety—and therefore a human resources—issue that needs strict safeguards.
Some government agencies, including the Occupational Safety and Health Administration (OSHA), have indicated concern around the use of robotics and robotic AI systems on manufacturing floors and have published guidance related to this issue.
“The concern there, of course, is primarily the safety of workers, as well as the potential for improper surveillance of employees via the robotic systems that provide video output,” says Green. “While video output can be critical to safety by ensuring the correct use and engagement of robotic systems, it must be balanced against improper collection of data that might adversely impact the employee, or where it may form the basis for discriminatory decisions about work performance.”
Don’t Forget About Third-Party AI Risks
Organizations often focus risk management on the tools their teams are developing and fail to consider the various external tools—such as job search engines or third-party apps—that they may be using for applicants or employees.
“A potential blind spot for organizations that are assessing AI risk is not recognizing when they are working with third-party tools that use AI,” says Brendan Maggiore, a senior manager at Deloitte Transactions and Business Analytics LLP.
Some of the most significant risks in using such tools for candidate selection or promotion relate to whether the algorithm treats candidates differently based on their protected status, says Forman.
“If the algorithm is biased, then its recommendations will also be biased. As they say: Garbage in, garbage out,” Forman continues.
Risk Management, Regulatory Awareness: Going Beyond Legal and Compliance Teams
Given the importance and complexity of risk management, it cannot be solely owned or managed by an organization’s legal or compliance teams—rather, it requires the collaboration of AI specialists and engineers, vendors, and talent leaders.
“It’s not enough to have a compliance team manage these risks and regulations; they need to be educating the teams that are using AI systems,” says Maggiore. “In fact, regulations like the EU AI Act require that users of AI systems have AI literacy and training. Similarly, organizations that routinely educate and integrate their AI and engineering teams into risk management processes can often streamline the level of effort involved in compliance activities.”
Educating engineering teams on regulatory requirements is particularly important, as those teams will need to produce evidence of testing as well as documentation of how the models were designed.
“If compliance is chasing them down six months later to extract evidence, it’s just going to be disruptive,” says Maggiore. “One approach is to build templates and tools so that as the engineering teams are working, they know what needs to be recorded and how those records should be formatted.”
Bringing other functions into the AI strategy discussion might also help address compliance issues. For example, a helpful way to approach disparate regulations is to make compliance more “self-service,” explains Williams. This includes building in controls and checkpoints that enable AI stakeholders to assess their own compliance risks, with oversight from legal, risk, and compliance leaders who maintain visibility into the self-service process.
“These elements encourage employees to be proactive around AI governance, so that development and implementation of AI capabilities includes risk management from the start,” adds Williams.