A panel of experts at the SHRM Workplace Law Forum 2024 in Washington, D.C., on November 20 discussed the changing regulatory landscape surrounding artificial intelligence in employment and fundamental ways to mitigate risk.
While federal action has so far come mainly in the form of awareness-raising and guidance, state legislatures have begun enacting laws aimed at curbing AI discrimination, said Rachel See, senior counsel at the law firm Seyfarth in Washington, D.C.
See said the U.S. Equal Employment Opportunity Commission (EEOC) has stated that AI enforcement is a strategic priority. The agency has received discrimination charges related to AI and other workplace technologies and is interested in potential investigations and litigation, but it has so far not done much on that front.
In 2023, the EEOC issued technical guidance for employers on how to measure adverse impact when employment selection tools use AI, warning about the effects of AI and algorithmic bias on hiring decisions. The agency also filed a court brief supporting the plaintiff in a 2023 lawsuit alleging that HR software vendor Workday is directly liable for unlawful employment discrimination caused by employers' use of Workday's AI-powered recruitment technology.
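The adverse impact measurement that guidance addresses is often approximated with the longstanding "four-fifths rule": a selection rate for one group that falls below 80 percent of the most-selected group's rate may indicate disparate impact. A minimal sketch of that arithmetic in Python, using hypothetical group names and counts (not figures from the panel):

```python
# Hypothetical applicant and hire counts per group (illustrative only).
applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 60, "group_b": 27}

# Selection rate = hires / applicants for each group.
rates = {g: hires[g] / applicants[g] for g in applicants}

# Four-fifths rule: flag any group whose selection rate is below
# 80% of the highest group's rate.
top_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2%}, ratio={ratio:.2f} -> {flag}")
```

The rule is a rough screen, not a legal conclusion; regulators and courts also look at statistical significance, as discussed later in this article.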
Some AI policy experts expect the incoming Trump administration to rescind President Joe Biden's October 2023 executive order on AI and replace it with a more hands-off approach intended to spark innovation.
“While federal legislation regulating AI is not expected to be enacted anytime soon, the Trump administration has expressed interest in prioritizing AI research and funding rather than regulating AI in employment,” See said. “If there is no federal action, that gives state legislatures an incentive to do something.”
She predicted that “very complex compliance regimes” will emerge as more cities and states enact their own laws, posing significant compliance burdens and legal risks for employers.
“This is the scariest time for all of us in the AI space, because there is so much more unknown than known,” said Mike Childers, senior general counsel at Amazon. “We are all trying to keep up with our understanding of the technology while also thinking about the new requirements these laws bring. Unless your company actively practices digital isolationism, you will be subject to these laws even in places where you are not physically located.”
Joan McFadden Papinchock, director of litigation services at DCI Consulting in Washington, D.C., agreed, saying employers that use AI to make employment decisions need to be aware of state laws everywhere, not just in their home state, because they are likely to attract applicants from jurisdictions with AI laws in place.
“We have experience with these AI laws in New York City, Illinois, and most recently Colorado,” she said. “Colorado provides a comprehensive overview of expectations; it is a consumer protection law that applies in employment settings.”
State AI laws related to employment typically address:
- Informing applicants and employees about the use of AI.
- Obtaining consent before use.
- Being transparent about the technology and disclosing how it works.
- Taking steps to avoid algorithmic discrimination.
- Completing impact assessments and audits of AI systems and their results.
- Implementing an AI risk management policy.
A notable feature of Colorado’s law, which takes effect in 2026, is that employers will be required to notify candidates who are passed over for a job or promotion and explain what information was used, how it was used, and why the individual was not selected, Papinchock said.
“That’s new,” she said. “Until now, employers didn’t have to explain why someone didn’t get a job.”
Childers said another place to look for what might happen next is the EU AI Act, which came into force in August and is being applied in stages. The law sorts AI systems into risk categories, and systems deemed high risk, such as those used for biometric identification, employment, and worker management, must comply with strict requirements.
“If you are not physically located in the EU, the risk of regulators showing up at your door is probably low. However, if you are operating within the EU, you will need to comply with this law,” Childers said. “One of employers’ first duties under the law is to support their employees’ AI literacy. That means making sure employees understand what the technology is, what is and isn’t acceptable use, and how to stop misuse.”
Additionally, there are other EU laws regulating AI, with legislation being considered in Brazil, Canada, and China.
HR department steps
Human resources departments “don’t have the luxury of saying, ‘We’re not going to engage with AI,’” said Nicholas Trussall, organizational and talent analytics manager at Andersen in Bayport, Minn. “It’s a powerful tool and should be considered. Remember, the law has not changed when it comes to discrimination in recruiting and hiring.”
He said safe use cases for AI include things like translating text, creating early drafts of job descriptions, and setting schedules.
“When you start automating decisions about other people, it gets more worrisome,” Trussall said. “In general, look at whether the AI is making choices that favor some people or could be seen as discriminatory, understand where the data is coming from, and try to be transparent and explainable where appropriate.”
Childers said there is a “huge rush” toward AI right now. “We’re all trying to get into AI so that no one is left behind, but that’s not necessarily the best approach to AI,” he said. “With AI, we can process so much data so quickly that random variance can quickly exceed two standard deviations.”
A significant difference in outcomes between groups is considered potentially discriminatory if it exceeds two standard deviations from what would be expected by chance, a common threshold when assessing disparate impact.
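In practice, that threshold is often checked with a two-proportion z-test on selection rates, where the z statistic counts standard deviations from the no-difference expectation. A minimal sketch, again with hypothetical counts rather than figures from the panel:

```python
import math

# Hypothetical counts (illustrative only): applicants and hires per group.
n_a, hires_a = 500, 150   # group A
n_b, hires_b = 400, 90    # group B

p_a, p_b = hires_a / n_a, hires_b / n_b

# Pooled selection rate under the null hypothesis of no group difference.
p = (hires_a + hires_b) / (n_a + n_b)

# Standard error of the difference in proportions, then the z statistic.
se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

# |z| > 2 is the conventional "two standard deviations" threshold.
print(f"z = {z:.2f} -> {'flag for review' if abs(z) > 2 else 'within 2 SD'}")
```

Childers' point about data volume follows from this: with very large applicant pools, even small or random differences in rates can produce a z statistic above 2, which is why validation evidence matters.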
“The legal risk is not just what happens if the tool doesn’t work as expected, but also what happens if it does work as expected and you don’t have validation reports available when you try to defend against a disparate impact claim,” Childers explained.
Papinchock said there is growing recognition that careful scrutiny is needed before purchasing AI tools for the workplace. “It’s good to talk to the marketers, but you also need to talk to the data scientists,” she said. “There was a time when vendors wouldn’t expose their back ends for users to see, but now ethical vendors will do so.”
To that end, See recommended asking probing questions when purchasing AI, such as “How does it work?” “How do I know it works?” and “How do I explain it if it’s wrong?”