Using artificial intelligence at work is increasingly a reality, especially for knowledge workers. For better or worse, that includes human resources: AI is increasingly being deployed in recruitment, compensation and even performance management.

But that also means its use is increasingly contentious, sometimes leading to legal action. Take, for example, the ongoing class action lawsuit filed against Workday.

Federal and state policymakers are scrambling to catch up, and the tension between the two levels of government leaves employers caught in the middle.
Fed and states race to set the tone
Recent White House policies aim to position the United States as a global AI leader. Initiatives include an executive order President Donald Trump signed in July, along with an AI Action Plan issued this summer.

In Congress, Sen. Josh Hawley, R-Mo., and Sen. Mark Warner, D-Va., introduced a bipartisan bill in November that would require employers to report AI-related layoffs.
And then there is the ever-growing patchwork of state AI employment laws.

New York City's AI employment law, for example, has been in effect since 2023, with other states and local governments following suit. This summer, California added AI-at-work provisions that build on its existing Fair Employment and Housing Act; those rules took effect Oct. 1 and apply to employment decisions, including promotion and training, in the state.

Similarly, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) was signed into law this summer and takes effect Jan. 1, 2026. Illinois' AI employment law likewise takes effect Jan. 1, and Colorado's is scheduled to take effect in June 2026.
“There’s some tension between the messaging from a federal perspective and what we’re seeing on a state-by-state basis,” Ogletree Deakins’ Jen Betts, co-chair of the firm’s technology practice, told HR Dive.
“While a federal framework that pre-empts the state-level patchwork would be ideal, that seems unlikely,” Niloy Ray, an attorney in Littler’s e-discovery practice who frequently litigates AI cases, said in an email.
Given this disconnect between the federal government's stance on AI and state laws, many employers are "developing internal governance programs and strategies that make sense for their organizations," Betts observed. That includes Ogletree Deakins itself: Betts also co-chairs the firm's Innovation Council, an internal governance group that oversees how the firm integrates technology into its workflows.
State requirements vary widely
Betts and Ray both said human resources departments should consider AI laws in California, Colorado, Illinois, Maryland, New York City, Texas and even the European Union.
The requirements are wide-ranging and can affect HR both directly and indirectly. The California law that took effect in October focuses on the models themselves and so does not directly touch HR operations, Ray said. However, SB 53 "ensures that the use of AI models is generally safer and expands whistleblower protections for employees who raise safety concerns."
Meanwhile, Ray called TRAIGA a “concerning change” in terms of employee protection.
Ray said the law "largely exempts AI from its requirements and regulations when it is used in employment or commercial contexts." The only requirement is that the AI not be intended to cause physical harm or incite criminal activity.
Additionally, TRAIGA clearly states that “disparate impact alone is not sufficient to demonstrate intent to discriminate,” Ray said. He called this “a significant change from more than 50 years of federal and state law, which has held that adverse, or discriminatory, effects constitute a basis for liability, even if the intent of the policy is ostensibly neutral.”
But overall, given the legal patchwork, Ray's advice is that employers "need to adhere to the HCF, or highest common factor, when setting up AI disclosure, risk assessment, opt-out, appeals, and record-keeping processes."
Betts also said HR departments need to weigh several variables when creating internal governance systems and policies: the company's size; its industry; how often, and for which tasks, employees use AI; where the employer does business; and its level of risk tolerance.
“There’s not necessarily a one-size-fits-all approach that organizations are taking to manage the relative evolving risks here, but there are some commonalities,” Betts said.
Flexibility and realism rule the day
Above all, Betts' main advice was to be thoughtful: use a "robust vetting process" before deployment, draft a workplace AI use policy, provide AI-related training, audit all tools, and notify employees and applicants as necessary.

HR professionals also need to remain flexible, Betts said. "This is an evolving field and will continue to evolve, so we need to recognize that what makes sense today won't necessarily make sense a year from now," she explained.

Ray said the best advice for employers navigating the current fragmented regulatory environment is to proceed with "resolute pragmatism."

"Limit the deployment of AI to high-ROI applications, set aside budget for compliance, and know when to be proactive and when to be reactive," he said.

