Companies using AI-powered tools for personnel decisions will need to navigate a small but growing patchwork of state and local regulations after a proposed moratorium failed in Congress.
Statewide restrictions on corporate use of artificial intelligence are set to take effect in California on October 1st and in Colorado on February 1st, 2026. They will join mostly narrower existing laws in states such as New York and Illinois.
Measures focused on automation in employment decisions form one part of a state AI law universe that ranges from election-related deepfakes to digital replicas of performers' voices. The US House of Representatives passed a decade-long moratorium that would have blocked state AI restrictions, but the US Senate stripped it from the Republican budget bill signed by President Donald Trump on July 4th.
The failure to preempt state AI regulation at the federal level is likely to encourage more state legislatures to limit employers' use of the tools, said Melanie L. Ronen, an attorney with Stradley Ronon Stevens & Young LLP in California.
“We were already seeing increased interest in state regulation even under the Biden administration,” she said. “That will only increase if there is no federal movement.”
The Trump administration’s retreat from the disparate impact theory when prosecuting discrimination cases could add to that momentum, as some state policymakers appear to want to more clearly allow such unintentional-discrimination claims at the state level to target AI-driven bias.
A bipartisan group of governors and state legislators opposed the moratorium before the Senate stripped it from the budget bill. Some state lawmakers said Congress would violate the 10th Amendment by enacting such a sweeping preemption without passing federal standards of its own.
“It is irresponsible to impose a broad moratorium on all state action while Congress fails to act in this area, depriving consumers of reasonable protections,” they wrote to congressional leaders.
The pushback shows state officials’ interest in regulating aspects of AI, though it’s not clear how much of that interest specifically targets employment decision-making tools.
Growing Patchwork
Colorado’s SB 205 places the broadest requirements to date on AI technology developers and on employers. Lawmakers in Connecticut, Massachusetts, New York, and Washington have considered similar legislation. Although the details vary, the measures generally require public disclosures to job applicants and consumers being assessed by AI, along with bias assessments of the tools.
The moratorium debate “arguably made state lawmakers more aware of the issue” and demonstrated how intense the tech industry’s opposition to AI regulation is, said Matt Scherer, a senior policy advisor at the Center for Democracy & Technology.
The net outcome, he said, is “a more welcoming climate for future AI regulatory proposals” among state policymakers.
Companies such as Alphabet Inc., Meta Platforms Inc., Microsoft Corp., and OpenAI Inc., along with industry associations, have opposed the state legislative proposals; the industry similarly advocated for the federal moratorium.
The Colorado Attorney General’s Office is expected to flesh out the state’s requirements through regulations or guidance after the legislature and Gov. Jared Polis (D) failed to agree on revising or delaying the law. The AG’s office declined to comment.
California’s new civil rights regulations, which recently won final approval, clarify that automated decision-making tools can give rise to unlawful discrimination, including disparate or “adverse impact”; require employers to retain records of these decisions for four years; and flag certain games and tests whose use could amount to unlawful medical inquiries into applicants’ disabilities.
Texas enacted a sweeping AI measure last month, but it imposes fewer obligations on private-sector businesses than Colorado’s. The Texas law (HB 149), which takes effect in 2026, prohibits intentionally discriminatory use of AI, including in employment decisions, but specifies that disparate impact alone does not establish discriminatory intent.
Utah and Minnesota have each enacted laws with AI-related disclosure or opt-out requirements. Minnesota’s data privacy law takes effect on July 31st; it requires companies to let consumers opt out of automated processing of personal data that informs significant decisions such as employment and housing.
Compliance’s “Invisible Hands”
Mark Girouard, an attorney at Nilan Johnson Lewis PA in Minnesota, said the moratorium’s failure will disappoint some employers who had hoped to avoid a rise in state-level AI laws.
“That would mean a patchwork of AI regulations that employers have to deal with,” he said.
However, it’s too early to know what practical impact the state measures will have on employers. Colorado’s law, the broadest, has yet to take effect, and narrower laws, like Illinois’ restrictions on AI-evaluated video interviews, have so far seen little enforcement through investigations or penalties.
New York City’s law requiring bias audits of automated employment decision-making tools was written very narrowly: it covers only tools that largely or entirely replace human decision-making.
“From what we’ve seen so far, there hasn’t been much enforcement at the state level anyway,” said Alice H. Wang, an attorney at Littler Mendelson PC in California. “They are still invisible hands pushing businesses and vendors to comply.”
Colorado’s law sets out general themes that many state proposals echo, such as transparency notices and bias assessments, she said.
Employers could likely satisfy many state laws by following Colorado’s requirements nationwide, Girouard said, though some provisions may be too burdensome to apply universally. For example, Colorado requires businesses to let job seekers, and consumers applying for housing, credit, and other services, contest decisions made by AI-powered tools and request human review.
“That’s going to be difficult given the speed at which companies need to hire,” Girouard said.
While the AI-specific measures pose compliance challenges, the greater concern for employers remains liability risk under federal and state anti-discrimination laws, which cover all employment decisions, including those aided by AI.
“The focus is on making sure that when AI is used, it doesn’t operate in a discriminatory way,” she said, whatever the nuances of any particular statute.