As artificial intelligence (“AI”) systems and their use continue to progress, the US regulatory environment is evolving rapidly, but not in a uniform way. Unlike the European Union, which has adopted a comprehensive framework in the EU AI Act, the United States has not enacted a national AI law. Instead, regulatory frameworks have emerged at the state level, both through dedicated AI statutes and through privacy regulations, with a growing number of states introducing their own AI laws and legislative proposals. The result is a patchwork of rules and requirements that creates a complex, fragmented compliance environment for businesses deploying and adopting AI and automated decision-making systems.
Previous client alerts on state AI regulation analyzed state and local AI rules in Colorado, Illinois, and New York, as well as how state privacy laws provide additional protections with respect to automated decision-making systems.
This client alert provides an overview of newly enacted state laws aimed at regulating AI and automated decision-making systems. It highlights key provisions of these laws and offers actionable insights for businesses.
CALIFORNIA – Despite concerns raised by Gov. Gavin Newsom, on May 1, 2025, the California Privacy Protection Agency (“CPPA”) unanimously voted to open a public comment period for proposed regulations on cybersecurity audits, risk assessments, and automated decision-making technology (“ADMT”). The comment period will remain open until June 2, 2025. Notably, the current draft reflects significant revisions from the version released in November 2024, responding to public feedback received during the formal rulemaking process. Important revisions include:
Narrowed definitions of ADMT and significant decisions – The CPPA regulations impose certain obligations, including opt-out rights, on businesses that use ADMT to make significant decisions about consumers. The current draft significantly narrows the definition of ADMT: it now applies only to technologies that process personal information and use computation to “replace” or “substantially replace” human decision-making, removing the broader “substantially facilitate” language from the previous draft. This change has important practical implications. For example, if a business uses automated tools to assist in decision-making but retains meaningful human involvement in the final decision, that use may fall outside the scope of the regulations and therefore may not trigger the obligation to provide an opt-out right. Similarly, under the current draft, “significant decisions” are defined as decisions that “result in the provision or denial of” financial or lending services, housing, educational enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services. This revised definition is narrower than its predecessor, which also reached decisions affecting “access” to these services. In addition, under the current draft, businesses that engage in profiling in employment or educational contexts, or that process personal information to train ADMT, are no longer required to comply with the ADMT obligations; however, they must still complete risk assessments. With regard to profiling in public places, the current draft requires risk assessments only when businesses profile consumers based on their presence in sensitive locations, such as educational institutions, pharmacies, and residential shelters. Finally, the revised rules provide that profiling for behavioral advertising (e.g., first-party advertising) no longer triggers risk assessment or ADMT compliance requirements.
ADMT pre-use notices – The current draft makes clear that businesses using ADMT may include the required pre-use notices within their existing notices at collection. Elimination of proactive submission of summary risk assessments – Under the current draft, businesses no longer need to proactively share summaries of their risk assessments with the CPPA. However, businesses must still submit risk assessment information to the agency for any year in which they conducted a risk assessment; risk assessments conducted in 2026 and 2027 must be submitted by April 1, 2028.
ARKANSAS – Gov. Sarah Huckabee Sanders recently signed two AI laws. The first, HB 1958, requires the development of a comprehensive policy governing the approved use of AI and ADMT. The second addresses ownership of AI-generated content: individuals who provide input or prompts to generative AI tools own the generated content, so long as it does not infringe copyright or other intellectual property rights, and individuals who provide data for AI model training (excluding employees) own the resulting trained model unless the training data was unlawfully obtained. Both laws take effect on August 3, 2025.
KENTUCKY – SB 4 was enacted, directing the Commonwealth Office of Technology to develop policy standards governing the use of AI.
MARYLAND – HB 956 was enacted, establishing a working group to study the use of AI in the private sector and to develop recommendations for the General Assembly on AI regulation and policy standards.
MONTANA – SB 212, signed into law by Gov. Gianforte, grants individuals a “right to compute,” providing that government restrictions on the ownership or use of computational resources must be limited and narrowly tailored to serve a compelling government interest. The law also requires critical infrastructure facilities that are wholly or partially controlled by AI systems to develop risk management policies that account for national or international AI risk management frameworks.
UTAH – Perhaps most notably, Gov. Spencer Cox signed several AI laws, including SB 332 and SB 226, which amend Utah’s existing Artificial Intelligence Policy Act. SB 332 extends the repeal date of the AI Policy Act to July 2027. SB 226 narrows the Act’s disclosure requirements by providing that disclosure is required only during “high-risk” interactions (e.g., those involving the collection of health, financial, or biometric data). Finally, HB 452 introduces new rules for AI-powered mental health chatbots in Utah, including a ban on advertising products and services during user interactions and restrictions on sharing users’ personal information.
WEST VIRGINIA – HB 3187 was enacted, creating a body responsible for identifying economic opportunities related to AI, making recommendations to the House of Delegates, the Senate, and the Governor, developing best practices for the use of AI in the public sector, and protecting individual rights and consumer data.
Finally, it is important to be aware of a deregulatory trend in AI at the federal level. The shift is particularly evident in President Trump’s executive order rescinding President Biden’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The new policy, titled “Removing Barriers to American Leadership in Artificial Intelligence,” aims to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security. To advance this goal, an AI action plan is to be developed and submitted to the President within 180 days. This deregulatory approach is also evident in the recently introduced House budget bill, which proposes to preempt and bar state enforcement of AI-related laws for the next decade.