Congress is weighing drastic proposals that could significantly reshape how artificial intelligence (AI) is regulated across the United States. At the end of May, the U.S. House of Representatives voted 215-214 to pass the One Big Beautiful Bill Act (OBBBA), a budget reconciliation bill containing a provision that would impose a 10-year moratorium on enforcement of most state and local laws targeting AI systems. If enacted, the OBBBA would suspend enforcement of existing state AI laws and regulations and preempt new AI laws pending in state legislatures across the country.
The impact is significant for healthcare providers, payers, and other healthcare professionals. While a moratorium could streamline AI deployments and ease compliance burdens, it could also create regulatory uncertainty and raise patient safety questions, which can undermine patient trust.
What OBBBA does
Section 43201 of the OBBBA prohibits enforcement of any state or local law or regulation that “limits, restricts, or regulates” an AI model, AI system, or automated decision system. The OBBBA defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The definition of “automated decision system” is similarly broad: a computational process derived from machine learning, statistical modeling, data analytics, or AI that issues a simplified output (a score, classification, or recommendation) to materially influence or replace human decision-making.
As proposed, the OBBBA would preempt several enacted and proposed limitations on AI use in healthcare, including:
California AB 3030, which (with limited exceptions) requires a disclaimer when generative AI is used to communicate clinical information to patients and requires that patients be told how to reach a human provider.
California SB 1120, which prohibits health insurers from using AI to deny coverage without adequate human oversight.
The Colorado Artificial Intelligence Act, which regulates developers and deployers of AI systems, particularly those considered “high risk.”
The Utah Artificial Intelligence Policy Act, which requires regulated occupations (including medical professionals) to prominently disclose, at the start of a communication, when consumers are interacting with generative AI.
Massachusetts Bill S.46, which, as proposed, would require providers to disclose the use of AI to make decisions that affect patient care.
Importantly, however, the OBBBA contains exceptions that are likely to generate debate over the true scope of the moratorium. Under the OBBBA, state AI laws and regulations remain enforceable (i.e., are not preempted) if any of the following exceptions apply:
Primary purpose and effect exception. The state law or regulation has the “primary purpose and effect” of (i) removing legal impediments to, (ii) facilitating the deployment or operation of, or (iii) streamlining administrative procedures for AI or automated decision systems.
No design, performance, or data-handling requirements exception. The state law or regulation imposes no requirements on the design, performance, or data handling of AI or automated decision systems, unless those requirements are imposed under federal law or apply generally to other models and systems that perform comparable functions.
Reasonable, cost-based fee exception. The state law or regulation imposes only fees or bonds that are “reasonable and cost-based” and that are charged equally to other AI models, AI systems, and automated decision systems performing comparable functions.
Notably, the last two exceptions mean the moratorium reaches only state laws that treat AI systems differently from other systems. Laws of general applicability at the state and federal level would therefore continue to regulate AI, including those relating to anti-discrimination, privacy, and consumer protection. Even with this carve-out, however, the moratorium would undoubtedly transform the AI regulatory environment, given that no robust federal regulatory framework exists to replace state-level restrictions.
Why this matters to medical professionals
The proposed moratorium is part of the Trump administration’s broader focus on innovation over regulation in the AI field. Advocates argue that a single federal standard would reduce compliance burdens for AI developers by eliminating the need to track and implement AI rules across 50 states. That, in turn, promotes innovation and protects the nation’s competitiveness as the US races to maintain its standing in AI development.
But for healthcare providers, the trade-offs are complicated. State-level regulations have advantages. For example, patients may be wary of AI-enabled care when transparency and oversight appear to be lacking, especially in sensitive areas such as diagnosis, care triage, and behavioral health. Additionally, states often act as first responders to new risks. A moratorium could prevent regulators from addressing evolving clinical concerns related to AI tools, particularly given the lack of comprehensive federal guardrails in this area.
Legal and Procedural Challenges
The moratorium also faces significant constitutional and procedural hurdles. For example, legal scholars and a bipartisan group of 40 state attorneys general have raised concerns that the OBBBA could intrude on states’ police powers related to health and safety, raising issues under the 10th Amendment. Furthermore, if enacted, the moratorium is expected to face legal challenges in court given this bipartisan opposition.
What should medical institutions do now?
Healthcare organizations should maintain strong compliance practices and stay abreast of laws of general application such as HIPAA and state data privacy and security laws, because AI tools will likely remain subject to those laws despite the uncertainty that would follow if the OBBBA were enacted. Even if the moratorium fails to pass the US Senate, Congress has clearly signaled that it aims to regulate AI through future legislation and agency-led rulemaking by bodies such as the US Department of Health and Human Services and the Food and Drug Administration. Healthcare organizations should therefore take a clear-eyed view of their policies and practices around AI compliance, including:
Stay compliance-ready. Continue to monitor and prepare for state-level AI regulations that are currently in effect or soon to take effect.
Audit current AI deployments. Evaluate how AI tools are currently used across clinical, operational, and administrative functions, and continue to assess their consistency with the broader legal framework, including, but not limited to, HIPAA, the FDCA, the FTC Act, Title VI, and state consumer protection laws. As explained above, AI tools remain subject to many laws of general application even if the moratorium passes.
Engage in strategic planning. Organizations may need to recalibrate their compliance programs depending on whether the moratorium is approved by the US Senate and survives legal scrutiny.
Whether or not the OBBBA is ultimately enacted, the proposed federal AI enforcement moratorium marks a pivotal moment in the evolving landscape of AI regulation in healthcare. Providers should be proactive, informed, and ready to adapt to evolving legal and regulatory developments.