Nearly 700 AI bills surfaced in state legislatures in 2024, addressing issues from safety standards to deepfake regulation. Colorado passed a comprehensive bill while California's governor vetoed a major one, reflecting the different strategies states have used to fill regulatory gaps left by stalled federal action.
States move to shape AI regulatory landscape in 2024, report finds
According to a report from the CCIA National Policy Center, state legislatures are taking an active role in overseeing artificial intelligence. In 2024, nearly every state introduced AI-related bills, and several measures were passed.
Momentum at the state level comes as Congress and federal agencies consider national AI standards. California and Colorado exemplify different regulatory approaches. Colorado enacted comprehensive AI legislation through SB 205, but stakeholders expressed concerns about limited opportunities for input. Meanwhile, California Governor Gavin Newsom vetoed SB 1047, citing the need for more sophisticated proposals, while signing other AI-related bills addressing digital replicas and deepfakes.
State laws primarily address five areas: safety requirements for AI development, watermarking of digital content, deepfake regulation, right-of-publicity protections, and study commissions. The CCIA National Policy Center warns that overly broad state regulations can impede technological progress.
“In the rapidly evolving field of AI, it is important to find a regulatory balance that does not result in rules that are so strict that they stifle innovation,” the report states. It also calls for responsibilities to be divided appropriately between AI developers and adopters, raising particular concerns about how liability is allocated among developers, deployers, and users.
Looking ahead to 2025, Connecticut Sen. James Maroney plans to reintroduce comprehensive AI regulation that could serve as a model for other states. The New York State Legislature is expected to consider bills on AI liability standards and synthetic-media watermarking.
The varying state approaches highlight the challenges of establishing an AI oversight framework without a uniform federal standard.
AI policy faces uncertainty heading into 2025
The future of artificial intelligence regulation in the US faces uncertainty ahead of a potential leadership change in Washington, according to a new analysis from experts at the Wharton School.
While the Biden administration has emphasized safety protocols, Trump campaign advisers and donors favor loosening AI restrictions, Wharton legal studies professor Kevin Werbach said during a recent panel discussion. The incoming camp's position remains complicated, however, as it has criticized big tech companies while also opposing regulation. These insights come from Wharton's recent “Policies that Work” panel exploring AI governance.
States are not waiting for clarity from the federal government. Approximately 700 AI-related bills are being debated across the country while companies take voluntary safety measures to prevent discrimination and protect users.
The rapid increase in energy demand from this technology poses pressing challenges. AI-related data centers currently consume three times as much electricity as New York City, and usage is expected to triple by 2028. In response, Microsoft partnered with Constellation Energy to revive the Three Mile Island nuclear facility in Pennsylvania through a 20-year power agreement.
Experts have warned that deepfake technology poses a particular threat to democratic stability. Their proposed solutions include mandatory education on how deepfakes are created, so that the public better understands the technology's capabilities.
While the European Union moves forward with comprehensive regulation, U.S. policy remains at a crossroads, creating an uncertain environment for industry leaders and innovators.
Healthcare AI needs smart regulation, new report warns
In a new report, Paragon Health Institute researchers warn that overregulation of artificial intelligence in healthcare could stifle life-saving innovation, while calling for targeted oversight that prioritizes patient safety.
The report comes as state legislatures are significantly ramping up AI-related legislation, with nearly 700 bills introduced in 2024, compared to 191 in 2023.
“Increased awareness of AI among policymakers is sometimes mistaken for a meaningful understanding of its operations,” said Kev Coleman, a visiting research fellow at Paragon. “Coupled with occasional predictions of an AI dystopia, this situation risks producing misregulation that not only increases the technology's cost but also diminishes the very medical advances policymakers hope AI will deliver.”
The report recommends that regulators differentiate between AI systems rather than treating them uniformly. AI used for back-office medical supply purchasing, for example, carries far lower risk than patient-facing diagnostic applications.
The study also argues that the FDA's existing framework for evaluating medical devices provides a strong foundation for AI oversight. Rather than creating a new regulatory body, the report suggests leveraging the expertise of existing health authorities.
Key recommendations include creating an economical pathway for updated approvals of AI systems that improve over time, and ensuring new regulations do not duplicate existing protections under HIPAA and other laws.