The Food and Drug Administration (FDA) has cleared roughly 1,000 medical devices enabled with augmented intelligence (AI) for the market, even though comprehensive, whole-of-government policies have yet to be put in place by Washington lawmakers or regulators.
This gap in government guidance was noted in the AMA Board of Trustees report (PDF) on augmented intelligence development, deployment and use in health care, which includes policy recommendations adopted last November at the 2024 AMA Interim Meeting in Lake Buena Vista, Florida.
“New policies and guidance are needed to ensure that they (AI-enabled healthcare tools) are designed, developed and deployed in an ethical, fair, responsible, accurate and transparent way,” the report states.
During an AMA webinar discussing the current and future state of health AI policy, AMA President Bruce A. Scott, MD, said, “Voluntary standards will not be enough. We need to make sure that the principles of AI implementation are regulated.”
Dr. Scott also explained why the AMA refers to AI the way it does, and how that word choice reflects the underlying principles guiding the use of AI in health care.

The AMA calls AI “augmented intelligence” rather than “artificial intelligence” to highlight the human element of this new technological resource.
He said the AMA’s policies “advance the fact that physicians need to be involved in the development and implementation of AI technology, so we know that it works wherever we practice, whether at the bedside, in our clinics, in our hospitals or in the emergency department.”
According to an AMA survey published this year, physicians’ use of health AI for specific tasks has nearly doubled in just a year, and although some questions remain, there is growing enthusiasm for the technology.

From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to them.
The states step up
Webinar panelist Jared Augenstein, senior managing director of the consulting firm Manatt Health, said state and federal lawmakers and regulators are trying to strike a balance: creating rules that address concerns about accuracy, bias and privacy without suppressing the benefits that AI innovation promises to deliver.
With very little AI legislation coming out of Congress, state legislatures have stepped in to govern the technology, as evidenced by the introduction of 250 health-related AI bills in 34 states this year, Augenstein said.

Generally, state bills cover these four basic topics.
Transparency. Typically, these bills outline disclosure or information requirements between those who develop AI systems, those who deploy them, and end users.
Consumer protection. These focus on ensuring that AI systems do not unfairly discriminate, that they adhere to disclosure requirements, and that end users of AI systems have a way to challenge AI decisions.
Payer use of AI. “We’ve seen a lot of work in that area,” Augenstein said, adding that these bills generally establish the oversight needed when payers use AI tools to support clinical decision-making and utilization management.
Clinical use. Bills have been introduced regarding the use of health AI tools by physicians and nonphysician clinicians.
The most significant laws have been passed in California, Colorado and Utah, Augenstein said, though he added that there is a move to delay implementation of Colorado’s law until differences between the governor and the legislature can be resolved.
Panelist Kimberly Horvath, a senior attorney at the AMA Advocacy Resource Center, focused on the important role that states play in shaping national law and policy.

“States pass many bills across many different areas, far more than you see at the federal level,” Horvath said. “They move much faster and are often seen as laboratories for potential policy solutions, and a lot of what is eventually taken up at the federal level starts at the state level.”
Federal regulations in flux
As the Biden administration drew to a close, the beginnings of a comprehensive federal AI policy started to take shape: final rules released by the Assistant Secretary for Technology Policy (formerly the Office of the National Coordinator for Health Information Technology) updated the EHR certification program’s technology and interoperability requirements and included regulations on algorithm transparency and information sharing.
“We were very supportive of those efforts,” said Shannon Curtis, webinar panelist and assistant director of federal affairs at the AMA. “This was the first federal effort to mandate any sort of transparency from EHR vendors. We thought this was a really important step, and the transparency requirements under the federal regulations were a true priority for us.”
The Centers for Medicare & Medicaid Services (CMS) has also released important and welcome guidance on Medicare Advantage plans’ use of algorithms for prior authorization and billing review. It included a provision that, when AI is used to assist in coverage decisions, plans must take into account the patient’s medical history and physician recommendations.

“But it was just an FAQ memo. It’s guidance; it doesn’t carry the force of formal law,” Curtis said.
In January, the Biden administration also proposed FDA guidance for device manufacturers developing AI-enabled tools.
“For the first time, we saw guidance from the FDA recommending how companies should explain a product’s intended use, what performance validation should look like, and how that should be communicated,” Curtis said.

That included “recommendations on user interface design, cybersecurity, and how these products should be labeled.” She added: “We strongly supported that draft guidance (PDF) and hope that the new administration will move toward finalizing it.”
The Trump administration has expressed interest in developing its own AI policy and issued a request for information on the health technology ecosystem in May. The new administration is seeking input on the “market of digital health products for Medicare beneficiaries, and the state of data interoperability and the broader health technology infrastructure.”
Specifically, the Trump administration requested information on “how key themes and technologies, such as artificial intelligence, population health analytics, risk stratification, care coordination, usability, quality measurement, and patient engagement, can be integrated into APM (alternative payment model) requirements.”
“We’re clearly heading toward an administration, and potentially a Congress, that is much more interested in deregulation than in seeing a higher level of regulation, or better regulation, in the AI space,” Curtis said.
Curtis expressed concern about the budget bill pending before Congress, which includes a provision to place a 10-year moratorium on state-level AI regulation.

“If that passes, states won’t be able to pass a new AI law or a new AI reg for 10 years,” she said. “Given that the federal government has not put any better regulatory scheme in place for AI, we were very concerned about what that would mean for the progress states have made, particularly in health care AI.”