(Co-author: Tina Watson*)
On September 28, 2024, Governor Gavin Newsom signed California Assembly Bill 3030 (“AB 3030”), known as the Artificial Intelligence in Health Care Services bill. AB 3030, effective January 1, 2025, is part of a broader effort to reduce the potential harms of generative artificial intelligence (“GenAI”) in California, and it introduces new requirements for the use of GenAI in patient communications.
Overview of AB 3030
AB 3030 requires healthcare facilities, clinics, and individual and group physician practices (“regulated entities”) that use GenAI to generate written or verbal communications regarding a patient’s clinical information to include both of the following:

- A disclaimer indicating to the patient that the communication was AI-generated (with specific requirements for the display and timing of the disclaimer depending on whether the communication is written, audio, or visual); and
- Clear instructions describing how the patient can contact a human healthcare provider, facility employee, or other appropriate person regarding the communication.
AB 3030 seeks to increase transparency and patient protection by ensuring that patients are notified when AI-generated communications are used in their care.
AB 3030 does not apply to AI-generated communications that have been read and reviewed by a licensed or certified human healthcare provider. This exemption was supported by multiple state medical associations out of concern that, without it, the law would prevent healthcare providers from reaping the time-saving benefits of AI tools that support clinical work. AB 3030 also does not apply to communications concerning administrative and business matters such as appointment scheduling, health exam reminders, and billing. Because errors in clinical communications can cause greater harm to patients, the law limits its scope to communications involving “patient clinical information,” meaning information relating to a patient’s health status.
AB 3030 defines GenAI as “artificial intelligence that can generate derived synthetic content, including images, videos, audio, text, and other digital content.” A key element of this definition is synthesis: the system creates new outputs, rather than merely making predictions or recommendations based on existing datasets. A familiar example is a large language model (“LLM”) that generates original text.
Physicians who violate AB 3030 are subject to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California. Licensed healthcare facilities and clinics that violate the law are subject to enforcement under Chapters 2 and 3, respectively, of Chapter 1 of the California Health and Safety Code.
Benefits and risks of using GenAI in healthcare
AB 3030 seeks to balance the competing goals of reducing the administrative burden on healthcare professionals, increasing transparency around the use of GenAI, and reducing the potential harms of GenAI. The law does not directly regulate the content of communications involving patient clinical information. Regulated entities may therefore use GenAI tools so long as such communications include the required disclaimers and instructions.
There are many reasons providers stand to benefit from such tools. For example, medical documentation (such as consultation notes and visit summaries) has long burdened clinicians, reducing the time they have to interact with patients. However, the use of GenAI in clinical settings also carries risks and potential liability for healthcare providers, which AB 3030 does little to eliminate. An August 19, 2024 Senate floor analysis raised the concern that GenAI tools trained on historically biased or inaccurate data may produce biased outputs that lead to substandard treatment for certain patient groups. Another risk is GenAI’s tendency to produce “hallucinations”: outputs that appear coherent but have no basis in reality. This phenomenon is common across GenAI models, including LLMs, which are known to fabricate plausible-sounding facts in response to queries. Finally, data generally cannot be removed from a trained GenAI model without erasing its prior training, which raises privacy concerns and may leave large amounts of patient data unnecessarily embedded in these models for long periods.
Considerations for healthcare providers and entities
The passage of AB 3030, along with other recent AI legislation in California,(1) makes clear that the California Legislature views AI transparency as a necessary industry standard. These regulations are consistent with the American Medical Association’s (“AMA”) Principles for Augmented Intelligence Development, Deployment, and Use, which identify transparency as a priority in the implementation of AI tools in healthcare. California’s approach also broadly aligns with the White House’s Blueprint for an AI Bill of Rights, which states that people have the right to know when and how automated systems are being used and how they affect their lives.
Healthcare providers operating in California or serving California residents should begin taking steps to ensure that their use of AI complies with the state’s new requirements. These organizations should also strengthen oversight of their AI tools and review processes so that ongoing clinician documentation and review does not become a rubber stamp. Doing so would help address the concern that AB 3030’s exemption for provider-reviewed AI communications could give patients a false sense of security.
Other states may consider similar regulation of AI in the healthcare industry in the future. Our Sheppard Mullin Healthy AI team will continue to monitor these developments.
*Tina Watson is a law clerk in the firm’s New York office.
Footnote
(1) California Restricts Use of AI in Health Plan Utilization Management | Health Law Blog.