Insider Brief
A new study funded by the Else Kröner Fresenius Foundation warns that current medical device regulations in the US and Europe are not geared towards the rise of autonomous AI agents in healthcare, according to researchers at TU Dresden’s Else Kröner Fresenius Center for Digital Health. The study, published in Nature Medicine, calls for regulatory reforms, including voluntary alternative pathways and adaptive monitoring frameworks, to ensure patient safety as AI agents advance. The researchers recommend treating advanced AI agents like healthcare professionals over the long term, granting them autonomy only after they demonstrate safe and consistent performance in clinical settings.
Researchers at TU Dresden’s Else Kröner Fresenius Center for Digital Health (EKFZ) warn that current medical device regulations in the US and Europe are not prepared for autonomous AI agents in healthcare.
The researchers said the study, funded by the Else Kröner Fresenius Foundation and published in Nature Medicine, underscores the need for regulatory reforms to ensure patient safety as AI agents progress.
Unlike earlier AI tools that focus on a single task, autonomous AI agents can manage entire clinical workflows, integrating external databases and computational tools under the control of large language models (LLMs). These systems can analyze medical images, manage patient data, and guide clinical decisions without ongoing human oversight, raising concerns about accountability and risk management.
“We’re seeing a fundamental change in how AI tools are implemented in medicine,” said Jakob N. Kather, professor of clinical artificial intelligence at the EKFZ for Digital Health, TU Dresden, and oncologist at Dresden University Hospital. “Unlike previous systems, AI agents can autonomously manage complex clinical workflows. This opens up a huge medical opportunity, but also raises entirely new questions about safety, accountability, and regulations that need to be addressed.”
Researchers say existing regulations were designed for static, narrowly defined technologies that do not evolve after approval. AI agents, by contrast, are adaptive and capable of autonomous decision-making, and therefore challenge these static regulatory frameworks, the study says.
The researchers propose several reforms. In the short term, they suggest expanding enforcement discretion policies and classifying certain AI systems as non-medical devices to ease immediate adoption hurdles. Medium-term solutions include developing voluntary alternative pathways (VAPs) and adaptive regulatory frameworks that allow dynamic monitoring based on real-world performance data. In the long term, they propose regulating AI agents like healthcare professionals, granting autonomy only after safe and consistent performance is demonstrated through structured training.
The study’s methods included a review of existing regulatory pathways and an analysis of the technical characteristics of AI agents, highlighting the gap between current frameworks and emerging technologies. Although regulatory sandboxes provide flexibility for early testing, the authors argue that resource limitations make them insufficient for widespread deployment.
The authors warn that without substantial reforms, meaningful adoption of autonomous AI agents in healthcare could continue to stagnate. Collaboration among regulators, healthcare providers, and developers, they say, is essential to designing flexible, safety-focused frameworks that account for the unique capabilities of AI agents.
“To maximize the potential of AI agents in healthcare, bold and forward-looking reforms will be needed,” said Stephen Gilbert, professor of medical device regulatory science at the EKFZ for Digital Health, TU Dresden, and final author of the paper. “Regulators should start preparing now to ensure patient safety and provide clear requirements that enable safe innovation.”