A proposed bill introduced in the U.S. House of Representatives would allow artificial intelligence and machine learning technologies to autonomously prescribe FDA-approved drugs.
The bill, H.R. 238, introduced on January 7, would amend "the Federal Food, Drug, and Cosmetic Act [FFDCA] to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration, and for other purposes."
The bill, dubbed the Healthy Technology Act of 2025, was sponsored by Rep. David Schweikert (R-Ariz.).
If passed, section 503(b) of the FFDCA would be amended so that the term "practitioner licensed by law to administer such drug" includes artificial intelligence.
This would cover artificial intelligence and machine learning technologies authorized under state law and approved, cleared, or authorized under sections 510(k), 513, 515 or 564 of the FFDCA.
The bill was referred to the House Committee on Energy and Commerce.
The larger trend
AI can perform tasks such as the "detection of pulmonary nodules in radiology imaging," among other applications, according to a study published in the NIH National Library of Medicine.
AI is used in a variety of ways within healthcare, including ambient documentation, accelerating drug discovery and reducing administrative burdens.
Still, some experts believe that AI is not ready to be used in many aspects of care.
"Recent studies have shown that Generative Pre-trained Transformer 4 with Vision (GPT-4V) outperforms human physicians in medical challenge tasks. However, these evaluations primarily focused on the accuracy of multiple-choice questions," according to researchers at the National Institutes of Health.
The researchers found, however, that GPT-4V frequently presented flawed rationales for how it arrived at the correct final choice, particularly in its interpretation of images.
“Our findings highlight the need for a more detailed assessment of such multimodal AI models before integrating them into clinical workflows,” the researchers wrote.
In an interview with HIMSS TV, Harjinder Sandhu, CTO of Microsoft's Health Platforms and Solutions, discussed high-value and high-risk use cases for AI in healthcare.
"If an AI system hallucinates information, it either makes up information about the patient or omits important information, which can lead to catastrophic outcomes for that patient," Sandhu said.