Developing trusted AI tools for healthcare

January 18, 2025

A new study proposes a system that determines the relative accuracy of predictive AI in hypothetical medical settings, and when the system should defer to the judgment of a human clinician.

Artificial intelligence (AI) has great potential to improve the way people work across a variety of industries. But to integrate AI tools into the workplace in a safe and responsible manner, we need to develop more robust methods for understanding when AI tools are most useful.

So when is AI more accurate? And when are humans more accurate? This question is especially important in healthcare, where predictive AI is increasingly used in high-stakes tasks to assist clinicians.

Today, we published a joint paper with Google Research in Nature Medicine. The paper proposes CoDoC (Complementarity-driven Deferral-to-Clinical workflow), an AI system that learns when to rely on predictive AI tools and when to defer to a clinician for the most accurate interpretation of medical images.

CoDoC explores how human-AI collaboration in hypothetical medical settings could deliver the best outcomes. In one example scenario, compared with commonly used clinical workflows, CoDoC reduced the number of false positives by 25% on a large, anonymised UK mammography dataset, without missing any true positives.

The effort is a collaboration with several healthcare organisations, including the United Nations Office for Project Services' Stop TB Partnership. To help researchers build on our work to improve the transparency and safety of real-world AI models, we have also open-sourced CoDoC's code on GitHub.

CoDoC: an add-on tool for human-AI collaboration

Building more reliable AI models often requires re-engineering the complex inner workings of predictive AI models. However, for many healthcare providers, redesigning a predictive AI model is simply not possible. CoDoC can potentially improve predictive AI tools for their users without requiring them to modify the underlying AI tools themselves.

We had three criteria when developing CoDoC:

  • Non-machine-learning experts, such as healthcare providers, should be able to deploy the system and run it on a single computer.
  • Training should require a relatively small amount of data, typically just a few hundred examples.
  • The system should be compatible with any proprietary AI model, without needing access to the model's inner workings or the data it was trained on.

Deciding whether predictive AI or clinicians are more accurate

With CoDoC, we propose a simple and usable AI system that improves reliability by helping predictive AI systems to "know when they don't know". We looked at scenarios where a clinician has access to an AI tool designed to help them interpret an image, for example, examining a chest x-ray to determine whether a tuberculosis test is needed.

For any theoretical clinical setting, CoDoC's system requires only three inputs for each case in the training dataset:

  • The predictive AI's confidence score, between 0 (certain that no disease is present) and 1 (certain that disease is present).
  • The clinician's interpretation of the medical image.
  • The ground truth of whether disease was present, as established, for example, by biopsy or other clinical follow-up.

Note: CoDoC does not require access to medical images.
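To make the data requirements concrete, here is a minimal sketch of what one training record could look like. This is illustrative only: the names and structure are hypothetical, not the open-sourced CoDoC API.

```python
from dataclasses import dataclass

# Hypothetical record type for the three per-case training inputs
# described above; not the actual open-source CoDoC code.
@dataclass
class TrainingCase:
    ai_confidence: float    # predictive AI confidence score in [0, 1]
    clinician_opinion: int  # clinician's read: 1 = disease present, 0 = absent
    ground_truth: int       # established by biopsy or other clinical follow-up

# A toy training set; note that no medical images are needed.
cases = [
    TrainingCase(0.92, 1, 1),
    TrainingCase(0.15, 0, 0),
    TrainingCase(0.55, 0, 1),  # AI uncertain, clinician missed the disease
]
```

Because each record is just two labels and a score, a few hundred such cases are enough to fit the system, consistent with the small-data criterion above.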

CoDoC learns to establish the relative accuracy of the predictive AI model compared with clinicians' interpretations, and how that relationship varies with the predictive AI's confidence score.
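One way to picture what is being learned, in a deliberately simplified sketch (not the published CoDoC method), is to bin cases by the AI's confidence score and compare, within each bin, how often the AI and the clinician match the ground truth:

```python
# Simplified sketch only: per-confidence-bin comparison of AI vs clinician
# accuracy. CoDoC's actual learning procedure is more sophisticated.

def relative_accuracy_by_bin(cases, n_bins=5, threshold=0.5):
    """cases: list of (ai_confidence, clinician_label, ground_truth) tuples.

    Returns one value per bin: positive means the AI was more often correct
    in that confidence range, negative means the clinician was, and None
    means the bin contained no cases.
    """
    bins = [[0, 0, 0] for _ in range(n_bins)]  # [ai_correct, doc_correct, count]
    for conf, doc, truth in cases:
        b = min(int(conf * n_bins), n_bins - 1)
        ai_pred = 1 if conf >= threshold else 0  # AI's label from its score
        bins[b][0] += int(ai_pred == truth)
        bins[b][1] += int(doc == truth)
        bins[b][2] += 1
    return [(ai - doc) / n if n else None for ai, doc, n in bins]
```

Bins where the value is negative are exactly the confidence ranges where deferring to the clinician would pay off.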

Once trained, CoDoC could be inserted into a hypothetical future clinical workflow involving both an AI and a clinician. When a new patient image is evaluated by the predictive AI model, its associated confidence score is fed into the system. CoDoC then assesses whether accepting the AI's decision or deferring to a clinician will ultimately result in the most accurate interpretation.
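At inference time, that learned relationship reduces to a routing decision: given only the new case's confidence score, either accept the AI's output or send the case to a clinician. A toy sketch follows; the deferral region here is invented for illustration, not learned from data as in CoDoC:

```python
# Toy routing rule: confidence ranges where the clinician reads the case.
# The region below is made up for illustration only.
DEFER_REGIONS = [(0.35, 0.75)]

def route_case(ai_confidence, defer_regions=DEFER_REGIONS):
    """Return 'accept_ai' or 'defer_to_clinician' for a new case."""
    for lo, hi in defer_regions:
        if lo <= ai_confidence <= hi:
            return "defer_to_clinician"
    return "accept_ai"
```

Intuitively, very high and very low confidence scores are accepted automatically, while the uncertain middle band, where clinicians tend to outperform the model, is deferred.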

Improved accuracy and efficiency

Our comprehensive testing of CoDoC with multiple real-world datasets, including only historic, anonymised data, showed that combining the best of human expertise and predictive AI results in greater accuracy than either alone.

As well as achieving a 25% reduction in false positives on a mammography dataset, in what-if simulations where the AI was allowed to act autonomously on certain occasions, CoDoC was able to reduce the number of cases that needed to be read by a clinician by two thirds. We also showed how CoDoC could hypothetically improve the triage of chest X-rays for onward tuberculosis testing.

Developing AI for healthcare responsibly

Although this research is theoretical, it shows our AI system's potential to adapt: CoDoC was able to improve the performance of medical image interpretation across varied demographic populations, clinical settings, medical imaging equipment, and disease types.

CoDoC is a promising example of how we can harness the benefits of AI in combination with human strengths and expertise. We are working with external partners to rigorously evaluate our research and the system's potential benefits. To bring technology like CoDoC safely into real-world medical settings, healthcare providers and manufacturers will also have to understand how clinicians interact differently with AI, and validate systems with specific medical AI tools and settings.

For more information about CoDoC, see the full paper in Nature Medicine.
