Researchers at West Virginia University determined that AI technology could use input from doctors’ exam notes to help diagnose diseases in patients with classic symptoms.
(WVU photo/Greg Ellis)
Artificial intelligence tools can help emergency room doctors accurately predict illness, but only for patients with typical symptoms, West Virginia University scientists have found.
Gangqing “Michael” Hu, assistant professor in the WVU School of Medicine Department of Microbiology, Immunology and Cell Biology and director of the WVU Bioinformatics Core Facility, led a study comparing how accurately four ChatGPT models made medical diagnoses and explained their reasoning.
His findings, published in the journal Scientific Reports, demonstrate the need to incorporate more, and more diverse, types of data into the training of AI tools used to help diagnose disease.
More data could make the difference in whether AI gives patients the correct diagnosis in what is called a “challenging case,” one that does not show classic symptoms. As an example, Hu pointed to three scenarios from his study involving patients who had pneumonia without the typical fever.
“In these three cases, all the GPT models were unable to provide an accurate diagnosis,” Hu said. “So I dug into the doctors’ notes and found that these were challenging cases. ChatGPT draws a lot of information from various resources on the internet, but these may not cover atypical presentations of disease.”
The study analyzed data from 30 public emergency department cases; patient demographics were excluded for privacy reasons.
Hu explained that to use ChatGPT as a diagnostic aid, a doctor’s notes are uploaded and the tool is asked to provide its top three diagnoses. Results varied across the versions tested: GPT-3.5, GPT-4, GPT-4o and the o1 series.
“When we looked at whether an AI model had the correct diagnosis anywhere in its top three results, we found no significant improvement between the newer and older versions,” he said. “However, looking at each model’s number one diagnosis, the newer versions were about 15% to 20% more accurate than the older versions.”
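The workflow and the two accuracy measures described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the study’s actual code: the prompt wording, parsing rules and function names are assumptions, and exact string matching stands in for the clinical adjudication the researchers would actually perform.

```python
def build_prompt(note: str) -> str:
    """Assemble a prompt asking the model for its three most likely diagnoses."""
    return (
        "Based on the following de-identified emergency department exam note, "
        "list the three most likely diagnoses, one per line, most likely first.\n\n"
        f"Note:\n{note}"
    )

def parse_top_three(reply: str) -> list[str]:
    """Extract up to three diagnosis lines from the model's reply,
    dropping leading list markers like '1.' or '-'."""
    lines = [ln.lstrip("-. 0123456789") for ln in reply.splitlines() if ln.strip()]
    return lines[:3]

def top1_accuracy(predictions: list[list[str]], truths: list[str]) -> float:
    """Fraction of cases where the first-ranked diagnosis matches the truth."""
    hits = sum(p[0].lower() == t.lower() for p, t in zip(predictions, truths))
    return hits / len(truths)

def top3_accuracy(predictions: list[list[str]], truths: list[str]) -> float:
    """Fraction of cases where the truth appears anywhere in the top three."""
    hits = sum(any(d.lower() == t.lower() for d in p[:3])
               for p, t in zip(predictions, truths))
    return hits / len(truths)
```

Comparing `top1_accuracy` and `top3_accuracy` across model versions mirrors the study’s finding: newer models improved on the first-ranked diagnosis while top-three performance stayed roughly flat.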
Given AI models’ currently low performance on complex and atypical cases, Hu said human oversight is necessary for high-quality, patient-centered care when AI is used as a support tool.
“We didn’t undertake this study out of curiosity to see if the newer models would produce better results. We wanted to establish a foundation for future research involving additional inputs,” Hu said. “Currently, I’m only entering doctors’ notes. I hope to improve accuracy in the future by including images and findings from clinical tests.”
Hu will also build on findings from one of his recent studies, in which he applied the ChatGPT-4 model to the task of role-playing physiotherapists, psychologists, nutritionists, artificial intelligence experts and athletes in a simulated panel discussion on sports rehabilitation.
He said he believes such models can improve AI’s diagnostic accuracy through an interactive approach in which multiple AI agents consult one another.
“I think it’s very important to look at the reasoning steps from a trustworthiness standpoint,” Hu said. “In that regard, high-quality data including both typical and atypical cases can help build trust.”
Hu emphasized that ChatGPT is promising but is not a certified medical device. He said that as providers incorporate images and other data in clinical settings, the AI model should be an open-source system installed on hospital computer clusters in order to comply with privacy laws.
Other contributors to the study were postdoctoral fellow Jinge Wang and lab volunteer Kenneth Schwe, of Montgomery County, Maryland, both in the Department of Microbiology, Immunology and Cell Biology, along with Li Liu of Arizona State University. The work was supported by funding from the National Institutes of Health and the National Science Foundation.
Hu said future research into using ChatGPT in the emergency department could examine whether improving the explainability of the AI’s reasoning can contribute to patient triage and treatment decisions.
-wvu-
LS/5/20/25
Media Contact: Linda Skidmore
Health Research Writer
WVU Research Communications
linda.skidmore@hsc.wvu.edu
Call 1-855-WVU-NEWS for the latest West Virginia University news and information from WVUToday.