Versa AI hub

Research

Dartmouth study shows AI could be a ‘double-edged sword’ in medical research

December 23, 2024

A new study by Dartmouth Health researchers highlights the potential risks of artificial intelligence in medical imaging research, showing that algorithms can be taught to give the correct answer for illogical reasons.

The study, published in Nature’s Scientific Reports, used a cache of 5,000 X-rays of human knee joints, paired with dietary surveys completed by the patients.

Artificial intelligence software was then asked to determine, from the X-ray scans alone, which patients were more likely to drink beer or eat refried beans, even though the knees showed no visual evidence of either habit.

“We like to assume the model is seeing what humans see, or what humans would see if they had good enough vision,” said Brandon Hill, a co-author of the paper and a machine learning researcher at Dartmouth Hitchcock. “That’s the central issue here: when it makes these associations, we infer that they must come from something in the physiology or the medical imaging. That’s not necessarily the case.”

In fact, the machine learning tool often accurately determined which knee, and hence which of the X-rayed patients, was more likely to drink beer or eat beans. It did so by also inferring variables such as race, gender, and the city where the medical image was taken. The algorithm could even identify the model of X-ray machine that produced the original image, and it associated the location of the scan with the likelihood of particular eating habits.

Ultimately, it was these confounding variables, not anything in the images associated with food or drink consumption, that the AI used to determine who drank beer and ate refried beans, a phenomenon researchers call “shortcut learning.”
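The mechanism described above can be illustrated with a toy simulation. The data, site names, and probabilities below are entirely hypothetical and are not from the Dartmouth study; the sketch only shows how a confound (which scanner took the image) can let a trivial "model" score well on a label the image content itself never reveals.

```python
import random
from collections import Counter

random.seed(0)

def make_dataset(n=1000):
    """Synthetic records: (scanner site, meaningless image feature, label)."""
    data = []
    for _ in range(n):
        scanner = random.choice(["site_A", "site_B"])
        # Confound: site_A's patient population drinks beer 80% of the
        # time, site_B's only 20% -- so the scanner ID leaks the label.
        drinks_beer = random.random() < (0.8 if scanner == "site_A" else 0.2)
        pixel_noise = random.random()  # the "image" carries no real signal
        data.append((scanner, pixel_noise, drinks_beer))
    return data

train, held_out = make_dataset(), make_dataset()

# A "model" that learns only the shortcut: the majority label per scanner.
majority = {}
for site in ("site_A", "site_B"):
    labels = [y for s, _, y in train if s == site]
    majority[site] = Counter(labels).most_common(1)[0][0]

# Despite the image feature being pure noise, accuracy lands near 80%,
# far above the 50% that the image content alone would support.
accuracy = sum(majority[s] == y for s, _, y in held_out) / len(held_out)
print(f"shortcut accuracy: {accuracy:.2f}")
```

The point of the sketch is that high held-out accuracy alone cannot distinguish a model that reads physiology from one that reads the scanner metadata baked into the pixels.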

“Part of what we’re showing is that it’s a double-edged sword. It can see things that humans can’t,” Hill said. “But it can also see patterns that humans can’t see, which can make it easier for the model to deceive us.”

The study’s authors said the paper highlights what medical researchers should be careful about when implementing machine learning tools.

“If we have an AI that detects whether a credit card transaction appears fraudulent, who cares why it thinks so? Just make sure the fraudulent charge can’t go through,” said Dr. Peter Schilling, an orthopedic surgeon and the paper’s lead author.

In medicine, however, Schilling advises clinicians to proceed conservatively with these tools when treating patients, in order to “really optimize the care provided.”
