In Elon Musk’s world, AI is the new doctor. X’s CEO is encouraging users to upload medical test results, such as CT and bone scans, to the platform so that Grok, X’s artificial intelligence chatbot, can learn to interpret them efficiently.
“Try sending your X-ray, PET, MRI, or other medical images to Grok for analysis,” Musk wrote on X last month. “It’s early days, but it’s already very accurate and will be very good. Please let us know where Grok is getting it right or where we need to improve.”
After all, Grok needs a job.
According to some users, the AI successfully analyzed blood test results and identified breast cancer. But doctors who responded to Musk’s post say other information was significantly misinterpreted. In one instance, Grok mistook a “textbook case” of tuberculosis for a herniated disc or spinal stenosis. In another, the bot mistook a mammogram of a benign breast cyst for an image of a testicle.
Musk has long been interested in the intersection of healthcare and AI, and launched brain chip startup Neuralink in 2022. Musk claimed in February that the company had successfully implanted electrodes that allow users to move a computer mouse with their thoughts. And Musk’s technology startup xAI, which helped launch Grok, announced in May that it had raised $6 billion in investment funding, giving Musk plenty of money to invest in medical technology. It remains unclear how Grok will be developed further to meet those needs.
“We know they have the technical capabilities,” Dr. Laura Heacock, associate professor of radiology at New York University Langone Health, wrote on X. “Whether they include medical images is up to them. For now, non-generative AI methods continue to perform well in medical imaging.”
X did not respond to Fortune’s request for comment.
Dr. Grok’s problem
Experts say Musk’s lofty goal of training AI to perform medical diagnoses comes with risks. AI is increasingly being used to make complex science more accessible and to build assistive technology, but training Grok on data gathered through a social media platform raises concerns about both data quality and privacy.
Ryan Tarsey, CEO of health technology company Avandra Imaging, said in an interview with Fast Company that asking users to input data directly, rather than sourcing it from secure databases of anonymized patient records, is Musk’s way of trying to speed up Grok’s development. It also means the information comes from the limited sample of people willing to upload their images and test results, so the AI is not drawing on sources representative of broader, more diverse healthcare practice.
Medical information shared on social media is not bound by the Health Insurance Portability and Accountability Act (HIPAA), a federal law that protects patients’ personal information from being shared without their consent. This means that once you choose to share your information, you have less control over where that information goes.
“This approach comes with a myriad of risks, including inadvertently sharing a patient’s identity,” Tarsey said. “Personal health information is ‘baked into’ many images, such as CT scans, and this plan will inevitably make some of it public.”
The full extent of the privacy risks posed by Grok is hard to assess, because X may have privacy protections that are not publicly known, said Matthew McCoy, assistant professor of medical ethics and policy at the University of Pennsylvania. Users who share their medical information, he said, do so at their own risk.
“As an individual user, would I feel comfortable contributing my health data?” he told The New York Times. “Absolutely not.”