AI predicts patients' race from medical imaging
A new investigation in medical imaging sparks a discussion on AI bias in medicine.
AI can accurately predict patients' race from medical images alone.
MIT scientists have shown that artificial intelligence can accurately predict patients' self-reported race from an unlikely source: medical images themselves.
Using imaging data from chest X-rays, limb X-rays, chest CT scans, and mammograms, the MIT team of 22 authors trained an AI model to classify patients' race as white, Black, or Asian, even though the images contain no explicit mention of race. The research was published in The Lancet Digital Health on May 11.
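To make that setup concrete, here is a minimal sketch of this kind of training pipeline in PyTorch. It is not the authors' published code: the dataset layout, the label set, and the choice of a ResNet-18 backbone are all illustrative assumptions.

    # Minimal sketch: fine-tune an ImageNet-pretrained ResNet-18 to
    # classify self-reported race from chest X-rays.
    # Hypothetical dataset layout: xrays/train/<label>/*.png
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("xrays/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 3)  # white / Black / Asian

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # one pass; real training runs many epochs
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()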
However, it's not yet clear how the AI model was able to do this.
In an attempt to understand the result, the team investigated possible mechanisms of race detection, ruling out obvious candidates one by one; yet even when the images were deliberately degraded, the AI still accurately detected patients' race from chest X-rays.
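One of the simplest probes of this kind, sketched below under the same assumptions as the previous snippet (hypothetical file paths, and the `model` variable carried over), is to re-evaluate the trained classifier on heavily blurred images and check whether accuracy survives the loss of fine detail:

    # Sketch of a degradation probe: heavy Gaussian blur removes fine
    # detail; if race-prediction accuracy persists, the signal is not
    # carried by small-scale image features alone.
    import torch
    from torchvision import datasets, transforms

    def accuracy_under(transform):
        test_set = datasets.ImageFolder("xrays/test", transform=transform)
        loader = torch.utils.data.DataLoader(test_set, batch_size=32)
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in loader:
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        return correct / total

    blurred = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),
        transforms.Resize((224, 224)),
        transforms.GaussianBlur(kernel_size=15, sigma=8.0),  # heavy blur
        transforms.ToTensor(),
    ])
    print("accuracy on heavily blurred X-rays:", accuracy_under(blurred))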
“These results were initially confusing, because the members of our research team could not come anywhere close to identifying a good proxy for this task,” said paper co-author Marzyeh Ghassemi, an assistant professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science (IMES) and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and of the MIT Jameel Clinic.
Leo Anthony Celi, another co-author of the paper, said that social scientists must also be brought into the picture in order to better understand the implications of using AI in medicine and the ramifications of racially and socially biased models in the field.
“The fact that algorithms 'see' race, as the authors convincingly document, can be dangerous. But an important and related fact is that, when used carefully, algorithms can also work to counter bias,” said Ziad Obermeyer, associate professor at the University of California at Berkeley, whose research focuses on AI applied to health.
The fears stem from the possibility of algorithms being miseducated by the unconscious biases of the humans who build and train them. One well-known example: risk-assessment software that wrongly flagged Black defendants as future reoffenders at nearly twice the rate of white defendants.
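That finding boils down to a simple audit: compare false positive rates (the share of people who did not reoffend but were flagged as high risk) across groups. A toy version of the calculation, with made-up numbers:

    # Toy fairness audit with hypothetical data: a model is biased in
    # this sense if its false positive rate differs sharply by group.
    import numpy as np

    def false_positive_rate(flagged, reoffended):
        flagged = np.asarray(flagged, dtype=bool)
        reoffended = np.asarray(reoffended, dtype=bool)
        negatives = ~reoffended  # people who did not reoffend
        return (flagged & negatives).sum() / negatives.sum()

    # 1 = flagged as high risk / did reoffend (made-up audit data)
    group_a = {"flagged": [1, 1, 0, 1, 0, 1], "reoffended": [0, 1, 0, 0, 0, 1]}
    group_b = {"flagged": [0, 1, 0, 0, 1, 0], "reoffended": [0, 1, 0, 0, 1, 0]}

    for name, g in (("group A", group_a), ("group B", group_b)):
        print(name, "false positive rate:",
              false_positive_rate(g["flagged"], g["reoffended"]))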