May 15, 2023 – Everywhere you look, machine learning and artificial intelligence applications are being used to change the status quo. This is especially true in healthcare, where technological advances are accelerating drug discovery and identifying potential new cures.
But these advances are not without caveats. They have also drawn attention to avoidable disparities in the burden of disease, injury, violence, and opportunities to achieve optimal health, all of which disproportionately affect People of Color and other underserved communities.
The question is whether AI applications will further widen healthcare disparities or help reduce them. This is especially relevant to the development of clinical algorithms that help doctors detect and diagnose disease, predict outcomes, and develop treatment strategies.
“One of the problems that has emerged with AI in general, and in medicine in particular, is that these algorithms can be biased, meaning they perform differently in different groups of people,” said Paul Yi, MD, assistant professor of diagnostic radiology and nuclear medicine at the University of Maryland School of Medicine and director of the University of Maryland Medical Intelligent Imaging (UM2ii) Center.
“In medicine, a wrong diagnosis is literally a matter of life or death, depending on the situation,” Yi said.
Yi is co-author of a study published last month in the journal Nature Medicine. In the project, he and his colleagues sought to determine whether the medical image datasets used in data science competitions help or hinder the ability to detect bias in AI models. These competitions involve computer scientists and physicians crowdsourcing data from around the globe, with teams competing to build the best clinical algorithms, many of which are put into practice.
The researchers searched Kaggle, a popular data science competition site, for medical imaging competitions held between 2010 and 2022. They then evaluated the datasets to find out whether demographic variables were reported. Finally, they examined whether the competitions included demographic-based performance as part of the evaluation criteria for the algorithms.
Yi said that of the 23 datasets included in the study, “the majority – 61% – did not include any demographic data at all.” Nine competitions included demographic data (primarily age and sex), and only one reported race and ethnicity.
“Whether or not demographic data was provided, none of these data science competitions evaluated for biases – that is, the accuracy of predictions in men versus women, or in white, Black, and Asian patients,” Yi said. The conclusion? “If we don't have demographic data, we can't measure bias,” he explained.
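In practice, the kind of check Yi describes amounts to computing a model's performance separately for each demographic group and comparing the results. Here is a minimal sketch of that idea in Python; the toy data, column names, and the choice of AUC as the metric are illustrative assumptions, not details from the study:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation data: model scores, true labels, and a
# self-reported demographic attribute for each patient.
results = pd.DataFrame({
    "y_true":  [1, 0, 1, 1, 0, 0, 1, 0],
    "y_score": [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1],
    "sex":     ["F", "F", "F", "F", "M", "M", "M", "M"],
})

# Compute AUC separately for each demographic group; a large gap
# between groups is one simple signal of a biased model.
for group, subset in results.groupby("sex"):
    auc = roc_auc_score(subset["y_true"], subset["y_score"])
    print(f"AUC for sex={group}: {auc:.2f}")
```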
Algorithmic hygiene, checks and balances
“To reduce bias in AI, developers, inventors and researchers of AI-based medical technologies must consciously prepare to avoid it by proactively improving the representation of certain populations in their datasets,” said Bertalan Meskó, MD, PhD, director of the Medical Futurist Institute in Budapest, Hungary.
One approach, which Meskó called “algorithmic hygiene,” is similar to the one taken by a group of researchers at Emory University in Atlanta when they created the EMory BrEast Imaging Dataset (EMBED), a racially diverse, granular dataset of 3.4 million mammography images for breast cancer screening and diagnosis. Forty-two percent of the 11,910 patients in the dataset self-reported as African-American.
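Auditing a dataset's demographic makeup in this way is straightforward once the attributes are actually recorded. A minimal sketch, using a toy metadata table (the column name and values are placeholders, not the actual EMBED schema):

```python
import pandas as pd

# Toy metadata table with one row per patient; the column name and
# values are hypothetical, for illustration only.
patients = pd.DataFrame({
    "patient_id": range(10),
    "self_reported_race": ["African-American"] * 4 + ["White"] * 4 + ["Asian"] * 2,
})

# Percentage share of each self-reported group in the dataset.
representation = (
    patients["self_reported_race"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(representation)
```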
“The fact that our database is diverse is, in a sense, a direct byproduct of our patient population,” said Hari Trivedi, MD, assistant professor in the departments of radiology and imaging sciences and biomedical informatics at Emory University School of Medicine and co-director of the Health Innovation and Translational Informatics (HITI) Lab.
“Even now, this demographic information is not included in most datasets used to develop deep learning models,” Trivedi said. “But it was really important to make this information available in EMBED and any future datasets we develop, because without it, it's impossible to know how and when your model might be biased, or whether the model you're testing might be biased.”
“You just can’t close your eyes to it,” he said.
It is important to note that bias can creep in not only at the outset but at any point in the AI development cycle.
“Developers could use statistical tests to determine whether the data used to train the algorithm differs significantly from the actual data they face in real-world situations,” Meskó said. “This could indicate biases due to the training data.”
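One simple version of such a test is a two-sample Kolmogorov–Smirnov test, which compares a feature's distribution in the training set with its distribution in the data the deployed model actually sees. The sketch below uses SciPy; the synthetic age data is an assumption made purely for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical patient ages: the training data skews younger than the
# population the deployed model encounters in practice.
train_ages = rng.normal(loc=48, scale=10, size=1_000)
deploy_ages = rng.normal(loc=60, scale=12, size=1_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# training distribution differs from the real-world distribution.
stat, p_value = ks_2samp(train_ages, deploy_ages)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
```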
Another approach is “de-biasing,” which helps eliminate differences between groups or individuals based on individual characteristics. Meskó pointed to IBM's open source AI Fairness 360 toolkit, a comprehensive set of metrics and algorithms that researchers and developers can use to reduce bias in their own datasets and AI models.
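To give a flavor of the toolkit, here is a minimal sketch using its Reweighing pre-processing algorithm, which assigns instance weights so that favorable outcomes are balanced across groups. The toy data and the choice of protected attribute are assumptions for the example:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: "label" is the favorable outcome; "sex" is the protected
# attribute (1 = privileged group, 0 = unprivileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Gap in favorable-outcome rates between groups before de-biasing.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference before:", before.mean_difference())

# Reweighing computes instance weights that balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference after:", after.mean_difference())
```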
Checks and balances are equally important. These could include, for example, “reviewing the decisions of the algorithms by humans and vice versa. In this way, they can hold each other accountable and help reduce bias,” Meskó said.
Keeping humans in the loop
Speaking of checks and balances: Should patients worry that a machine will replace a physician’s judgment, or potentially make dangerous decisions because important data is missing?
Trivedi noted that guidelines for AI research are currently being developed, focusing specifically on the rules to follow when testing and evaluating models, particularly open source ones. In addition, the FDA and the Department of Health and Human Services are seeking to regulate algorithm development and validation with the aim of improving accuracy, transparency, and fairness.
Like medicine itself, AI is not a one-size-fits-all solution. Perhaps checks and balances, consistent evaluation, and concerted efforts to build diverse, comprehensive datasets can help address and ultimately overcome widespread health inequities.
“At the same time, I think we're a long way from completely eliminating the human factor and no longer involving clinicians in the process,” said Kelly Michelson, MD, MPH, director of the Center for Bioethics and Medical Humanities at Northwestern University Feinberg School of Medicine and chief medical officer at Ann & Robert H. Lurie Children's Hospital of Chicago.
“There are actually great opportunities to use AI to reduce inequalities,” she said, also noting that AI is not just “this one big thing.”
“AI means many different things in many different places,” Michelson said. “And the ways it's used are different. It's important to recognize that the issues around bias and the impact on health disparities will differ depending on the kind of AI you're talking about.”