Organisations that are considering the use of artificial intelligence (AI) systems have been warned they run the risk of biased outcomes if the data used is not carefully considered.
It comes after researchers from the School of Biomedical Engineering & Imaging Sciences at King’s College London found that AI models can be racially biased if they are trained on unbalanced databases, meaning that where such models are used, misdiagnoses would be more likely for under-represented or unrepresented races.
The study found that in the majority of cardiovascular diseases (CVDs) there are known associations between sex/race and epidemiology, pathophysiology, clinical manifestations, effects of therapy, and outcomes. Although these differences do not have proven causative links with race and gender, their presence remains a potential concern for the performance of AI models in cardiovascular imaging.
The paper looked at the performance of AI models based on cardiac MR imaging that are used to derive biomarkers of the heart.
It was shown that if those biomarkers are used for the diagnosis of heart failure, for instance, there would be more misdiagnoses in minority races than there would be for majority races.
The researchers found statistically significant differences in segmentation performance scores between races as well as in absolute/relative errors in volumetric and functional biomarkers, showing that the AI model was biased against minority racial groups, even after correction for possible confounders.
Lead researcher Dr Andrew King, reader in medical image analysis at the School of Biomedical Engineering & Imaging Sciences, said researchers need to consider the training data when deploying these models into clinical practice, to ensure that racial groups are adequately represented.
“If we deploy models into the real world that have not been trained on ‘inclusive’ data, then we are effectively creating more healthcare inequalities in the system,” he said. “But with advancements in AI we now have the opportunity to address these inequalities.”
“The AI models have a lot of potential, but they are at a preliminary stage where they need to take into account the differences between racial groups, or even genders, and there is a need to improve how the models are trained before they can be used in the clinic. If we are not careful how we train these models, there are potential dangers that mean we won’t realise the benefits in an equitable way,” added Dr Esther Puyol, research associate, School of Biomedical Engineering & Imaging Sciences.
In earlier work, the researchers identified three methods that can use the same data but develop a model that is fairer and performs more equally across different racial groups.
These methods take into account that the databases used for training are unbalanced: for instance, the white group accounts for 80 percent of the data and the other racial groups for the remaining 20 percent. The first method modifies the training sampling strategy to remove the discrimination. Effectively, the method fools the AI model into thinking that the database is balanced when in reality it is not.
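One common way to implement this kind of balanced sampling is to weight each training subject inversely to its group's frequency, so that a weighted sampler draws every group equally often. The sketch below is illustrative only, assuming the 80/20 imbalance mentioned above; the group labels and sample sizes are invented, not taken from the study.

```python
import random
from collections import Counter

def balanced_sample_weights(groups):
    """Give each subject a weight inversely proportional to its group's
    frequency, so a weighted sampler draws every group equally often."""
    counts = Counter(groups)
    n_groups, total = len(counts), len(groups)
    # weight w_g = total / (n_groups * count_g): each group's weights
    # sum to the same value, so groups are sampled with equal probability
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical 80/20 imbalance: 8 subjects from the majority group,
# 2 from a minority group (proportions from the article, data invented)
groups = ["majority"] * 8 + ["minority"] * 2
weights = balanced_sample_weights(groups)

random.seed(0)
# Sampling with these weights effectively shows the model a balanced set
resampled = random.choices(groups, weights=weights, k=10_000)
```

In a deep-learning framework the same idea is usually expressed through the data loader (for example, a weighted random sampler) rather than by resampling the list by hand.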
The second method combines the segmentation task with a classification task that aims to predict the race of the subject from the images. By learning these two tasks together, the model learns to segment the heart in a less biased way.
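A minimal way to picture this multi-task setup is a shared encoder whose features feed two heads, one for segmentation and one for race classification, trained on a weighted sum of the two losses. The sketch below is a toy forward pass only, with all dimensions and the loss weight `lam` invented for illustration; it is not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 128 input features, 64 shared features,
# 4 segmentation classes, 3 race groups (all sizes illustrative)
n_in, n_feat, n_seg, n_race = 128, 64, 4, 3
W_shared = rng.standard_normal((n_in, n_feat)) * 0.01
W_seg = rng.standard_normal((n_feat, n_seg)) * 0.01
W_race = rng.standard_normal((n_feat, n_race)) * 0.01

def forward(x):
    """A shared representation feeds both heads, so the segmentation
    features are also shaped by the auxiliary race-prediction task."""
    h = np.maximum(x @ W_shared, 0.0)   # shared encoder (ReLU)
    return h @ W_seg, h @ W_race        # segmentation and race logits

def multitask_loss(seg_loss, race_loss, lam=0.1):
    # lam balances the auxiliary task (the value here is an assumption)
    return seg_loss + lam * race_loss

x = rng.standard_normal((5, n_in))      # batch of 5 image feature vectors
seg_logits, race_logits = forward(x)
```

Because both heads share the encoder, gradients from the race-classification loss influence the features that the segmentation head uses, which is the mechanism the researchers exploit to reduce bias.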
The final strategy trains a separate model for each race group. The main disadvantage of this strategy is that it requires knowledge of the subject's race to apply the model, and this is not available in all clinical settings.
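This per-group strategy amounts to a simple routing step at inference time, which also makes its drawback concrete: without a race label, no model can be selected. The sketch below uses toy stand-in models and invented group labels purely for illustration.

```python
class GroupSpecificModels:
    """Route each subject to the model trained on its race group."""

    def __init__(self, models):
        self.models = models  # mapping: race label -> trained model

    def predict(self, image, race):
        if race not in self.models:
            # The main drawback in practice: the subject's race must be
            # known at inference time, and it is not recorded everywhere
            raise KeyError(f"no model trained for group {race!r}")
        return self.models[race](image)

# Toy stand-ins (assumed; real ones would be segmentation networks)
router = GroupSpecificModels({
    "majority": lambda img: "segmentation from model A",
    "minority": lambda img: "segmentation from model B",
})
```

A call such as `router.predict(scan, "majority")` returns the output of the matching model, while an unknown group raises an error rather than silently falling back to a model trained on different data.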
King said: “This is an important time for the future of AI. Techniques are starting to be used in the real world including in high-stakes applications like medicine. If we don’t make sure that AI techniques are fair then it may erode public trust in their use. Future research should bear this in mind and ensure that all sectors of society benefit equally from AI.”