Healthcare’s silent battle: how do AI algorithms affect minorities?

Unless you have been living under a rock, you will be aware that artificial intelligence is growing at a tremendous rate! On the surface, it seems an exciting leap for humanity. I mean, who could criticise improved productivity for the masses, or, more specifically for students, ChatGPT? However, alongside such innovative new algorithms for speeding up mundane daily tasks, there are always flaws being hushed up by the media.

For instance, artificial intelligence is expected to transform the healthcare industry in the foreseeable future. This seems like fantastic news for the likes of the NHS, as it should hopefully reduce waiting lists and increase the number of patients seen each day. However, this all sounds too good to be true, and after some worrying research I have come to the conclusion that artificial intelligence in the healthcare industry will do more harm than good.

Even without the use of artificial intelligence, the healthcare industry already discriminates against women and racial minority groups. It is important to note that these algorithms work by picking out patterns in data and repeating them in similar situations in the future. Therefore, if trained on already biased data, these algorithms will only reinforce those biases, as the discriminatory patterns become self-perpetuating.
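To make this concrete, here is a minimal sketch of how a pattern-learning system can faithfully reproduce a bias baked into its training data. Everything in it is hypothetical: the groups, severity scores, and thresholds are invented for illustration, and the "learner" is deliberately naive, not any real clinical model.

```python
import random

random.seed(0)

# Hypothetical historical records: (group, severity score 0-100, treated?)
# Assumption for illustration: past decisions treated group "A" patients
# above severity 50, but group "B" patients only above severity 70 --
# i.e. group B had to be deemed sicker to receive the same treatment.
def make_records(n=1000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        severity = random.uniform(0, 100)
        biased_threshold = 50 if group == "A" else 70
        records.append((group, severity, severity > biased_threshold))
    return records

# A naive learner: for each group, infer the treatment threshold as the
# lowest severity at which anyone in that group was historically treated.
def learn_thresholds(records):
    thresholds = {}
    for group in ("A", "B"):
        treated_severities = [s for g, s, t in records if g == group and t]
        thresholds[group] = min(treated_severities)
    return thresholds

learned = learn_thresholds(make_records())
# The learned model reproduces the historical disparity: group "B"
# must still be roughly 20 points "sicker" to be recommended treatment.
print(learned)
```

The point of the sketch is that nothing in the learning step is malicious; the model simply extracts the pattern it is given, bias included.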

Here at the University of Edinburgh, researchers have developed a new algorithm, CoDE-ACS, which uses AI technology to rule out the possibility of a heart attack in patients with an accuracy of 99.6%. The algorithm takes into consideration a patient's age, sex, medical history and the level of the protein troponin in the blood. This could significantly reduce the number of hospital admissions, as it will help doctors struggling to make informed diagnoses. CoDE-ACS will hopefully make an appearance in hospitals around Scotland in 5-10 years' time, once all trials have been completed.

As mentioned earlier, racial biases are only strengthened through algorithmic decision making. Typically, black patients have had to be deemed much sicker than white patients in order to be recommended the same treatments. This means that artificial intelligence, such as the new algorithm developed here at the university, risks drawing on these pre-existing biases when making medical judgements about patients. This is terrifying for those facing the discrimination.

It gets worse: researchers at Emory University found that during race-agnostic tasks (medical scans requiring no background information about a patient's ethnicity or medical history), AI systems could detect a patient's race with alarming accuracy. What is even more terrifying, says Judy Gichoya, who leads this research at the university, is that there is no clear explanation for how these systems are able to detect ethnicity from a medical scan of the bones or lungs alone. This suggests that racial biases are deeply ingrained in machine learning, and they will be no easy fix to eliminate…

This all seems extremely worrying for those who would be subject to the biases of artificial intelligence, as it could determine whether or not they receive life-saving treatment. I, for one, did not expect artificial intelligence to impact healthcare in this way, and believe it needs to be spoken about more in the industry.

“Intubation & Heart Monitor Training” by Greenville, NC is marked with Public Domain Mark 1.0.