May 18, 2022

Addressing AI and Implicit Bias in Healthcare


Artificial intelligence (AI) is gaining traction in the healthcare market, with use growing approximately 167 percent between 2019 and 2021. However, implicit and contextual biases are causing incorrect diagnoses and care disparities, leading many healthcare organizations to look for solutions. To address the problem, healthcare facilities need their providers to examine their own implicit biases, ensure AI is trained on inclusive data sets, and work with experienced data scientists to create unbiased algorithms.

Consequences of Bias in Healthcare

When healthcare is biased, patients aren’t always getting the care they need. Doctors may automatically attribute symptoms to an issue related to weight, race, or gender when, in reality, there are real underlying health problems that need to be addressed. That doesn’t mean healthcare providers are deliberately treating patients differently; in most cases, the bias isn’t a conscious choice.

Janice Huckaby, MD, chief medical officer of Maternal Health at Optum, explains, “The scary thing about implicit bias is that oftentimes people are unaware that it’s shaping some of their reactions.” The point is that medical providers have to work at removing bias from their treatments.


Additionally, patients can typically tell if a provider has an implicit bias based on the provider’s body language or word choice. If a patient picks up on the implicit bias, it’s going to negatively impact their relationship with the healthcare provider. Once that happens, they’ll either search out a new provider or disengage from treatment altogether, keeping them from getting the care they need.

Real Examples of AI Bias in Healthcare

The presence of bias in healthcare isn’t a matter of opinion. There are documented examples of patients receiving different care depending on their gender, race, or even weight.

Gender Disparity in Chest X-Rays

Artificial intelligence is gaining traction in healthcare circles, especially for analyzing medical images like x-rays or MRI scans. However, these systems tend to take on the implicit biases of their trainers, reinforcing and perpetuating those biases. A 2020 PNAS study found that gender imbalances in the training data sets of computer-aided diagnosis (CAD) systems led to lower accuracy for the underrepresented group.

In other words, when the CAD system was trained predominantly on men’s x-rays, its diagnostic accuracy for women was dramatically lower. To improve accuracy across the board, AI algorithms have to be trained on large datasets of diverse and balanced information.
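One simple way to make that kind of gap visible in practice is to report a model’s accuracy separately for each demographic group rather than as a single aggregate number. Below is a minimal Python sketch of the idea; the predictions, labels, and group values are made-up placeholders, not output from a real CAD system.

```python
# Minimal sketch: report accuracy per demographic group so an
# imbalance like the one in the PNAS study becomes visible.
import numpy as np

def accuracy_by_group(predictions, labels, groups):
    """Return {group: accuracy} for every distinct value in `groups`."""
    predictions, labels, groups = map(np.asarray, (predictions, labels, groups))
    return {
        str(g): float(np.mean(predictions[groups == g] == labels[groups == g]))
        for g in np.unique(groups)
    }

# Example with made-up values:
preds = [1, 0, 1, 1, 0, 1]
truth = [1, 0, 0, 1, 1, 1]
sexes = ["male", "male", "female", "male", "female", "female"]
print(accuracy_by_group(preds, truth, sexes))
# A result like {'female': 0.33, 'male': 1.0} would flag the disparity.
```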

Skin Color Disparity in Diagnosing Skin Cancer

In a similar vein, a JAMA Dermatology study identified disparities in how skin cancer is diagnosed across people of different skin colors. The models dermatologists use to identify skin cancer or potentially cancerous spots are mostly trained on images of light-skinned subjects, meaning they’re less likely to accurately identify skin cancer in dark-skinned patients.

And while dark-skinned people are typically at lower risk for skin cancer, the American Academy of Dermatology Association notes that “when skin cancer develops in people of color, it is often diagnosed at a more advanced stage – making it more difficult to treat.” Because the AI models aren’t trained with diversity in mind, they take longer to flag skin cancer in patients with darker skin, where the contrast is harder for them to detect, and they may miss it completely or diagnose it incorrectly.

Steps for Eliminating Bias in Healthcare

Bias and assumptions are part of human nature, but we can’t afford to keep them in our healthcare system. Here are some steps you can implement in your facility to reduce and eliminate bias from your treatments.

Engage in Diversity and Cultural Competency Training

Without training, healthcare providers may not even realize when they’re allowing implicit bias to affect their diagnoses. Diversity and cultural competency training can help healthcare providers examine and identify their own implicit biases. 

“A lot of implicit bias training could just start with the awareness that we all have some kind of implicit bias for many, many different reasons from how we grew up [to] what we’re supposed to do,” says Stacy Millett, the director for the Health Impact Project at the Pew Charitable Trusts. Healthcare providers need to be aware of how their implicit biases affect their patients, so they can make the effort to squash those biases.

Use Larger Datasets with More Diversity

According to Scientific American, “Since the early days of clinical trials, women and minority groups have been underrepresented as study participants; evidence mounted that these groups experienced fewer benefits and more side effects from approved medications.” 


Even the “typical” symptoms of common health problems are largely based on what men experience. For example, chest pain is the hallmark indicator of a heart attack, but women are more likely than men to experience dizziness, shortness of breath, or nausea. Because of this, healthcare professionals may be slower to diagnose a heart attack in a woman, or they might miss it completely.

Additionally, artificial intelligence can only handle the tasks it’s been trained to do. If its datasets only include men or white people, it’s going to have a very hard time accurately diagnosing women or people of color, especially when their symptoms don’t manifest in the same way.
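One practical way to catch that problem before training is to audit the demographic make-up of the dataset itself and rebalance it where possible. The sketch below is a rough illustration, assuming patient records stored as dictionaries with a hypothetical `sex` field; duplicating underrepresented examples is only a crude stopgap compared with collecting more representative data.

```python
# Minimal sketch: audit the demographic make-up of a training set and
# crudely oversample underrepresented groups. Field names are
# hypothetical; gathering more representative data is always better.
import random
from collections import Counter

def audit_composition(records, field):
    """Count how many training examples fall into each group."""
    return Counter(r[field] for r in records)

def oversample_minorities(records, field, seed=0):
    """Duplicate examples from smaller groups until all groups match the largest."""
    rng = random.Random(seed)
    counts = audit_composition(records, field)
    target = max(counts.values())
    balanced = list(records)
    for group, count in counts.items():
        pool = [r for r in records if r[field] == group]
        balanced.extend(rng.choices(pool, k=target - count))
    return balanced

# Example with made-up records:
records = [{"sex": "male"}] * 80 + [{"sex": "female"}] * 20
print(audit_composition(records, "sex"))  # shows the 80/20 skew
print(audit_composition(oversample_minorities(records, "sex"), "sex"))
# Both groups now appear 80 times in the rebalanced set.
```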

Don’t Use AI Without Human Oversight

While artificial intelligence is a great tool for healthcare providers, it’s not perfect and may return inaccurate results. Healthcare facilities that use AI in their diagnoses have to incorporate human oversight to ensure they’re providing the best possible care for their patients. AI can’t replace doctors, but it can give them more insight into their patients’ health, helping them improve patient outcomes.
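A common pattern for building in that oversight is a human-in-the-loop gate: AI findings below a confidence threshold are queued for clinician review instead of being reported automatically. The sketch below illustrates the idea; the 0.90 cutoff and the field names are illustrative assumptions, not clinical standards or any vendor’s API.

```python
# Minimal human-in-the-loop sketch: AI findings below a confidence
# threshold go to a clinician review queue instead of being reported
# automatically. The 0.90 cutoff is illustrative, not a clinical rule.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90

@dataclass
class Finding:
    patient_id: str
    label: str
    confidence: float

def triage(findings, threshold=REVIEW_THRESHOLD):
    """Split AI findings into high-confidence and clinician-review queues."""
    high_confidence, needs_review = [], []
    for f in findings:
        (high_confidence if f.confidence >= threshold else needs_review).append(f)
    return high_confidence, needs_review

# Example usage:
high, review = triage([
    Finding("p-001", "no acute finding", 0.97),
    Finding("p-002", "possible nodule", 0.62),
])
print(len(high), len(review))  # 1 1 -- p-002 is routed to a clinician
# Even "high confidence" results should still be signed off by a clinician.
```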

Read more: New York City Council Passes Bill Requiring Oversight of AI Hiring Platforms

AI Bias Often Reflects Provider Bias

While largely unintentional, AI bias in healthcare often reflects the bias of the healthcare provider because the AI model is learning based on the diagnoses the provider gives. Therefore, if bias plays a role in a healthcare professional’s decision, it will later play a role in the output an AI algorithm provides. Doctors and nurses have to address their own implicit biases before they can expect AI models to be free of them.

Read next: Lessons About AI Algorithms from the Facebook Hearings
