
When Algorithmic Fairness Fixes Fail: The Case for Keeping Humans in the Loop

Monday, November 2, 2020

Posted In: News

As healthcare systems increasingly rely on predictive algorithms to make decisions about patient care, they are bumping up against issues of fairness. 

For example, a hospital might use its electronic health records to predict which patients are at risk of cardiovascular disease, diabetes, or depression, and then offer high-risk patients special attention. But women, Black people, and other ethnic or racial minority groups might have a history of being misdiagnosed or left untreated for these conditions. That means a predictive model trained on historical data could reproduce that mistreatment or have a much higher error rate for these subgroups than it does for white male patients. And when the hospital uses such an algorithm to decide who receives special care, it can compound the harm.
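To make the problem concrete, here is a minimal sketch (not from the original post) of the kind of audit that surfaces it: comparing a model's error rates across demographic subgroups. The column names, threshold, and data layout are assumptions for illustration.

```python
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str = "outcome",
                         score_col: str = "risk_score",
                         threshold: float = 0.5) -> pd.DataFrame:
    """Compute false-negative and false-positive rates per subgroup."""
    df = df.assign(pred=(df[score_col] >= threshold).astype(int))
    rows = []
    for group, g in df.groupby(group_col):
        pos = g[g[label_col] == 1]  # patients who actually had the condition
        neg = g[g[label_col] == 0]  # patients who did not
        rows.append({
            group_col: group,
            "n": len(g),
            # Patients who needed attention but the model missed.
            "false_negative_rate": (pos["pred"] == 0).mean() if len(pos) else float("nan"),
            # Patients flagged for extra care who did not need it.
            "false_positive_rate": (neg["pred"] == 1).mean() if len(neg) else float("nan"),
        })
    return pd.DataFrame(rows)

# Example (hypothetical data): error_rates_by_group(records, group_col="race_ethnicity")
```

Large gaps in these per-group rates are exactly the disparities that fairness interventions try to close.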

Some researchers have been hoping to address model fairness issues algorithmically, for example by recalibrating the model separately for different groups or by developing ways to reduce systematic differences in the rate and distribution of errors across groups.
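As a rough illustration of one such fix, the sketch below recalibrates a model's risk scores separately for each group using Platt-style scaling on held-out data. This is a generic example of the technique, not the Stanford team's method; the variable names and data layout are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_groupwise_calibrators(scores, labels, groups):
    """Fit one logistic (Platt) calibrator per group on held-out risk scores."""
    calibrators = {}
    for g in np.unique(groups):
        mask = groups == g
        lr = LogisticRegression()
        # Learn a mapping from raw score to observed outcome within this group.
        lr.fit(scores[mask].reshape(-1, 1), labels[mask])
        calibrators[g] = lr
    return calibrators

def recalibrate(scores, groups, calibrators):
    """Map raw scores to calibrated probabilities using each group's calibrator."""
    out = np.empty_like(scores, dtype=float)
    for g, lr in calibrators.items():
        mask = groups == g
        out[mask] = lr.predict_proba(scores[mask].reshape(-1, 1))[:, 1]
    return out
```

Even when such recalibration equalizes calibration across groups, it cannot by itself undo biased labels in the training data, which is part of why purely algorithmic fixes may fall short.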

But Nigam Shah, associate professor of medicine (biomedical informatics) and of biomedical data science at Stanford University and an affiliated faculty member of the Center for Artificial Intelligence in Medicine and Imaging (AIMI) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), together with graduate students Stephen Pfohl and Agata Foryciarz, wondered whether algorithmic fixes were really the answer. Read the full blog post here »