A health care algorithm makes black patients significantly less likely than their white counterparts to receive important medical care. The major flaw affects millions of patients and was revealed in a study published this week in the journal Science.
The study does not name the creator of the algorithm, but Ziad Obermeyer, an associate professor at the University of California, Berkeley, who worked on the study, says "almost every major health system" uses it, as do institutions such as insurers. Similar algorithms are produced by several different companies. "It's a systematic feature of how almost everyone in the space approaches this problem," he says.
To make its predictions, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, this can act as a proxy for how sick the patient is. But by studying the patient data set, the authors of the Science study show that, because of unequal access to health care, much less is spent treating black patients than similarly ill white patients. The algorithm does not account for this discrepancy, leading to strikingly large racial biases in the care black patients receive.
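The mechanism is easy to reproduce on synthetic data. The sketch below is a toy simulation, not the study's actual model or data; every number and name in it is an illustrative assumption. It draws two groups with identical underlying need, suppresses spending for one group, and then flags the top 5 percent of patients by cost, the way a cost-trained risk score effectively would:

```python
import random

random.seed(0)

def simulate_patient(group):
    # Underlying health need (e.g., chronic-condition burden) is drawn
    # identically for both groups: the groups are equally sick on average.
    need = random.gauss(5.0, 1.5)
    # Illustrative assumption: access barriers shrink spending on black
    # patients relative to equally sick white patients.
    access = 1.0 if group == "white" else 0.7
    cost = max(0.0, random.gauss(need * access, 0.5)) * 1000.0  # dollars
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("white", "black") for _ in range(5000)]

# Flag the top 5 percent of patients by cost, as a cost-trained score would.
threshold = sorted((p["cost"] for p in patients), reverse=True)[len(patients) // 20]

def flag_rate(group):
    # Among clearly sick patients (need above 7), how many get flagged?
    sick = [p for p in patients if p["group"] == group and p["need"] > 7.0]
    return sum(p["cost"] > threshold for p in sick) / len(sick)

white_rate = flag_rate("white")
black_rate = flag_rate("black")
flagged = [p for p in patients if p["cost"] > threshold]
black_share = sum(p["group"] == "black" for p in flagged) / len(flagged)

print(f"flag rate among clearly sick white patients: {white_rate:.1%}")
print(f"flag rate among clearly sick black patients: {black_rate:.1%}")
print(f"black share of all flagged patients: {black_share:.1%}")
```

Because the score is built on spending rather than sickness, equally sick black patients are flagged far less often, mirroring the discrepancy the study describes.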
The effect was drastic. Currently, 17.7 percent of the patients the algorithm flags for additional care are black, the researchers found. If the disparity were eliminated, that number would rise to 46.5 percent.
"Cost is a reasonable proxy for health, but it is a biased one, and that choice is actually what introduces bias into the algorithm," says Obermeyer. Historical racial inequities are reflected in how much society spends on black and white patients. Patients may need to take time off work to receive treatment, for example. Because black patients disproportionately live in poverty, it may on average be harder for them to take the day off and absorb the reduced wages. "There are a million ways that poverty makes it harder to access health care," says Obermeyer. Other differences, such as biases in how doctors treat patients, can also contribute to the gap.
This is a classic example of algorithmic bias in action. Researchers often point out that a biased data source produces biased results in automated systems. The good news, says Obermeyer, is that there are ways to mitigate the problem in this system.
"The bias is fixable, not with new data, not with a new, fancier kind of neural network, but actually just by changing the thing the algorithm is asked to predict," he says. The researchers found that by focusing on only a subset of costs, such as trips to the emergency room, they were able to reduce the bias. An algorithm that directly predicts health outcomes, rather than costs, also improved the system.
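On the same kind of synthetic data, the effect of changing the prediction target can be sketched as follows. This is again a purely illustrative toy, not the study's code: it flags the top 5 percent of patients first by cost (the biased label) and then by the health need itself (a direct outcome), and compares what share of the flagged group is black:

```python
import random

random.seed(0)

def make_patient(group):
    # Underlying health need is distributed identically across groups.
    need = random.gauss(5.0, 1.5)
    # Illustrative assumption: structural barriers suppress spending on
    # black patients relative to equally sick white patients.
    access = 1.0 if group == "white" else 0.7
    cost = max(0.0, random.gauss(need * access, 0.5)) * 1000.0
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient(g) for g in ("white", "black") for _ in range(5000)]

def black_share_of_top(score_key, top_frac=0.05):
    # Flag the top `top_frac` of patients by the given score and report
    # what share of the flagged group is black.
    ranked = sorted(patients, key=lambda p: p[score_key], reverse=True)
    flagged = ranked[: int(len(ranked) * top_frac)]
    return sum(p["group"] == "black" for p in flagged) / len(flagged)

# Ranking on cost (the biased label) under-selects black patients;
# ranking on the health outcome itself restores near-parity.
print(f"black share, cost-based flagging:    {black_share_of_top('cost'):.1%}")
print(f"black share, outcome-based flagging: {black_share_of_top('need'):.1%}")
```

Nothing about the model or data changes here except the label being ranked on, which is the researchers' point: the fix lies in the choice of prediction target.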
"With careful attention to how we train our algorithms," says Obermeyer, "we can reap many of their benefits while minimizing the risk of bias."