How can we keep algorithmic racism out of Canadian health care's AI toolkit?
In health care, the promise of artificial intelligence is alluring: With the help of big data sets and algorithms, AI can aid difficult decisions, like triaging patients and determining diagnoses. And since AI leans on statistics rather than human interpretation, the idea is that it's neutral – it treats everyone in a given data set equally.

That idea has already been tested, and found wanting. In October 2019, a study published in the prestigious journal Science showed that a widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimated the health needs of the sickest Black patients. The algorithm, sold by a health services company called Optum, embodied "significant racial bias," the authors concluded, suggesting that tools used by health systems to manage the care of about 200 million Americans could incorporate similar biases.

The problem was fundamental: The commercial algorithm focused on costs, not illness. In identifying which patients would benefit from additional health care services, it underestimated the needs of Black patients because they had cost the system less. But Black patients' costs weren't lower because the patients were healthier; they were lower because they had unequal access to care.
Published March 19, 2021