Avoiding bias and increasing diversity in AI and health research - Part 1 - Bristows
This article is part 1 of our bias in AI series, an update to the original article in our Biotech Review of the year – issue 8. Read part 2 here.

During the COVID-19 pandemic, the notion that different populations can experience different health outcomes gained a higher profile in the public consciousness, particularly in light of the varying effects of COVID-19 on different community groups. Varying outcomes can arise for a variety of reasons, one of which is bias (whether conscious or unconscious) in the healthcare system.

But surely this isn't something that needs to be considered in relation to AI in health research, as AI systems are inanimate and can't display human faults… right?

There is a common misconception that medical devices and AI systems cannot produce biased results because they operate through logic and process, rather than being tainted by flawed assumptions born of human error or prejudice. Ultimately, however, it is humans who design medical devices, and those devices are tested on datasets collected by humans.
May-19-2021, 10:05:27 GMT