Thinking Outside the Box: Orthogonal Approach to Equalizing Protected Attributes

Jiahui Liu, Xiaohao Cai, Mahesan Niranjan

arXiv.org Artificial Intelligence 

Machine/deep learning (ML) has garnered significant attention in the medical field, offering state-of-the-art solutions for enhancing disease diagnosis, managing treatment, and broadening healthcare accessibility. As AI systems gain traction in medical imaging diagnosis, there is growing awareness of the imperative need for fairness guarantees in these systems' predictions and for the investigation of latent biases that may emerge in intricate real-world scenarios [1, 7]. Unfortunately, AI models often inadvertently encode sensitive attributes (such as race and gender) when processing medical images, which in turn shapes their discriminatory behaviour [6, 13, 2]. This issue becomes particularly noticeable when models are trained on data sourced from external repositories but evaluated on data from internal ones: even when the diagnostic task remains consistent across datasets, differences in the distribution of protected attributes can lead to suboptimal model performance on the internal datasets [3].
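
The abstract does not spell out the method behind the title's "orthogonal approach", so the following is only an illustrative sketch of one standard decorrelation idea in that spirit: residualising feature columns against a centred protected-attribute vector, so that the resulting features are linearly uncorrelated with the attribute. All data and variable names here are hypothetical and should not be read as the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n samples with d-dimensional image embeddings and a
# binary protected attribute (e.g., sex); none of this reflects the paper's data.
n, d = 200, 16
a = rng.integers(0, 2, size=n).astype(float)
# Deliberately leak the attribute into the features to mimic inadvertent encoding.
X = rng.normal(size=(n, d)) + 0.8 * np.outer(a, rng.normal(size=d))

# Centre the attribute so that orthogonality implies zero linear correlation.
a_c = a - a.mean()

# Residualise each feature column against the attribute: subtract its
# projection onto a_c, leaving every column orthogonal to the attribute.
X_orth = X - np.outer(a_c, (a_c @ X) / (a_c @ a_c))

def max_abs_corr(F, t):
    # Largest absolute Pearson correlation between any feature column and t.
    return max(abs(np.corrcoef(F[:, j], t)[0, 1]) for j in range(F.shape[1]))

print("max |corr| before:", max_abs_corr(X, a_c))       # noticeably > 0
print("max |corr| after: ", max_abs_corr(X_orth, a_c))  # ~0, up to float precision
```

In practice such a projection would typically be applied to learned embeddings rather than raw inputs, and it removes only linear dependence on the attribute; nonlinear leakage can persist.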