Examining Imbalance Effects on Performance and Demographic Fairness of Clinical Language Models
Precious Jones, Weisi Liu, I-Chan Huang, Xiaolei Huang
arXiv.org Artificial Intelligence
Data imbalance is a fundamental challenge in applying language models to biomedical applications, particularly in ICD code prediction tasks, where label and demographic distributions are uneven. While state-of-the-art language models have been increasingly adopted for biomedical tasks, few studies have systematically examined how data imbalance affects model performance and fairness across demographic groups. This study fills that gap by statistically probing the relationship between data imbalance and model performance in ICD code prediction. Using state-of-the-art biomedical language models, we analyze imbalances in a standard benchmark dataset across gender, age, ethnicity, and social determinants of health. Through diverse performance metrics and statistical analyses, we examine how data imbalance influences performance variation and demographic fairness. Our study shows that data imbalance significantly impacts model performance and fairness, but that feature similarity to the majority class may be a more critical factor. We believe this study provides valuable insights for developing more equitable and robust language models for healthcare applications.
Dec-23-2024