AI Biases as Asymmetries: A Review to Guide Practice

Waters, Gabriella, Honenberger, Phillip

arXiv.org Artificial Intelligence 

Gabriella Waters (CEAMLS, Morgan State University)*
Phillip Honenberger (CEAMLS, Morgan State University)*
*Equal contribution

[Preprint - Nov. 21, 2024]

Abstract

The understanding of bias in AI is currently undergoing a revolution. Initially understood as errors or flaws, biases are increasingly recognized as integral to AI systems and sometimes preferable to less biased alternatives. In this paper we review the reasons for this changed understanding and provide new guidance on two questions. First, how should we think about and measure biases in AI systems, consistent with the new understanding? Second, what kinds of bias in an AI system should we accept or even amplify, what kinds should we minimize or eliminate, and why? The key to answering both questions, we argue, is to understand biases as "violations of a symmetry standard" (following Kelly). We distinguish three main types of asymmetry in AI systems - error biases, inequality biases, and process biases - and highlight places in the pipeline of AI development and application where bias of each type is likely to be good, bad, or inevitable.

Introduction

The understanding of bias in AI is currently undergoing a revolution. Initially perceived as errors or flaws, biases are increasingly recognized as integral to AI systems and sometimes preferable to less biased alternatives. Cognitive psychology and statistics have informed this shift by highlighting both the benefits and the costs of biases in decision-making. Cognitive psychology presents biases as often helpful for making decisions under conditions of uncertainty. Similarly, statistical methods treat biases as often useful, and sometimes necessary, for making inferences from data. These insights have helped redefine biases not as inherently negative, but as sometimes essential components that can, and should, be harnessed to improve AI systems.
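The statistical point at the end of this introduction can be made concrete with a short simulation. The Python sketch below is illustrative only and not taken from the paper; the true mean, noise level, sample size, and shrinkage factor are our own assumptions. It compares an unbiased estimator of a population mean with a deliberately biased shrinkage estimator and shows that the biased estimator can achieve lower mean squared error.

# Illustrative sketch (not from the paper): a deliberately biased estimator
# can outperform an unbiased one, which is one reason statisticians treat
# bias as sometimes useful rather than always a flaw.
import numpy as np

rng = np.random.default_rng(0)

true_mean = 0.5      # assumed small true effect
sigma = 2.0          # assumed noise level
n = 5                # assumed small sample size
trials = 100_000     # Monte Carlo repetitions
shrink = 0.5         # shrinkage factor of the biased estimator

# Draw many small samples and form both estimators for each sample.
samples = rng.normal(true_mean, sigma, size=(trials, n))
unbiased = samples.mean(axis=1)   # sample mean: unbiased, higher variance
biased = shrink * unbiased        # shrunk toward zero: biased, lower variance

mse_unbiased = np.mean((unbiased - true_mean) ** 2)  # ~ sigma^2 / n = 0.8
mse_biased = np.mean((biased - true_mean) ** 2)      # ~ ((1-shrink)*mu)^2 + shrink^2 * sigma^2 / n ~ 0.26

print(f"MSE of unbiased sample mean:      {mse_unbiased:.3f}")
print(f"MSE of biased shrinkage estimator: {mse_biased:.3f}")

Under these assumptions the shrinkage estimator trades a small squared bias for a larger reduction in variance, which is the sense in which a biased procedure can be preferable to a less biased one.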