Dancing in the Shadows: Harnessing Ambiguity for Fairer Classifiers
Ainhize Barrainkua, Paula Gordaliza, Jose A. Lozano, Novi Quadrianto
arXiv.org Artificial Intelligence
Algorithmic systems, designed to streamline decision processes and improve efficiency, have permeated virtually every aspect of our lives. From credit approvals to hiring decisions, from predictive policing to healthcare recommendations, algorithms wield significant influence. Yet this influence is not neutral, and its consequences can fall disproportionately on different communities. Subtle biases embedded in training data, choices made during model development, and the very nature of algorithmic decision-making can all lead to inequitable treatment of certain demographic groups, perpetuating and, in some instances, exacerbating societal disparities. Consider, for instance, predictive policing algorithms, where certain communities are subjected to heightened surveillance based on historical crime data, perpetuating a cycle of over-policing [9]. Similarly, in hiring, algorithms may inadvertently favor certain demographics, leading to underrepresentation and reinforcing existing inequalities in the workplace [6, 5]. It is therefore crucial to acknowledge the biases and disparities that have emerged within these systems and to propose innovative solutions that strengthen their fairness guarantees.
Jun-27-2024