On preserving non-discrimination when combining expert advice

Avrim Blum, Suriya Gunasekar, Thodoris Lykouris, Nathan Srebro

arXiv.org Machine Learning 

The emergence of machine learning over the last decade has given rise to an important debate about the ethical and societal responsibility of the technologies it has spawned. Machine learning provides a universal toolbox that enhances decision making in many disciplines, from advertising and recommender systems to education and criminal justice. Unfortunately, both the data and their processing can be biased against specific population groups, even inadvertently, at every single step of the process [BS16]. This has generated societal and policy interest in understanding the sources of this discrimination, and interdisciplinary research has attempted to mitigate its shortcomings.

Discrimination is a common concern in applications where decisions must be made sequentially. The most prominent such application is online advertising, where platforms sequentially select which ad to display in response to particular search queries. This process can introduce discrimination against protected groups in many ways, such as filtering out particular alternatives [DTD15, APJ16] and reinforcing existing stereotypes through search results [Swe13, KMM15]. Another canonical example of sequential decision making is medical trials, where under-exploration on female groups often leads to significantly worse treatments for them [LDM16]. Similar issues occur in image classification, as stressed by the "gender shades" study [BG18].
