A Snapshot of the Frontiers of Fairness in Machine Learning
The last decade has seen a vast increase both in the diversity of applications to which machine learning is applied and in the import of those applications. Machine learning is no longer just the engine behind ad placements and spam filters; it is now used to filter loan applicants, deploy police officers, and inform bail and parole decisions, among other things. The result has been major concern about the potential for data-driven methods to introduce and perpetuate discriminatory practices, and to otherwise be unfair. And this concern has not been without reason: a steady stream of empirical findings has shown that data-driven methods can unintentionally both encode existing human biases and introduce new ones.7,9,11,60 At the same time, the last two years have seen an unprecedented explosion in interest from the academic community in studying fairness and machine learning. "Fairness and transparency" transformed from a niche topic with a trickle of papers produced every year (at least since the work of Pedreschi et al.56) to a major subfield of machine learning, complete with a dedicated archival conference--ACM FAT*. But despite the volume and velocity of published work, our understanding of the fundamental questions related to fairness and machine learning remains in its infancy.
Apr-22-2020, 00:40:12 GMT