Machine Learning and Discrimination
Most of the time, machine learning does not touch on particularly sensitive social, moral, or ethical issues. Someone gives us a data set and asks us to predict house prices from given attributes, classify pictures into different categories, or teach a computer the best way to play PAC-MAN. But what do we do when we are asked to base predictions on attributes that are protected under anti-discrimination laws? How do we ensure that we do not embed racist, sexist, or other biases into our algorithms, be it explicitly or implicitly? It may not surprise you that there have been several important lawsuits in the United States on this topic, perhaps the most notable involving Northpointe's controversial COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software, which predicts the risk that a defendant will commit another crime. The proprietary algorithm considers some of the answers from a 137-item questionnaire to predict this risk.
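To make the bias question concrete, here is a minimal sketch, in Python, of one common audit: comparing positive-prediction rates across a protected group, the quantity behind the U.S. "disparate impact" (80%) standard. This is an illustrative toy, not Northpointe's method; the data, function name, and threshold interpretation are all assumptions for the example.

```python
import numpy as np

def positive_rate(y_pred, group, value):
    """Fraction of predicted positives within one group (hypothetical helper)."""
    mask = group == value
    return y_pred[mask].mean()

# Toy data: 1 = "high risk" prediction, and a binary protected attribute.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

rate_a = positive_rate(y_pred, group, 0)
rate_b = positive_rate(y_pred, group, 1)

# The "80% rule" flags a ratio below 0.8 between the lower and higher
# group rates as potential disparate impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"positive rates: {rate_a:.2f} vs {rate_b:.2f}, ratio: {ratio:.2f}")
```

Note that passing this one check does not make a model fair; it is only one of several (mutually incompatible) fairness criteria a model can be audited against.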