10 Ways Machine Learning Practitioners Can Build Fairer Systems


My opinions are my own.

An introduction to the harm that ML systems cause, to the power imbalance between ML system developers and ML system participants, and to 10 concrete ways machine learning practitioners can help build fairer ML systems.

Image description: Photo of Black Lives Matter protesters in Washington, D.C. Two signs read "Black Lives Matter" and "White Silence is Violence."

Machine learning systems are increasingly used as tools of oppression. All too often, they are deployed in high-stakes processes without participants' consent and with no reasonable opportunity for participants to contest the system's decisions: when child welfare services use risk assessment systems to identify at-risk children; when a machine learning (or "ML") model decides who sees which online ads for employment, housing, or credit opportunities; or when facial recognition systems are used to surveil neighborhoods where Black and Brown people live. These systems are often presented as neutral and objective. In reality, though, machine learning systems reflect the beliefs and biases of those who design and develop them.
