10 Ways Machine Learning Practitioners Can Build Fairer Systems
My opinions are my own.

An introduction to the harm that ML systems cause and to the power imbalance between ML system developers and ML system participants, plus 10 concrete ways for machine learning practitioners to help build fairer ML systems.

Image description: Photo of Black Lives Matter protesters in Washington, D.C. Two signs read "Black Lives Matter" and "White Silence is Violence."

Machine learning systems are increasingly used as tools of oppression. All too often, they are deployed in high-stakes processes without participants' consent and with no reasonable opportunity for participants to contest the system's decisions: when risk assessment systems are used by child welfare services to identify at-risk children; when a machine learning (or "ML") model decides who sees which online ads for employment, housing, or credit opportunities; or when facial recognition systems are used to surveil neighborhoods where Black and Brown people live.

These systems are often presented as objective and neutral. In reality, though, machine learning systems reflect the beliefs and biases of those who design and develop them.
Oct-21-2020, 22:15:15 GMT