How to Make Artificial Intelligence Less Biased
How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible. But as AI has become more pervasive--as companies and government agencies use it to decide who gets loans, who needs more health care, how to deploy police officers, and more--investigators have discovered that focusing only on making the final predictions as error-free as possible can mean that those errors aren't always distributed equally. Instead, the predictions can often reflect and exaggerate the effects of past discrimination and prejudice. In other words, the more an AI system focuses on getting only the big picture right, the more prone it is to being less accurate for certain segments of the population--in particular women and minorities.
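The point about aggregate accuracy hiding unequal errors can be made concrete with a small sketch. The numbers below are entirely made up for illustration (they do not come from the article): a model that looks strong when scored on everyone at once can still perform far worse on a smaller group.

```python
# Hypothetical counts, invented for illustration only: a model can score well
# overall while its errors concentrate in a smaller group.

def accuracy(correct: int, total: int) -> float:
    """Fraction of predictions that were correct."""
    return correct / total

# correct predictions / people evaluated, per group (made-up numbers)
majority_correct, majority_total = 900, 950
minority_correct, minority_total = 30, 50

overall = accuracy(majority_correct + minority_correct,
                   majority_total + minority_total)

print(f"overall accuracy:  {overall:.1%}")                                     # 93.0%
print(f"majority accuracy: {accuracy(majority_correct, majority_total):.1%}")  # 94.7%
print(f"minority accuracy: {accuracy(minority_correct, minority_total):.1%}")  # 60.0%
```

A system optimized only to push the 93% figure higher has little incentive to close the 35-point gap between the two groups, which is why audits typically break error rates out by subgroup rather than reporting a single number.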
Dec-4-2020, 08:05:05 GMT