A.I.'s Hidden Biases Are Continuing to Bedevil Businesses. Can They Be Stopped?


Bias will continue to be a fundamental concern for businesses hoping to adopt artificial intelligence software, according to senior executives at IBM and Salesforce, two of the leading companies selling such A.I.-enabled tools. Companies have become increasingly wary that hidden biases in the data used to train A.I. systems may produce outcomes that unfairly, and in some cases illegally, discriminate against protected groups such as women and minorities. For instance, some facial recognition systems have proved less accurate at identifying dark-skinned faces than lighter-skinned ones, because the data used to train them contained far fewer examples of dark-skinned people. In one of the most notorious examples, a system used by some state judicial systems to help decide whether to grant bail or parole was more likely to rate black prisoners as having a higher risk of re-offending than white prisoners with similar criminal records. "Bias is going to be one of the fundamental issues of A.I. in the future," said Richard Socher, the chief scientist at the software company Salesforce.
