

Seeing the World Through Google's Eyes – The Ringer

#artificialintelligence

Another woman discovered that the search "unprofessional hairstyles for work" yielded images of black women while "professional hairstyles for work" brought up images of white women. In 2015, users discovered that searching for "n*gga house" in Google Maps directed users to the White House. That same year, a tool that automatically categorizes images in the Google Photos app tagged a black user and his friend as gorillas, a particularly egregious error considering that comparison is often used by white supremacists as a deliberately racist insult. Camera companies like Kodak sold film that photographed white skin better than black skin, and companies like Nikon have also shown racial bias toward Caucasian features in their facial-recognition technology.


AI Learns Gender and Racial Biases from Language

IEEE Spectrum Robotics Channel

In the new study, computer scientists replicated many of those biases while training an off-the-shelf machine learning AI on a "Common Crawl" body of text--2.2 million different words--collected from the Internet. To reveal the biases that can arise in natural language learning, Narayanan and his colleagues at Princeton University and the University of Bath in the U.K. created new statistical tests based on the Implicit Association Test (IAT) used by psychologists to reveal human biases. Their work, detailed in the 14 April 2017 issue of the journal Science, is the first to show such human biases in "word embeddings"--a statistical modeling technique commonly used in machine learning and natural language processing. The researchers first developed a Word-Embedding Association Test (WEAT) to replicate the examples of race and gender bias found in past psychology studies.
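At its core, the WEAT compares cosine similarities between word vectors: it measures how much more strongly one set of target words (e.g. names) associates with one attribute set (e.g. pleasant words) than with another. A minimal sketch of the effect-size computation is below, using random vectors as stand-ins for real word embeddings (the study itself used embeddings trained on the Common Crawl corpus); the set names in the comments are illustrative, not the paper's exact word lists.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus to set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Effect size (Cohen's d): difference of mean associations of the two
    # target sets, normalized by the standard deviation over both sets.
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy illustration: random 50-dimensional vectors in place of embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50))  # target set 1 (e.g. one group of names)
Y = rng.normal(size=(8, 50))  # target set 2 (e.g. another group of names)
A = rng.normal(size=(8, 50))  # attribute set 1 (e.g. pleasant words)
B = rng.normal(size=(8, 50))  # attribute set 2 (e.g. unpleasant words)
d = weat_effect_size(X, Y, A, B)
```

With random vectors the effect size hovers near zero; with real embeddings, the study found large effect sizes mirroring the human IAT results.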


The Foundations of Algorithmic Bias

#artificialintelligence

Machine learning refers to a powerful set of techniques for building algorithms that improve as a function of experience. When Facebook recognizes your face in a photograph, when your mailbox filters spam, and when your bank predicts default risk, these are all examples of supervised machine learning in action. Given years of credit history and other side information, a machine learning algorithm might output a probability that an applicant will default. Courts even deploy computerized algorithms to predict "risk of recidivism", the probability that an individual relapses into criminal behavior.
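The default-risk example above can be sketched as a small supervised model: a classifier is fit on labeled historical records and then outputs a probability for a new applicant. This is a toy illustration only, with made-up features and labels, not any lender's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is an applicant's credit history,
# [years_of_history, late_payments, credit_utilization]; label 1 = defaulted.
X_train = np.array([
    [12, 0, 0.20],
    [ 3, 5, 0.95],
    [ 8, 1, 0.40],
    [ 1, 4, 0.90],
    [15, 0, 0.10],
    [ 2, 6, 0.85],
])
y_train = np.array([0, 1, 0, 1, 0, 1])

# Fit a logistic regression: a standard supervised learning baseline.
model = LogisticRegression().fit(X_train, y_train)

# The trained model outputs a default probability for a new applicant.
applicant = np.array([[5, 2, 0.60]])
p_default = model.predict_proba(applicant)[0, 1]
```

The key point for bias discussions: the model's output is only as good as the historical data it was trained on, so patterns of past discrimination in that data carry over into its predictions.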


Machine Learning Needs Bias Training to Overcome Stereotypes

#artificialintelligence

There are many potential reasons why machine learning systems can learn discriminatory biases. It might seem possible to avoid biased machine learning algorithms by making sure you don't feed in data that could lead to such problems in the first place, but in practice avoiding bias requires more than that. It requires raising awareness about social biases in machine learning and the serious, negative consequences they can have. It also requires companies to explicitly test machine learning models for discriminatory biases and publish their results.
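One simple way such testing can be done is to compare a model's decisions across groups. The sketch below computes the demographic parity gap, the difference in positive-prediction rates between groups, on hypothetical model outputs; the data and metric choice are illustrative assumptions, not a prescribed audit procedure.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap between the highest and lowest positive-prediction rates
    # across the groups defined by the protected attribute.
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved) and a protected attribute
# (0/1 encoding two demographic groups).
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)  # 0.75 - 0.25 = 0.5
```

A gap near zero means the model approves both groups at similar rates; a large gap is a signal worth investigating, though no single metric captures all forms of discriminatory bias.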

