

Google Expands Troll-Fighting Tool Amid Concerns of Racial Bias

#artificialintelligence

That tool, which is used by the likes of the New York Times and the Guardian, helps publishers moderate Internet comments at scale. The company, meanwhile, put out a statement addressing the alleged bias in its AI tool, which is called Perspective: "Perspective is an early-stage machine learning technology that will naturally improve over time." Jigsaw on Thursday also unveiled a new blog, "The False Positive," dedicated to describing the challenges of developing machine learning tools.


Seeing the World Through Google's Eyes – The Ringer

#artificialintelligence

Another woman discovered that the search "unprofessional hairstyles for work" yielded images of black women while "professional hairstyles for work" brought up images of white women. In 2015, users discovered that searching for "n*gga house" in Google Maps directed users to the White House. That same year, a tool that automatically categorizes images in the Google Photos app tagged a black user and his friend as gorillas, a particularly egregious error considering that comparison is often used by white supremacists as a deliberately racist insult. Camera companies like Kodak sold film that photographed white skin better than black skin, and companies like Nikon have also shown racial bias toward Caucasian features in their facial-recognition technology.


AI Learns Gender and Racial Biases from Language

IEEE Spectrum Robotics Channel

Artificial intelligence does not automatically rise above human biases regarding gender and race. On the contrary, machine learning algorithms that represent the cutting edge of AI in many online services and apps may readily mimic the biases encoded in their training datasets. A new study has shown how AI learning from existing English language texts will exhibit the same human biases found in those texts. The results have huge implications given machine learning AI's popularity among Silicon Valley tech giants and many companies worldwide. Psychologists previously showed how unconscious biases can emerge during word association experiments known as implicit association tests.
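The implicit-association idea can be carried over to word embeddings by comparing cosine similarities between word vectors. The sketch below is illustrative only: the three-dimensional vectors are invented toy values, not embeddings from any real model, and the `association` helper is a hypothetical simplification of the kind of test the study describes.

```python
# Toy sketch: measuring word-association bias with cosine similarity,
# in the spirit of an implicit-association test applied to embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented 3-d "embeddings", chosen so "career" sits closer to "he".
vectors = {
    "he":     [1.0, 0.1, 0.0],
    "she":    [0.1, 1.0, 0.0],
    "career": [0.9, 0.2, 0.1],
    "family": [0.2, 0.9, 0.1],
}

def association(word, group_a, group_b):
    """Mean similarity to group_a minus mean similarity to group_b."""
    sim_a = sum(cosine(vectors[word], vectors[g]) for g in group_a) / len(group_a)
    sim_b = sum(cosine(vectors[word], vectors[g]) for g in group_b) / len(group_b)
    return sim_a - sim_b

bias = association("career", ["he"], ["she"])
print(f"'career' leans male by {bias:.3f}")  # positive => closer to "he"
```

With real embeddings trained on large English corpora, the study found that such association scores reproduce the same patterns psychologists measure in human implicit association tests.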


The Foundations of Algorithmic Bias

#artificialintelligence

Machine learning refers to a powerful set of techniques for building algorithms that improve as a function of experience. Given years of credit history and other side information, a machine learning algorithm might output the probability that an applicant will default. Courts even deploy computerized algorithms to predict "risk of recidivism," the probability that an individual will relapse into criminal behavior. When Facebook recognizes your face in a photograph, when your mailbox filters spam, and when your bank predicts default risk, these are all examples of supervised machine learning in action.
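The default-risk example can be sketched as a minimal supervised learner. This is a toy logistic regression trained by gradient descent on made-up data (the feature names and numbers are invented for illustration, not drawn from any real credit model):

```python
# Toy supervised learning: predict default probability from two features,
# (years of credit history, number of past late payments). Data is invented.
import math

X = [(1.0, 3.0), (2.0, 2.0), (8.0, 0.0), (10.0, 1.0), (0.5, 4.0), (7.0, 0.0)]
y = [1, 1, 0, 0, 1, 0]  # 1 = applicant defaulted

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):  # plain stochastic gradient descent
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - label          # gradient of log loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Probability of default for a new applicant: short history, 3 late payments.
p_default = sigmoid(w[0] * 1.5 + w[1] * 3.0 + b)
print(f"predicted default probability: {p_default:.2f}")
```

The bias concern follows directly from this setup: the model learns whatever patterns, fair or not, are present in the historical data it is trained on.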


Machine Learning Needs Bias Training to Overcome Stereotypes

#artificialintelligence

There are many potential reasons why machine learning systems can learn discriminatory biases. It might seem possible to avoid biased machine learning algorithms by making sure you don't feed in data that could lead to such problems in the first place. Accomplishing this goal requires raising awareness of social biases in machine learning and the serious negative consequences they can have. It also requires companies to explicitly test machine learning models for discriminatory biases and publish their results.
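One simple form such a bias test can take is a demographic parity check: compare the model's positive-prediction rates across groups. The predictions and group labels below are invented for illustration:

```python
# Toy bias audit: demographic parity difference on model outputs.
# preds: 1 = model approves the applicant; groups: protected-attribute labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def approval_rate(group):
    hits = [p for p, g in zip(preds, groups) if g == group]
    return sum(hits) / len(hits)

gap = approval_rate("a") - approval_rate("b")
print(f"demographic parity gap: {gap:.2f}")  # 0 would mean equal approval rates
```

A real audit would use many more metrics (equalized odds, calibration by group, and so on), but publishing even a simple gap like this one is the kind of transparency the article calls for.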


'I think my blackness is interfering': does facial recognition show racial bias?

#artificialintelligence

Working with law fellow Clare Garvie, Frankle has requested public information from more than 100 police departments across the country. HP's MediaSmart webcam included facial recognition software so that the camera could move to follow the position of the user. One group found that software made in east Asia is better at identifying east Asian faces, while software made in North America is better at identifying white faces. Phillips hopes tech teams think about how they are training their artificial intelligence: "When people develop technology they think of the immediate problems they're solving, not who the user community is."