racial bias


Study finds popular face ID systems may have racial bias

Daily Mail

Tech giants have made some major strides in advancing facial recognition technology. It's now popping up in smartphones, laptops and tablets, all with the goal of making our lives easier. But a new study, called 'Gender Shades,' has found that it may not be working for all users, especially those ...


Racial Bias in Facial Recognition Software - Algorithmia Blog

#artificialintelligence

We've all heard about racial bias in artificial intelligence via the media, whether it's found in recidivism software or object detection that mislabels African American people as gorillas. Due to the increase in media attention, people have grown more aware that implicit bias occurring in people...


A Law Enforcement A.I. Is No More or Less Biased Than People

#artificialintelligence

Some people champion artificial intelligence as a solution to the kinds of biases that humans fall prey to. Even simple statistical tools can outperform people at tasks in business, medicine, academia, and crime reduction. Others chide AI for systematizing bias, which it can do even when bias is not programmed in. In 2016, ProPublica released a much-cited report arguing that COMPAS, a common algorithm for predicting criminal risk, showed racial bias. Now a new research paper reveals that, at least in the case of the algorithm covered by ProPublica, neither side has much to get worked up about.
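
To see why the two sides could read the same data differently, here is a toy calculation showing that a risk score can be equally well calibrated for two groups yet still flag a larger share of non-re-offenders in the group with the higher base rate. The counts below are invented for illustration only, not actual COMPAS figures.

```python
# Invented counts for two hypothetical groups, A and B, illustrating how
# equal calibration and equal false positive rates can come apart when
# base rates of re-offending differ. Not real COMPAS data.

groups = {
    # (flagged high-risk & re-offended, flagged high-risk & did not,
    #  flagged low-risk & re-offended,  flagged low-risk & did not)
    "A": (300, 100, 100, 500),
    "B": (150,  50,  50, 750),
}

for name, (hi_yes, hi_no, lo_yes, lo_no) in groups.items():
    precision = hi_yes / (hi_yes + hi_no)   # how often a high-risk flag is right
    fpr = hi_no / (hi_no + lo_no)           # share of non-re-offenders flagged
    base_rate = (hi_yes + lo_yes) / (hi_yes + hi_no + lo_yes + lo_no)
    print(f"group {name}: precision={precision:.2f} "
          f"fpr={fpr:.2f} base_rate={base_rate:.2f}")
```

Run it and both groups show the same precision (0.75), so the flag "means" the same thing for each group, yet group A's false positive rate is nearly three times group B's because its base rate is higher. Each side of the debate is pointing at a different one of these numbers.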


Mathwashing: How Algorithms Can Hide Gender and Racial Biases - The New Stack

#artificialintelligence

Scholars have long pointed out that the way languages are structured and used can say a lot about the worldview of their speakers: what they believe, what they hold sacred, and what their biases are. We know humans have their biases; in contrast, many of us might have the impression that machines are somehow inherently objective. But does that assumption apply to a new generation of intelligent, algorithmically driven machines that are learning our languages and training on human-generated datasets? By virtue of being designed by humans, and by learning natural human languages, might these artificially intelligent machines also pick up on some of those same human biases? It seems that machines can and do indeed assimilate human prejudices, whether they are based on race, gender, age or aesthetics.


Researchers Combat Gender and Racial Bias in Artificial Intelligence

#artificialintelligence

When Timnit Gebru was a student at Stanford University's prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S. While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias--racial, gender, socio-economic. She was also horrified by a ProPublica report that found a computer program widely used to predict whether a criminal will re-offend discriminated against people of color. So earlier this year, Gebru, 34, joined a Microsoft Corp. team called FATE--for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.


Everything that's wrong with that study which used AI to 'identify sexual orientation'

Mashable

A study from Stanford University, first reported in the Economist, has stirred controversy by claiming AI can deduce whether people are gay or straight by analyzing images of a gay person and a straight person side by side. "Technology cannot identify someone's sexual orientation," said Jim Halloran, Chief Digital Officer at GLAAD, the world's largest LGBTQ media advocacy organization, which along with HRC called on Stanford University and the media to debunk the research. The study's authors countered that, if their results are correct, GLAAD and HRC representatives' knee-jerk dismissal of the scientific findings puts at risk the very people for whom their organizations strive to advocate. GLAAD's Halloran said the research "isn't science or news, but it's a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don't want to post photos on dating sites."


Google Expands Troll-Fighting Tool Amid Concerns of Racial Bias

#artificialintelligence

The tool, which is used by the likes of the New York Times and the Guardian, helps publishers moderate Internet comments at scale. The company, meanwhile, put out a statement to address the alleged bias issue in the AI tool, which is called Perspective: "Perspective is an early-stage machine learning technology that will naturally improve over time." Jigsaw on Thursday also unveiled a new blog called "The False Positive" that is dedicated to describing the challenges of developing machine learning tools.
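
As a rough sketch of how a publisher might plug into Perspective, the request below targets the Comment Analyzer endpoint Jigsaw has documented for the tool. The API key and the 0.8 flagging threshold are placeholders, and the exact request and response fields should be verified against Jigsaw's current documentation.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; issued via the Google Cloud console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment: str) -> float:
    """Ask Perspective to score a comment's toxicity, from 0.0 to 1.0."""
    payload = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("You are a wonderful person.")
    # A moderation pipeline might hold comments above a threshold for human review.
    print(f"toxicity: {score:.2f}", "-> flag" if score > 0.8 else "-> allow")
```

The bias concern in the article is precisely about scores like this one: if the training data rates certain dialects or identity terms as more "toxic," the threshold silently moderates some communities more aggressively than others.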


Seeing the World Through Google's Eyes – The Ringer

#artificialintelligence

One woman discovered that the search "unprofessional hairstyles for work" yielded images of black women while "professional hairstyles for work" brought up images of white women. In 2015, users discovered that searching for "n*gga house" in Google Maps directed users to the White House. That same year, a tool that automatically categorizes images in the Google Photos app tagged a black user and his friend as gorillas, a particularly egregious error considering that comparison is often used by white supremacists as a deliberately racist insult. Camera companies like Kodak sold film that photographed white skin better than black skin, and companies like Nikon have shown a similar bias toward Caucasian features in their facial-recognition technology.


AI Learns Gender and Racial Biases from Language

IEEE Spectrum Robotics Channel

Artificial intelligence does not automatically rise above human biases regarding gender and race. On the contrary, machine learning algorithms that represent the cutting edge of AI in many online services and apps may readily mimic the biases encoded in their training datasets. A new study has shown how an AI that learns from existing English-language texts will exhibit the same human biases found in those texts. The results have huge implications given the popularity of machine learning among Silicon Valley tech giants and many companies worldwide. Psychologists previously showed how unconscious biases can emerge during word-association experiments known as implicit association tests.
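
To make the embedding-association idea concrete, here is a toy sketch in the spirit of those tests: it measures whether a word's vector sits closer to one attribute set than another using cosine similarity. The tiny hand-made vectors are fabricated purely for illustration; the study itself used embeddings trained on large English-language corpora.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus attribute set B.
    A positive value means the word leans toward A."""
    return (np.mean([cosine(word, a) for a in attrs_a])
            - np.mean([cosine(word, b) for b in attrs_b]))

# Fabricated 3-d vectors purely for illustration; real tests use
# pretrained embeddings (e.g., word2vec or GloVe) with hundreds of dims.
vecs = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([1.0, 0.0, 0.1]),
    "she":      np.array([0.0, 1.0, 0.1]),
}

for occupation in ("engineer", "nurse"):
    score = association(vecs[occupation], [vecs["he"]], [vecs["she"]])
    lean = "male-associated" if score > 0 else "female-associated"
    print(f"{occupation}: {score:+.2f} ({lean})")
```

In embeddings trained on real text, differentials like this one reproduce the stereotypical pairings the study reports, which is exactly how human bias in a corpus resurfaces as bias in the model.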