Facial recognition technology was already seeping into everyday life -- from your photos on Facebook to police scans of mugshots -- when Joy Buolamwini noticed a serious glitch: some of the software couldn't detect dark-skinned faces like hers. At one point she had to hold a white mask in front of her face just so the software would register her at all. That revelation prompted the Massachusetts Institute of Technology researcher to launch a project that is having an outsize influence on the debate over how artificial intelligence should be deployed in the real world. Her tests of facial analysis tools sold by brand-name tech firms such as Amazon uncovered racial and gender bias: much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men.
Jimmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal. Gomez was one of 28 members of Congress falsely matched with mugshots of people who had been arrested, in a test of Amazon's Rekognition program that the American Civil Liberties Union ran last year. Nearly 40 percent of the tool's false matches -- a tool already in use by police -- involved people of color. This is part of a CNET special report exploring the benefits and pitfalls of facial recognition.
Facial-recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph. When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise -- up to nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology performs on people of different races and genders. These disparate results, calculated by Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, show how biases in the real world can seep into artificial intelligence, the computer systems that power facial recognition.
A few years ago, Amazon deployed a new automated hiring tool to review the resumes of job applicants. Shortly after launch, the company realized that resumes for technical posts that included the word "women's" (such as "women's chess club captain"), or that referenced women's colleges, were being downgraded. The cause lay in the data used to train Amazon's system. Trained on 10 years of predominantly male resumes submitted to the company, the "new" automated system in fact perpetuated "old" patterns, giving preferential scores to the applicants it was most "familiar" with. Defined by AI4ALL as the branch of computer science that allows computers to make predictions and decisions to solve problems, artificial intelligence (AI) has already made an impact on the world, from advances in medicine to language translation apps.
Artificial intelligence (AI) algorithms are programs that learn patterns from the training data they are given. But when that training data is flawed, unrepresentative, or biased, the algorithm reproduces the same discrimination. For women and minorities, these systemic AI issues can quickly become harmful. And bias in AI doesn't arise only from problems in training data: dig deeper, and it becomes apparent that bias often comes from how an AI developer frames a scenario or problem in the first place.
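The mechanism behind cases like the resume tool can be sketched in a few lines of code. The example below is a deliberately crude, hypothetical word-scoring model (not Amazon's actual system, whose details are not public): it rates resume tokens by how often they appear in "hired" versus "rejected" examples. Because the invented historical data is skewed male, a token like "women's" never appears among hired resumes, so the model learns to penalize it -- the skew in the data, not job performance, drives the score.

```python
from collections import Counter

# Hypothetical historical data for illustration only: mostly-male
# "hired" resumes, mirroring the skew described in the article.
hired_resumes = [
    "chess club captain software engineer",
    "software engineer java python",
    "captain debate team software developer",
]
rejected_resumes = [
    "women's chess club captain software engineer",
    "graduate of women's college software engineer",
]

def token_scores(hired, rejected):
    """Score each token by how much more often it appears in hired
    resumes than in rejected ones (a crude naive-Bayes-like signal)."""
    hired_counts = Counter(t for r in hired for t in r.split())
    rejected_counts = Counter(t for r in rejected for t in r.split())
    vocab = set(hired_counts) | set(rejected_counts)
    # Add-one smoothing so tokens missing from one class don't zero out.
    return {t: (hired_counts[t] + 1) / (rejected_counts[t] + 1)
            for t in vocab}

def score_resume(resume, scores):
    """Average the per-token scores; unseen tokens are neutral (1.0)."""
    tokens = resume.split()
    return sum(scores.get(t, 1.0) for t in tokens) / len(tokens)

scores = token_scores(hired_resumes, rejected_resumes)

# "women's" appears only in rejected resumes, so the model has learned
# to downgrade any resume containing it.
print(score_resume("chess club captain", scores))
print(score_resume("women's chess club captain", scores))
```

Running this, the resume containing "women's" receives a strictly lower score than the otherwise identical one without it -- a miniature version of the feedback loop the article describes, where a model trained on biased history encodes that history as a rule.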