Amazon faces challenge from facial recognition researcher over biased AI

USATODAY - Tech Top Stories

Facial recognition technology was already seeping into everyday life -- from your photos on Facebook to police scans of mugshots -- when Joy Buolamwini noticed a serious glitch: some of the software couldn't detect dark-skinned faces like hers, and at one point she had to resort to a white mask so the software could detect her face at all. That revelation spurred the Massachusetts Institute of Technology researcher to launch a project that is having an outsize influence on the debate over how artificial intelligence should be deployed in the real world. Her research has uncovered racial and gender bias in facial analysis tools sold by brand-name tech firms such as Amazon, which have a hard time recognizing certain faces, especially those of darker-skinned women: her tests found much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men.


Facial-recognition technology works best if you're a white guy, study says

#artificialintelligence

Facial-recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph. When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise -- up to nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and genders. These disparate results, calculated by Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.
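To make the arithmetic behind figures like "99 percent right" versus "nearly 35 percent errors" concrete, here is a minimal sketch of how per-group error rates for a gender classifier can be tallied. This is an illustration only, not the study's actual methodology; the group labels and sample predictions below are invented for demonstration.

```python
# Tally classification error rates separately for each demographic group.
# Sample data is made up purely to show the calculation.
from collections import defaultdict

# Each record: (demographic group, true gender label, classifier's prediction)
predictions = [
    ("lighter-skinned male",  "male",   "male"),
    ("lighter-skinned male",  "male",   "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),    # misclassification
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in predictions:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} images")
```

Comparing these per-group rates, rather than a single overall accuracy number, is what exposes the kind of disparity the study reported.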


Making face recognition less biased doesn't make it less scary

MIT Technology Review

In the past few years, there's been a dramatic rise in the adoption of face recognition, detection, and analysis technology. You're probably most familiar with recognition systems, like Facebook's photo-tagging recommender and Apple's FaceID, which can identify specific individuals. Detection systems, on the other hand, determine whether a face is present at all; and analysis systems try to identify aspects like gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance. Many people believe that such systems are both highly accurate and impartial.
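As a rough illustration of where "detection" ends and "recognition" or "analysis" begin, the sketch below runs only the detection step, using OpenCV's bundled Haar cascade. The input filename is an assumption; real products layer identity matching (recognition) or attribute estimation (analysis) on top of this first step.

```python
# Minimal face *detection* example: answers only whether faces are present,
# not who they are or what attributes they have.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("group_photo.jpg")            # assumed input file
if image is None:
    raise FileNotFoundError("group_photo.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # Haar cascades run on grayscale
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Detected {len(faces)} face(s)")
# A recognition system (e.g. photo tagging, FaceID) would go further and match
# each detected face against known identities; an analysis system would instead
# try to infer attributes such as gender -- the step where the error-rate
# disparities discussed above were measured.
```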


Artificial intelligence has a racial bias problem. Google is funding summer camps to try to change that

USATODAY - Tech Top Stories

OAKLAND -- Through connections made at summer camp, high school students Aarvu Gupta and Lili Sun used artificial intelligence to create a drone program that aims to detect wildfires before they spread too far. Rebekah Agwunobi, a rising high school senior, learned enough to nab an internship at the Massachusetts Institute of Technology's Media Lab, working on using artificial intelligence to evaluate the court system, including collecting data on how judges set bail. Both projects stemmed from the Oakland, Calif.-based nonprofit AI4All, which will expand its outreach to young under-represented minorities and women with a $1 million grant announced Friday by Google.org, the technology giant's philanthropic arm. On a sunny Monday afternoon in Oakland, AI4All alum Ananya Karthik gathered a few dozen girls to show them how to use the Deep Dream Generator program to fuse images together and create a unique piece of art. Artificial intelligence is becoming increasingly commonplace in daily life, found in everything from Facebook's face detection feature for photos to Apple's iPhone X facial recognition.


'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

The Guardian

Lack of diversity in the artificial intelligence field has reached "a moment of reckoning", according to new findings published by a New York University research center. A "diversity disaster" has contributed to flawed systems that perpetuate gender and racial biases, according to the AI Now Institute's survey of more than 150 studies and reports. The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.