Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset that is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
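The audit described above boils down to computing a misclassification rate separately for each phenotypic subgroup rather than one aggregate accuracy. A minimal sketch of that computation, using an invented toy sample of labeled predictions (the subgroup names and records here are illustrative, not the study's data):

```python
from collections import defaultdict

# Hypothetical per-image records: (subgroup, true_label, predicted_label).
# Subgroups combine binary gender with a binarized Fitzpatrick skin type
# (types I-III = "lighter", types IV-VI = "darker").
records = [
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "female"),
    ("darker_female",  "female", "male"),
    ("darker_male",    "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male",   "male",   "male"),
]

def subgroup_error_rates(records):
    """Return the misclassification rate for each subgroup."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

rates = subgroup_error_rates(records)
print(rates["darker_female"])   # 2 errors out of 3 toy images -> 0.666...
print(rates["lighter_male"])    # 0 errors -> 0.0
```

Disaggregating this way is what surfaces disparities that a single overall accuracy number hides.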
Let's start with some comments about a recent ACLU blog post in which they ran a facial recognition trial. Using Rekognition, the ACLU built a face database from 25,000 publicly available arrest photos and then performed facial similarity searches of that database using public photos of all current members of Congress. They found 28 incorrect matches out of 535 at an 80% confidence level; this is a roughly 5% misidentification (sometimes called 'false positive') rate and a 95% accuracy rate. The ACLU has not published its dataset, methodology, or detailed results, so we can only go on what they have said publicly. To illustrate the impact of the confidence threshold on false positives, we ran a test in which we created a face collection using a dataset of over 850,000 faces commonly used in academia.
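The role the confidence threshold plays here can be shown with a small sketch. The scores and ground-truth labels below are invented for illustration (this is not the ACLU's or Amazon's data): each similarity search returns a best-match confidence score, a result counts as a match only if it clears the threshold, and raising the threshold discards the low-confidence matches that tend to be spurious.

```python
# Hypothetical best-match confidence scores from eight similarity searches,
# and whether each best match was actually the same person.
scores       = [0.99, 0.85, 0.81, 0.95, 0.62, 0.83, 0.97, 0.79]
same_person  = [True, False, False, True, False, False, True, False]

def false_positive_rate(scores, truth, threshold):
    """Fraction of non-matching searches that still clear the threshold."""
    false_pos = sum(1 for s, t in zip(scores, truth) if s >= threshold and not t)
    negatives = sum(1 for t in truth if not t)
    return false_pos / negatives

print(false_positive_rate(scores, same_person, 0.80))  # 3 of 5 spurious matches pass -> 0.6
print(false_positive_rate(scores, same_person, 0.99))  # none pass -> 0.0
```

The qualitative point is threshold choice: at 80% several wrong candidates survive, while at a much stricter threshold they are filtered out, at the cost of possibly rejecting some true matches.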
Researchers at the State University of New York in Korea have recently explored new ways to detect both machine- and human-created fake images of faces. In their paper, published in the ACM Digital Library, the researchers used ensemble methods to detect images created by generative adversarial networks (GANs) and employed pre-processing techniques to improve the detection of images created by humans using Photoshop. Over the past few years, significant advancements in image processing and machine learning have enabled the generation of fake, yet highly realistic, images. However, these images could also be used to create fake identities, make fake news more convincing, bypass image detection algorithms, or fool image recognition tools. "Fake face images have been a topic of research for quite some time now, but studies have mainly focused on photos made by humans, using Photoshop tools," Shahroz Tariq, one of the researchers who carried out the study, told Tech Xplore.
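The ensemble idea mentioned above can be sketched in miniature. This is not the authors' code, just the general pattern: several independently trained detectors each vote on whether an image is fake, and the ensemble's decision is the majority vote, which is typically more robust than any single detector.

```python
def majority_vote(votes):
    """Combine 0/1 (real/fake) votes from individual detectors by majority."""
    return int(sum(votes) > len(votes) / 2)

# Three hypothetical detectors score the same image; two flag it as fake.
print(majority_vote([1, 1, 0]))  # -> 1 (flagged as fake)
print(majority_vote([0, 0, 1]))  # -> 0 (treated as real)
```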
Facebook Inc is opening up its face recognition technology to all users with an option to opt out, the social media company said, as it discontinued a related feature called "Tag Suggestions." The old feature let users choose whether Facebook could suggest that their friends tag them in photos, without giving them control over other uses of the technology. The face recognition setting, available to some Facebook users since December 2017, has additional functions such as notifying account holders if their profile photo is used by someone else. People who opt in to the new setting will still have tag suggestions automatically generated about them. Facebook's face recognition technology has been at the center of a privacy-related lawsuit since 2015.
An image from the product page of Amazon's Rekognition service, which provides facial and object recognition and analysis for images and video. SAN FRANCISCO – Two years ago, Amazon built a facial and image recognition product that allows customers to cheaply and quickly search a database of images and look for matches. One of the groups it targeted as potential users of this service was law enforcement. At least two agencies signed on: the Washington County Sheriff's Office outside of Portland, Ore., and the Orlando Police Department in Florida. Now the ACLU and civil rights groups are demanding that Amazon stop selling the software tool, called Rekognition, to police and other government entities because they fear it could be used to unfairly target protesters, immigrants and any person just going about their daily business.