Researchers at the State University of New York, Korea have recently explored new ways to detect both machine- and human-created fake images of faces. In their paper, published in the ACM Digital Library, the researchers used ensemble methods to detect images created by generative adversarial networks (GANs) and employed pre-processing techniques to improve the detection of images created by humans using Photoshop. Over the past few years, significant advancements in image processing and machine learning have enabled the generation of fake, yet highly realistic, images. However, these images could also be used to create fake identities, make fake news more convincing, bypass image detection algorithms, or fool image recognition tools. "Fake face images have been a topic of research for quite some time now, but studies have mainly focused on photos made by humans, using Photoshop tools," Shahroz Tariq, one of the researchers who carried out the study, told Tech Xplore.
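The article does not detail the ensemble method, but the general idea of combining several fake-image detectors can be sketched as soft voting: average each model's "fake" probability and threshold the result. The function name and the toy probabilities below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def ensemble_predict(probs_per_model):
    """Soft-voting ensemble: average the per-image 'fake' probabilities
    from several detectors, then threshold at 0.5 (1 = predicted fake)."""
    avg = np.mean(np.asarray(probs_per_model), axis=0)
    return (avg >= 0.5).astype(int)

# Hypothetical 'fake' probabilities from three detectors for three images
m1 = [0.9, 0.2, 0.6]
m2 = [0.8, 0.4, 0.4]
m3 = [0.7, 0.1, 0.7]
print(ensemble_predict([m1, m2, m3]))  # -> [1 0 1]
```

Averaging probabilities (rather than hard majority voting) lets a confident model outvote two weakly uncertain ones, which is one common reason soft voting is preferred in ensembles.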
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
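The core measurement in this kind of audit is a per-subgroup error rate: partition predictions by phenotypic subgroup and compute the fraction misclassified in each. A minimal sketch, with made-up group names and toy records (not the paper's data), might look like:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (subgroup, true_label, predicted_label) triples.
    Returns {subgroup: fraction of records where prediction != truth}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records: (intersectional subgroup, true gender, predicted gender)
data = [
    ("darker_female", "F", "M"), ("darker_female", "F", "F"),
    ("lighter_male", "M", "M"), ("lighter_male", "M", "M"),
]
print(subgroup_error_rates(data))
# in this toy sample: darker_female -> 0.5, lighter_male -> 0.0
```

Reporting the rates per intersectional subgroup (skin type crossed with gender), rather than one aggregate accuracy, is what exposes the disparity: a system can score high overall while failing badly on an underrepresented group.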
Amazon investors are turning up the heat on CEO Jeff Bezos with a new letter demanding he stop selling the company's controversial facial recognition technology to police. The shareholder proposal calls for Amazon to stop offering the product, called Rekognition, to government agencies until it undergoes a civil and human rights review. It follows similar criticisms voiced by 450 Amazon employees, as well as civil liberties groups and members of Congress, over the past several months. 'Rekognition contradicts Amazon's opposition to facilitating surveillance,' the letter states. '...Shareholders have little evidence our company is effectively restricting the use of Rekognition to protect privacy and civil rights.'
Cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the U.S. Capitol in Washington as he testified before a Senate panel last week. A federal judge in California has ruled that Facebook can be sued in a class-action lawsuit brought by users in Illinois who say the social network improperly used facial recognition technology on their uploaded photographs. The plaintiffs are three Illinois Facebook users who sued under a state law that says a private entity such as Facebook can't collect and store a person's biometric facial information without their written consent. The law, known as the Biometric Information Privacy Act, also says that information that uniquely identifies an individual is, in essence, their property.
Facebook users who felt that their privacy was violated by the website's use of facial recognition software -- which it uses to help identify and tag people in photographs -- won an early legal victory Thursday when a San Francisco federal judge rejected a request by the internet company to dismiss a lawsuit challenging its collection of biometric information. "The court accepts as true plaintiffs' allegations that Facebook's face recognition technology involves a scan of face geometry that was done without plaintiffs' consent," U.S. District Judge James Donato ruled. Three Illinois residents filed separate lawsuits -- that were later combined -- under the state's Biometric Information Privacy Act of 2008, which allows companies to be sued for failing to get consumers' consent before collecting or storing their biometric information, which includes "faceprints" used by Facebook (and also Google) for identifying people in photographs. Facebook introduced its face-recognition feature in 2010. California, where Facebook is based, does not have a law regulating the use of biometrics.