Detecting fake face images created by both humans and machines

#artificialintelligence

Researchers at the State University of New York in Korea have recently explored new ways to detect fake face images created by both machines and humans. In their paper, published in the ACM Digital Library, the researchers used ensemble methods to detect images created by generative adversarial networks (GANs) and employed pre-processing techniques to improve the detection of images created by humans using Photoshop. Over the past few years, significant advancements in image processing and machine learning have made it possible to generate fake yet highly realistic images. Such images can be used to create fake identities, make fake news more convincing, bypass image detection algorithms, or fool image recognition tools. "Fake face images have been a topic of research for quite some time now, but studies have mainly focused on photos made by humans, using Photoshop tools," Shahroz Tariq, one of the researchers who carried out the study, told Tech Xplore.
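The paper itself is not reproduced here, but the core idea of combining several detectors can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration using scikit-learn's VotingClassifier on placeholder feature vectors; the feature extraction, choice of base models, and data are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch of an ensemble fake-face detector, NOT the authors' code.
# Assumes face images have already been reduced to feature vectors (e.g. by
# separate CNN backbones); random data stands in for real features here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))    # placeholder feature vectors
y = rng.integers(0, 2, size=1000)   # 0 = real face, 1 = fake face

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft-voting ensemble: average the class probabilities of several base
# models, mirroring the idea of combining detectors rather than trusting one.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```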


Amazon facial recognition falsely matches more than 100 politicians to arrested criminals

The Independent - Tech

Amazon's controversial facial recognition technology has incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots, new tests have revealed. Amazon Rekognition uses artificial intelligence software to identify individuals from their facial structure. Customers include law enforcement and US government agencies such as Immigration and Customs Enforcement (ICE). It is not the first time the software's accuracy has been called into question. In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.


Facebook users could get up to $5,000 compensation for EVERY picture used without their consent

Daily Mail - Science & tech

Facebook will face a class-action lawsuit in the wake of its privacy scandal, a US federal judge has ruled. Allegations of privacy violations emerged when it was revealed the app used a photo-scanning tool on users' images without their explicit consent. The facial recognition tool, launched in 2010, suggests names for people it identifies in photos uploaded by users. Under Illinois state law, the company could be fined $1,000 to $5,000 (£700 - £3,500) each time a person's image was used without consent. The technology was suspended for users in Europe in 2012 over privacy fears but is still live in the US and other regions worldwide.


Is Facial Recognition Technology Racist? The Tech Connoisseur

#artificialintelligence

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
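To make the kind of subgroup breakdown reported above concrete, a per-group error-rate tabulation might look like the short pandas sketch below. The dataframe is entirely synthetic and only illustrates how error rates per gender and skin-type group could be computed; a real evaluation would use labelled benchmark images and the predictions of a commercial classifier.

```python
# Hypothetical sketch of a per-subgroup error analysis; all data is fabricated.
import pandas as pd

results = pd.DataFrame({
    "skin_type": ["lighter", "lighter", "darker", "darker"] * 25,
    "gender":    ["female", "male"] * 50,
    "correct":   [True, True, False, True] * 25,  # placeholder predictions
})

# Error rate per (skin type, gender) subgroup: share of misclassified faces.
error_rates = (
    results.groupby(["skin_type", "gender"])["correct"]
           .apply(lambda s: 1.0 - s.mean())
           .rename("error_rate")
)
print(error_rates)
```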


Amazon shareholders demand firm stop selling controversial facial recognition tech to police

Daily Mail - Science & tech

Amazon investors are turning up the heat on CEO Jeff Bezos with a new letter demanding he stop selling the company's controversial facial recognition technology to police. The shareholder proposal calls for Amazon to stop offering the product, called Rekognition, to government agencies until it undergoes a civil and human rights review. It follows similar criticisms voiced by 450 Amazon employees, as well as civil liberties groups and members of Congress, over the past several months. 'Rekognition contradicts Amazon's opposition to facilitating surveillance,' the letter states. '...Shareholders have little evidence our company is effectively restricting the use of Rekognition to protect privacy and civil rights.'