An A.I. Pioneer Wants an FDA for Facial Recognition

#artificialintelligence

Erik Learned-Miller is one reason we talk about facial recognition at all. In 2007, years before the current A.I. boom made "deep learning" and "neural networks" common phrases in Silicon Valley, Learned-Miller and three colleagues at the University of Massachusetts Amherst released a dataset of faces titled Labeled Faces in the Wild. To you or me, Labeled Faces in the Wild just looks like folders of unremarkable images. You can download them and look for yourself. There's boxer Joe Gatti, gloves raised mid-fight.


Bayesian Modeling of Facial Similarity

Neural Information Processing Systems

In previous work [6, 9, 10], we advanced a new technique for direct visual matching of images for the purposes of face recognition and image retrieval, using a probabilistic measure of similarity based primarily on a Bayesian (MAP) analysis of image differences, leading to a "dual" basis similar to eigenfaces [13]. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenface matching was recently demonstrated using results from DARPA's 1996 "FERET" face recognition competition, in which this probabilistic matching algorithm was found to be the top performer. We have further developed a simple method of replacing the costly computation of nonlinear (online) Bayesian similarity measures with the relatively inexpensive computation of linear (offline) subspace projections and simple (online) Euclidean norms, resulting in a significant computational speedup for implementations involving the very large image databases typically encountered in real-world applications.
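The speedup the abstract describes (move the expensive subspace work offline, leave only cheap Euclidean norms online) can be sketched roughly as follows. This is a minimal illustration with a plain PCA basis standing in for the paper's dual eigenface-style bases, toy random data standing in for face images, and hypothetical names (`gallery`, `match`); it is not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery" of flattened images: 50 images, 100 pixels each.
gallery = rng.normal(size=(50, 100))

# --- Offline step (done once, ahead of time) ---
# Learn a linear subspace via SVD and project the whole gallery into it.
mean = gallery.mean(axis=0)
centered = gallery - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:10]                    # top-10 principal directions
gallery_proj = centered @ basis.T  # precomputed projections

# --- Online step (per query) ---
def match(probe: np.ndarray) -> int:
    """Project the probe once, then rank by simple Euclidean norms."""
    probe_proj = (probe - mean) @ basis.T
    dists = np.linalg.norm(gallery_proj - probe_proj, axis=1)
    return int(np.argmin(dists))

# A probe that is a lightly perturbed copy of gallery image 7
# should be matched back to index 7.
probe = gallery[7] + 0.01 * rng.normal(size=100)
print(match(probe))
```

The point of the refactoring is that the per-query cost is one matrix-vector product plus a batch of norms, which scales well to the large databases the abstract mentions.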


Politicians fume after Amazon's face-recog AI fingers dozens of them as suspected crooks

#artificialintelligence

Amazon's online facial recognition system incorrectly matched pictures of US Congress members to mugshots of suspected criminals in a study by the American Civil Liberties Union. The ACLU, a nonprofit headquartered in New York, has called for Congress to ban cops and Feds from using any sort of computer-powered facial recognition technology due to the fact that, well, it sucks. Amazon's AI-powered Rekognition service was previously criticized by the ACLU when it revealed the web giant was aggressively marketing its face-matching tech to police in Washington County, Oregon, and Orlando, Florida. Rekognition is touted by the Bezos Bunch as, among other applications, a way to identify people in real time from surveillance camera footage or from officers' body cameras. The results from the ACLU's latest probing showed that Rekognition mistook images of 28 members of Congress for mugshots of cuffed people suspected of crimes.


Academia's Facial Recognition Datasets Illustrate The Globalization Of Today's Data

#artificialintelligence

This week's furor over FaceApp has largely centered on concerns that its Russian developers might be compelled to share the app's data with the Russian government, much as the Snowden disclosures illustrated the myriad ways in which American companies were compelled to disclose their private user data to the US government. Yet the reality is that this represents a mistaken understanding of how the modern data trade works today, and of the simple fact that American universities and companies routinely make their data available to companies all across the world, including in Russia and China. In today's globalized world, data is just as globalized, with national borders no longer restricting the flow of our personal information, a trend made worse by the data-hungry world of deep learning. Data brokers have long bought and sold our personal data in a shadowy world of international trade involving our most intimate and private information. The digital era has upended this explicit trade, replacing it with an interlocking world of passive exchange via analytics services.


Facebook users could get up to $5,000 compensation for EVERY picture used without their consent

Daily Mail - Science & tech

Facebook will face a class action lawsuit in the wake of its privacy scandal, a US federal judge has ruled. Allegations of privacy violations emerged when it was revealed the app used a photo-scanning tool on users' images without their explicit consent. The facial recognition tool, launched in 2010, suggests names for people it identifies in photos uploaded by users. Under Illinois state law, the company could be fined $1,000 to $5,000 (£700 - £3,500) each time a person's image was used without consent. The technology was suspended for users in Europe in 2012 over privacy fears but is still live in the US and other regions worldwide.