The use of face recognition software by governments is a topic of controversy around the globe. The world's major powers, primarily the United States and China, have made major advances in both the development and deployment of this technology over the past decade, and both countries have been exporting it to other nations. The rapid spread of facial recognition systems has alarmed privacy advocates, who are concerned about the increased ability of governments to profile and track people, and about private companies such as Facebook tying the technology to intimately detailed personal profiles. A recent study by the US National Institute of Standards and Technology (NIST), which evaluated algorithms from facial recognition software vendors, found clear support for claims of racial bias and poor accuracy for specific demographic groups.
Experts at recognizing faces often play a crucial role in criminal cases. A photo from a security camera can mean prison or freedom for a defendant, and testimony from highly trained forensic face examiners informs the jury whether that image actually depicts the accused. But just how good are these experts? In work that combines forensic science with psychology and computer vision research, a team of scientists from the National Institute of Standards and Technology (NIST) and three universities tested the accuracy of professional face identifiers, producing at least one result that surprised even the researchers: trained human beings perform best with a computer as a partner, not another person. "This is the first study to measure face identification accuracy for professional forensic facial examiners, working under circumstances that apply in real-world casework," said NIST electronic engineer P. Jonathon Phillips.
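The human-plus-computer finding can be illustrated with a minimal sketch of score fusion. The rating scale, fusion rule, and all values below are illustrative assumptions for exposition, not the study's actual protocol.

```python
# Hedged sketch: one simple way a human examiner's judgment and an
# algorithm's similarity score could be combined into a single decision.
# The +/-3 rating convention and the averaging rule are assumptions.

def fuse_judgments(examiner_rating, algorithm_score, rating_scale=(-3, 3)):
    """Average a human rating and an algorithm score on a common [0, 1] scale.

    examiner_rating: integer on a +/-3 scale (-3 = certain the images show
                     different people, +3 = certain they show the same person).
    algorithm_score: similarity in [0, 1] from a face matcher.
    Returns a fused score in [0, 1]; values above 0.5 lean toward "same person".
    """
    lo, hi = rating_scale
    human_score = (examiner_rating - lo) / (hi - lo)  # map rating to [0, 1]
    return (human_score + algorithm_score) / 2.0

# Example: a fairly confident examiner (+2) paired with a moderately
# confident algorithm (0.7) yields a fused score of about 0.77.
fused = fuse_judgments(2, 0.7)
```

The intuition behind such fusion is that human and algorithmic errors are only partially correlated, so averaging independent judgments can beat either one alone.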
WASHINGTON – Facial recognition systems can produce wildly inaccurate results, especially for nonwhites, according to a U.S. government study released Thursday that is likely to raise fresh doubts about the deployment of the artificial intelligence technology. The study of dozens of facial recognition algorithms showed "false positive" rates for Asians and African-Americans as much as 100 times higher than for whites. The researchers from the National Institute of Standards and Technology (NIST), a government research center, also found that two algorithms assigned the wrong gender to black females almost 35 percent of the time. The study comes amid widespread deployment of facial recognition for law enforcement, airports, border security, banking, retailing, schools, and personal technology such as unlocking smartphones. Some activists and researchers have argued that the potential for errors is too great, that mistakes could result in the jailing of innocent people, and that the technology could be used to create databases that may be hacked or used inappropriately.
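To make the "false positive rate" concrete: a false positive occurs when two images of different people are scored as a match. Below is a minimal sketch of how such rates could be tallied per demographic group; the data, group labels, and threshold are invented for illustration, and NIST's actual evaluation protocol is far more involved.

```python
# Hedged sketch: tallying per-group false positive rates from labelled
# comparison outcomes. All data here is invented toy data.

from collections import defaultdict

def false_positive_rates(comparisons, threshold=0.6):
    """comparisons: iterable of (group, score, same_person) tuples, where
    same_person is the ground truth. A false positive is a ground-truth
    non-match pair whose similarity score clears the threshold."""
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # ground-truth non-match pairs per group
    for group, score, same_person in comparisons:
        if not same_person:
            negatives[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

# Invented toy data: (demographic group, matcher score, ground truth)
data = [
    ("A", 0.7, False), ("A", 0.4, False), ("A", 0.9, True),
    ("B", 0.3, False), ("B", 0.2, False), ("B", 0.8, True),
]
rates = false_positive_rates(data)  # group A: 1/2 = 0.5, group B: 0/2 = 0.0
```

A large gap between groups in this statistic is exactly the kind of disparity the NIST study reported, though at vastly larger scale and with carefully controlled image sets.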
Facial recognition technology has advanced swiftly in the last five years. As University of Texas at Dallas researchers try to determine how computers have become as good as people at recognizing faces, they are also shedding light on how the human brain sorts information. UT Dallas scientists analyzed the performance of the latest generation of facial recognition algorithms, revealing the surprising way these machine-learning-based programs work. Their study, published online Nov. 12 in Nature Machine Intelligence, shows that these sophisticated computer programs, called deep convolutional neural networks (DCNNs), figured out how to identify faces differently than the researchers expected. "For the last 30 years, people have presumed that computer-based visual systems get rid of all the image-specific information -- angle, lighting, expression and so on," said Dr. Alice O'Toole, senior author of the study and the Aage and Margareta Møller Professor in the School of Behavioral and Brain Sciences.
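As background for how DCNN-based systems compare faces: such networks typically map each image to a fixed-length embedding vector, and two images are judged to show the same identity when their vectors point in similar directions. The sketch below shows the standard cosine-similarity comparison; the embedding values are invented, since a real system would obtain them from a trained network.

```python
# Hedged sketch: comparing face embeddings by cosine similarity.
# The three-dimensional vectors below are invented stand-ins; real DCNN
# embeddings are typically hundreds of dimensions long.

import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means the
    same direction (likely the same identity), values near 0 mean the
    vectors are unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two invented embeddings, imagined as the same person under different
# lighting: the vectors differ slightly but point in nearly the same direction.
img1 = [0.9, 0.1, 0.4]
img2 = [0.8, 0.2, 0.5]
similarity = cosine_similarity(img1, img2)
```

The surprise reported in the study concerns what information survives inside such embeddings; the comparison step itself is this simple.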
Julia Deeb-Swihart, Christopher Polack, Eric Gilbert, and Irfan Essa (Georgia Institute of Technology)
Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us full circle to Goffman, blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram, an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.