In previous work [6, 9, 10], we advanced a new technique for direct visual matching of images for face recognition and image retrieval, using a probabilistic measure of similarity based primarily on a Bayesian (MAP) analysis of image differences, leading to a "dual" basis similar to eigenfaces. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenface matching was recently demonstrated using results from DARPA's 1996 "FERET" face recognition competition, in which this probabilistic matching algorithm was found to be the top performer. We have further developed a simple method of replacing the costly computation of nonlinear (online) Bayesian similarity measures with the relatively inexpensive computation of linear (offline) subspace projections and simple (online) Euclidean norms, resulting in a significant computational speedup for implementation with the very large image databases typically encountered in real-world applications.
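The offline/online split described above can be illustrated with a minimal sketch: fit a whitened subspace to intrapersonal difference images offline, so that the online similarity reduces to a Euclidean norm in the projected space (a negative squared Mahalanobis distance under a Gaussian model). The function names and the plain-PCA whitening here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_whitened_subspace(diff_images, k):
    """Offline: PCA on intrapersonal difference images.

    diff_images: (n, d) array of flattened image differences.
    Returns the k leading eigenvectors scaled by 1/sqrt(eigenvalue),
    so that online matching reduces to a simple Euclidean norm.
    """
    X = diff_images - diff_images.mean(axis=0)
    # Eigen-decomposition of the sample covariance via SVD
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = (s ** 2) / len(X)
    return Vt[:k] / np.sqrt(eigvals[:k, None])  # whitening projection, shape (k, d)

def similarity(W, img_a, img_b):
    """Online: larger (less negative) means more similar under a Gaussian model."""
    y = W @ (img_a - img_b)          # one matrix-vector product
    return -np.dot(y, y)             # negative squared Euclidean norm in whitened space
```

Because `W` is precomputed offline, each online comparison costs only one projection and one norm, which is what makes the approach practical for very large databases.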
US army researchers have developed a convolutional neural network and a range of algorithms to recognise faces in the dark. "This technology enables matching between thermal face images and existing biometric face databases or watch lists that only contain visible face imagery," explained Benjamin Riggan, co-author of the study and an electronics engineer at the US army laboratory, on Monday. "The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis." The thermal images are processed and passed to a convolutional neural network, which extracts facial features using landmarks that mark the corners of the eyes, nose and lips to determine the face's overall shape. The system, dubbed "multi-region synthesis", is trained with a loss function that minimises the error between the thermal images and the visible ones, creating an accurate portrayal of what someone's face looks like despite only glimpsing it in the dark.
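The "multi-region" idea described above can be sketched as a reconstruction loss that penalises error over the whole synthesised face plus extra weighted terms inside landmark regions (eyes, nose, mouth). This is a hypothetical illustration of the general technique, not the researchers' actual loss or architecture; the function and parameter names are assumptions.

```python
import numpy as np

def multi_region_loss(synth_visible, true_visible, region_masks, region_weights):
    """Hypothetical region-weighted reconstruction loss.

    synth_visible, true_visible: (H, W) grayscale images.
    region_masks: list of boolean (H, W) masks, e.g. eye/nose/mouth
        regions derived from fiducial landmarks.
    region_weights: per-region weights emphasising discriminative areas.
    """
    err = (synth_visible - true_visible) ** 2
    loss = err.mean()                         # global term over the whole face
    for mask, w in zip(region_masks, region_weights):
        loss += w * err[mask].mean()          # extra penalty inside each landmark region
    return loss
```

During training, a network synthesising visible imagery from thermal input would be optimised to drive this loss toward zero, pushing the synthesised landmark regions to match the ground-truth visible face most closely.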
This week's furor over FaceApp has largely centered on concerns that its Russian developers might be compelled to share the app's data with the Russian government, much as the Snowden disclosures illustrated the myriad ways in which American companies were compelled to disclose their private user data to the US government. Yet the reality is that this reflects a mistaken understanding of how the modern data trade works, and of the simple fact that American universities and companies routinely make their data available to companies all across the world, including in Russia and China. In today's globalized world, data is just as globalized, with national borders no longer restricting the flow of our personal information, a trend made worse by the data-hungry world of deep learning. Data brokers have long bought and sold our personal data in a shadowy world of international trade involving our most intimate and private information. The digital era has upended this explicit trade through the interlocking world of passive exchange via analytics services.
The use of face recognition software by governments is a current topic of controversy around the globe. The world's major powers, primarily the United States and China, have made major advances in both the development and deployment of this technology in the past decade, and both have been exporting it to other countries. The rapid spread of facial recognition systems has alarmed privacy advocates concerned about the increased ability of governments to profile and track people, as well as about private companies like Facebook tying the technology to intimately detailed personal profiles. A recent study by the US National Institute of Standards and Technology (NIST) examining facial recognition software vendors found that there is merit to claims of racial bias and poor accuracy for specific demographics.
A creepy new AI that transfers facial expressions between people in videos could make fake clips even more realistic. The neural network manipulates facial movements in real footage to produce deceptive videos that can make people appear to say something they didn't. In one example, scientists accurately mapped Barack Obama's facial movements onto Vladimir Putin, making it appear as if the Russian President was reading Obama's speech. Created by an international group of engineers led by Stanford University researchers, the AI only needs a few minutes of video to perfect each imitation.