US Army researchers have developed a convolutional neural network and a set of algorithms to recognise faces in the dark. "This technology enables matching between thermal face images and existing biometric face databases or watch lists that only contain visible face imagery," explained Benjamin Riggan on Monday, co-author of the study and an electronics engineer at the US Army laboratory. "The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis." The thermal images are processed and passed to a convolutional neural network, which extracts facial features using landmarks at the corners of the eyes, nose and lips to determine the face's overall shape. The system, dubbed "multi-region synthesis", is trained with a loss function that minimises the error between the thermal images and the visible ones, producing an accurate portrayal of what someone's face looks like even though the system only glimpses it in the dark.
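The training objective described above can be illustrated with a minimal sketch. This is not the researchers' actual multi-region loss; it assumes a simple pixel-wise L2 error between the synthesized visible image and the ground-truth visible image, with a hypothetical landmark mask that up-weights facial regions (eyes, nose, lips) to mimic the multi-region idea:

```python
import numpy as np

def synthesis_loss(synthesized, visible, landmark_mask=None, region_weight=2.0):
    """Pixel-wise L2 loss between a synthesized visible image and the
    ground-truth visible image. If a landmark mask is given, pixels inside
    landmark regions are up-weighted by region_weight."""
    err = (synthesized - visible) ** 2
    if landmark_mask is not None:
        err = err * (1.0 + (region_weight - 1.0) * landmark_mask)
    return float(err.mean())

# Toy example: 8x8 grayscale "images"
rng = np.random.default_rng(0)
visible = rng.random((8, 8))
synthesized = visible + 0.1          # a slightly-off reconstruction
mask = np.zeros((8, 8))
mask[2:4, 2:6] = 1.0                 # hypothetical eye region
loss = synthesis_loss(synthesized, visible, mask)
```

In training, a network producing `synthesized` from a thermal input would be updated to drive this loss toward zero, so that its output converges on the subject's visible appearance.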
This blog is syndicated from The New Rules of Privacy: Building Loyalty with Connected Consumers in the Age of Face Recognition and AI. Since the invention of face recognition in the 1960s, has any single technology sparked more fascination for public safety officials, companies, journalists and Hollywood? When people learn that I'm the CEO of a face recognition company, they commonly reference its fictional use in shows like CSI and Black Mirror, or even films such as the 1980s James Bond movie A View to a Kill. Most often, however, they mention Minority Report starring Tom Cruise.
CNL Software has entered into a technology partnership with Herta Security under the CNL Software Technology Alliance Program. Herta develops user-friendly software solutions that enable the integration of facial recognition into security applications. According to the announcement, Herta's deep learning algorithms encode faces directly into small templates, which are very fast to compare and yield more accurate results. This provides a technological advantage when working with partners, as it allows the development of more robust, safer and more efficient solutions. IPSecurityCenter PSIM takes a vendor-agnostic approach to implementing flexible and scalable security management software.
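The "small templates, fast to compare" idea can be sketched in a few lines. This is not Herta's actual algorithm; it assumes the common pattern in which a face is encoded as a fixed-length embedding vector and matching reduces to a cosine-similarity comparison against a gallery, with a hypothetical acceptance threshold:

```python
import numpy as np

def match_score(template_a, template_b):
    """Cosine similarity between two face templates (embedding vectors).
    Returns a value in [-1, 1]; higher means more similar."""
    a = np.asarray(template_a, dtype=float)
    b = np.asarray(template_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.6):
    """Return the index of the best-matching gallery template if its
    score clears the threshold, otherwise None (unknown face)."""
    scores = [match_score(probe, g) for g in gallery]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```

Because each template is just a short numeric vector, comparing a probe against millions of enrolled faces is a batch of dot products, which is what makes template-based matching fast at scale.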
Current UAV-recorded datasets are mostly limited to action recognition and object tracking, whereas gesture-signal datasets have mostly been recorded indoors. There is currently no publicly available outdoor video dataset for UAV commanding signals. Gesture signals can be used effectively with UAVs by leveraging the UAV's visual sensors and operational simplicity. To fill this gap and enable research in wider application areas, we present a UAV gesture-signal dataset recorded in an outdoor setting. We selected 13 gestures suitable for basic UAV navigation and command from general aircraft handling and helicopter handling signals. We provide 119 high-definition video clips comprising 37,151 frames. The overall baseline gesture recognition performance, computed using a Pose-based Convolutional Neural Network (P-CNN), is 91.9%. All frames are annotated with body joints and gesture classes in order to extend the dataset's applicability to a wider research area, including gesture recognition, action recognition, human pose recognition and situation awareness.
Deep learning is a technology with a lot of promise: helping computers "see" the world, understand speech, and make sense of language. But away from the headlines about computers challenging humans at everything from spotting faces in a crowd to transcribing speech, real-world performance has been more mixed. One deep-learning technology whose real-world results have often disappointed is facial recognition. In the UK, police in Cardiff and London used facial-recognition systems on multiple occasions in 2017 to flag persons of interest captured on video at major events. Unfortunately, more than 90% of the people flagged by these systems were false matches.
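The "more than 90% false matches" figure is a false discovery rate: of all the people a system flags, how many are not genuine matches. A short sketch with illustrative, hypothetical counts (not the actual deployment figures) shows how such a rate arises even from a seemingly modest number of errors:

```python
def false_match_rate(flagged, true_matches):
    """Fraction of flagged individuals who were NOT genuine matches,
    i.e. the false discovery rate of a watch-list system."""
    return (flagged - true_matches) / flagged

# Hypothetical event: 1,000 people flagged, of whom only 80 were
# genuine watch-list hits.
rate = false_match_rate(1000, 80)
```

With these counts the rate is 0.92: when true matches are rare in a large crowd, even an accurate classifier produces alerts that are overwhelmingly false positives, which is the base-rate problem behind the UK results.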