Ashes spectators in Sydney scanned by facial recognition tech

ZDNet

A sell-out crowd of 45,000 spectators watching the fifth and final Ashes Test in Sydney this week is, in turn, being watched by a team of security professionals at the Sydney Cricket Ground (SCG). For the first time, the SCG's security team is utilising 820 new cameras equipped with facial recognition technology to scrutinise the crowd for safety threats. The cameras, which feed into an upgraded operations centre inside the ground, allow security personnel to monitor patrons as they approach the ground and while they're inside the venue, an SCG Trust spokesman said on Thursday. The AU$3.5 million upgrade to security includes a new video analytics system that can detect and zoom in on unattended bags, suspicious vehicles, and strange behaviour. A trial in 2017 allowed police and security to intercept six banned spectators as they tried to enter the SCG.


On effective human robot interaction based on recognition and association

arXiv.org Artificial Intelligence

Faces play a central role in human-robot interaction, as they do in daily life. The human mind readily recognizes a person despite the various challenges involved in face recognition, such as poor illumination, occlusion, and pose variation, but identifying a human face remains a complex task for humanoid robots. The recent literature on face biometric recognition is extremely rich in applications to structured environments for solving the human identification problem, but the use of face biometrics in mobile robotics is limited by its inability to produce accurate identification in uncontrolled conditions. We tackle this face recognition problem with our proposed component-based fragmented face recognition framework, which uses only a subset of the full face, such as the eyes, nose, and mouth, to recognize a person. Its lower search cost, encouraging accuracy, and ability to handle the various challenges of face recognition make it applicable to humanoid robots. The second problem in face recognition is face spoofing, in which a face recognition system cannot distinguish between a person and an impostor (a photo or video of the genuine user). This problem becomes more detrimental when robots are used as authenticators. We investigate a depth-analysis method to test the liveness of impostors and discriminate them from legitimate users. Finally, we apply these techniques to criminal identification with a NAO robot: an eyewitness interacts with NAO through a user interface, NAO asks several questions about the suspect, such as age, height, and facial shape and size, and then makes a guess about the suspect's face.
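The component-based idea can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline: the component names, feature dimensions, and equal fusion weights are all assumptions.

```python
import numpy as np

# Hypothetical sketch of component-based face matching: each face is
# represented by separate feature vectors for the eyes, nose, and mouth,
# and per-component similarities are fused into one match score.
COMPONENTS = ["eyes", "nose", "mouth"]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def component_match_score(probe, gallery_face, weights=None):
    """Weighted fusion of per-component similarities (weights are assumed)."""
    weights = weights or {c: 1.0 / len(COMPONENTS) for c in COMPONENTS}
    return sum(weights[c] * cosine(probe[c], gallery_face[c])
               for c in COMPONENTS)

# Usage: a probe with identical component features scores 1.0.
rng = np.random.default_rng(0)
face = {c: rng.standard_normal(128) for c in COMPONENTS}
print(component_match_score(face, face))  # -> 1.0 (up to float rounding)
```

Because occlusion typically affects only some components, the fusion weights could be lowered for an occluded component without discarding the whole face, which is one way a subset-based scheme tolerates the challenges mentioned above.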


Machine Learning Systems for Highly-Distributed and Rapidly-Growing Data

arXiv.org Machine Learning

The usability and practicality of any machine learning (ML) application are largely influenced by two critical but hard-to-attain factors: low latency and low cost. Unfortunately, achieving both is very challenging when ML depends on real-world data that are highly distributed and rapidly growing (e.g., data collected by mobile phones and video cameras all over the world). Such real-world data pose many challenges in communication and computation. For example, when training data are distributed across data centers that span multiple continents, communication among data centers can easily overwhelm the limited wide-area network bandwidth, leading to prohibitively high latency and cost. In this dissertation, we demonstrate that the latency and cost of ML on highly-distributed and rapidly-growing data can be improved by one to two orders of magnitude by designing ML systems that exploit the characteristics of ML algorithms, ML model structures, and ML training/serving data. We support this thesis statement with three contributions. First, we design a system that provides both low-latency and low-cost ML serving (inference) over large-scale and continuously-growing datasets, such as videos. Second, we build a system that makes ML training over geo-distributed datasets as fast as training within a single data center. Third, we present the first detailed study of, and a system-level solution to, a fundamental and largely overlooked problem: ML training over non-IID (i.e., not independent and identically distributed) data partitions (e.g., facial images collected by cameras vary according to the demographics of each camera's location).
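A minimal sketch of the non-IID setting the third contribution studies, assuming a simple label-skew partitioning. The class counts and partitioning scheme are illustrative, not the dissertation's actual setup.

```python
import numpy as np

# Illustrative label-skewed (non-IID) partitions: each "camera" sees only
# a narrow slice of the label space, as when demographics differ by location.
rng = np.random.default_rng(42)
labels = rng.integers(0, 10, size=1000)  # 10 classes, drawn IID overall

def skewed_partitions(labels, num_parts=5, classes_per_part=2, num_classes=10):
    """Assign each partition only `classes_per_part` of the classes."""
    parts = []
    for p in range(num_parts):
        allowed = [(p * classes_per_part + i) % num_classes
                   for i in range(classes_per_part)]
        parts.append(np.where(np.isin(labels, allowed))[0])
    return parts

parts = skewed_partitions(labels)
# Each partition now covers at most 2 of the 10 classes -- the non-IID
# regime under which naive distributed training degrades.
for idx in parts:
    print(sorted(set(labels[idx].tolist())))
```

Training a shared model with local SGD on partitions like these causes each worker's updates to pull the model toward its own label subset, which is the failure mode a system-level solution must address.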


Iterative Grassmannian Optimization for Robust Image Alignment

arXiv.org Machine Learning

Robust high-dimensional data processing has witnessed exciting development in recent years, as theoretical results have shown that convex programming can be used to fit data to a low-rank component plus a sparse outlier component. This problem is also known as Robust PCA, and it has found application in many areas of computer vision. In image and video processing and face recognition, the opportunity to process massive image databases is emerging as people upload photo and video data online in unprecedented volumes. However, data quality and consistency are not controlled in any way, and the massiveness of the data poses a serious computational challenge. In this paper, we present t-GRASTA, or "Transformed GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm)". t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate a decomposition of a collection of images into a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as rotation or translation of the image. We show that t-GRASTA is 4 $\times$ faster than state-of-the-art algorithms, has half the memory requirement, and can achieve alignment for face images as well as jittered camera surveillance images.
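The low-rank-plus-sparse decomposition can be illustrated with a toy alternating scheme. Note this is a generic Robust PCA sketch with assumed parameters, not t-GRASTA itself, which instead tracks the subspace incrementally on the Grassmann manifold and additionally estimates an image transformation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise shrinkage: drives small entries to zero (sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_pca(M, rank=1, tau=0.1, iters=100):
    """Toy alternating decomposition M ~ L (low-rank) + S (sparse).
    rank/tau/iters are illustrative choices, not the paper's settings."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r fit
        S = soft_threshold(M - L, tau)             # sparse outliers
    return L, S

# Usage: a rank-1 matrix plus one large outlier separates cleanly.
rng = np.random.default_rng(1)
L0 = np.outer(rng.standard_normal(20), rng.standard_normal(20))
M = L0.copy()
M[3, 7] += 10.0  # sparse corruption, e.g. an occlusion pixel
L, S = robust_pca(M)
```

In the alignment setting the paper targets, the sparse part S absorbs occlusions and foreground objects while the low-rank part L captures the shared face or background subspace; t-GRASTA folds an image-transformation estimate into this same loop.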


CNL Software expands IPSecurityCenter to support Herta face detection software

#artificialintelligence

CNL Software has entered into a technology partnership with Herta Security under the CNL Software Technology Alliance Program. Herta develops user-friendly software solutions that enable the integration of facial recognition into security applications. According to the announcement, Herta's deep learning algorithms encode faces directly into compact templates, which are very fast to compare and yield more accurate results. This provides a technological advantage when working with partners, as it allows the development of more robust, safer, and more efficient solutions. IPSecurityCenter PSIM takes a vendor-agnostic approach to implementing flexible and scalable security management software.
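The speed claim about compact templates can be illustrated: with L2-normalised vectors, matching one probe against an entire gallery reduces to a single matrix-vector product. The template size, gallery size, and threshold below are assumptions for the sketch, not Herta's actual parameters.

```python
import numpy as np

# Illustrative gallery of 10,000 face templates, 256 floats each
# (sizes are assumed; real systems pick their own template format).
rng = np.random.default_rng(7)
gallery = rng.standard_normal((10_000, 256)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def best_match(probe, gallery, threshold=0.6):
    """Return (index, score) of the best match, or (None, score) if below
    the acceptance threshold. One matmul scores the whole gallery."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe          # cosine similarity for every template
    i = int(np.argmax(scores))
    return (i if scores[i] >= threshold else None, float(scores[i]))

# A probe identical to gallery entry 42 matches it with score ~1.0.
idx, score = best_match(gallery[42], gallery)
print(idx, round(score, 3))  # -> 42 1.0
```

Keeping templates small keeps this product cheap, which is what makes large-gallery comparison fast enough for live security deployments.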