
On effective human robot interaction based on recognition and association

arXiv.org Artificial Intelligence

Faces play a significant role in human-robot interaction, as they do in our daily life. The human mind can recognize a person despite the various challenges involved in face recognition, such as poor illumination, occlusion and pose variation, but identifying a human face is an inherently complex task for a humanoid robot. The recent literature on face biometric recognition is rich in applications to structured environments for solving the human identification problem, yet the use of face biometrics in mobile robotics remains limited by its inability to produce accurate identification under uneven circumstances. We tackle this face recognition problem with our proposed component-based fragmented face recognition framework, which uses only a subset of the full face, such as the eyes, nose and mouth, to recognize a person. Its lower search cost, encouraging accuracy and ability to handle the various challenges of face recognition make it suitable for humanoid robots. The second problem in face recognition is face spoofing, in which a face recognition system cannot distinguish between a person and an imposter (a photo or video of the genuine user). The problem becomes more detrimental when robots are used as authenticators. A depth analysis method has been investigated in our research to test liveness and discriminate imposters from legitimate users. The techniques above are then applied to criminal identification with the NAO robot: an eyewitness can interact with NAO through a user interface, NAO asks several questions about the suspect, such as age, height, and facial shape and size, and then makes a guess about the suspect's face.
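As a rough illustration of the component-based idea, the sketch below matches a probe face against a gallery using only per-component crops (eyes, nose, mouth), so that a partially occluded face can still be identified. The histogram embedding, component names and random crops are invented placeholders for illustration; this is not the framework described in the abstract.

```python
# Minimal sketch of component-based ("fragmented") face matching, assuming the
# eye/nose/mouth crops have already been extracted as grayscale arrays.
import numpy as np

COMPONENTS = ("left_eye", "right_eye", "nose", "mouth")

def embed_region(region: np.ndarray, bins: int = 32) -> np.ndarray:
    """Toy embedding: normalized intensity histogram of one face-component crop."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 255), density=True)
    return hist

def component_distance(probe: dict, gallery: dict) -> float:
    """Average per-component distance; missing components are simply skipped,
    which is what lets an occluded (partial) face still be matched."""
    dists = [np.linalg.norm(embed_region(probe[c]) - embed_region(gallery[c]))
             for c in COMPONENTS if c in probe and c in gallery]
    return float(np.mean(dists)) if dists else float("inf")

def identify(probe: dict, gallery_db: dict) -> str:
    """Return the gallery identity whose components are closest to the probe."""
    return min(gallery_db, key=lambda person: component_distance(probe, gallery_db[person]))

# Example with random crops standing in for detected regions; the mouth is "occluded".
rng = np.random.default_rng(0)
probe = {c: rng.integers(0, 256, (24, 24)) for c in ("left_eye", "nose")}
gallery_db = {p: {c: rng.integers(0, 256, (24, 24)) for c in COMPONENTS}
              for p in ("person_a", "person_b")}
print(identify(probe, gallery_db))
```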


A Self-Adaptive Network Protection System

arXiv.org Artificial Intelligence

In this treatise we aim to build a hybrid, automated (self-adaptive) network security threat discovery and prevention system using unconventional techniques and methods, including fuzzy logic and biologically inspired algorithms, within the context of soft computing.
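To give a flavor of the fuzzy-logic component, the sketch below scores a connection's threat level from two made-up signals (packet rate and failed-login ratio) using triangular membership functions, a tiny rule base, and weighted-average defuzzification. The thresholds and rules are illustrative assumptions, not the system proposed in the paper.

```python
# Toy fuzzy-inference threat scoring under invented assumptions.
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], 1 at b, linear in between."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def threat_score(pkt_rate, fail_ratio):
    # Fuzzification: membership degrees for the "high" linguistic terms.
    rate_high = tri(pkt_rate, 100, 800, 1500)
    fails_high = tri(fail_ratio, 0.2, 0.6, 1.0)
    rate_low, fails_low = 1.0 - rate_high, 1.0 - fails_high

    # Rule base (AND = min); each rule maps to a crisp output level.
    rules = [
        (min(rate_high, fails_high), 0.9),  # flooding + failed logins -> severe
        (min(rate_high, fails_low), 0.6),   # flooding only            -> suspicious
        (min(rate_low, fails_high), 0.7),   # possible brute force     -> suspicious
        (min(rate_low, fails_low), 0.1),    # both low                 -> benign
    ]

    # Weighted-average defuzzification into a single score in [0, 1].
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

if __name__ == "__main__":
    print(round(threat_score(pkt_rate=900, fail_ratio=0.75), 3))
```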


New Research Claims to Have Found a Solution to Machine Learning Attacks

#artificialintelligence

AI has been making some major strides in the computing world in recent years. But that also means AI systems have become increasingly vulnerable to security threats. Just by examining the power usage patterns or signatures during operation, one may be able to gain access to sensitive information housed by a computer system. Machine learning algorithms are especially prone to such attacks. The same algorithms are employed in smart home devices and cars, embedded with specialized computing chips, to identify different forms of images and sounds.


Emotion Recognition From Gait Analyses: Current Research and Future Directions

arXiv.org Machine Learning

Human gait is a daily motion that not only reflects mobility but can also be used to identify the walker, by either human observers or computers. Recent studies reveal that gait even conveys information about the walker's emotion. Individuals in different emotional states may show different gait patterns, and the mapping between emotions and gait patterns provides a new source for automated emotion recognition. Compared to traditional emotion detection biometrics, such as facial expression, speech and physiological parameters, gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject. These advantages make gait a promising source for emotion detection. This article reviews current research on gait-based emotion detection, particularly how gait parameters are affected by different emotional states and how those states can be recognized from distinct gait patterns. We focus on the detailed methods and techniques applied in the whole process of emotion recognition: data collection, preprocessing, and classification. Finally, we discuss possible future developments of efficient and effective gait-based emotion recognition using state-of-the-art techniques in intelligent computation and big data.
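The generic pipeline the review describes (data collection, preprocessing, classification) can be sketched as follows. The gait features (stride length, cadence, head-pitch angle), the three emotion labels, and the data itself are synthetic placeholders, not drawn from any dataset discussed in the article.

```python
# Sketch of a collect -> preprocess -> classify gait-emotion pipeline on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 300
# "Data collection": per-walk features, here drawn at random for illustration.
X = np.column_stack([
    rng.normal(1.3, 0.2, n),    # stride length (m)
    rng.normal(110, 15, n),     # cadence (steps/min)
    rng.normal(5, 4, n),        # head-pitch angle (deg)
])
y = rng.integers(0, 3, n)       # toy labels: 0=neutral, 1=happy, 2=sad

# "Preprocessing" (standardization) and "classification" (SVM) in one pipeline.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("toy accuracy:", round(model.score(X_te, y_te), 3))
```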


AI distinguishes living eyeballs from dead ones

#artificialintelligence

It's a plot straight out of science fiction: Bad guys dispose of an unlucky security guard, scoop out one of the guy's (or gal's) eyeballs, and hold it up to an iris scanner, fooling it into disarming a security system. As it turns out, post-mortem eyes can be used for biometric identification hours or even days after death, studies show. But if researchers at Warsaw University of Technology in Poland have their way, that might not be the case for much longer. In a paper ("Presentation Attack Detection for Cadaver Irises") published on the preprint server Arxiv.org, the team proposed a neural network that can tell the difference between living irises and dead ones with 99 percent accuracy. "With increasing importance that biometric authentication gains in our daily lives, fears are increasingly common among users, regarding the possibility of unauthorized access to our data, identity, or assets after our demise," the researchers wrote.
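A hedged sketch of what such a presentation-attack detector might look like: a small binary convolutional network that maps a grayscale iris crop to a live-versus-post-mortem score. The architecture, input size and training step below are generic placeholders and do not reproduce the model from the Warsaw University of Technology paper.

```python
# Generic binary "liveness" classifier for iris crops, as a minimal PyTorch sketch.
import torch
import torch.nn as nn

class IrisPADNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: live (0) vs post-mortem (1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One dummy training step on random tensors standing in for 64x64 iris crops.
model = IrisPADNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 2, (8, 1)).float()
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
opt.step()
print("dummy loss:", float(loss))
```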