Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

arXiv.org Machine Learning

Assistive robots can potentially improve the quality of life and personal independence of elderly people by supporting everyday life activities. To guarantee a safe and intuitive interaction between human and robot, human intentions need to be recognized automatically. As humans communicate their intentions multimodally, the use of multiple modalities for intention recognition may not only increase robustness against the failure of individual modalities but, in particular, reduce the uncertainty about the intention to be predicted. This is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, minimal uncertainty about the situation, as well as knowledge about the remaining uncertainty, is necessary. Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For the four considered modalities, speech, gestures, gaze directions, and scene objects, individual intention classifiers are trained, all of which output a probability distribution over all possible intentions. By combining these output distributions with the Bayesian method Independent Opinion Pool, the uncertainty about the intention to be recognized can be decreased. The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm. The results show that fused classifiers combining multiple modalities outperform the respective individual base classifiers in terms of accuracy, robustness, and reduced uncertainty.
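The fusion step can be sketched in a few lines. Assuming a uniform prior over intentions, Independent Opinion Pool reduces to an element-wise product of the per-modality output distributions followed by renormalization; the sketch below uses made-up distributions for the four modalities and is illustrative rather than the authors' implementation.

```python
import numpy as np

def independent_opinion_pool(distributions):
    """Fuse per-modality intention distributions by normalized product.

    `distributions` is a list of 1-D arrays, each a probability
    distribution over the same set of intentions (one array per
    modality classifier). A uniform prior over intentions is assumed,
    so the fused posterior is the renormalized element-wise product
    of the individual posteriors.
    """
    fused = np.ones_like(distributions[0], dtype=float)
    for p in distributions:
        fused *= np.asarray(p, dtype=float)
    total = fused.sum()
    if total == 0.0:  # every intention ruled out by some classifier
        return np.full_like(fused, 1.0 / fused.size)
    return fused / total

# Example: speech, gesture, gaze, and scene-object classifiers each
# output a distribution over three hypothetical intentions.
speech  = [0.6, 0.3, 0.1]
gesture = [0.5, 0.4, 0.1]
gaze    = [0.4, 0.4, 0.2]
objects = [0.7, 0.2, 0.1]

print(independent_opinion_pool([speech, gesture, gaze, objects]))
# ~[0.90, 0.10, 0.00]: the fused distribution is sharper (lower
# entropy) than any of the individual modality outputs.
```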


Conflict Detection and Resolution in Table Top Scenarios for Human-Robot Interaction

arXiv.org Artificial Intelligence

As in any interaction process, misunderstandings, ambiguity, and failures to correctly understand the interaction partner are bound to happen in human-robot interaction. We term these failures 'conflicts' and are interested in both conflict detection and conflict resolution. In this, we focus on the robot's perspective: for the robot, conflicts may occur because of errors in its perceptual processes or because of ambiguity stemming from human input. This poster presents a brief system overview. We outline the project's motivation and setting, introduce the general processing framework, and then present two kinds of conflicts in more detail: 1) a failure to identify a relevant object at all; 2) ambiguity emerging from multiple matches in scene perception.
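As an illustration of the two conflict types (a hypothetical sketch, not the project's actual framework), the code below matches a human object description against perceived scene objects and reports either a failure to find any relevant object or an ambiguity caused by multiple matches.

```python
from enum import Enum, auto

class Conflict(Enum):
    NONE = auto()
    NO_MATCH = auto()    # no relevant object identified at all
    AMBIGUOUS = auto()   # multiple candidates match the description

def detect_reference_conflict(description, scene_objects):
    """Match a human description against perceived scene objects.

    `description` is a dict of required attributes (e.g. colour, type),
    `scene_objects` a list of attribute dicts from perception. Returns
    the detected conflict type together with the matching objects.
    """
    matches = [obj for obj in scene_objects
               if all(obj.get(k) == v for k, v in description.items())]
    if not matches:
        return Conflict.NO_MATCH, []
    if len(matches) > 1:
        return Conflict.AMBIGUOUS, matches
    return Conflict.NONE, matches

scene = [{"type": "cup", "colour": "red"},
         {"type": "cup", "colour": "red"},
         {"type": "plate", "colour": "blue"}]

print(detect_reference_conflict({"type": "cup", "colour": "red"}, scene))
# -> (Conflict.AMBIGUOUS, [...]): two red cups match, so the robot
#    should ask a clarification question rather than act.
```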


Gesture-Based Interaction with a Pet Robot (Milyn C. Moy)

AAAI Conferences

As pet robots become more integrated into our everyday lives, it will become essential for them to perceive and understand our intentions and actions. We will also want to communicate with them as we do with other human beings. Yet, to communicate and interact with robots, we are still required to use specialized input devices such as keyboards, mice, trackers, or data gloves (Zimmerman & Lanier 1987). Thus, a more natural, contact-less interface would be desirable to avoid the need for external devices. An example of such an interface is speech (Huang, Ariki, & Jack 1990). However, when we communicate with each other, we also use gestures, facial expressions, and poses as supplements or substitutes for speech.


On effective human robot interaction based on recognition and association

arXiv.org Artificial Intelligence

Faces play a significant role in human-robot interaction, as they do in our daily life. The inherent ability of the human mind allows us to recognize a person despite the various challenges involved in face recognition, such as bad illumination, occlusions, and pose variation. For humanoid robots, however, identifying a human face is a very complex task. The recent literature on face biometric recognition is extremely rich in applications in structured environments for solving the human identification problem, but the application of face biometrics to mobile robotics is limited by its inability to produce accurate identification in unconstrained circumstances. We tackle the existing face recognition problem with our proposed component-based fragmented face recognition framework, which uses only a subset of the full face, such as the eyes, nose, and mouth, to recognize a person. Its low search cost, encouraging accuracy, and ability to handle the various challenges of face recognition make it applicable to humanoid robots. The second problem in face recognition is face spoofing, in which a face recognition system is not able to distinguish between a person and an imposter (a photo or video of the genuine user). This problem becomes more detrimental when robots are used as authenticators. In our research, we investigate a depth analysis method to test liveness and discriminate imposters from legitimate users. Finally, the above techniques are applied to criminal identification with the NAO robot: an eyewitness can interact with NAO through a user interface, NAO asks several questions about the suspect, such as age, height, and facial shape and size, and then makes a guess about the suspect's face.
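To illustrate the component-based idea (the paper's actual features and matching rule are not given here), the hypothetical sketch below matches eye, nose, and mouth feature vectors separately and averages their cosine similarities, so a missing or occluded fragment simply drops out of the score; placeholder random vectors stand in for real descriptors.

```python
import numpy as np

COMPONENTS = ("eyes", "nose", "mouth")   # fragments used instead of the full face

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def identify(probe, gallery):
    """Component-wise matching of a probe face against a gallery.

    `probe` maps component names to feature vectors; `gallery` maps
    person ids to the same structure. A person's score is the mean
    cosine similarity over the components present in both.
    """
    scores = {}
    for person, templates in gallery.items():
        sims = [cosine(probe[c], templates[c])
                for c in COMPONENTS if c in probe and c in templates]
        scores[person] = np.mean(sims) if sims else -1.0
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
gallery = {p: {c: rng.normal(size=64) for c in COMPONENTS} for p in ("alice", "bob")}
# Probe with the nose fragment missing (e.g. occluded), slightly noisy.
probe = {c: gallery["alice"][c] + 0.1 * rng.normal(size=64) for c in ("eyes", "mouth")}
print(identify(probe, gallery)[0])   # -> 'alice', using only two fragments
```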


SLIRS: Sign Language Interpreting System for Human-Robot Interaction

AAAI Conferences

Deaf-mute communities around the world need an effective human-robot interaction system that can act as an interpreter in public places such as banks, hospitals, or police stations. The focus of this work is to address the challenges faced by hearing-impaired people by developing an interpreting robotic system for effective communication in public places. To this end, we utilize a previously developed neural-network-based learning architecture to recognize the Cyrillic manual alphabet, which is used for fingerspelling in Kazakhstan. In order to train and test the recognition system, we collected a depth data set of ten people and applied it to a learning-based method for gesture recognition by modeling motion data. We report results that show an average accuracy of 77.2% for recognition of the complete alphabet of 33 letters.
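The abstract does not state exactly how the 77.2% figure is averaged; the sketch below assumes a macro-averaged per-class accuracy over the 33 letters, with randomly generated labels standing in for real predictions.

```python
import numpy as np

def average_class_accuracy(y_true, y_pred, n_classes=33):
    """Macro-averaged recognition accuracy over all letter classes.

    Computes the per-class accuracy (fraction of samples of each letter
    recognized correctly) and averages over the 33 letters, so rare
    letters weigh as much as frequent ones.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_class = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            per_class.append(np.mean(y_pred[mask] == c))
    return float(np.mean(per_class))

# Toy usage with synthetic labels; with real predictions this is the
# kind of figure reported as the average alphabet accuracy.
rng = np.random.default_rng(1)
truth = rng.integers(0, 33, size=330)
preds = np.where(rng.random(330) < 0.75, truth, rng.integers(0, 33, size=330))
print(round(average_class_accuracy(truth, preds), 3))
```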