Goto


On effective human robot interaction based on recognition and association

arXiv.org Artificial Intelligence

Faces play a central role in human-robot interaction, as they do in our daily lives. The human mind can recognize a person despite the various challenges involved in face recognition, such as poor illumination, occlusion, and pose variation, but identifying a human face is a far more complex task for a humanoid robot. The recent literature on face biometric recognition is rich in applications to structured environments for solving the human identification problem, but its application to mobile robotics is limited by its inability to produce accurate identification under uneven circumstances. We tackle this face recognition problem with our proposed component-based fragmented face recognition framework, which uses only a subset of the full face, such as the eyes, nose, and mouth, to recognize a person. Its lower search cost, encouraging accuracy, and ability to handle the various challenges of face recognition make it suitable for humanoid robots. The second problem in face recognition is face spoofing, in which a face recognition system cannot distinguish between a genuine person and an impostor (a photo or video of the genuine user). The problem becomes more detrimental when robots are used as authenticators. In our research work, we investigate a depth analysis method that tests liveness in order to discriminate impostors from legitimate users. The techniques developed above are then applied to criminal identification with the NAO robot: an eyewitness interacts with NAO through a user interface, NAO asks several questions about the suspect, such as age, height, and facial shape and size, and then makes a guess about the suspect's face.
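To make the component-based ("fragmented") idea concrete, here is a minimal matching sketch: each face is represented only by crops of its eyes, nose, and mouth, each crop is reduced to a simple descriptor, and per-component similarities are fused into one identity score. The histogram descriptor, the cosine fusion, and all function names are illustrative assumptions, not the framework's actual implementation.

```python
# Minimal sketch of component-based ("fragmented") face matching.
# Assumptions: component crops (eyes, nose, mouth) are already extracted as
# grayscale arrays, and an intensity histogram stands in for the real features.
import numpy as np

COMPONENTS = ("left_eye", "right_eye", "nose", "mouth")

def component_feature(patch: np.ndarray, bins: int = 32) -> np.ndarray:
    """Intensity histogram of one facial component, L1-normalised."""
    hist, _ = np.histogram(patch.ravel(), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_fragmented(probe: dict, gallery: dict):
    """probe: {component: patch}; gallery: {person_id: {component: patch}}.
    Scores each enrolled person by the mean per-component similarity."""
    probe_feats = {c: component_feature(p) for c, p in probe.items()}
    best_id, best_score = None, -1.0
    for person_id, parts in gallery.items():
        sims = [cosine(probe_feats[c], component_feature(parts[c]))
                for c in COMPONENTS if c in probe_feats and c in parts]
        score = float(np.mean(sims)) if sims else -1.0
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

Because each component is matched independently, an occluded mouth or nose simply drops out of the score rather than corrupting a whole-face template, which is the property the fragmented framework relies on.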


Automated Pain Detection from Facial Expressions using FACS: A Review

arXiv.org Machine Learning

Facial pain expression is an important modality for assessing pain, especially when the patient's verbal ability to communicate is impaired. The facial muscle-based action units (AUs), which are defined by the Facial Action Coding System (FACS), have been widely studied and are highly reliable as a method for detecting facial expressions (FE), including valid detection of pain. Unfortunately, FACS coding by humans is a very time-consuming task that makes its clinical use prohibitive. Significant progress on automated facial expression recognition (AFER) has led to its numerous successful applications in FACS-based affective computing problems. However, only a handful of studies have been reported on automated pain detection (APD), and its application in clinical settings is still far from a reality. In this paper, we review the progress in research that has contributed to automated pain detection, with a focus on 1) the framework-level similarity between spontaneous AFER and APD problems; 2) the evolution of system design, including the recent development of deep learning methods; 3) the strategies and considerations in developing a FACS-based pain detection framework from existing research; and 4) an introduction to the most relevant databases available for AFER and APD studies. We attempt to present key considerations in extending a general AFER framework to an APD framework in clinical settings. In addition, we highlight the performance metrics used to evaluate an AFER or APD system.
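As an illustration of how AU outputs feed a pain score, the sketch below computes the Prkachin and Solomon Pain Intensity (PSPI), a commonly used FACS-based frame-level metric in this literature; the abstract itself does not prescribe this formula, and the AU intensities are assumed to come from an upstream AFER system.

```python
# Sketch: mapping FACS action-unit (AU) intensities to a frame-level pain score.
# PSPI (Prkachin & Solomon) is used as an illustrative example only; AU
# intensities are on a 0-5 scale, AU43 (eye closure) is binary.
def pspi(au: dict) -> float:
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    return (au.get(4, 0.0)
            + max(au.get(6, 0.0), au.get(7, 0.0))
            + max(au.get(9, 0.0), au.get(10, 0.0))
            + au.get(43, 0.0))

# Example: brow lowering, cheek raise, nose wrinkle, eyes open.
print(pspi({4: 2.0, 6: 3.0, 7: 1.0, 9: 1.0, 43: 0.0}))  # -> 6.0
```

A rule of this kind is what lets an APD system reuse a general AFER front end: the pain-specific part reduces to combining a handful of AU estimates per frame.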


Supporting Feedback and Assessment of Digital Ink Answers to In-Class Exercises

AAAI Conferences

Effective teaching involves treating the presentation of new material and the assessment of students' mastery of this material as part of a seamless and continuous feedback cycle. We have developed a computer system, called Classroom Learning Partner (CLP), that supports this methodology, and we have used it in teaching an introductory computer science course at MIT over the past year. Through evaluation of controlled classroom experiments, we have demonstrated that this approach reaches students who would have otherwise been left behind, and that it leads to greater attentiveness in class, greater student satisfaction, and better interactions between the instructor and student. The current CLP system consists of a network of Tablet PCs, and software for posing questions to students, interpreting their handwritten answers, and aggregating those answers into equivalence classes, each of which represents a particular level of understanding or misconception of the material. The current system supports a useful set of recognizers for specific types of answers, and employs AI techniques in the knowledge representation and reasoning necessary to support interpretation and aggregation of digital ink answers.
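The aggregation step described above can be pictured with a small sketch: once handwritten answers have been interpreted into text, they are grouped into equivalence classes by a canonical form. The normalisation rule and names here are illustrative assumptions, not CLP's actual interpretation or aggregation logic.

```python
# Sketch: aggregating interpreted digital-ink answers into equivalence classes.
# The canonicalisation rule (case/whitespace folding) is an assumption, not
# CLP's real recognizer output handling.
from collections import defaultdict

def canonicalize(answer: str) -> str:
    """Map an interpreted ink answer to a canonical key."""
    return " ".join(answer.lower().split())

def aggregate(answers: dict) -> dict:
    """answers: {student_id: interpreted_text} -> {canonical_form: [student_ids]}."""
    classes = defaultdict(list)
    for student, text in answers.items():
        classes[canonicalize(text)].append(student)
    return dict(classes)

# Example: three answers collapse into two equivalence classes.
print(aggregate({"s1": "O(n log n)", "s2": "o(N  LOG N)", "s3": "O(n^2)"}))
```

Each resulting class can then be presented to the instructor as one representative answer, which is how a class-wide view of understanding and misconceptions becomes feasible in real time.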


Notes on a New Philosophy of Empirical Science

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 Gb, it is easy to justify a 10 Mb model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
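The scoring rule implied here, compressed size of the benchmark plus the length of the compressor itself, can be sketched directly. The code below uses standard-library compressors as stand-in "theories"; the benchmark file name and the compressor sizes are placeholders, not values from the book.

```python
# Sketch of the two-part scoring rule: a theory is valued by how small it makes
# a benchmark database, counting the compressor's own length against it.
import lzma
import zlib

def two_part_score(compress, compressor_size_bytes: int, data: bytes) -> int:
    """Total codelength = size of the compressor program + size of its output."""
    return compressor_size_bytes + len(compress(data))

data = open("benchmark.db", "rb").read()  # hypothetical benchmark database

# Compressor sizes below are assumed placeholders for the length of each program.
theories = {
    "zlib": (lambda d: zlib.compress(d, 9), 50_000),
    "lzma": (lambda d: lzma.compress(d), 400_000),
}
for name, (fn, size) in theories.items():
    print(name, two_part_score(fn, size, data))
```

Counting the compressor's length is what enforces the Occam principle: a model is only worth its complexity if it buys at least that many bytes of compression on the benchmark.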


Diagnosis of liver disease using computer-assisted imaging techniques: A Review

arXiv.org Machine Learning

The evidence suggests that computer-aided diagnosis (CAD) is one of the most efficient techniques for liver disease detection, yet most recent studies lack both a clear organization and the performance parameters needed to report the results of the proposed techniques. Benchmarked comparisons appear in only a few papers, even though benchmarking lets a reader understand under which circumstances the experimental results are better and therefore useful for future implementation and adoption of the work. Liver diseases and medical image processing algorithms are among the most important topics of the day; unfortunately, the data cited in the articles in this area are scarce, which calls for revised policies to gather data and to conduct more research in this field. Ultrasound is the usual modality for detecting liver disease, and its interpretation depends on the physician's experience and skill. CAD systems are therefore very important for helping doctors interpret medical images and improving the accuracy of diagnosing various diseases. In the following, we describe the techniques used in the various stages of a CAD system, namely feature extraction, feature selection, and classification. Although many techniques have been used to classify medical images, creating a universally accepted approach remains a challenging issue.
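The three CAD stages named above can be strung together in a short pipeline sketch. The intensity-statistics features, the univariate selector, and the SVM classifier are illustrative stand-ins under stated assumptions, not a method taken from any of the reviewed studies.

```python
# Sketch of the three CAD stages: feature extraction, feature selection,
# and classification, applied to liver ultrasound regions of interest (ROIs).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(roi: np.ndarray) -> np.ndarray:
    """Simple first-order intensity statistics of one ultrasound ROI."""
    x = roi.astype(float).ravel()
    hist, _ = np.histogram(x, bins=16, density=True)
    return np.concatenate(([x.mean(), x.std(), np.median(x)], hist))

def build_cad_classifier(k_features: int = 10):
    """Univariate F-test feature selection followed by an RBF-kernel SVM."""
    return make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=k_features),
                         SVC(kernel="rbf", C=1.0))

# Usage (with hypothetical ROI arrays `rois` and disease labels `labels`):
# X = np.vstack([extract_features(r) for r in rois])
# clf = build_cad_classifier().fit(X, labels)
```

In practice the reviewed systems swap richer texture descriptors and other classifiers into the same three slots, which is why the stage-wise framing is a useful way to organize the literature.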