Automated Pain Detection from Facial Expressions using FACS: A Review

arXiv.org Machine Learning

Facial pain expression is an important modality for assessing pain, especially when the patient's ability to communicate verbally is impaired. The facial muscle-based action units (AUs) defined by the Facial Action Coding System (FACS) have been widely studied and are a highly reliable method for detecting facial expressions (FE), including valid detection of pain. Unfortunately, FACS coding by humans is so time-consuming that it is prohibitive for clinical use. Significant progress on automated facial expression recognition (AFER) has led to numerous successful applications in FACS-based affective computing problems. However, only a handful of studies have been reported on automated pain detection (APD), and its application in clinical settings is still far from a reality. In this paper, we review the research that has contributed to automated pain detection, with a focus on 1) the framework-level similarity between spontaneous AFER and APD problems; 2) the evolution of system design, including the recent development of deep learning methods; 3) strategies and considerations for developing a FACS-based pain detection framework from existing research; and 4) the most relevant databases available for AFER and APD studies. We attempt to present the key considerations in extending a general AFER framework to an APD framework for clinical settings. In addition, we highlight the performance metrics used to evaluate AFER and APD systems.
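
A concrete example of the FACS-based approach is the Prkachin and Solomon Pain Intensity (PSPI) metric, the frame-level pain score used in much of the APD literature: PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, computed from coded AU intensities. The minimal Python sketch below illustrates the computation; the dictionary-based AU representation is an illustrative choice here, not an interface from any particular system.

    def pspi(au):
        """Prkachin-Solomon Pain Intensity from FACS AU intensities.

        au maps AU number -> coded intensity (0-5 for AUs 4, 6, 7, 9, 10;
        AU43, eye closure, is binary), giving a 0-16 pain score per frame.
        """
        return (au.get(4, 0)
                + max(au.get(6, 0), au.get(7, 0))
                + max(au.get(9, 0), au.get(10, 0))
                + au.get(43, 0))

    # Example frame: brow lowerer (AU4) at intensity 2, cheek raiser (AU6)
    # at intensity 3, eyes closed (AU43).
    print(pspi({4: 2, 6: 3, 43: 1}))  # -> 6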


Inferring Sentiment from Web Images with Joint Inference on Visual and Social Cues: A Regulated Matrix Factorization Approach

AAAI Conferences

In this paper, we study the problem of understanding human sentiment from a large-scale collection of Internet images based on both image features and contextual social network information (such as friend comments and user descriptions). Despite the great strides in analyzing user sentiment based on text, the sentiment behind image content has largely been ignored. Thus, we extend the significant advances in text-based sentiment prediction to the higher-level challenge of predicting the underlying sentiment behind images. We show that neither visual nor textual features are by themselves sufficient for accurate sentiment labeling; thus, we provide a way to use both. We leverage the low-level visual features and mid-level attributes of an image, and formulate the sentiment prediction problem within a non-negative matrix tri-factorization framework, which has the flexibility to incorporate multiple modalities of information and the capability to learn from heterogeneous features jointly. We develop an optimization algorithm for finding a locally optimal solution under the proposed framework. With experiments on two large-scale datasets, we show that the proposed method improves significantly over existing state-of-the-art methods.
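
To make the tri-factorization concrete: the data matrix X is approximated as the product F S G^T with all factors non-negative, typically fit by multiplicative updates. The Python sketch below implements plain non-negative matrix tri-factorization on synthetic data; the social-cue regularization terms that distinguish the paper's actual model are omitted, so this is a baseline illustration only.

    import numpy as np

    def nmtf(X, k1, k2, iters=200, eps=1e-9, seed=0):
        """Fit X ~ F @ S @ G.T, all non-negative, by multiplicative updates."""
        m, n = X.shape
        rng = np.random.default_rng(seed)
        F, S, G = rng.random((m, k1)), rng.random((k1, k2)), rng.random((n, k2))
        for _ in range(iters):
            F *= (X @ G @ S.T) / (F @ (S @ (G.T @ G) @ S.T) + eps)
            G *= (X.T @ F @ S) / (G @ (S.T @ (F.T @ F) @ S) + eps)
            S *= (F.T @ X @ G) / ((F.T @ F) @ S @ (G.T @ G) + eps)
        return F, S, G

    # Toy image-by-feature matrix; in the paper the columns would carry
    # heterogeneous visual, attribute, and social-context signals.
    X = np.random.default_rng(1).random((50, 30))
    F, S, G = nmtf(X, k1=5, k2=4)
    print("reconstruction error:", np.linalg.norm(X - F @ S @ G.T))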


Recent Advances in Zero-shot Recognition

arXiv.org Machine Learning

With the recent renaissance of deep convolutional neural networks, encouraging breakthroughs have been achieved on supervised recognition tasks, where each class has sufficient and fully annotated training data. However, scaling recognition to a large number of classes with few or no training samples per class remains an unsolved problem. One approach to scaling up recognition is to develop models capable of recognizing unseen categories without any training instances, known as zero-shot recognition/learning. This article provides a comprehensive review of existing zero-shot recognition techniques, covering aspects ranging from models and representations to datasets and evaluation settings. We also overview related recognition tasks, including one-shot and open-set recognition, which can be seen as natural extensions of zero-shot recognition when a limited number of class samples becomes available or when zero-shot recognition is implemented in a real-world setting. Importantly, we highlight the limitations of existing approaches and point out future research directions in this new research area.
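
A common zero-shot baseline of the kind surveyed learns a mapping from visual features into a semantic space (attributes or word vectors) and assigns a test image to the nearest unseen-class embedding. The Python sketch below uses synthetic data and a simple ridge-regression mapping; it illustrates the general recipe rather than any specific method from the review.

    import numpy as np

    rng = np.random.default_rng(0)
    d_vis, d_sem = 512, 50

    # Seen-class training data: visual features paired with the semantic
    # embedding of each sample's class (synthetic placeholders here).
    X_seen = rng.normal(size=(1000, d_vis))
    S_seen = rng.normal(size=(1000, d_sem))

    # Ridge regression from visual space to semantic space.
    lam = 1.0
    W = np.linalg.solve(X_seen.T @ X_seen + lam * np.eye(d_vis),
                        X_seen.T @ S_seen)

    # At test time, project an unseen image and pick the unseen class whose
    # embedding is closest in cosine similarity (no training images needed).
    unseen_protos = rng.normal(size=(10, d_sem))  # one embedding per class
    z = rng.normal(size=(1, d_vis)) @ W           # stand-in test feature
    scores = (z @ unseen_protos.T) / (np.linalg.norm(z)
                                      * np.linalg.norm(unseen_protos, axis=1))
    print("predicted unseen class:", int(scores.argmax()))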


Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP

#artificialintelligence

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services. At the company's AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services -- Amazon Lex, Amazon Polly, Amazon Rekognition -- to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, Zendesk, and others. The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps. AWS CEO Andy Jassy noted that Amazon has been building AI and machine learning technology for 20 years and said that there are now thousands of people "dedicated to AI in our business."
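
As an illustration of the API-call route, the Python snippet below calls Rekognition and Polly through boto3. The bucket, object key, region, and output file name are placeholders, and valid AWS credentials are assumed.

    import boto3

    # Label detection with Amazon Rekognition on an image stored in S3.
    rekognition = boto3.client("rekognition", region_name="us-east-1")
    resp = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
        MaxLabels=10,
        MinConfidence=75,
    )
    for label in resp["Labels"]:
        print(label["Name"], round(label["Confidence"], 1))

    # Text-to-speech with Amazon Polly.
    polly = boto3.client("polly", region_name="us-east-1")
    speech = polly.synthesize_speech(Text="Hello from AWS.",
                                     OutputFormat="mp3", VoiceId="Joanna")
    with open("hello.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())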

