Emotion


Artificial Intelligence knows when you feel lonely

#artificialintelligence

Researchers at the University of California San Diego have devised an artificial intelligence (AI) tool to predict the level of loneliness in adults with 94% accuracy. The tool used Natural Language Processing (NLP) technology developed by IBM to process large amounts of unstructured natural speech and text data. It analysed factors such as cognition, mobility, sleep and physical activity to understand the process of aging. This tool is an example of how AI can be used in devices to detect mental health conditions. Market research firm Gartner predicts that by 2022, your personal device will know more about your emotional state than your own family members do.
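The article does not describe the underlying model, but the core idea of scoring unstructured interview text for loneliness can be illustrated with an ordinary supervised text classifier. The sketch below uses scikit-learn with TF-IDF features and logistic regression; the tiny training set and the predict_loneliness helper are hypothetical illustrations, not the IBM/UCSD pipeline.

```python
# A minimal sketch of loneliness detection from transcript text.
# The example transcripts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled interview snippets: 1 = lonely, 0 = not lonely.
texts = [
    "I hardly talk to anyone these days and the house feels empty.",
    "My friends visit every week and we play cards together.",
    "Nobody calls me anymore; evenings are the hardest.",
    "I volunteer at the community centre and stay busy with family.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def predict_loneliness(transcript: str) -> float:
    """Return the estimated probability that a transcript indicates loneliness."""
    return float(model.predict_proba([transcript])[0, 1])

print(predict_loneliness("I spend most days alone and miss having company."))
```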


Develop emotional intelligence with this heavily discounted bundle

Mashable

TL;DR: The Emotional Intelligence and Decision-Making Bundle is on sale for £25.85 as of Jan. 1, saving you 96% on the list price. With emotions running high, empathy, social skills, and self-awareness (some of the main areas of emotional intelligence) have seemingly gone out the window. But there are ways to get back in touch with your feelings and become a better human, like with this Emotional Intelligence and Decision-Making Bundle. Popularised as a concept in 1995 by psychologist and science journalist Daniel Goleman, emotional intelligence centres around the ability to monitor and manage one's own as well as others' emotions and to use them to guide one's thinking and actions. An emotionally intelligent person has a higher chance of success and a stronger ability to lead effectively.


DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition

arXiv.org Artificial Intelligence

This paper presents our pioneering effort for emotion recognition in conversation (ERC) with pre-trained language models. Unlike regular documents, conversational utterances appear alternately from different parties and are usually organized as hierarchical structures in previous work. Such structures are not conducive to the application of pre-trained language models such as XLNet. To address this issue, we propose an all-in-one XLNet model, namely DialogXL, with enhanced memory to store longer historical context and dialog-aware self-attention to deal with multi-party structures. Specifically, we first modify the recurrence mechanism of XLNet from segment-level to utterance-level in order to better model conversational data. Second, we introduce dialog-aware self-attention in place of the vanilla self-attention in XLNet to capture useful intra- and inter-speaker dependencies. Extensive experiments are conducted on four ERC benchmarks with mainstream models presented for comparison. The experimental results show that the proposed model outperforms the baselines on all the datasets. Several other experiments, such as an ablation study and error analysis, are also conducted, and the results confirm the role of the critical modules of DialogXL.
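The abstract does not include implementation details, but the idea of dialog-aware self-attention, restricting attention to intra-speaker or inter-speaker positions, can be sketched with a simple speaker-based mask. The function below is a hypothetical illustration in PyTorch, not the authors' released code; speaker_ids and dialog_aware_attention are names introduced here.

```python
# A minimal sketch of dialog-aware self-attention: the speaker ids decide
# which positions each utterance token may attend to (same speaker vs. others).
import torch
import torch.nn.functional as F

def dialog_aware_attention(q, k, v, speaker_ids, mode="intra"):
    """q, k, v: (batch, seq, dim); speaker_ids: (batch, seq) integer speaker labels."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5                    # (batch, seq, seq)
    same_speaker = speaker_ids.unsqueeze(2) == speaker_ids.unsqueeze(1)
    mask = same_speaker if mode == "intra" else ~same_speaker
    # Always let a position attend to itself so no row is fully masked.
    eye = torch.eye(q.size(1), dtype=torch.bool, device=q.device).unsqueeze(0)
    mask = mask | eye
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy usage: a two-speaker conversation of four utterance vectors.
x = torch.randn(1, 4, 8)
speakers = torch.tensor([[0, 1, 0, 1]])
intra = dialog_aware_attention(x, x, x, speakers, mode="intra")
inter = dialog_aware_attention(x, x, x, speakers, mode="inter")
```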


Multi-Classifier Interactive Learning for Ambiguous Speech Emotion Recognition

arXiv.org Artificial Intelligence

In recent years, speech emotion recognition technology has become of great significance in industrial applications such as call centers, social robots and health care. The combination of speech recognition and speech emotion recognition can improve feedback efficiency and the quality of service. Thus, speech emotion recognition has attracted much attention in both industry and academia. Since the emotions present in an entire utterance may have varied probabilities, speech emotion is likely to be ambiguous, which poses great challenges to recognition tasks. However, previous studies commonly assigned a single label or multiple labels to each utterance with certainty. Therefore, their algorithms result in low accuracies because of the inappropriate representation. Inspired by the optimally interacting theory, we address ambiguous speech emotions by proposing a novel multi-classifier interactive learning (MCIL) method. In MCIL, multiple different classifiers first mimic several individuals who have inconsistent cognitions of ambiguous emotions, and construct new ambiguous labels (the emotion probability distribution). Then, they are retrained with the new labels to interact with one another's cognitions. This procedure enables each classifier to learn better representations of ambiguous data from the others, and further improves its recognition ability. Experiments on three benchmark corpora (MAS, IEMOCAP, and FAU-AIBO) demonstrate that MCIL not only improves each classifier's performance but also raises their recognition consistency from moderate to substantial.
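The exact MCIL training procedure is in the paper; the fragment below is only a rough sketch, under the assumption that each interaction round averages the classifiers' predicted distributions into a soft "ambiguous" label and retrains every classifier on it. The mcil_round helper and the model shapes are introduced here for illustration, and soft-target cross_entropy requires PyTorch 1.10 or newer.

```python
# A rough sketch of one multi-classifier interaction round (not the authors' code).
import torch
import torch.nn as nn

def mcil_round(models, features, epochs=5, lr=1e-3):
    """models: list of nn.Module classifiers; features: (n, d) utterance features.
    Returns the soft 'ambiguous' labels built from the ensemble's predictions."""
    with torch.no_grad():
        probs = torch.stack([m(features).softmax(dim=-1) for m in models])  # (K, n, C)
        soft_labels = probs.mean(dim=0)                                     # ensemble consensus, (n, C)
    for m in models:
        opt = torch.optim.Adam(m.parameters(), lr=lr)
        for _ in range(epochs):
            # Soft targets: each classifier is pulled toward the shared distribution.
            loss = nn.functional.cross_entropy(m(features), soft_labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return soft_labels

# Toy usage: three small classifiers over 4 emotion classes.
models = [nn.Sequential(nn.Linear(40, 32), nn.ReLU(), nn.Linear(32, 4)) for _ in range(3)]
features = torch.randn(16, 40)
soft_labels = mcil_round(models, features)
```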


Conversational AI Bots with Emotions

#artificialintelligence

Can we add some emotions to bots? According to the Collins dictionary, "An emotion is a feeling such as happiness, love, fear, anger, or hatred, which can be caused by the situation that you are in or the people you are with." So, when I speak with my robot, I expect good responses with emotions based on the conversation between me and the bot. To achieve this, we should either create an artificial brain or identify emotions and respond with appropriate words. Artificial brain research is going on in most of the world-famous AI labs.
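The post's second approach, identifying the user's emotion and replying with appropriate words, can be illustrated with a very small rule-based example. Everything below (the keyword lists and the respond function) is a hypothetical toy, not a description of any particular bot framework.

```python
# A toy sketch: detect a coarse emotion from keywords, then pick a matching reply.
EMOTION_KEYWORDS = {
    "happiness": {"happy", "great", "wonderful", "glad"},
    "anger": {"angry", "furious", "annoyed", "hate"},
    "fear": {"scared", "afraid", "worried", "anxious"},
}

RESPONSES = {
    "happiness": "That's wonderful to hear! Tell me more.",
    "anger": "I'm sorry that upset you. How can I help?",
    "fear": "That sounds stressful. I'm here with you.",
    "neutral": "I see. Could you tell me a bit more?",
}

def detect_emotion(utterance: str) -> str:
    """Return the first coarse emotion whose keywords appear in the utterance."""
    words = set(utterance.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def respond(utterance: str) -> str:
    return RESPONSES[detect_emotion(utterance)]

print(respond("I'm really worried about my exam tomorrow"))
```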


Emotion AI – the future of artificial intelligence?

#artificialintelligence

Kai works as a sales manager at DMEXCO. Holding a degree in business studies, he managed his own start-up for several years. No wonder, then, that at DMEXCO he is now responsible for everything that has to do with start-ups. Besides his blog stories on the digital start-up scene, Kai's texts focus on future topics such as smart devices, IoT and innovations in the digital economy.


Women in Robotics Update: Sarah Bergbreiter, Aude Billard, Cynthia Breazeal

Robohub

In spite of the amazing contributions of women in the field of robotics, it's still possible to attend robotics conferences or see panels that don't have a single female face, let alone people of color represented! Civil rights activist Marian Wright Edelman said that "You can't be what you don't see". Women in Robotics was formed to show that there are wonderful female role models in robotics, as well as to provide an online professional network for women working in robotics and women who'd like to work in robotics. We're facing an incredible skill shortage in the rapidly growing robotics industry, so we'd like to attract newcomers from other industries as well as inspire the next generation of girls.


Artificial intelligence starts to gain empathy for humans - EDN

#artificialintelligence

Artificial intelligence (AI) is finding a home in many applications, from industrial automation to autonomous vehicles. Perhaps its most personal impact, however, is when the AI must interact with humans in providing information and services. For human interaction, the next trend for AI to embrace is the addition of emotional intelligence. When the Amazon Echo first came out in 2014, I viewed it as a perfect example of an IoT device. It required a minimum of onboard hardware capability, achieving most of its impressive capability in understanding and responding to human speech by using cloud-connected resources.
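The pattern the article describes for the Echo, a thin device that captures audio and defers understanding to the cloud, can be sketched in a few lines. The endpoint URL, payload format, and transcribe_in_cloud helper below are hypothetical placeholders, not Amazon's actual service interface.

```python
# A hypothetical sketch of the thin-device/cloud split: capture audio locally,
# send it to a cloud service for speech understanding, act on the reply.
import requests  # assumes the 'requests' package is installed

CLOUD_SPEECH_ENDPOINT = "https://example.com/speech-to-intent"  # placeholder URL

def transcribe_in_cloud(audio_bytes: bytes) -> dict:
    """Post raw audio to a (hypothetical) cloud endpoint and return its JSON reply."""
    response = requests.post(
        CLOUD_SPEECH_ENDPOINT,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"transcript": "...", "intent": "...", "emotion": "..."}

# On the device, audio_bytes would come from the microphone driver:
# result = transcribe_in_cloud(audio_bytes)
```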


Generative Adversarial Networks in Human Emotion Synthesis: A Review

arXiv.org Artificial Intelligence

Synthesizing realistic data samples is of great value to both the academic and industrial communities. Deep generative models have become an emerging topic in various research areas such as computer vision and signal processing. Affective computing, a topic of broad interest in the computer vision community, has been no exception and has benefited from generative models. In fact, affective computing has seen a rapid proliferation of generative models during the last two decades. Applications of such models include, but are not limited to, emotion recognition and classification, unimodal emotion synthesis, and cross-modal emotion synthesis. We therefore conducted a review of recent advances in human emotion synthesis, studying the available databases and the advantages and disadvantages of the generative models, along with the related training strategies, for two principal human communication modalities, namely audio and video. In this context, facial expression synthesis, speech emotion synthesis, and audio-visual (cross-modal) emotion synthesis are reviewed extensively under different application scenarios. Finally, we discuss open research problems to push the boundaries of this research area for future work.
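As a rough illustration of the generative models the review covers, the fragment below sketches a conditional GAN whose generator and discriminator are both conditioned on an emotion label, so samples can be synthesized for a requested emotion. The architecture, dimensions, and names are hypothetical and far smaller than anything surveyed in the paper.

```python
# A minimal conditional-GAN sketch for emotion-conditioned synthesis (illustrative only).
import torch
import torch.nn as nn

NUM_EMOTIONS, LATENT_DIM, FEATURE_DIM = 6, 32, 128  # toy sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EMOTIONS, 16)          # emotion conditioning
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 16, 256), nn.ReLU(),
            nn.Linear(256, FEATURE_DIM),
        )

    def forward(self, z, emotion):
        return self.net(torch.cat([z, self.embed(emotion)], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EMOTIONS, 16)
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM + 16, 256), nn.ReLU(),
            nn.Linear(256, 1),                                # real/fake logit
        )

    def forward(self, x, emotion):
        return self.net(torch.cat([x, self.embed(emotion)], dim=-1))

# One illustrative generator step: fool the discriminator for a requested emotion.
G, D = Generator(), Discriminator()
z = torch.randn(8, LATENT_DIM)
emotion = torch.randint(0, NUM_EMOTIONS, (8,))
fake = G(z, emotion)
g_loss = nn.functional.binary_cross_entropy_with_logits(
    D(fake, emotion), torch.ones(8, 1)
)
g_loss.backward()
```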


EEGS: A Transparent Model of Emotions

arXiv.org Artificial Intelligence

This paper presents the computational details of our emotion model, EEGS, and also provides an overview of a three-stage validation methodology used for the evaluation of our model, which is also applicable to other computational models of emotion. A major gap in the existing emotion modelling literature has been the lack of computational/technical details of the implemented models, which not only makes it difficult for early-stage researchers to understand the area but also prevents benchmarking of the developed models by expert researchers. We partly addressed these issues by presenting technical details for the computation of appraisal variables in our previous work. In this paper, we present mathematical formulas for the calculation of emotion intensities based on the theoretical premises of appraisal theory. Moreover, we discuss how we enable our emotion model to reach a regulated emotional state for the social acceptability of autonomous agents. We hope this paper will allow better transparency of knowledge, accurate benchmarking and further evolution of the field of emotion modelling.
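The paper's actual formulas are not reproduced in this summary, so the fragment below only sketches the general appraisal-theory idea: emotion intensities are computed from appraisal variables and then nudged toward a regulated level. The variable names, weights, and the regulate step are hypothetical placeholders, not the EEGS equations.

```python
# A hypothetical sketch of appraisal-based emotion intensity with a simple regulation step.
# The weights and the regulation rule are illustrative, not the EEGS formulas.
APPRAISAL_WEIGHTS = {
    "joy":   {"desirability": 0.8, "likelihood": 0.2},
    "fear":  {"undesirability": 0.6, "likelihood": 0.4},
    "anger": {"undesirability": 0.5, "blameworthiness": 0.5},
}

def emotion_intensities(appraisals: dict) -> dict:
    """Weighted sum of appraisal variables (each in [0, 1]) per emotion."""
    return {
        emotion: sum(w * appraisals.get(var, 0.0) for var, w in weights.items())
        for emotion, weights in APPRAISAL_WEIGHTS.items()
    }

def regulate(intensities: dict, target: float = 0.5, rate: float = 0.3) -> dict:
    """Move each intensity a fraction of the way toward a socially acceptable target."""
    return {emotion: i + rate * (target - i) for emotion, i in intensities.items()}

appraisals = {"desirability": 0.1, "undesirability": 0.9, "likelihood": 0.7, "blameworthiness": 0.8}
print(regulate(emotion_intensities(appraisals)))
```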