Why is AI Considered a Misfit to Read Human Emotions?

#artificialintelligence

AI has been reshaping industries and business ecosystems with its capacity to accelerate automation and provide business intelligence. Disruptive technologies like artificial intelligence, machine learning, and blockchain have enabled companies to create better user experiences and advance business growth. Emotional AI is a relatively recent development in modern technology, built on the claim that AI systems can read facial expressions and analyze human emotions. This method is also known as affect recognition technology. Recently, Article 19, a British human rights organization, published a report on the increasing use of AI-based emotion recognition technology in China by law enforcement authorities, corporate bodies, and the state itself.


Us vs. Them: A Dataset of Populist Attitudes, News Bias and Emotions

arXiv.org Artificial Intelligence

Computational modelling of political discourse tasks has become an increasingly important area of research in natural language processing. Populist rhetoric has risen across the political sphere in recent years; however, computational approaches to it have been scarce due to its complex nature. In this paper, we present the new Us vs. Them dataset, consisting of 6861 Reddit comments annotated for populist attitudes, and the first large-scale computational models of this phenomenon. We investigate the relationship between populist mindsets and social groups, as well as a range of emotions typically associated with them. We set a baseline for two tasks related to populist attitudes and present a set of multi-task learning models that leverage, and demonstrate the importance of, emotion and group identification as auxiliary tasks.
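The multi-task setup the abstract describes can be sketched as a shared encoder feeding one head per task. This is a minimal illustrative sketch, not the paper's actual architecture: the dimensions, label counts, and the single-layer encoder are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper's real model and label sets differ.
D_IN, D_HID = 32, 16                       # comment embedding size, shared hidden size
N_POPULIST, N_GROUP, N_EMOTION = 2, 4, 6   # illustrative label counts per task

# A shared encoder feeds three task-specific heads: the main populist-attitude
# classifier plus the two auxiliary heads (group and emotion identification).
W_shared = rng.normal(size=(D_IN, D_HID))
heads = {
    "populist": rng.normal(size=(D_HID, N_POPULIST)),
    "group": rng.normal(size=(D_HID, N_GROUP)),
    "emotion": rng.normal(size=(D_HID, N_EMOTION)),
}

def forward(x):
    """Return logits for every task from one shared representation."""
    h = np.tanh(x @ W_shared)              # representation shared across tasks
    return {task: h @ W for task, W in heads.items()}

batch = rng.normal(size=(5, D_IN))         # 5 hypothetical comment embeddings
logits = forward(batch)
```

During training, the losses of the auxiliary heads would be added to the main-task loss so that gradients through `W_shared` carry emotion and group information into the shared representation.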


Suspect AI: Vibraimage, Emotion Recognition Technology, and Algorithmic Opacity

arXiv.org Artificial Intelligence

Vibraimage is a digital system that quantifies a subject's mental and emotional state by analysing video footage of the movements of their head. Vibraimage is used by police, nuclear power station operators, airport security and psychiatrists in Russia, China, Japan and South Korea, and has been deployed at an Olympic Games, FIFA World Cup, and G7 Summit. Yet there is no reliable evidence that the technology is actually effective; indeed, many claims made about its effects seem unprovable. What exactly does vibraimage measure, and how has it acquired the power to penetrate the highest-profile and most sensitive security infrastructure across Russia and Asia? I first trace the development of the emotion recognition industry before examining attempts by vibraimage's developers and affiliates to scientifically legitimate the technology, concluding that the disciplining power and corporate value of vibraimage is generated through its very opacity, in contrast to increasing demands across the social sciences for transparency. I propose the term 'suspect AI' to describe the growing number of systems like vibraimage that algorithmically classify suspects/non-suspects, yet are themselves deeply suspect. Popularising this term may help resist such technologies' reductivist approaches to 'reading' -- and exerting authority over -- emotion, intentionality and agency.


AI in Pursuit of Happiness, Finding Only Sadness: Multi-Modal Facial Emotion Recognition Challenge

arXiv.org Machine Learning

The importance of automated Facial Emotion Recognition (FER) grows the more common human-machine interactions become, and these will only continue to increase dramatically with time. A common method to describe human sentiment or feeling is the categorical model of the '7 basic emotions', consisting of 'Angry', 'Disgust', 'Fear', 'Happiness', 'Sadness', 'Surprise' and 'Neutral'. The 'Emotion Recognition in the Wild' (EmotiW) competition is now in its 7th year and has become the standard benchmark for measuring FER performance. The focus of this paper is the EmotiW sub-challenge of classifying videos in the 'Acted Facial Expression in the Wild' (AFEW) dataset, consisting of both visual and audio modalities, into one of the above classes. Machine learning has exploded as a research topic in recent years, with advancements in 'Deep Learning' a key part of this. Although Deep Learning techniques have been widely applied to the FER task by entrants in previous years, this paper has two main contributions: (i) to apply the latest 'state-of-the-art' visual and temporal networks and (ii) to explore various methods of fusing features extracted from the visual and audio elements, enriching the information available to the final model making the prediction. A number of complex issues arise when trying to classify emotions for 'in-the-wild' video sequences, which the above two approaches attempt to directly address. There are some positive findings when comparing the results of this paper to past submissions, indicating that further research into the proposed methods, and fine-tuning of the models deployed, could result in another step forward in the field of automated FER.
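One simple way to fuse features from the two modalities, as the abstract describes, is late fusion: concatenate the per-modality feature vectors and classify the fused vector. This is a hedged sketch only; the feature sizes and the linear classifier are assumptions, not the paper's actual fusion methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes; the networks used in the paper differ.
D_VISUAL, D_AUDIO, N_CLASSES = 128, 40, 7   # 7 basic emotion classes

def late_fusion(visual_feat, audio_feat, W, b):
    """Concatenate per-modality features, then classify the fused vector."""
    fused = np.concatenate([visual_feat, audio_feat])
    return fused @ W + b                    # logits over the 7 emotion classes

W = rng.normal(size=(D_VISUAL + D_AUDIO, N_CLASSES))
b = np.zeros(N_CLASSES)

# One video clip: features from a visual network and an audio network.
visual_feat = rng.normal(size=D_VISUAL)
audio_feat = rng.normal(size=D_AUDIO)
logits = late_fusion(visual_feat, audio_feat, W, b)
pred = int(np.argmax(logits))               # index into the 7 basic emotions
```

Alternatives the fusion family includes are early fusion (combining raw inputs) and score-level fusion (averaging each modality's class probabilities); late feature fusion is shown here only because it is the easiest to sketch.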


Improving speech emotion recognition via Transformer-based Predictive Coding through transfer learning

arXiv.org Machine Learning

Speech emotion recognition is an important aspect of human-computer interaction. Prior works propose various transfer learning approaches to deal with the limited samples available for speech emotion recognition. However, they require labeled data for the source task, which is costly to collect. To address this, we focus on an unsupervised task, predictive coding, for which nearly unlimited data is available in most domains. In this paper, we use a multi-layer Transformer model for predictive coding, followed by transfer learning approaches that share the knowledge of the pre-trained predictive model with the speech emotion recognition task. We conduct experiments on IEMOCAP, and the results reveal the advantages of the proposed method. Our method reaches 65.03% weighted accuracy, outperforming some currently advanced approaches.
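The two-stage recipe in the abstract (unsupervised predictive-coding pre-training, then transfer to emotion classification) can be sketched as follows. This is a deliberately simplified illustration: the paper uses a multi-layer Transformer, which is stood in for here by a single linear encoder, and all sizes and the mean-pooling step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's Transformer encoder is reduced to one
# linear map purely for brevity.
D_FEAT, N_EMOTIONS = 20, 4

def predictive_coding_loss(frames, W_enc):
    """Unsupervised objective: encode frame t and predict frame t+1 (MSE).

    No emotion labels are needed, so any speech data can be used.
    """
    preds = frames[:-1] @ W_enc             # prediction for each next frame
    return float(np.mean((preds - frames[1:]) ** 2))

# Stage 1: "pre-train" on unlabeled speech features.
frames = rng.normal(size=(10, D_FEAT))      # one utterance's feature sequence
W_enc = rng.normal(size=(D_FEAT, D_FEAT)) * 0.1
loss = predictive_coding_loss(frames, W_enc)

# Stage 2: transfer -- reuse the pre-trained encoder under an emotion head.
W_head = rng.normal(size=(D_FEAT, N_EMOTIONS))
utterance_repr = (frames @ W_enc).mean(axis=0)   # pooled encoded sequence
emotion_logits = utterance_repr @ W_head
```

In a real pipeline, stage 1 would minimize `predictive_coding_loss` by gradient descent over unlabeled speech, and stage 2 would fine-tune (or freeze) the encoder while training the emotion head on the labeled IEMOCAP data.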