Emotion


Boundary-aware Prototype-driven Adversarial Alignment for Cross-Corpus EEG Emotion Recognition

Li, Guangli, Wu, Canbiao, Tian, Na, Zhang, Li, Liang, Zhen

arXiv.org Machine Learning

Electroencephalography (EEG)-based emotion recognition suffers from severe performance degradation when models are transferred across heterogeneous datasets due to physiological variability, experimental paradigm differences, and device inconsistencies. Existing domain adversarial methods primarily enforce global marginal alignment and often overlook class-conditional mismatch and decision boundary distortion, limiting cross-corpus generalization. In this work, we propose a unified Prototype-driven Adversarial Alignment (PAA) framework for cross-corpus EEG emotion recognition. The framework is progressively instantiated in three configurations: PAA-L, which performs prototype-guided local class-conditional alignment; PAA-C, which further incorporates contrastive semantic regularization to enhance intra-class compactness and inter-class separability; and PAA-M, the full boundary-aware configuration that integrates dual relation-aware classifiers within a three-stage adversarial optimization scheme to explicitly refine controversial samples near decision boundaries. By combining prototype-guided subdomain alignment, contrastive discriminative enhancement, and boundary-aware aggregation within a coherent adversarial architecture, the proposed framework reformulates emotion recognition as a relation-driven representation learning problem, reducing sensitivity to label noise and improving cross-domain stability. Extensive experiments on SEED, SEED-IV, and SEED-V demonstrate state-of-the-art performance under four cross-corpus evaluation protocols, with average improvements of 6.72%, 5.59%, 6.69%, and 4.83%, respectively. Furthermore, the proposed framework generalizes effectively to clinical depression identification scenarios, validating its robustness in real-world heterogeneous settings. The source code is available at https://github.com/WuCB-BCI/PAA
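The core idea behind prototype-guided class-conditional alignment can be illustrated with a minimal sketch: compute a per-class prototype (mean feature) in the labeled source domain, a soft-weighted prototype in the unlabeled target domain using the model's soft predictions as pseudo-labels, and penalize the distance between matching prototypes. The function name, the soft pseudo-label weighting, and the squared-Euclidean distance here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prototype_alignment_loss(feat_s, y_s, feat_t, p_t, n_classes):
    """Distance between per-class prototypes of the two domains.

    feat_s: (Ns, d) source features; y_s: (Ns,) hard source labels.
    feat_t: (Nt, d) target features; p_t: (Nt, C) soft target predictions
    used as pseudo-label weights (the target domain is unlabeled).
    """
    loss, used = 0.0, 0
    for c in range(n_classes):
        mask = (y_s == c)
        if not mask.any():           # class absent from this source batch
            continue
        proto_s = feat_s[mask].mean(axis=0)          # source class prototype
        w = p_t[:, c:c + 1]                          # soft assignment weights
        proto_t = (w * feat_t).sum(axis=0) / (w.sum() + 1e-8)  # target prototype
        loss += float(np.sum((proto_s - proto_t) ** 2))
        used += 1
    return loss / max(used, 1)
```

When source and target features for each class coincide, the loss is (numerically) zero; it grows as the per-class prototypes drift apart, which is the mismatch that global marginal alignment alone cannot penalize.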


'We May Have a Crisis on Our Hands': The Unregulated Rise of Emotionally Intelligent AI

TIME - Tech

Pillay is an editorial fellow at TIME. At least once a month, two-thirds of people who regularly use AI turn to their bots for advice on sensitive personal issues and emotional support. Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders--and the companies building AI. That's according to data from 70 countries, gathered by the Collective Intelligence Project (CIP).





Supplementary Materials for Incomplete Multimodality-Diffused Emotion Recognition

Neural Information Processing Systems

In this supplementary material, we first present the details of the conditional score network in Sec. 2, and in Sec. 4 we conduct experiments on the Chinese MER dataset CH-SIMS.

Table 1: Hyperparameter settings in IMDer.

Hyperparameter                                  CMU-MOSI   CMU-MOSEI
Optimizer                                       Adam       Adam
Batch size                                      32         128
Learning rate                                   0.001      0.002
σ in the SDE (subsequently fixed for the
model, i.e., not learnable)                     25         25
Euler-Maruyama solver iterations                500        500
Shallow feature extractor kernel size for E     —          —

CH-SIMS contains 2281 refined video segments with fine-grained annotations of modalities. For the vision modality, we use MultiComp OpenFace2.0. The experimental results are listed in Tab. 3. Our proposed IMDer consistently achieves better results than MMIN or GCNet under the random missing protocol.




To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation

Neural Information Processing Systems

Accuracy is a commonly adopted performance metric in various classification tasks, which measures the proportion of correctly classified samples among all samples. It assumes equal importance for all classes, hence equal severity for all misclassifications. However, in emotion classification, due to the psychological similarities between emotions, misclassifying a certain emotion into one class may be more severe than into another; e.g., misclassifying 'excitement' as 'anger' is clearly more severe than misclassifying it as 'awe'. Albeit highly meaningful for many applications, metrics capable of measuring these cases of misclassification in visual emotion recognition tasks have yet to be explored. In this paper, based on Mikel's emotion wheel from psychology, we propose a novel approach for evaluating performance in visual emotion recognition, which takes into account the distance on the emotion wheel between different emotions to mimic the psychological nuances of emotions. Experimental results in semi-supervised learning on emotion recognition and a user study show that our proposed metric is more effective than accuracy at assessing performance and conforms to the cognitive laws of human emotions.
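The abstract's idea of grading errors by distance on the emotion wheel can be sketched as a weighted score: a correct prediction earns 1, and a wrong one earns credit that decays with the number of steps between the two emotions on the wheel. Note the wheel ordering below is an assumption chosen only to reproduce the abstract's example (awe sits nearer to excitement than anger does); the paper's actual wheel geometry and weighting may differ.

```python
# Illustrative placement of Mikel's eight emotions around a wheel.
# This ordering is an assumption for demonstration, not the paper's.
WHEEL = ["contentment", "awe", "excitement", "amusement",
         "sadness", "fear", "disgust", "anger"]

def wheel_distance(a, b):
    """Shortest number of steps between two emotions on the circular wheel."""
    i, j = WHEEL.index(a), WHEEL.index(b)
    d = abs(i - j)
    return min(d, len(WHEEL) - d)

def wheel_weighted_score(y_true, y_pred):
    """Mean score in [0, 1]: 1 for an exact match, linearly decaying
    with wheel distance down to 0 at the opposite side of the wheel."""
    max_d = len(WHEEL) // 2
    total = sum(1.0 - wheel_distance(t, p) / max_d
                for t, p in zip(y_true, y_pred))
    return total / len(y_true)
```

Under this sketch, predicting 'awe' for a true 'excitement' loses less credit than predicting 'anger', matching the severity ordering the abstract describes, while plain accuracy would punish both errors equally.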


E^3: Exploring Embodied Emotion Through A Large-Scale Egocentric Video Dataset

Neural Information Processing Systems

Understanding human emotions is fundamental to enhancing human-computer interaction, especially for embodied agents that mimic human behavior. Traditional emotion analysis often takes a third-person perspective, limiting the ability of agents to interact naturally and empathetically. To address this gap, this paper presents $E^3$ for Exploring Embodied Emotion, the first massive first-person view video dataset.