LibEMER: A novel benchmark and algorithms library for EEG-based Multimodal Emotion Recognition

Zejun Liu, Yunshan Chen, Chengxi Xie, Yugui Xie, Huan Liu


ABSTRACT

EEG-based multimodal emotion recognition (EMER) has gained significant attention and witnessed notable advancements, as the inherent complexity of human neural systems has motivated substantial efforts toward multimodal approaches. However, this field currently suffers from three critical limitations: (i) the absence of open-source implementations. To address these challenges, we introduce LibEMER, a unified evaluation framework that provides fully reproducible PyTorch implementations of curated deep learning methods alongside standardized protocols for data preprocessing, model realization, and experimental setups. This framework enables unbiased performance assessment on three widely-used public datasets across two learning tasks. The open-source library is publicly accessible at: LibEMER

Index Terms: multimodal learning, emotion recognition, benchmark, electroencephalography (EEG), open-source library

1. INTRODUCTION

EEG-based multimodal emotion recognition (EMER) represents a critical research domain within affective computing, focusing on the development of computational models for precise identification of human emotional states. As electroencephalography (EEG) provides direct measurements of cortical neural activity, it has received continuous attention.
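To make the idea of a standardized, reproducible evaluation pipeline concrete, the sketch below shows a minimal PyTorch training-and-evaluation loop over synthetic EEG feature vectors. It is an illustration only: the class name SimpleEEGClassifier, the tensor shapes (62 channels times 5 features), and the three-class label space are assumptions for this sketch and do not correspond to LibEMER's actual API, models, or datasets.

```python
# Minimal, hypothetical sketch of a unified train/evaluate pipeline.
# All names and shapes here are illustrative assumptions, not LibEMER's API.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for preprocessed EEG features:
# 128 trials, 62 channels x 5 band-wise features per channel.
X = torch.randn(128, 62 * 5)
y = torch.randint(0, 3, (128,))  # 3 emotion classes (e.g., neg/neu/pos)
loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

class SimpleEEGClassifier(nn.Module):
    """Toy fully-connected baseline for flattened EEG feature vectors."""
    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        return self.net(x)

model = SimpleEEGClassifier(in_dim=62 * 5, n_classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training epoch over the synthetic data.
model.train()
for xb, yb in loader:
    optimizer.zero_grad()
    loss = criterion(model(xb), yb)
    loss.backward()
    optimizer.step()

# Evaluation on the same synthetic data (for illustration only).
model.eval()
with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"toy accuracy: {acc:.3f}")
```

In a real benchmark, the synthetic tensors would be replaced by standardized preprocessing of the public datasets, and the toy classifier by the reproduced deep learning methods, with fixed splits and protocols so that reported numbers are directly comparable.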