MVRS: The Multimodal Virtual Reality Stimuli-based Emotion Recognition Dataset
Mousavi, Seyed Muhammad Hossein, Ilanloo, Atiye
arXiv.org Artificial Intelligence
Automatic emotion recognition has become increasingly important with the rise of AI, especially in fields such as healthcare, education, and automotive systems. However, there is a shortage of multimodal datasets, particularly ones involving body motion and physiological signals, which limits progress in the field. To address this, the MVRS dataset is introduced, featuring synchronized recordings from 13 participants aged 12 to 60 exposed to VR-based emotional stimuli (relaxation, fear, stress, sadness, joy). Data were collected using eye tracking (via a webcam in a VR headset), body motion (Kinect v2), and EMG and GSR signals (Arduino UNO), all timestamp-aligned. Participants followed a unified protocol with informed consent and questionnaires. Features from each modality were extracted, fused using early and late fusion techniques, and evaluated with classifiers to confirm the dataset's quality and emotion separability, making MVRS a valuable contribution to multimodal affective computing.
Sep-9-2025
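The abstract's fusion step can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature matrices are synthetic stand-ins for the extracted eye-tracking, body-motion, and EMG/GSR features, and the logistic-regression classifier is an assumption chosen for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-modality feature matrices; the real dataset
# provides eye-tracking, Kinect v2 body-motion, and EMG/GSR features.
n = 200
y = rng.integers(0, 5, size=n)                  # five emotion classes
eye  = rng.normal(size=(n, 8))  + 0.5 * y[:, None]
body = rng.normal(size=(n, 12)) + 0.5 * y[:, None]
phys = rng.normal(size=(n, 4))  + 0.5 * y[:, None]

# Early fusion: concatenate all modality features, train one classifier.
X_early = np.hstack([eye, body, phys])
early_clf = LogisticRegression(max_iter=1000).fit(X_early, y)

# Late fusion: one classifier per modality, average predicted probabilities.
modalities = (eye, body, phys)
clfs = [LogisticRegression(max_iter=1000).fit(X, y) for X in modalities]
proba = np.mean([c.predict_proba(X) for c, X in zip(clfs, modalities)], axis=0)
late_pred = proba.argmax(axis=1)

early_acc = early_clf.score(X_early, y)
late_acc = (late_pred == y).mean()
```

Early fusion lets the classifier model cross-modality interactions, while late fusion keeps per-modality models independent, which helps when a modality drops out or has a different sampling rate.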