Emotion-Guided Image to Music Generation
Souraja Kundu, Saket Singh, Yuji Iwahori
arXiv.org Artificial Intelligence
Generating music from images can enhance various applications, including background music for photo slideshows, social media experiences, and video creation. This paper presents an emotion-guided image-to-music generation framework that leverages the Valence-Arousal (VA) emotional space to produce music that aligns with the emotional tone of a given image. Unlike previous models that rely on contrastive learning for emotional consistency, the proposed approach directly integrates a VA loss function to enable accurate emotional alignment. The model employs a CNN-Transformer architecture, featuring pre-trained CNN image feature extractors and three Transformer encoders to capture complex, high-level emotional features from MIDI music. Three Transformer decoders refine these features to generate musically and emotionally consistent MIDI sequences. Experimental results on a newly curated emotionally paired image-MIDI dataset demonstrate the proposed model's superior performance across metrics such as Polyphony Rate, Pitch Entropy, Groove Consistency, and loss convergence.
Oct-29-2024
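The abstract states that the model replaces contrastive learning with a direct Valence-Arousal (VA) loss, but does not give the formula. A minimal sketch of one plausible choice, assuming a mean-squared-error alignment between the emotion estimated from the generated music and the emotion extracted from the input image, each represented as a (valence, arousal) point:

```python
# Hypothetical sketch: the paper's abstract does not specify the exact VA loss,
# so this assumes a simple MSE in the 2-D Valence-Arousal space, with both the
# image emotion and the generated-music emotion given as (valence, arousal) pairs.

def va_loss(pred_va, target_va):
    """Mean squared error between predicted and target (valence, arousal) pairs."""
    assert len(pred_va) == len(target_va) and len(pred_va) > 0
    total = 0.0
    for (pv, pa), (tv, ta) in zip(pred_va, target_va):
        total += (pv - tv) ** 2 + (pa - ta) ** 2
    return total / len(pred_va)

# Example: the generated music reads as slightly less positive and less
# energetic than the input image's emotion.
image_va = [(0.6, 0.4)]   # image emotion: positive valence, moderate arousal
music_va = [(0.5, 0.2)]   # estimated emotion of the generated MIDI
print(va_loss(music_va, image_va))
```

Minimizing such a loss pulls the generated music's emotion point toward the image's point directly, rather than only separating matched from mismatched pairs as a contrastive objective would.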