AudioMAE++: learning better masked audio representations with SwiGLU FFNs
Yadav, Sarthak, Theodoridis, Sergios, Tan, Zheng-Hua
arXiv.org Artificial Intelligence
ABSTRACT Masked Autoencoders (MAEs) trained on audio spectrogram patches have emerged as a prominent approach for learning self-supervised audio representations. While several recent papers have evaluated key aspects of training MAEs on audio data, the majority of these approaches still leverage vanilla transformer building blocks, whereas the transformer community has seen steady integration of newer architectural advancements. In this work, we propose AudioMAE++, a revamped audio masked autoencoder with two such enhancements, namely macaron-style transformer blocks with gated linear units. When pretrained on the AudioSet dataset, the proposed AudioMAE++ models outperform existing MAE-based approaches on 10 diverse downstream tasks, demonstrating excellent performance on audio classification and speech-based benchmarks. The proposed AudioMAE++ models also demonstrate excellent scaling characteristics, outperforming directly comparable standard MAE baselines with up to 4× more parameters.
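The abstract's two named enhancements can be illustrated together: a SwiGLU feed-forward network (a gated linear unit with a Swish gate) applied in macaron style, i.e. as half-step residual updates sandwiching the attention block. The sketch below is a minimal NumPy illustration under common definitions of SwiGLU and macaron residuals; the function and weight names (`swiglu_ffn`, `w_gate`, `w_up`, `w_down`) are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def swish(x):
    # Swish / SiLU activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w_gate, w_up, w_down):
    # SwiGLU FFN: project down( swish(gate(x)) * up(x) )
    # The elementwise product of the gated and ungated branches
    # is what distinguishes a GLU-style FFN from a vanilla MLP.
    return (swish(x @ w_gate) * (x @ w_up)) @ w_down

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 16
x = rng.normal(size=(4, d_model))          # 4 tokens, d_model features each
w_gate = rng.normal(size=(d_model, d_hidden))
w_up = rng.normal(size=(d_model, d_hidden))
w_down = rng.normal(size=(d_hidden, d_model))

# Macaron-style residual: the FFN contributes a half-step (0.5 scale)
# both before and after attention; only one half-step is shown here.
y = x + 0.5 * swiglu_ffn(x, w_gate, w_up, w_down)
print(y.shape)  # (4, 8)
```

Note that the residual keeps the output in the model dimension, so two such half-step FFNs can wrap a standard self-attention block without any shape changes.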
Jul-15-2025
- Country:
- Asia
- China > Beijing
- Beijing (0.04)
- Middle East > Republic of Türkiye
- Istanbul Province > Istanbul (0.40)
- Europe
- Denmark > North Jutland
- Aalborg (0.04)
- Greece > Attica
- Athens (0.04)
- Italy > Tuscany
- Florence (0.04)
- North America > United States
- Colorado (0.04)
- Genre:
- Research Report (0.50)
- Technology: