STAR-Bench: Probing Deep Spatio-Temporal Reasoning as Audio 4D Intelligence
Zihan Liu, Zhikang Niu, Qiuyang Xiao, Zhisheng Zheng, Ruoqi Yuan, Yuhang Zang, Yuhang Cao, Xiaoyi Dong, Jianze Liang, Xie Chen, Leilei Sun, Dahua Lin, Jiaqi Wang
Despite rapid progress in Multimodal Large Language Models and Large Audio-Language Models, existing audio benchmarks largely test semantics that can be recovered from text captions, masking deficits in fine-grained perceptual reasoning. We formalize audio 4D intelligence, defined as reasoning over sound dynamics in time and 3D space, and introduce STAR-Bench to measure it. STAR-Bench combines a Foundational Acoustic Perception setting (six attributes under absolute and relative regimes) with a Holistic Spatio-Temporal Reasoning setting that includes segment reordering for continuous and discrete processes, and spatial tasks spanning static localization, multi-source relations, and dynamic trajectories. Our data curation pipeline uses two methods to ensure high-quality samples: for foundational tasks, we use procedurally synthesized and physics-simulated audio; for holistic data, we follow a four-stage process that includes human annotation and final selection based on human performance. Unlike prior benchmarks, where caption-only answering reduces accuracy only slightly, STAR-Bench induces far larger drops (-31.5% temporal, -35.2% spatial), evidencing its focus on cues that are hard to describe linguistically. Evaluating 19 models reveals substantial gaps compared with humans, as well as a capability hierarchy: closed-source models are bottlenecked by fine-grained perception, while open-source models lag across perception, knowledge, and reasoning. STAR-Bench thus provides critical insights and a clear path forward for developing future models with a more robust understanding of the physical world.

As a fundamental modality of human perception, audio plays a pivotal role in communication, aesthetic appreciation, and situational awareness, complementing the limitations of visual perception. With the rise of Multimodal Large Language Models (MLLMs) (Comanici et al., 2025; Achiam et al., 2023) and especially Large Audio-Language Models (LALMs) (Chu et al., 2024; Goel et al., 2025), these models have shown impressive capabilities in understanding audio, representing a crucial step toward diverse applications such as embodied intelligence (Paul et al., 2022). To drive progress, a series of audio benchmarks has been introduced (Yang et al., 2024; Sakshi et al., 2025), covering traditional tasks like Automatic Speech Recognition (ASR) and sound event classification.
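To make the foundational-task curation described above concrete, the sketch below procedurally synthesizes one relative-pitch discrimination item of the kind STAR-Bench's Foundational Acoustic Perception setting targets. This is an illustrative assumption, not the authors' pipeline: the sample rate, tone durations, silence gap, and two-semitone offset are all hypothetical parameter choices.

```python
# Minimal sketch (not the STAR-Bench pipeline): procedurally synthesizing a
# relative-pitch item. All parameters below are illustrative assumptions.
import numpy as np

SR = 16_000  # sample rate in Hz (assumed)

def tone(freq_hz: float, dur_s: float, sr: int = SR) -> np.ndarray:
    """Synthesize a pure sine tone with 10 ms fade-in/out to avoid clicks."""
    t = np.arange(int(sr * dur_s)) / sr
    wave = 0.5 * np.sin(2 * np.pi * freq_hz * t)
    fade = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)
    return (wave * fade).astype(np.float32)

def make_relative_pitch_item(base_hz: float = 440.0, semitones: float = 2.0):
    """Return (audio, label): two tones separated by silence; the label says
    whether the second tone is higher or lower than the first."""
    direction = np.random.choice([-1, 1])
    second_hz = base_hz * 2 ** (direction * semitones / 12)  # equal temperament
    silence = np.zeros(int(0.3 * SR), dtype=np.float32)
    audio = np.concatenate([tone(base_hz, 0.5), silence, tone(second_hz, 0.5)])
    return audio, ("higher" if direction > 0 else "lower")

audio, label = make_relative_pitch_item()
print(audio.shape, label)  # e.g. (20800,) 'higher'
```

Because every acoustic attribute of such an item is set programmatically, the ground-truth answer is exact by construction, which is presumably why the paper favors synthesized and physics-simulated audio for the foundational regime.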
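A segment-reordering item from the Holistic Spatio-Temporal Reasoning setting can be pictured in a similar spirit. The sketch below is a minimal construction under assumed parameters (four equal-length segments, exact-match scoring); the benchmark's actual segmentation, shuffling, and scoring rules may differ.

```python
# Minimal sketch (illustrative, not the benchmark code): building a
# segment-reordering item from a continuous recording.
import numpy as np

def make_reordering_item(audio: np.ndarray, n_segments: int = 4, seed: int = 0):
    """Split audio into equal segments, shuffle them, and return the shuffled
    segments plus the ground-truth ordering that restores the original clip."""
    rng = np.random.default_rng(seed)
    segments = np.array_split(audio, n_segments)
    perm = rng.permutation(n_segments)       # perm[i] = original index of shuffled segment i
    shuffled = [segments[i] for i in perm]
    # answer[j] = index of the shuffled segment that comes j-th in the true order
    answer = [int(i) for i in np.argsort(perm)]
    return shuffled, answer

def exact_match(pred: list[int], answer: list[int]) -> bool:
    """Strict scoring: the full predicted order must match the ground truth."""
    return pred == answer

audio = np.random.randn(16_000 * 8).astype(np.float32)  # 8 s dummy clip
shuffled, answer = make_reordering_item(audio)
print(answer, exact_match(answer, answer))
```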
arXiv.org Artificial Intelligence
Dec-1-2025