DSpAST: Disentangled Representations for Spatial Audio Reasoning with Large Language Models
Kevin Wilkinghoff, Zheng-Hua Tan
ABSTRACT Reasoning about spatial audio with large language models requires a spatial audio encoder as an acoustic front-end to obtain audio embeddings for further processing. Such an encoder needs to capture all information required to detect the type of sound events, as well as the direction and distance of their corresponding sources. Accomplishing this with a single audio encoder is demanding because the information required for each of these tasks is largely independent of the others. As a result, a single encoder often performs worse than task-specific audio encoders. In this work, we present DSpAST, a novel audio encoder based on SpatialAST that learns disentangled representations of spatial audio with only 0.2% additional parameters. Experiments on SpatialSoundQA with the spatial audio reasoning system BAT demonstrate that DSpAST significantly outperforms SpatialAST.