Few-Shot Audio-Visual Learning of Environment Acoustics Supplementary Material

Neural Information Processing Systems

Moreover, we qualitatively demonstrate our model's prediction quality with audio examples; please use headphones to hear the spatial audio correctly. As the examples show, the prediction error tends to be small when the source is relatively close to the receiver, or when there are no major obstacles along the path connecting them. We show two scenes and two examples per scene. For our experiment with ambient environment sounds (Sec. ), we will publish the link to our datasets on our project page. Here, we provide our architecture and additional training details for reproducibility.


Few-Shot Audio-Visual Learning of Environment Acoustics

Neural Information Processing Systems

Room impulse response (RIR) functions capture how the surrounding physical environment transforms the sounds heard by a listener, with implications for various applications in AR, VR, and robotics. Whereas traditional methods to estimate RIRs assume dense geometry and/or sound measurements throughout the environment, we explore how to infer RIRs based on a sparse set of images and echoes observed in the space. Towards that goal, we introduce a transformer-based method that uses self-attention to build a rich acoustic context, then predicts RIRs of arbitrary query source-receiver locations through cross-attention. Additionally, we design a novel training objective that improves the match in the acoustic signature between the RIR predictions and the targets. In experiments using a state-of-the-art audio-visual simulator for 3D environments, we demonstrate that our method successfully generates arbitrary RIRs, outperforming state-of-the-art methods and---in a major departure from traditional methods---generalizing to novel environments in a few-shot manner.
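The attention pattern described above can be sketched in a minimal form: self-attention over the few-shot observations builds an acoustic context, and query source-receiver locations cross-attend to that context to produce RIR features. This is an illustrative NumPy sketch, not the authors' implementation; the embedding dimension, observation count, and the idea that observations are pre-fused audio-visual embeddings are all assumptions for the example.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: softmax(q k^T / sqrt(d)) v
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
N, d = 8, 16                       # hypothetical: 8 observations, feature dim 16
obs = rng.standard_normal((N, d))  # assumed fused (image, echo) embeddings

# Self-attention over the sparse observations builds a rich acoustic context.
context = attention(obs, obs, obs)

# Cross-attention: encoded query source-receiver pairs attend to the context
# to predict features from which the RIR would be decoded.
M = 3                              # hypothetical: 3 query locations
queries = rng.standard_normal((M, d))
rir_features = attention(queries, context, context)
print(rir_features.shape)          # one feature vector per query location
```

In a real model each `attention` call would be a multi-head transformer layer with learned projections and a decoder mapping `rir_features` to waveforms; the sketch only shows how the few-shot context is shared across arbitrary queries.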