Self-Supervised Generation of Spatial Audio for 360° Video
Morgado, Pedro; Vasconcelos, Nuno; Langlois, Timothy; Wang, Oliver
Neural Information Processing Systems
We introduce an approach to convert mono audio recorded by a 360° video camera into spatial audio, a representation of the distribution of sound over the full viewing sphere. Spatial audio is an important component of immersive 360° video viewing, but spatial audio microphones are still rare in current 360° video production. Our system consists of end-to-end trainable neural networks that separate individual sound sources and localize them on the viewing sphere, conditioned on multi-modal analysis of the audio and 360° video frames. We introduce several datasets, including one filmed ourselves and one collected in-the-wild from YouTube, consisting of 360° videos uploaded with spatial audio. During training, the ground-truth spatial audio serves as self-supervision and a mixed-down mono track forms the input to our network. Using our approach, we show that it is possible to infer the spatial localization of sounds based only on a synchronized 360° video and the mono audio track.
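The self-supervision setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes the spatial audio is first-order ambisonics (four channels in ACN order, W/Y/Z/X) and treats the omnidirectional W channel as the mixed-down mono input, with the full four-channel clip as the training target. The function name `make_training_pair` is hypothetical.

```python
import numpy as np

def make_training_pair(foa):
    """Build a self-supervision pair from a first-order ambisonics clip.

    foa: array of shape (4, T), channels in ACN order (W, Y, Z, X).
    Returns (mono_input, spatial_target): the omnidirectional W channel
    stands in for the mono recording fed to the network, while the full
    4-channel clip is the ground-truth spatial audio used in the loss.
    """
    mono_input = foa[0]       # W channel: an omnidirectional mono mixdown
    spatial_target = foa      # ground-truth spatial audio (all 4 channels)
    return mono_input, spatial_target

# Toy example: 1 second of 4-channel audio at 16 kHz.
foa = np.random.randn(4, 16000).astype(np.float32)
mono, target = make_training_pair(foa)
```

No labels are needed beyond the recording itself, which is what makes the training self-supervised: any 360° video uploaded with spatial audio yields an (input, target) pair for free.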
Dec-31-2018
- Country:
- North America
- Canada (0.14)
- United States > California (0.14)
- Industry:
- Leisure & Entertainment (0.93)
- Media > Music (0.68)
- Technology:
- Information Technology
- Artificial Intelligence
- Machine Learning > Neural Networks (1.00)
- Vision (1.00)
- Communications (1.00)