Introducing AI-driven acoustic synthesis for AR and VR
Whether it's mingling at a party in the metaverse or watching a home movie in your living room while wearing augmented reality (AR) glasses, acoustics shape how these moments are experienced. We are building for mixed reality and virtual reality experiences like these, and we believe AI will be core to delivering sound quality that realistically matches the settings people are immersed in.

Today, Meta AI researchers, in collaboration with an audio specialist from Meta's Reality Labs and researchers from the University of Texas at Austin, are open-sourcing three new models for audio-visual understanding of human speech and sounds in video, designed to accelerate progress toward this reality.

Getting there requires AI models that understand a person's physical surroundings from both how they look and how things sound. For example, a concert sounds very different in a large venue than it does in your living room.
Jul-2-2022, 06:50:10 GMT