Google reveals AI tricks behind new augmented reality animations

The animated masks, glasses, and hats that apps like YouTube Stories overlay on faces are pretty nifty, but how on earth do they look so realistic? Well, thanks to a deep dive published this morning by Google's AI research division, it's less of a mystery than before. In the blog post, engineers at the Mountain View company describe the AI tech at the core of Stories and ARCore's Augmented Faces API, which they say can simulate light reflections, model face occlusions and specular reflection, and more, all in real time with a single camera.

"One of the key challenges in making these AR features possible is proper anchoring of the virtual content to the real world," Google AI's Artsiom Ablavatski and Ivan Grishchenko explain, calling it "a process that requires a unique set of perceptive technologies able to track the highly dynamic surface geometry across every smile, frown, or smirk."

Google's augmented reality (AR) pipeline comprises two neural networks (i.e., layers of math functions modeled after biological neurons) and taps TensorFlow Lite, a lightweight implementation of Google's TensorFlow machine learning framework for mobile and embedded devices, for hardware-accelerated processing where available.
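Google's post stops at the architectural description, but for readers curious what the TensorFlow Lite side looks like in practice, here is a minimal sketch of how an Android app might build an interpreter that uses hardware acceleration where available, in the spirit of the pipeline described above. The Kotlin snippet uses TensorFlow Lite's published GPU delegate API, but the function name and model buffer are hypothetical stand-ins, not Google's actual face-tracking code.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import java.nio.MappedByteBuffer

// Hypothetical helper: build a TensorFlow Lite interpreter for a face-tracking
// model, attaching the GPU delegate when the device supports it (the
// "hardware-accelerated processing where available" the article mentions)
// and falling back to multithreaded CPU execution otherwise.
fun buildInterpreter(model: MappedByteBuffer): Interpreter {
    val options = Interpreter.Options()
    val compatList = CompatibilityList()
    if (compatList.isDelegateSupportedOnThisDevice) {
        // GPU path: offload the network's math to the mobile GPU.
        options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
    } else {
        // CPU fallback: multithread the interpreter instead.
        options.setNumThreads(4)
    }
    return Interpreter(model, options)
}
```

Presumably each of the pipeline's two networks would get its own interpreter along these lines, with interpreter.run(input, output) invoked on each camera frame to drive the real-time predictions.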