Facebook's AI extracts playable characters from real-world videos
Using these together with pose data, Pose2Frame separates character-dependent changes in the scene, such as shadows, held items, and reflections, from character-independent ones, and returns a pair of outputs that are linearly blended with any desired background.

To train the AI system, the researchers sourced three videos, each between five and eight minutes long: a tennis player outdoors, a person swinging a sword indoors, and a person walking. Compared with a neural network model fed a three-minute video of a dancer, they report that their approach successfully handled dynamic elements, such as other people and changing camera angles, as well as variations in character clothing.

"Each network addresses a computational problem not previously fully met, together paving the way for the generation of video games with realistic graphics," they wrote. "In addition, controllable characters extracted from YouTube-like videos can find their place in the virtual worlds and augmented realities."

Facebook isn't the only company investigating AI systems that might aid in game design. Startup Promethean AI employs machine learning to help human artists create art for video games, and Nvidia researchers recently demonstrated a generative model that can create virtual environments from video snippets. Machine learning has also been used to upscale textures in retro titles like Final Fantasy VII and The Legend of Zelda: Twilight Princess, and to generate thousands of Doom levels from scratch.
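The blending step described above is standard per-pixel linear compositing. A minimal sketch, assuming the pair of outputs is a rendered character frame plus a soft per-pixel mask (the function and variable names here are hypothetical, not from the paper):

```python
import numpy as np

def blend_with_background(character, mask, background):
    """Linearly blend a character layer onto an arbitrary background.

    character:  H x W x 3 float array, the character-dependent output.
    mask:       H x W x 1 float array in [0, 1], per-pixel blend weight
                (assumed to be the second output of the pair).
    background: H x W x 3 float array, any desired background image.
    """
    # Per-pixel convex combination: mask weights the character layer,
    # (1 - mask) weights the background.
    return mask * character + (1.0 - mask) * background

# Tiny usage example on a 2x2 image.
character = np.ones((2, 2, 3))    # all-white character layer
background = np.zeros((2, 2, 3))  # all-black background
mask = np.full((2, 2, 1), 0.25)   # 25% character weight everywhere

blended = blend_with_background(character, mask, background)
# Each pixel is 0.25 * 1.0 + 0.75 * 0.0 = 0.25
```

Because the mask is a soft weight rather than a binary cutout, character-dependent effects like shadows and reflections can be carried over at partial opacity.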
Apr-19-2019, 20:50:01 GMT