Watch Real Football Matches in Miniature Played on Your Desk


Researchers at the University of Washington, led by Konstantinos Rematas, have taught a machine learning neural network algorithm to render two-dimensional (2D) video clips posted on YouTube as three-dimensional (3D) images. The researchers collected footage from the FIFA football video game as a training dataset: because the game engine tracks each player's position in three dimensions, it supplies both the players' actual locations and how they appear in 2D. Once training was complete, the researchers were able to use the algorithm to transform YouTube clip imagery into three dimensions. Viewers wearing an augmented reality headset can see the enhanced clips as though they were playing out on a flat surface in front of them, such as a desk.
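The core idea, learning 3D from 2D by using a game engine's known player positions as ground-truth labels, can be illustrated with a minimal sketch. This is not the researchers' actual model (which is a deep network predicting per-player depth maps); it is a toy stand-in in which a simulated "game engine" renders players whose on-screen height shrinks with distance, and a simple regression learns to recover depth from that 2D cue. All numbers (focal length, player height, noise level) are illustrative assumptions.

```python
import numpy as np

# Stand-in "game engine": under a pinhole camera, a player of real height H
# at distance `depth` appears height_px = f * H / depth pixels tall.
rng = np.random.default_rng(0)
f, H = 800.0, 1.8                                # assumed focal length (px), player height (m)
depth = rng.uniform(5, 40, size=500)             # ground-truth 3D depths, known to the "game"
height_px = f * H / depth + rng.normal(0, 1, 500)  # noisy 2D observations

# Supervised training on game data: fit depth as a linear function of
# 1/height_px, mimicking "learn 3D structure from 2D appearance with
# game-provided 3D labels".
X = np.stack([1.0 / height_px, np.ones_like(height_px)], axis=1)
w, *_ = np.linalg.lstsq(X, depth, rcond=None)

# "Inference" on a new 2D clip: estimate depth of a player who appears
# 72 px tall on screen (true pinhole answer would be f*H/72 = 20 m).
pred = w[0] / 72.0 + w[1]
print(f"estimated depth: {pred:.1f} m")
```

The real system faces a far harder version of this problem (full body shape, occlusion, camera calibration from the pitch lines), but the training recipe is the same: synthetic game footage provides the paired 2D/3D supervision that real broadcast video lacks.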