How AI will disrupt sports entertainment networks

#artificialintelligence

Viewers are more likely to be paying attention, and the clips ultimately reach a larger audience due to highlight reel replays and social media shares. First, by tuning an algorithm to look for specific entities -- in this case sponsors' logos -- cognitive technology can find and quantify brand placements in a video. With AI technology, production teams could efficiently source relevant content to integrate past segments into the current broadcast. To beat the competition, sports networks can utilize AI technology to provide an engaging viewer experience.
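
The brand-placement idea above boils down to running a logo detector over sampled frames and accumulating screen time per sponsor. The sketch below illustrates that flow under stated assumptions: OpenCV handles frame sampling, and a placeholder detect_logos() stub stands in for whatever fine-tuned detector a network would actually deploy, so the file name, function names, and parameters are illustrative rather than taken from the article.

```python
# Sketch: quantifying sponsor logo screen time in a broadcast video.
# detect_logos() is a stub; the real system would plug in a detector
# fine-tuned on sponsor logos.

from collections import Counter

import cv2


def detect_logos(frame):
    """Return a list of sponsor names detected in a single frame.

    Placeholder for any object-detection model fine-tuned on sponsor
    logos (e.g. a YOLO- or Faster R-CNN-style detector); invented here.
    """
    return []


def brand_screen_time(video_path, sample_fps=1.0):
    """Estimate seconds of screen time per sponsor by sampling frames."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS unknown
    step = max(int(native_fps / sample_fps), 1)

    seconds_on_screen = Counter()
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            for sponsor in set(detect_logos(frame)):
                # Each sampled frame stands for roughly 1/sample_fps seconds.
                seconds_on_screen[sponsor] += 1.0 / sample_fps
        frame_idx += 1

    cap.release()
    return seconds_on_screen


if __name__ == "__main__":
    print(brand_screen_time("broadcast.mp4"))  # hypothetical input file
```

Sampling at one frame per second keeps the pass cheap; a production pipeline would likely also track detections across frames rather than treating each sample independently.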


How AI will disrupt sports entertainment networks

#artificialintelligence

Whether they're training for a marathon or gearing up for a marathon of binge-watching TV, both athletes and casual sports fans can benefit from advances in sports video. Due to its widespread appeal, high demand, and abundance of related data, sports video is a prime candidate for innovation. Cognitive technology is teed up to enhance the viewer experience and maximize advertising revenue. What's more, AI technology can disrupt the game itself. Here are the three main players in sports broadcasting that stand to gain the most from cognitive advancements in video technology.


Learning Generalizable Device Placement Algorithms for Distributed Machine Learning

Neural Information Processing Systems

We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches.


Placeto: Learning Generalizable Device Placement Algorithms for Distributed Machine Learning

arXiv.org Machine Learning

We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
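
As a rough illustration of the two ideas in the abstract, the toy sketch below performs iterative placement improvement over a small computation graph and computes a simple message-passing embedding of its structure. It is not the authors' implementation: a greedy accept/reject rule and a crude runtime proxy stand in for Placeto's learned RL policy and its runtime measurements, and every graph, cost, and name here is invented for illustration.

```python
# Toy sketch of iterative placement improvement with a hand-rolled graph
# embedding. A greedy rule replaces the learned RL policy of Placeto.

import random


def node_embeddings(graph, features, rounds=2):
    """Average-neighbor message passing: each node's embedding mixes in its
    neighbors' features, capturing local structure without node labels."""
    emb = {n: list(f) for n, f in features.items()}
    for _ in range(rounds):
        new_emb = {}
        for n, nbrs in graph.items():
            acc = list(emb[n])
            for m in nbrs:
                acc = [a + b for a, b in zip(acc, emb[m])]
            new_emb[n] = [a / (len(nbrs) + 1) for a in acc]
        emb = new_emb
    return emb


def runtime_proxy(graph, placement, compute_cost, comm_cost=1.0):
    """Crude stand-in for a runtime simulator: per-device compute load plus a
    penalty for every edge that crosses devices."""
    load = {}
    for n, dev in placement.items():
        load[dev] = load.get(dev, 0.0) + compute_cost[n]
    cross = sum(comm_cost for n, nbrs in graph.items()
                for m in nbrs if placement[n] != placement[m])
    return max(load.values()) + cross


def improve_placement(graph, compute_cost, devices, steps=200, seed=0):
    """Visit nodes one at a time and keep any device reassignment that lowers
    the proxy runtime (Placeto instead learns this per-node decision)."""
    rng = random.Random(seed)
    placement = {n: rng.choice(devices) for n in graph}
    features = {n: [compute_cost[n], float(len(graph[n]))] for n in graph}
    _ = node_embeddings(graph, features)  # a learned policy would consume these
    for _ in range(steps):
        n = rng.choice(list(graph))
        best_dev = placement[n]
        best_cost = runtime_proxy(graph, placement, compute_cost)
        for dev in devices:
            placement[n] = dev
            cost = runtime_proxy(graph, placement, compute_cost)
            if cost < best_cost:
                best_dev, best_cost = dev, cost
        placement[n] = best_dev
    return placement


if __name__ == "__main__":
    # Tiny chain-shaped "computation graph": a - b - c - d.
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    cost = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.0}
    print(improve_placement(graph, cost, devices=["gpu0", "gpu1"]))
```

In Placeto itself, the per-node decision comes from a trained policy network that consumes the graph embeddings, which is what lets the same policy transfer to unseen graphs from the same family without retraining.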


Xu

AAAI Conferences

Enemy observers, such as cameras and guards, are common elements that add challenge to many stealth and combat games. Defining the exact placement and movement of such entities, however, is a non-trivial process, requiring a designer to balance level difficulty, coverage, and the representation of realistic behaviours. In this work we explore systems for procedurally generating both camera and guard placement in a stealth game context. For the former we use an approach based on weakening theoretical results for optimal camera placement, while for the latter we perform automatic roadmap construction, generating more specific patrol behaviours through a grammar-based technique. We evaluate both approaches with a non-trivial implementation in Unity3D, and apply quantitative metrics to demonstrate how different parametrizations can be used to control level difficulty without sacrificing believability.
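
For a concrete sense of the coverage side of this problem, here is a minimal greedy-coverage sketch on a grid level. It is not the system described in the abstract, just a standard set-cover-style heuristic with an invented level layout and visibility model, where the target coverage fraction plays the role of a difficulty knob.

```python
# Minimal sketch: greedy camera placement as set cover on a grid level.
# Level layout, visibility model, and parameters are invented for illustration.

def visible_cells(level, cam, max_range=4):
    """Cells a camera at `cam` can see: straight rays in 4 directions,
    blocked by walls ('#')."""
    rows, cols = len(level), len(level[0])
    seen = {cam}
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = cam
        for _ in range(max_range):
            r, c = r + dr, c + dc
            if not (0 <= r < rows and 0 <= c < cols) or level[r][c] == "#":
                break
            seen.add((r, c))
    return seen


def place_cameras(level, target_coverage=0.8):
    """Greedily add the camera that covers the most uncovered floor cells
    until the requested fraction of the floor is covered. Lowering
    target_coverage is one crude knob for easing level difficulty."""
    floor = {(r, c) for r, row in enumerate(level)
             for c, ch in enumerate(row) if ch != "#"}
    covered, cameras = set(), []
    while len(covered) / len(floor) < target_coverage:
        best_cam, best_gain = None, set()
        for cell in floor - set(cameras):
            gain = (visible_cells(level, cell) & floor) - covered
            if len(gain) > len(best_gain):
                best_cam, best_gain = cell, gain
        if best_cam is None or not best_gain:
            break  # no camera adds coverage; stop early
        cameras.append(best_cam)
        covered |= best_gain
    return cameras


if __name__ == "__main__":
    level = ["........",
             "..##....",
             "..##..#.",
             "........"]
    print(place_cameras(level, target_coverage=0.7))
```

A real tool would operate on the game's actual geometry and pair camera coverage with the guard-patrol generation the abstract describes; this sketch only shows how a coverage target can parameterize difficulty.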