Whether you're training for a marathon or gearing up for a marathon of binge-watching TV, both athletes and casual sports fans can benefit from advances in sports video. Due to its widespread appeal, high demand, and abundance of related data, sports video is a prime candidate for innovation. Cognitive technology is teed up to enhance the viewer experience and maximize advertising revenue. What's more, AI technology can disrupt the game itself. Here are the three main players in sports broadcasting that stand to gain the most from cognitive advancements in video technology.
To beat the competition, sports networks can use AI technology to provide a more engaging viewer experience. First, by tuning an algorithm to look for specific entities -- in this case, sponsors' logos -- cognitive technology can find and quantify brand placements in a video. With AI technology, production teams could also efficiently source relevant past segments and integrate them into the current broadcast. Viewers are more likely to be paying attention, and the clips ultimately reach a larger audience through highlight-reel replays and social media shares.
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches.
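The iterative-improvement formulation can be sketched without any RL machinery. The snippet below replaces Placeto's learned policy with a simple greedy rule that revisits one node per step under a made-up cost model (per-device compute load plus a unit penalty for every edge that crosses devices); the graph, costs, and device count are all invented for illustration and are not from the paper.

```python
def placement_cost(graph, costs, placement, comm_cost=1.0):
    """Toy cost model: max per-device compute load, plus a fixed
    transfer penalty for each edge whose endpoints sit on
    different devices. All numbers are illustrative."""
    device_load = {}
    for node, dev in placement.items():
        device_load[dev] = device_load.get(dev, 0.0) + costs[node]
    transfer = sum(comm_cost for u, v in graph if placement[u] != placement[v])
    return max(device_load.values()) + transfer

def improve_iteratively(graph, costs, devices, placement, steps=50):
    """Visit nodes round-robin; move each to whichever device currently
    minimizes the cost. This greedy sweep stands in for the learned
    policy, illustrating the one-node-per-step improvement formulation
    rather than emitting a whole placement in one shot."""
    nodes = sorted(costs)
    for step in range(steps):
        node = nodes[step % len(nodes)]
        best = min(devices,
                   key=lambda d: placement_cost(graph, costs, {**placement, node: d}))
        placement[node] = best
    return placement

# Tiny chain graph a -> b -> c, initially placed entirely on device 0.
graph = [("a", "b"), ("b", "c")]
costs = {"a": 2.0, "b": 2.0, "c": 2.0}
init = {"a": 0, "b": 0, "c": 0}
final = improve_iteratively(graph, costs, [0, 1], dict(init))
```

Here the sweep offloads node `a` to the second device, trading one transfer penalty for a lower peak compute load; Placeto's contribution is learning such per-node decisions from graph embeddings so they generalize across graphs.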
The runtime and scalability of large neural networks are significantly affected by how the operations in their dataflow graphs are placed across the available devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for domain experts. Most existing automated device placement approaches are impractical due to the significant amount of compute they require and their inability to generalize to new, previously held-out graphs. To address both limitations, we propose an efficient end-to-end method based on a scalable sequential attention mechanism over a graph neural network that is transferable to new graphs. On a diverse set of representative deep learning models, including Inception-v3, AmoebaNet, Transformer-XL, and WaveNet, our method achieves on average a 16% improvement over human experts and a 9.2% improvement over the prior state of the art, with 15 times faster convergence. To further reduce the computation cost, we pre-train the policy network on a set of dataflow graphs and use a superposition network to fine-tune it on each individual graph, achieving state-of-the-art performance on large held-out graphs with over 50k nodes, such as an 8-layer GNMT.
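As a rough illustration of the graph-embedding side of such methods, the dependency-free sketch below runs the mean-aggregation message passing a graph neural network performs over a dataflow graph, so that each node's embedding reflects its neighborhood. The two-round depth, the feature choice (FLOPs and output bytes), and the tiny three-op graph are invented for the example and are not the paper's architecture.

```python
def message_passing_round(features, neighbors):
    """One round of message passing: each node's new embedding is the
    elementwise mean of its own feature vector and those of its
    in-neighbors. Real GNNs use learned transforms; the plain mean
    here only illustrates how structural information propagates."""
    new = {}
    for node, feat in features.items():
        pool = [feat] + [features[n] for n in neighbors.get(node, [])]
        new[node] = [sum(vals) / len(pool) for vals in zip(*pool)]
    return new

# Tiny dataflow graph: matmul -> relu -> output.
# Per-node features (hypothetical): [flops, output_bytes].
neighbors = {"matmul": [], "relu": ["matmul"], "output": ["relu"]}
feats = {"matmul": [8.0, 4.0], "relu": [1.0, 4.0], "output": [0.0, 0.0]}
for _ in range(2):  # two rounds propagate information two hops
    feats = message_passing_round(feats, neighbors)
```

After two rounds, the `output` node's embedding already carries information about `matmul` two hops upstream; a placement policy (e.g. the sequential attention mechanism above) would consume such embeddings instead of raw node labels.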