Sports


Bringing deep learning to life

#artificialintelligence

Gaby Ecanow loves listening to music, but never considered writing her own until taking 6.S191 (Introduction to Deep Learning). By her second class, the second-year MIT student had composed an original Irish folk song with the help of a recurrent neural network, and was considering how to adapt the model to create her own Louis the Child-inspired dance beats. "It was cool," she says. "It didn't sound at all like a machine had made it." This year, 6.S191 kicked off as usual, with students spilling into the aisles of Stata Center's Kirsch Auditorium during Independent Activities Period (IAP).


Analyze a Soccer game using Tensorflow Object Detection and OpenCV

#artificialintelligence

The API provides pre-trained object detection models that have been trained on the COCO dataset. The COCO dataset covers 90 commonly found object classes; see the image below for the objects it includes. In this case we care about two classes -- person and soccer ball -- both of which are part of the COCO dataset. The API also supports a large set of models; see the table below for reference. The models trade off speed against accuracy, and since I was interested in real-time analysis, I chose SSDLite MobileNet v2. Once we have identified the players using the object detection API, we can use OpenCV, a powerful image-processing library, to predict which team each player is on.
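
As a rough illustration of the team-assignment step, here is a minimal sketch (not the article's code) that labels a detected player crop by its dominant jersey colour in HSV space with OpenCV. The classify_team helper and the blue/red colour ranges are assumptions that would need tuning per match, and the crop is assumed to come from the bounding boxes returned by the object detection API.

```python
# Minimal sketch: team assignment from a player crop via dominant jersey colour.
import cv2
import numpy as np

def classify_team(player_crop_bgr,
                  team_a_range=((100, 80, 50), (130, 255, 255)),   # assumed: blue kit (HSV)
                  team_b_range=((0, 80, 50), (10, 255, 255))):     # assumed: red kit (HSV)
    """Return 'A', 'B', or 'unknown' based on which colour mask covers more pixels."""
    hsv = cv2.cvtColor(player_crop_bgr, cv2.COLOR_BGR2HSV)
    mask_a = cv2.inRange(hsv, np.array(team_a_range[0]), np.array(team_a_range[1]))
    mask_b = cv2.inRange(hsv, np.array(team_b_range[0]), np.array(team_b_range[1]))
    count_a, count_b = cv2.countNonZero(mask_a), cv2.countNonZero(mask_b)
    # If neither colour covers a meaningful fraction of the crop, give up.
    if max(count_a, count_b) < 0.05 * player_crop_bgr.shape[0] * player_crop_bgr.shape[1]:
        return "unknown"
    return "A" if count_a >= count_b else "B"

if __name__ == "__main__":
    # Synthetic mostly-blue crop standing in for a detector output.
    fake_crop = np.full((60, 30, 3), (200, 50, 0), dtype=np.uint8)  # BGR: strong blue
    print(classify_team(fake_crop))  # -> 'A'
```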


An Introduction to Unity ML-Agents

#artificialintelligence

The past few years have witnessed breakthroughs in reinforcement learning (RL). From the first successful use of a deep learning model to learn an RL policy from pixel input in 2013 to the OpenAI Dexterity program in 2019, we live in an exciting moment in RL research. Consequently, as RL researchers, we need to create increasingly complex environments, and Unity helps us do that. The Unity ML-Agents Toolkit is a plugin for the Unity game engine that lets us use Unity as an environment builder for training agents. From playing football, learning to walk, and jumping over big walls, to training a cute doggy to catch sticks, the Unity ML-Agents Toolkit provides a ton of amazing pre-made environments.
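
To give a sense of what driving a Unity environment from Python looks like, here is a minimal sketch using the low-level mlagents_envs API with random actions. The calls shown (UnityEnvironment, behavior_specs, get_steps, set_actions, random_action) follow recent ML-Agents releases, but exact signatures vary between versions, so treat this as an assumption-laden outline rather than the toolkit's canonical example.

```python
from mlagents_envs.environment import UnityEnvironment

# file_name=None attaches to the Unity Editor in Play mode; pass the path of a
# built executable to drive a standalone environment instead.
env = UnityEnvironment(file_name=None)
env.reset()

# Each agent type in the scene exposes a "behavior"; take the first one.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]
print("Behavior:", behavior_name)

for _ in range(200):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        # Sample a random action for every agent currently requesting a decision.
        env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
    env.step()

env.close()
```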


This app is going to help the NBA find the next Giannis Antetokounmpo

#artificialintelligence

On Friday at the NBA All-Star Tech Summit, the league unveiled NBA Global Scout, a mobile, AI-powered app that allows players from India to Indiana, China to Chi-town, Senegal to San Diego to record their measurements--such as wingspan, height, vertical leap, and agility--then build and show off their skills through development drills created to help NBA scouts evaluate their on-court proficiencies. NBA chief innovation officer Amy Brooks says this is a tool to help democratize the process of trying to be an elite basketball player. "We see the possibilities here as essentially creating the LinkedIn for elite basketball," says Brooks. "In the short term, it starts with profile and anthropometric and agility metrics. In the long term, there's even more possibilities when it comes to game video from players, tracking data, highlights, and more, just aggregated profiles of complete basketball players. Scouting is resource-intensive, and it will be fantastic both for the NBA and elite players globally to make the discovery process more seamless using technology."


South Sudan's Olympians in love with Japanese language -- as well as real track in Gunma

The Japan Times

They are trying to get a head start, and unlike most of the 11,000 athletes who will be in Tokyo for the games, and thousands more for the Paralympics, they will be able to speak Japanese. "Just the language itself, I love it," said Abraham Majok, a runner who arrived in Japan in November with three other South Sudanese athletes and a coach. "And it's nice and since we started learning it. But, you know, we are moving well with it and we just love it." They are training northwest of Tokyo in Maebashi, Gunma Prefecture, supported mainly by donations from the public.


Movement extraction by detecting dynamics switches and repetitions

Neural Information Processing Systems

Many time series, such as human movement data, consist of a sequence of basic actions, e.g., forehands and backhands in tennis. Automatically extracting and characterizing such actions is an important problem for a variety of applications. In this paper, we present a probabilistic segmentation approach in which an observed time series is modeled as a concatenation of segments corresponding to different basic actions. Each segment is generated through a noisy transformation of one of a few hidden trajectories representing different types of movement, with possible time re-scaling. We analyze three different approximation methods for dealing with model intractability, and demonstrate how the proposed approach can successfully segment table tennis movements recorded using a robot arm as a haptic input device.
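
The paper's model is probabilistic (hidden trajectories with time re-scaling and approximate inference); purely as a toy illustration of the segmentation task itself, the sketch below recovers switch points between two synthetic "basic actions" from a sliding-window energy of the velocity signal. The signal, window size, and threshold are arbitrary assumptions and not part of the paper.

```python
# Toy segmentation: find switches between a slow and a fast oscillation.
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(0)
t = np.arange(0, 12, 0.01)
action = (t // 3).astype(int) % 2                 # ground-truth action label per sample
signal = np.where(action == 0,
                  np.sin(2 * np.pi * 0.5 * t),    # "slow" basic action
                  np.sin(2 * np.pi * 2.0 * t))    # "fast" basic action
signal += 0.02 * rng.normal(size=t.shape)

velocity = np.gradient(signal, t)
energy = uniform_filter1d(velocity ** 2, size=50, mode="nearest")  # sliding-window energy
predicted = (energy > energy.mean()).astype(int)                   # 1 = "fast", 0 = "slow"

switches = np.flatnonzero(np.diff(predicted))
print("estimated switch times:", np.round(t[switches], 2))  # expected near t = 3, 6, 9
```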


Persistent Homology for Learning Densities with Bounded Support

Neural Information Processing Systems

We present a novel method for learning densities with bounded support which enables us to incorporate 'hard' topological constraints. In particular, we show how emerging techniques from computational algebraic topology and the notion of Persistent Homology can be combined with kernel-based methods from Machine Learning for the purpose of density estimation. The proposed formalism facilitates learning of models with bounded support in a principled way, and -- by incorporating Persistent Homology techniques in our approach -- we are able to encode algebraic-topological constraints which are not addressed in current state-of-the-art probabilistic models. We study the behaviour of our method on two synthetic examples for various sample sizes and exemplify the benefits of the proposed approach on a real-world dataset by learning a motion model for a racecar. We show how to learn a model which respects the underlying topological structure of the racetrack, constraining the trajectories of the car.
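
As a small illustration of the topological ingredient only (not the paper's kernel-based estimator), the sketch below uses the ripser.py package to compute persistent homology of noisy points sampled around a loop, a stand-in for racetrack position data; the long-lived H1 feature is the kind of structure the abstract says a learned density should respect. The sampling setup is an assumption for illustration.

```python
# Detect the loop (H1 feature) in noisy circular samples with persistent homology.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(300, 2))

dgms = ripser(points)["dgms"]          # persistence diagrams per dimension
h1 = dgms[1]                           # 1-dimensional features (loops)
lifetimes = h1[:, 1] - h1[:, 0]
print("Most persistent loop lifetime:", lifetimes.max())
```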


Backpropagation with Callbacks: Foundations for Efficient and Expressive Differentiable Programming

Neural Information Processing Systems

Training of deep learning models depends on gradient descent and end-to-end differentiation. Under the slogan of differentiable programming, there is an increasing demand for efficient automatic gradient computation for emerging network architectures that incorporate dynamic control flow, especially in NLP. In this paper we propose an implementation of backpropagation using functions with callbacks, where the forward pass is executed as a sequence of function calls, and the backward pass as a corresponding sequence of function returns. A key realization is that this technique of chaining callbacks is well known in the programming languages community as continuation-passing style (CPS). Any program can be converted to this form using standard techniques, and hence, any program can be mechanically converted to compute gradients.
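
To make the call/return picture concrete, here is a toy Python sketch of reverse-mode differentiation in continuation-passing style; it is only an illustration of the CPS idea, not the paper's implementation. Each operator computes its forward value, hands it to the continuation k that runs the rest of the program, and accumulates gradients after k returns, so the program's own call stack plays the role of the tape.

```python
# Toy reverse-mode AD in continuation-passing style.
class Num:
    def __init__(self, value, grad=0.0):
        self.value = value
        self.grad = grad

def mul(a, b, k):
    c = Num(a.value * b.value)   # forward: computed on the way into the call
    k(c)                         # continuation runs the rest of the program
    a.grad += b.value * c.grad   # backward: accumulated on the way out (return)
    b.grad += a.value * c.grad

def add(a, b, k):
    c = Num(a.value + b.value)
    k(c)
    a.grad += c.grad
    b.grad += c.grad

def grad(f, x_val):
    x = Num(x_val)
    f(x, lambda y: setattr(y, "grad", 1.0))  # seed dL/dy = 1 at the end of the program
    return x.grad

# f(x) = x * x + x  ->  f'(x) = 2x + 1
def f(x, k):
    mul(x, x, lambda xx: add(xx, x, k))

print(grad(f, 3.0))  # 7.0
```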


Adaptive Skills Adaptive Partitions (ASAP)

Neural Information Processing Systems

We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework is also able to solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.


Generating Long-term Trajectories Using Deep Hierarchical Networks

Neural Information Processing Systems

We study the problem of modeling spatiotemporal trajectories over long time horizons using expert demonstrations. For instance, in sports, agents often choose action sequences with long-term goals in mind, such as achieving a certain strategic position. Conventional policy learning approaches, such as those based on Markov decision processes, generally fail at learning cohesive long-term behavior in such high-dimensional state spaces, and are only effective when fairly myopic decision-making yields the desired behavior. The key difficulty is that conventional models are "single-scale" and only learn a single state-action policy. We instead propose a hierarchical policy class that automatically reasons about both long-term and short-term goals, which we instantiate as a hierarchical neural network.
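
Schematically, a two-level policy of this kind can be expressed as a pair of networks: a macro network that maps the state to a long-term goal and a micro network that chooses the next action given the state and that goal. The sketch below is only a rough illustration under that reading, not the paper's architecture; the layer sizes and the simple concatenation of state and goal are assumptions.

```python
# Schematic two-level (macro goal + micro action) policy.
import torch
import torch.nn as nn

class HierarchicalPolicy(nn.Module):
    def __init__(self, state_dim=4, goal_dim=8, action_dim=2, hidden=64):
        super().__init__()
        self.macro = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, goal_dim))
        self.micro = nn.Sequential(nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, action_dim))

    def forward(self, state):
        goal = self.macro(state)                                 # long-horizon intent
        action = self.micro(torch.cat([state, goal], dim=-1))    # short-term step toward it
        return action, goal

policy = HierarchicalPolicy()
state = torch.randn(1, 4)
action, goal = policy(state)
print(action.shape, goal.shape)  # torch.Size([1, 2]) torch.Size([1, 8])
```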