Soccer


Google has made a virtual soccer pitch to train AIs to play football

New Scientist

Many people have been inspired by the football World Cup in France, and now artificial intelligence is learning to play too. Karol Kurach and colleagues at Google Research in Zurich, Switzerland, have built a virtual football training pitch where AIs can learn to play. Because football requires balancing short-term control with high-level strategy, it is challenging for AIs to master, says Kurach.
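Assuming the environment described here is the one Google later open-sourced as the gfootball Python package, a minimal interaction sketch might look like the following; the scenario name and the random placeholder policy are illustrative, not taken from the article.

```python
# A minimal sketch of interacting with Google Research Football,
# assuming the open-sourced `gfootball` package is the environment
# the article describes. Scenario and policy are illustrative only.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",  # a simple training scenario
    representation="simple115",           # compact float-vector observation
)

obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()    # random agent as a placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```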


North Korea university to teach artificial intelligence, state media says

#artificialintelligence

North Korea is reforming education at its universities to place greater emphasis on artificial intelligence, according to state media. Pyongyang's Workers' Party newspaper Rodong Sinmun reported Sunday that Pyongyang University of Computer Science is changing its computer-programming department into a department for the study of AI. The goal is to improve the quality of school courses so that classes on AI are more readily available in the department, according to the report. Pyongyang University of Computer Science has decided to improve artificial intelligence education because AI is a "key technology in the information industry," the Rodong article said. The university is developing the new program following directives from Kim Jong Un, issued at the fourth plenum of the seventh party central committee meeting in April, state media said.


RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration

arXiv.org Machine Learning

Imitation learning has long been an approach to alleviate the tractability issues that arise in reinforcement learning. However, most of the literature makes several assumptions, such as access to the expert's actions, availability of many expert demonstrations, and injection of task-specific domain knowledge into the learning process. We propose reinforced inverse dynamics modeling (RIDM), a method combining reinforcement learning and imitation from observation (IfO) to perform imitation from a single expert demonstration, with no access to the expert's actions and with little task-specific domain knowledge. Given only a single sequence of the expert's raw states, such as joint angles in a robot control task, we learn an inverse dynamics model that produces, at each time-step, the low-level actions, such as torques, needed to transition from one state to the next such that the reward from the environment is maximized. We demonstrate that RIDM outperforms other techniques, when the same constraints are applied to them, on six domains of the MuJoCo simulator and on two robot soccer tasks for two experts from the RoboCup 3D simulation league on the SimSpark simulator.
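A minimal sketch of the core idea follows: an inverse dynamics model takes the current state and the demonstration's next state and outputs an action, and its parameters are tuned to maximize environment reward. The toy point-mass environment, linear model, and random-search optimizer are illustrative stand-ins, not the paper's actual architecture or training procedure.

```python
# Sketch of RIDM's core loop on a toy 1-D point mass. All specifics
# (environment, linear model, hill-climbing optimizer) are assumptions
# for illustration; see the paper for the real setup.
import numpy as np

rng = np.random.default_rng(0)

def step(state, action, dt=0.1):
    pos, vel = state
    vel = vel + dt * float(action)
    pos = pos + dt * vel
    reward = -abs(pos - 1.0)             # reward for approaching x = 1
    return np.array([pos, vel]), reward

# A single expert state trajectory (no actions), as RIDM assumes.
demo = [np.array([0.1 * t, 1.0]) for t in range(20)]

def rollout(W):
    state, total = np.zeros(2), 0.0
    for target in demo[1:]:
        action = W @ np.concatenate([state, target])  # inverse dynamics model
        state, reward = step(state, action)
        total += reward
    return total

# Black-box optimization of the model parameters to maximize reward.
W, best = np.zeros(4), rollout(np.zeros(4))
for _ in range(500):
    cand = W + 0.1 * rng.standard_normal(4)
    score = rollout(cand)
    if score > best:
        W, best = cand, score
print("best return:", best)
```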


Who Will Win It? An In-game Win Probability Model for Football

arXiv.org Machine Learning

In-game win probability is a statistical metric that gives a sports team's likelihood of winning at any given point in a game, based on how historical teams performed in the same situation. In-game win-probability models have been studied extensively in baseball, basketball and American football. They serve as a tool to enhance the fan experience, evaluate in-game decision making and measure the risk-reward balance of coaching decisions. By contrast, they have received less attention in association football, whose low-scoring nature makes it far more challenging to analyze. In this paper, we build an in-game win-probability model for football. Specifically, we first show that porting existing approaches, both in terms of the predictive models employed and the features considered, does not yield good in-game win-probability estimates for football. Second, we introduce our own Bayesian statistical model that uses a set of eight variables to predict the running win, tie and loss probabilities for the home team. We train our model on event data from the last four seasons of the major European football competitions. Our results indicate that the model provides well-calibrated probabilities. Finally, we elaborate on two use cases for our win-probability metric: enhancing the fan experience and evaluating performance in crucial situations.
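The summary does not enumerate the paper's eight variables or its Bayesian formulation; as a hedged illustration only, a running win/tie/loss estimate can be produced with a simple multinomial softmax over hypothetical in-game features:

```python
# Illustrative only: a multinomial softmax over hypothetical in-game
# features producing win/tie/loss probabilities for the home team.
# Feature names and weights are made up, not the paper's model.
import numpy as np

def win_tie_loss(features, weights):
    """features: (d,) in-game state; weights: (3, d) for win/tie/loss."""
    logits = weights @ features
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Hypothetical features: [goal difference, minutes remaining / 90, home red cards]
features = np.array([1.0, 20 / 90, 0.0])
weights = np.array([[ 1.8, -1.2, -0.5],   # win
                    [-0.4,  1.0,  0.1],   # tie
                    [-1.8,  0.9,  0.6]])  # loss
p_win, p_tie, p_loss = win_tie_loss(features, weights)
print(f"P(win)={p_win:.2f} P(tie)={p_tie:.2f} P(loss)={p_loss:.2f}")
```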


Facial Expression Recognition on FIFA videos using Deep Learning: World Cup Edition

#artificialintelligence

A few hearts were broken; a few still live on. No matter who wins, the game still thrills me. The 2018 FIFA World Cup has become one of the highest-scoring World Cups in history. No matter which country is playing, the moment those 11 players step onto the field, people connect to them emotionally. Watching them, we share their joy, fear, and excitement through the expressions they convey.


Hybrid Machine Learning Forecasts for the FIFA Women's World Cup 2019

arXiv.org Machine Learning

In this work, we combine two different ranking methods, together with several other predictors, in a joint random forest approach for the scores of soccer matches. The first ranking method is based on the bookmaker consensus; the second estimates ability parameters that best reflect the teams' current strength. The proposed combined approach is then applied to data from the two previous FIFA Women's World Cups, 2011 and 2015. Finally, based on the resulting estimates, the FIFA Women's World Cup 2019 is simulated repeatedly and winning probabilities are obtained for all teams. The model clearly favors the defending champion USA ahead of the host France.
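A hedged sketch of the simulation step: given each team's expected goals in a match (hardcoded here, rather than produced by the paper's random forest), repeated Poisson draws yield match outcome probabilities, which a full tournament simulation would chain across rounds.

```python
# Illustrative Monte Carlo sketch: simulate a match repeatedly by
# drawing Poisson-distributed scores from each team's expected goals.
# The expected-goal figures are made up; the paper derives them from a
# random forest combining two rankings with further predictors.
import numpy as np

rng = np.random.default_rng(0)

def match_probs(lam_a, lam_b, n_sims=100_000):
    goals_a = rng.poisson(lam_a, n_sims)
    goals_b = rng.poisson(lam_b, n_sims)
    wins_a = (goals_a > goals_b).mean()
    wins_b = (goals_b > goals_a).mean()
    draws = 1.0 - wins_a - wins_b         # draws would go to extra time
    return wins_a, draws, wins_b

# Hypothetical expected goals for a USA vs. France match.
p_usa, p_draw, p_fra = match_probs(1.6, 1.3)
print(f"USA {p_usa:.2f}  draw {p_draw:.2f}  France {p_fra:.2f}")
```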


The Ancient Rites That Gave Birth to Religion - Issue 72: Quandary

Nautilus

The invention of religion is a big bang in human history. Gods and spirits helped explain the unexplainable, and religious belief gave meaning and purpose to people struggling to survive. But what if everything we thought we knew about religion was wrong? What if belief in the supernatural is window dressing on what really matters: elaborate rituals that foster group cohesion, creating personal bonds that people are willing to die for? Anthropologist Harvey Whitehouse thinks too much talk about religion is based on loose conjecture and simplistic explanations. Whitehouse directs the Institute of Cognitive and Evolutionary Anthropology at Oxford University. For years he has been collaborating with scholars around the world to build a massive body of data that grounds the study of religion in science. Whitehouse draws on an array of disciplines, including archeology, ethnography, history, evolutionary psychology and cognitive science, to construct a profile of religious practices.


MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies

arXiv.org Machine Learning

Humans are able to perform a myriad of sophisticated tasks by drawing upon skills acquired through prior experience. For autonomous agents to have this capability, they must be able to extract reusable skills from past experience that can be recombined in new ways for subsequent tasks. Furthermore, when controlling complex high-dimensional morphologies, such as humanoid bodies, tasks often require coordination of multiple skills simultaneously. Learning discrete primitives for every combination of skills quickly becomes prohibitive. Composable primitives that can be recombined to create a large variety of behaviors can be more suitable for modeling this combinatorial explosion. In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors. Our method factorizes an agent's skills into a collection of primitives, where multiple primitives can be activated simultaneously via multiplicative composition. This flexibility allows the primitives to be transferred and recombined to elicit new behaviors as necessary for novel tasks. We demonstrate that MCP is able to extract composable skills for highly complex simulated characters from pre-training tasks, such as motion imitation, and then reuse these skills to solve challenging continuous control tasks, such as dribbling a soccer ball to a goal, and picking up an object and transporting it to a target location.
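For Gaussian primitives, multiplicative composition has a closed form: the composite's per-dimension precision is the gate-weighted sum of the primitives' precisions, and its mean is the matching precision-weighted blend. A sketch under that assumption follows; the means, standard deviations, and gating weights are illustrative, whereas in MCP the gates come from a learned network.

```python
# Sketch of multiplicative composition of Gaussian primitives: the
# composite policy is a weighted product of the primitives' Gaussians.
# All numbers below are illustrative; MCP learns the gating weights.
import numpy as np

def compose(mus, sigmas, weights):
    """mus, sigmas: (k, d) primitive means/stds; weights: (k,) gates."""
    w = weights[:, None]
    precision = (w / sigmas**2).sum(axis=0)            # blended precisions
    mu = (w * mus / sigmas**2).sum(axis=0) / precision  # blended mean
    return mu, 1.0 / np.sqrt(precision)

# Two primitives over a 2-D action (say, "move" and "kick" tendencies).
mus = np.array([[1.0, 0.0], [0.0, 1.0]])
sigmas = np.array([[0.5, 0.5], [0.2, 0.2]])
mu, sigma = compose(mus, sigmas, weights=np.array([0.3, 0.7]))
action = np.random.default_rng(0).normal(mu, sigma)    # sample composite action
print(mu, sigma, action)
```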


From semantics to execution: Integrating action planning with reinforcement learning for robotic tool use

arXiv.org Artificial Intelligence

Reinforcement learning is an appropriate and successful method for robustly performing low-level robot control under noisy conditions. Symbolic action planning is useful for resolving causal dependencies and breaking a causally complex problem down into a sequence of simpler high-level actions. A problem with integrating the two approaches is that action planning is based on discrete high-level action and state spaces, whereas reinforcement learning is usually driven by a continuous reward function. However, recent advances in reinforcement learning, specifically universal value function approximators and hindsight experience replay, have focused on goal-independent methods based on sparse rewards. In this article, we build on these novel methods to facilitate the integration of action planning with reinforcement learning by exploiting reward sparsity as a bridge between the high-level and low-level state and control spaces. As a result, we demonstrate that the integrated neuro-symbolic method is able to solve object manipulation problems that involve tool use and non-trivial causal dependencies under noisy conditions, exploiting both data and knowledge.
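A minimal sketch of the bridge described here: a goal-conditioned sparse reward the symbolic planner can target, plus hindsight relabeling of failed rollouts. The conventions follow the usual hindsight experience replay formulation; the threshold and transition format are assumptions, not the paper's exact setup.

```python
# Illustrative sparse goal-conditioned reward plus hindsight relabeling,
# the two RL-side ingredients the article builds on.
import numpy as np

def sparse_reward(achieved, goal, eps=0.05):
    # 0 on success, -1 otherwise (the usual HER convention)
    return 0.0 if np.linalg.norm(achieved - goal) < eps else -1.0

def hindsight_relabel(episode):
    """episode: list of (state, action, achieved_goal, goal) tuples.
    Re-interpret the final achieved state as the goal, so even a
    failed rollout yields a successful training example."""
    final = episode[-1][2]
    return [(s, a, ag, final, sparse_reward(ag, final))
            for (s, a, ag, g) in episode]
```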


Multi-Pass Q-Networks for Deep Reinforcement Learning with Parameterised Action Spaces

arXiv.org Machine Learning

Parameterised actions in reinforcement learning are composed of discrete actions with continuous action-parameters. This provides a framework for solving complex domains that require combining high-level actions with flexible control. The recent P-DQN algorithm extends deep Q-networks to learn over such action spaces. However, it treats all action-parameters as a single joint input to the Q-network, invalidating its theoretical foundations. We analyse the issues with this approach and propose a novel method, multi-pass deep Q-networks, or MP-DQN, to address them. We empirically demonstrate that MP-DQN significantly outperforms P-DQN and other previous algorithms in terms of data efficiency and converged policy performance on the Platform, Robot Soccer Goal, and Half Field Offense domains.
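A sketch of the multi-pass trick itself, assuming a Q-network that takes the state concatenated with all action-parameters: one forward pass per discrete action, with every other action's parameters zeroed out, keeping only that action's Q-value. Network sizes and dimensions below are illustrative.

```python
# Sketch of MP-DQN's multi-pass evaluation: for each discrete action k,
# mask out every other action's parameters before the forward pass and
# read off Q(s, k, x_k). Dimensions here are illustrative.
import torch
import torch.nn as nn

state_dim, n_actions, param_dims = 4, 3, [2, 1, 3]  # per-action parameter sizes
total_params = sum(param_dims)

q_net = nn.Sequential(
    nn.Linear(state_dim + total_params, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)

def multi_pass_q(state, action_params):
    """state: (B, state_dim); action_params: (B, total_params)."""
    qs, offset = [], 0
    for k, d in enumerate(param_dims):
        masked = torch.zeros_like(action_params)
        masked[:, offset:offset + d] = action_params[:, offset:offset + d]
        q_all = q_net(torch.cat([state, masked], dim=1))  # (B, n_actions)
        qs.append(q_all[:, k])         # keep only action k's Q-value
        offset += d
    return torch.stack(qs, dim=1)      # (B, n_actions)

q = multi_pass_q(torch.randn(5, state_dim), torch.randn(5, total_params))
print(q.shape)  # torch.Size([5, 3])
```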