Forget chess, DeepMind's training its new AI to play football
Researchers from DeepMind, the UK's juggernaut AI lab, have forsaken the noble games of chess and Go for a more plebeian delight: football. The Google sister company yesterday published a research paper and accompanying blog post detailing its new neural probabilistic motor primitives (NPMP) -- a method by which artificial intelligence agents can learn to operate physical bodies. An NPMP is a general-purpose motor control module that translates short-horizon motor intentions into low-level control signals. It is trained offline or via reinforcement learning (RL) by imitating motion capture (MoCap) data, recorded with trackers on humans or animals performing motions of interest. Up front: Essentially, the DeepMind team created an AI system that can learn how to do things inside a physics simulator by watching other agents perform those tasks. And, of course, if you've got a giant physics engine and an endless supply of curious robots, the only rational thing to do is to teach them how to dribble and shoot: "We optimized teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible movements learned using human motion capture data." Background: In order to train AI to operate and control robots in the real world, researchers have to prepare the machines for reality.
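The core imitation idea described above -- fitting a low-level controller offline so that it reproduces reference motions -- can be sketched in miniature. The example below is purely illustrative and assumes a made-up one-dimensional "policy": the actual NPMP is a latent-variable neural architecture, not a linear fit.

```python
# Toy behavioral-cloning sketch: fit a linear "policy" offline so it
# reproduces hypothetical reference (stand-in "MoCap") actions.
import random

random.seed(0)

# Hypothetical reference data: intention x -> expert action y = 2.0*x + 0.5
data = [(k / 10.0, 2.0 * (k / 10.0) + 0.5) for k in range(-10, 11)]

w, b = 0.0, 0.0   # parameters of the linear policy
lr = 0.1          # learning rate

for _ in range(2000):  # plain gradient descent on squared imitation error
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x / len(data)
        gb += 2 * err / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # the cloned policy recovers ~2.0 and ~0.5
```

Once such a controller exists, a higher-level RL policy only has to output short-horizon intentions rather than raw joint torques, which is what constrains the agents to plausible movements.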
- Leisure & Entertainment > Sports (1.00)
- Leisure & Entertainment > Games > Chess (0.93)
Watch out, Messi: artificial intelligence has finally learned to play football
DeepMind, Google's artificial intelligence division, taught AI humanoids how to work as a team in order to play football together, turning them from flailing tots into proficient players. Researchers ran a computer simulation through an athletic curriculum, giving the AI control over humanoids with realistic body masses and movements. It's not the first time DeepMind has tried its hand at games. The AI previously mastered chess and Go, a feat that researchers at one point thought was nigh impossible. The group then focused on other games, like Mario or StarCraft.
Why Choose Random Forest and Not Decision Trees
A decision tree is a simple tree-like structure consisting of nodes and branches. At each node, the data is split based on one of the input features, generating two or more branches as output. This iterative process increases the number of branches and partitions the original data. It continues until a node is generated where all, or almost all, of the data belong to the same class and further splits -- or branches -- are no longer possible. This whole process generates a tree-like structure.
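The per-node splitting described above is usually scored with an impurity measure such as Gini. A minimal, illustrative sketch of one such split (not a full tree, and using made-up toy data):

```python
# Minimal sketch of the Gini criterion a decision-tree node uses
# to evaluate a candidate split of the data.

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_quality(rows, feature, threshold):
    """Weighted impurity of the two branches produced by a split."""
    left = [y for x, y in rows if x[feature] <= threshold]
    right = [y for x, y in rows if x[feature] > threshold]
    n = len(rows)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Toy data: one feature; class "a" below 0.5, class "b" above.
rows = [((0.1,), "a"), ((0.2,), "a"), ((0.8,), "b"), ((0.9,), "b")]
print(split_quality(rows, 0, 0.5))  # a perfect split has impurity 0.0
```

A random forest repeats this node-splitting procedure over many trees, each grown on a bootstrap sample with a random subset of features, and takes a majority vote -- which is why it generalizes better than a single tree.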
Google has made a virtual soccer pitch to train AIs to play football
Many people have been inspired by the football World Cup in France and now artificial intelligence is learning to play too. Karol Kurach and colleagues at Google Research in Zurich, Switzerland have made a virtual football training pitch for AIs to use to learn how to play. Because football requires a balance between short-term control and high-level strategy, it is challenging for AIs to master, says Kurach.
Space Invaders at 40: What the game says about the 1970s – and today
The Space Invaders arcade video game, celebrating its 40th anniversary, is a classic piece of software credited as one of the earliest digital shooting games. As a game designer and teacher of games, I know how meaning is carried from designer to the mechanics of play. As a game studies researcher, I also know how games reveal myth, meaning and culture. An analysis of Pac-Man, for instance, shows how that game embodies many values of its day – including consumerism, drug use and gender politics. The message in Space Invaders is as basic as the graphics: when faced with conflict, players have no option except to blast it away.
- North America > United States (0.29)
- Asia > Bhutan > Thimphu District > Thimphu (0.14)
- Asia > Bhutan > Punakha District > Punakha (0.06)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Education > Health & Safety > School Safety & Security > School Violence (0.30)
From the NFL to MIT: The Double Life of John Urschel
A Venn diagram depicts the places where different groups of objects overlap. For example: in the United States, there are about 1,700 professional football players, and thousands of people pursuing PhDs in math -- and the overlap between those two groups comes down to a single place. On an overcast day in late winter, that place is the Norbert Wiener Common Room in MIT's Department of Mathematics, where John Urschel is sitting at a table, chatting. Urschel is an offensive lineman with the NFL's Baltimore Ravens, a three-year pro with 40 regular-season games played and a couple of playoff starts on his football résumé. He is also a doctoral candidate in math at MIT who has passed his qualifying exams and has nine published or accepted research papers on his academic résumé.