Talking About the World: Cooperative Robots that Learn to Communicate

AAAI Conferences

Models of the world can take many shapes. In this paper, we will discuss how groups of autonomous robots learn languages that can be used as a means for modeling the environment. The robots have already learned simple languages for communicating task instructions. These languages are adaptable under changing situations; i.e., once the robots learn a language, they are able to learn new concepts and update old ones. In this prior work, reinforcement learning using a human instructor provides the motivation for communication. In the current work, the world itself will be the motivation for learning languages. Since the languages are grounded in the world, they can be used to talk about the world; in effect, the language is the means the robots use to model the world. This paper will explore the issues of learning to communicate solely through environmental motivation. Additionally, we will discuss the possible uses of these languages for interacting with the world.
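
The abstract stops short of an algorithm, but the setup it describes can be sketched as a Lewis-style signaling game trained by reinforcement: a speaker robot maps world states to signals, a listener robot maps signals back to states, and a shared, environment-provided reward pushes both toward a common grounded lexicon. The state names, signal names, and weight-update rule below are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

# Minimal Lewis-style signaling game: a speaker learns to map world
# states to signals, a listener learns to map signals back to states,
# using only a shared reward from the environment (no human instructor).
STATES = ["obstacle", "goal", "charger"]   # hypothetical world states
SIGNALS = ["a", "b", "c"]

speaker_w = defaultdict(lambda: 1.0)   # (state, signal) -> weight
listener_w = defaultdict(lambda: 1.0)  # (signal, state) -> weight

def sample(weights, context, options):
    # Sample an option with probability proportional to its weight.
    total = sum(weights[(context, o)] for o in options)
    r = random.uniform(0, total)
    for o in options:
        r -= weights[(context, o)]
        if r <= 0:
            return o
    return options[-1]

for episode in range(5000):
    state = random.choice(STATES)               # what the world presents
    signal = sample(speaker_w, state, SIGNALS)  # speaker "talks about" it
    guess = sample(listener_w, signal, STATES)  # listener interprets
    if guess == state:                          # environment reward
        speaker_w[(state, signal)] += 1.0
        listener_w[(signal, state)] += 1.0

# The robots typically converge on a shared, grounded lexicon.
for s in STATES:
    best = max(SIGNALS, key=lambda sig: speaker_w[(s, sig)])
    print(f"{s} -> signal '{best}'")
```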


Good Robot! Elon Musk's AI Nonprofit Shows Where AI Is Going

#artificialintelligence

The next big trend in AI looks likely to be computers and robots that teach themselves through trial and error. Elon Musk and Sam Altman (of Y Combinator) caused a stir last December by luring several high-profile researchers to join OpenAI, a billion-dollar nonprofit dedicated to releasing cutting-edge artificial intelligence research for free. Today the nonprofit released the first fruits of its work, and it suggests that kind of learning will be important for the future of AI. The nonprofit has released a tool called OpenAI Gym for developing and comparing different so-called reinforcement learning algorithms, which provide a way for a machine to learn through positive and negative feedback. This week OpenAI also announced two new recruits, including Pieter Abbeel, an associate professor at Berkeley and a leading expert on applying reinforcement learning to robots.
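
The Gym interface itself is deliberately small. A minimal interaction loop, written against the API as it shipped around this 2016 release (the environment name and the four-tuple returned by step() reflect that era; later versions changed both), looks like this:

```python
import gym

# Classic Gym loop: the agent interacts through reset() and step(),
# receiving observations plus the positive/negative feedback (reward)
# that drives reinforcement learning.
env = gym.make("CartPole-v0")

for episode in range(3):
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()  # random policy as a placeholder
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: reward = {total_reward}")

env.close()
```

Replacing the random `action_space.sample()` call with a learned policy is exactly the comparison Gym was built to standardize.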


OpenAI Releases Algorithm That Helps Robots Learn from Hindsight

IEEE Spectrum Robotics

Being able to learn from mistakes is a powerful ability that humans (being mistake-prone) take advantage of all the time. Even if we screw something up that we're trying to do, we probably got parts of it at least a little bit right, and we can build on the things that we did right to do better next time.
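
The algorithm the headline refers to is OpenAI's Hindsight Experience Replay (HER). Its core trick is that a failed attempt still succeeded at reaching *some* state, so each transition is stored a second time with the goal rewritten to a state the robot actually achieved. The sketch below illustrates that relabeling idea under simplifying assumptions; the helper names and the "use the final achieved state" strategy are illustrative, not OpenAI's exact implementation.

```python
# Sketch of the relabeling at the heart of Hindsight Experience Replay:
# failed rollouts are turned into useful training signal by pretending
# an achieved state was the goal all along.

def reward_fn(achieved, goal):
    # Sparse reward: success only when the achieved state matches the goal.
    return 0.0 if achieved == goal else -1.0

def her_relabel(episode, goal, replay_buffer):
    """episode: list of (state, action, achieved_state) tuples."""
    hindsight_goal = episode[-1][2]  # "final" strategy: last achieved state
    for state, action, achieved in episode:
        # Store the original transition with the intended goal...
        replay_buffer.append((state, action, goal, reward_fn(achieved, goal)))
        # ...and a hindsight copy relabeled with the achieved goal.
        replay_buffer.append(
            (state, action, hindsight_goal, reward_fn(achieved, hindsight_goal))
        )

buffer = []
fake_episode = [(0, "right", 1), (1, "right", 2), (2, "up", 3)]
her_relabel(fake_episode, goal=9, replay_buffer=buffer)
print(buffer)  # the failed rollout now also contains relabeled "successes"
```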


Building smart robots using AI ROS: Part 1

#artificialintelligence

The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. ROS lets developers create applications for a physical robot without depending on the actual machine, saving cost and time: the applications can then be transferred onto the physical robot without modification. A robot's decision-making capability can in turn be augmented with AI.
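
For a concrete sense of the framework, here is a minimal ROS 1 node in Python using rospy. The "chatter" topic is the standard tutorial example rather than anything mandated by ROS, and the same node runs unmodified whether the messages feed a simulator or real hardware, which is the machine independence described above.

```python
#!/usr/bin/env python
# Minimal ROS 1 node: publishes a string message at 1 Hz on "chatter".
# Requires a ROS installation and a running roscore.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node("talker")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello robot"))
        rate.sleep()

if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```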


From Games to Assembly Lines, Robots Learn Faster Than Ever

#artificialintelligence

A new artificial intelligence startup called Osaro aims to give industrial robots the same turbocharge that DeepMind Technologies gave Atari-playing computer programs. In December 2013, DeepMind showcased a type of artificial intelligence that had mastered seven Atari 2600 games from scratch in a matter of hours, and could outperform some of the best human players. Google swiftly snapped up the London-based company, and the deep reinforcement learning technology behind it, for a reported $400 million. Now Osaro, with $3.3 million in investments from the likes of Peter Thiel and Jerry Yang, claims to have taken deep reinforcement learning to the next level, delivering the same superhuman AI performance but over 100 times as fast. Deep reinforcement learning arose from deep learning, a method of using neural networks with multiple layers to efficiently process and organize mountains of raw data (see "10 Breakthrough Technologies 2013: Deep Learning").
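
As a rough sketch of what deep reinforcement learning means in practice (and not a depiction of Osaro's or DeepMind's actual systems), the toy Q-learner below trains a small neural network on temporal-difference targets. A five-state chain world stands in for Atari frames; all names and sizes are illustrative.

```python
import numpy as np

# Toy deep Q-learning: a one-hidden-layer network approximates
# Q(state, action) and is trained on TD targets with epsilon-greedy
# exploration. Actions: 0 = left, 1 = right; reward at the right end.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, HIDDEN = 5, 2, 16
W1 = rng.normal(0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(s):
    x = np.zeros(N_STATES)
    x[s] = 1.0                    # one-hot state encoding
    h = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    return x, h, h @ W2           # Q-values for both actions

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

gamma, lr, eps = 0.9, 0.05, 0.2
for episode in range(500):
    s, done = 0, False
    while not done:
        x, h, q = q_values(s)
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * np.max(q_values(s2)[2])
        # Gradient descent on the squared TD error for the taken action.
        err = q[a] - target
        dW2 = np.outer(h, np.eye(N_ACTIONS)[a]) * err
        dh = W2[:, a] * err * (h > 0)
        W1 -= lr * np.outer(x, dh)
        W2 -= lr * dW2
        s = s2

# Learned greedy policy per state (expect all 1s, i.e. "move right").
print([int(np.argmax(q_values(s)[2])) for s in range(N_STATES)])
```

DeepMind's Atari agents follow the same pattern, with convolutional networks over raw pixels and a replay buffer in place of this toy's direct updates.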