Full Story: http://newscenter.berkeley.edu/2015/0... UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence. In their experiments, the PR2 robot, nicknamed BRETT for Berkeley Robot for the Elimination of Tedious Tasks, used "deep learning" techniques to complete various tasks without pre-programmed details about its surroundings.
The robot was perched over a bin filled with random objects, from a box of instant oatmeal to a small toy shark. This two-armed automaton did not recognize any of this stuff, but that did not matter. It reached into the pile and started picking things up, one after another after another. "It figures out the best way to grab each object, right from the middle of the clutter," said Jeff Mahler, one of the researchers developing the robot inside a lab at UC Berkeley. For humans, that is an easy task.
Taking inspiration from the way that children instinctively learn and adapt to a wide range of unpredictable environments, professor Pieter Abbeel and assistant professor Sergey Levine are developing algorithms that enable robots to learn from past experience -- and even from other robots. Based on a principle called deep reinforcement learning, their work is pushing robots past a crucial threshold in demonstrating human-like intelligence: the ability to independently solve problems and master new tasks more quickly and efficiently.
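The "deep reinforcement learning" principle mentioned above boils down to trial and error guided by a reward signal. The sketch below is a deliberately tiny, tabular stand-in for that idea -- not BRETT's actual neural-network method -- using an invented one-dimensional "reach the goal" environment and made-up learning parameters:

```python
import random

# Toy illustration of the reinforcement-learning principle: an agent
# improves its value estimates purely from trial-and-error reward signals.
# This is a generic tabular Q-learning sketch, NOT the deep neural-network
# system used for BRETT; the environment and all parameters are invented.

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated long-term value of each (state, action) pair.
    q = [[0.0] * n_actions for _ in range(n_states)]

    def step(state, action):
        # Hypothetical environment: action 1 moves toward the goal state,
        # action 0 moves away; reaching the last state earns reward 1.
        nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if nxt == n_states - 1 else 0.0
        return nxt, reward

    for _ in range(episodes):
        state = 0
        while state != n_states - 1:
            # Mostly exploit the current best guess; occasionally explore.
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            nxt, reward = step(state, action)
            # Trial-and-error update: nudge the estimate toward the
            # observed reward plus the discounted value of what follows.
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = q_learning()
```

The agent starts with no model of its world; the reward for reaching the goal gradually propagates back through the value table until the goal-directed action dominates at every state. The Berkeley system replaces the table with a deep neural network over raw sensor input, but the learn-from-consequences loop is the same idea.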
The new technology comes from computer scientists at the University of California, Berkeley. By taking 'machine learning' principles and building specialized 'robotic learning' systems, the researchers have given robots a degree of precognition. One day, this approach could help advance self-driving cars and lead to more intelligent robotic assistants for business operations. For now, the technology has been tested in an initial prototype that learns simple manual skills entirely through autonomous play -- the foundation for more advanced applications in robotics.
As a rule, robots have to learn through explicit instruction, whether that's new programming, demonstration videos, or a human guiding their arms by hand. This machine instead uses neural network-based deep learning algorithms to master tasks through trial and error, much as humans do. Ask it to assemble a toy and it will keep trying until it figures out what works. In theory, you'd rarely need to give the robot new code -- you'd just make requests and give the automaton enough time to work things out. As you might suspect, though, this brain-like 'bot isn't ready for the real world yet.