Researchers help robots think and plan in the abstract

When we see demonstrations of robots planning for and performing multistep tasks, "it's almost always the case that a programmer has explicitly told the robot how to think about the world in order for it to make a plan," Konidaris said. "But if we want robots that can act more autonomously, they're going to need the ability to learn abstractions on their own."

In computer science terms, these kinds of abstractions fall into two categories: "procedural abstractions" and "perceptual abstractions." Procedural abstractions are programs made out of low-level movements composed into higher-level skills. An example would be bundling all the little movements needed to open a door -- all the motor movements involved in reaching for the knob, turning it and pulling the door open -- into a single "open the door" skill. Once such a skill is built, you don't need to worry about how it works. All you need to know is when to run it.

Roboticists -- including Konidaris himself -- have been studying how to make robots learn procedural abstractions for years, he says. But according to Konidaris, there's been less progress in perceptual abstraction, which has to do with helping a robot make sense of its pixelated surroundings.
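The "open the door" example can be made concrete with a toy sketch. This is not Konidaris's actual system -- the primitive names and the `make_skill` helper are hypothetical illustrations -- but it shows the core idea: low-level motor primitives get composed into a single reusable skill, and afterward a planner only needs to decide when to invoke it, not how it works internally.

```python
# Hypothetical motor primitives, each returning the low-level
# motor commands it would issue (names are illustrative only).
def reach_for_knob():
    return ["extend_arm", "align_gripper"]

def turn_knob():
    return ["close_gripper", "rotate_wrist"]

def pull_door():
    return ["retract_arm", "release_gripper"]

def make_skill(*primitives):
    """Bundle primitive generators into one higher-level skill."""
    def skill():
        steps = []
        for primitive in primitives:
            steps.extend(primitive())
        return steps
    return skill

# The procedural abstraction: one callable the planner can treat
# as an opaque "open the door" action.
open_door = make_skill(reach_for_knob, turn_knob, pull_door)
print(open_door())
```

Running this prints the flattened command sequence, making the point that the internal motor details are hidden behind a single named skill.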
