Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning
Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano
Abstract--We propose a tool-use model that can detect the features of tools, target objects, and actions from the given effects of object manipulation. Taking infant learning as a conceptual basis, we construct a model that enables a robot to manipulate objects with tools. To realize this, we train a deep learning model on sensory-motor data recorded while the robot performs a tool-use task. The experiments involve four factors, which the model considers simultaneously: (1) tools, (2) objects, (3) actions, and (4) effects. For evaluation, the robot generates predicted images and motions when given information about the effects of using unknown tools and objects. We confirm that the robot can detect the features of tools, objects, and actions by learning the effects, and can then execute the task.

I. Introduction

In recent years, robots have entered human living spaces and are expected to perform various tasks in complex environments. If robots could use tools as humans do, they would become more versatile, overcome some of their physical limitations, and adapt better to their environments.
Sep-23-2018