Goto

Collaborating Authors

 Inamura, Tetsunari


Learning multimodal representations for sample-efficient recognition of human actions

arXiv.org Artificial Intelligence

Humans interact in rich and diverse ways with the environment. However, the representation of such behavior by artificial agents is often limited. In this work we present motion concepts, a novel multimodal representation of human actions in a household environment. A motion concept encompasses a probabilistic description of the kinematics of the action along with its contextual background, namely the location and the objects held during the performance. Furthermore, we present Online Motion Concept Learning (OMCL), a new algorithm which learns novel motion concepts from action demonstrations and recognizes previously learned motion concepts. The algorithm is evaluated in a virtual-reality household environment featuring a human avatar. OMCL outperforms standard motion recognition algorithms on a one-shot recognition task, attesting to its potential for sample-efficient recognition of human actions.
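
The idea of a multimodal motion concept can be illustrated with a toy sketch. This is not the authors' OMCL formulation: the Gaussian-plus-histogram scoring, the single kinematic feature, and all names below are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class MotionConcept:
    """Toy motion concept: a Gaussian over one kinematic feature plus
    count histograms over contextual symbols (location, held object)."""
    name: str
    mean: float      # mean of a 1-D kinematic feature (e.g., wrist speed)
    var: float       # variance of that feature
    locations: dict  # location -> count
    objects: dict    # held object -> count

    def log_score(self, feature, location, obj, alpha=1.0, vocab=4):
        # Gaussian log-likelihood of the kinematic feature ...
        ll = (-0.5 * math.log(2 * math.pi * self.var)
              - (feature - self.mean) ** 2 / (2 * self.var))
        # ... plus Laplace-smoothed log-probabilities of the context
        # (vocab is an assumed symbol-vocabulary size for smoothing).
        n_loc = sum(self.locations.values())
        n_obj = sum(self.objects.values())
        ll += math.log((self.locations.get(location, 0) + alpha)
                       / (n_loc + alpha * vocab))
        ll += math.log((self.objects.get(obj, 0) + alpha)
                       / (n_obj + alpha * vocab))
        return ll

# Two concepts "learned" from single demonstrations (one-shot).
drink = MotionConcept("drink", mean=0.2, var=0.05,
                      locations={"kitchen": 1}, objects={"cup": 1})
wave = MotionConcept("wave", mean=1.5, var=0.2,
                     locations={"hall": 1}, objects={"none": 1})

def recognize(feature, location, obj, concepts):
    """Return the name of the highest-scoring concept."""
    return max(concepts, key=lambda c: c.log_score(feature, location, obj)).name

print(recognize(0.25, "kitchen", "cup", [drink, wave]))  # drink
```

Because the context histograms contribute to the score, an ambiguous kinematic observation can still be disambiguated by where it happens and what is being held.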


Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

arXiv.org Artificial Intelligence

We propose a novel online learning algorithm, called SpCoSLAM 2.0, for spatial concepts and lexical acquisition with high accuracy and scalability. Previously, we proposed SpCoSLAM as an online learning algorithm based on an unsupervised Bayesian probabilistic model that integrates multimodal place categorization, lexical acquisition, and SLAM. However, our previous algorithm had limited estimation accuracy owing to the influence of the early stages of learning, and its computational complexity increased with added training data. Therefore, we introduce techniques such as fixed-lag rejuvenation to reduce the calculation time while maintaining an accuracy higher than that of the previous algorithm. The results show that, in terms of estimation accuracy, the proposed algorithm exceeds the previous algorithm and is comparable to batch learning. Our approach will contribute to the realization of long-term spatial language interactions between humans and robots.

Keywords: Online learning · Place categorization · Scalability · Semantic mapping · Lexical acquisition · Unsupervised Bayesian probabilistic model
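
The fixed-lag idea can be sketched in a few lines. This is only an illustration of the bounded-revision window, not the actual SpCoSLAM 2.0 model (which couples place categorization, lexical acquisition, and SLAM in a particle filter); the two fixed-mean categories and all names are assumptions.

```python
import math
import random

K, LAG = 2, 5
MEANS = [0.0, 5.0]  # fixed category means with unit variance, for simplicity

def loglik(x, k):
    return -0.5 * (x - MEANS[k]) ** 2

def sample_assignment(x):
    # Sample a category in proportion to its likelihood.
    w = [math.exp(loglik(x, k)) for k in range(K)]
    r = random.random() * sum(w)
    acc = 0.0
    for k in range(K):
        acc += w[k]
        if r <= acc:
            return k
    return K - 1

def online_learn(data, seed=0):
    """Online categorization with fixed-lag rejuvenation of assignments."""
    random.seed(seed)
    z = []
    for x in data:
        z.append(sample_assignment(x))  # online step for the new datum
        # Fixed-lag rejuvenation: re-sample only the last LAG assignments,
        # so early decisions inside the window can be corrected while the
        # per-step cost stays O(LAG) instead of growing with the data size.
        for i in range(max(0, len(z) - LAG), len(z)):
            z[i] = sample_assignment(data[i])
    return z

data = [0.1, -0.2, 5.2, 4.8, 0.0, 5.1]
print(online_learn(data))
```

The trade-off is the window length: a larger LAG corrects more early-stage errors (approaching batch re-estimation) at proportionally higher per-step cost.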


Bayesian Body Schema Estimation using Tactile Information obtained through Coordinated Random Movements

arXiv.org Artificial Intelligence

This paper describes a computational model, called the Dirichlet process Gaussian mixture model with latent joints (DPGMM-LJ), that can find a latent tree structure embedded in a data distribution in an unsupervised manner. By combining DPGMM-LJ with a pre-existing body-map formation method, we propose a method that enables an agent with a multi-link body to discover its kinematic structure, i.e., its body schema, from tactile information alone. DPGMM-LJ is a probabilistic model based on Bayesian nonparametrics and an extension of the Dirichlet process Gaussian mixture model (DPGMM). In a simulation experiment, we used a simple fetus model that had five body parts and performed structured random movements in a womb-like environment. It was shown that the method could estimate the number of body parts and the kinematic structure without any pre-existing knowledge in many cases. Another experiment showed that the degree of motor coordination in the random movements strongly affects the result of body schema formation. The accuracy of body schema estimation reached its highest value, 84.6%, when the motor-coordination ratio was 0.9 in our setting. These results suggest that kinematic structure can be estimated, albeit imperfectly, from tactile information obtained by a fetus moving randomly in a womb without any visual information. They also suggest that a certain degree of motor coordination in random movements and a sufficiently high-dimensional state space representing the body map are important for estimating the body schema correctly.
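
The key property of a DPGMM, inferring the number of mixture components rather than fixing it, can be sketched with a minimal collapsed Gibbs sampler on 1-D data. This is an assumption-laden toy (unit observation variance, a N(0, tau^2) prior on cluster means), and it omits the latent-joint tree structure that distinguishes DPGMM-LJ from a plain DPGMM.

```python
import math
import random

ALPHA, TAU2 = 1.0, 10.0  # DP concentration and prior variance (assumed)

def norm_logpdf(x, m, v):
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

def predictive(x, members, data):
    """Posterior-predictive log-density of x under a cluster's members
    (conjugate N(0, TAU2) prior on the mean, unit observation variance)."""
    n = len(members)
    s = sum(data[i] for i in members)
    post_prec = n + 1.0 / TAU2
    return norm_logpdf(x, s / post_prec, 1.0 + 1.0 / post_prec)

def gibbs(data, iters=50, seed=1):
    random.seed(seed)
    z = [0] * len(data)  # start with a single cluster
    for _ in range(iters):
        for i, x in enumerate(data):
            # Group the other points by their current cluster label.
            clusters = {}
            for j, zj in enumerate(z):
                if j != i:
                    clusters.setdefault(zj, []).append(j)
            labels = list(clusters) + [max(clusters, default=-1) + 1]
            # CRP weights: existing cluster ~ size, new cluster ~ ALPHA.
            logw = [math.log(len(clusters[k])) + predictive(x, clusters[k], data)
                    for k in labels[:-1]]
            logw.append(math.log(ALPHA) + predictive(x, [], data))
            mx = max(logw)
            w = [math.exp(l - mx) for l in logw]
            r = random.random() * sum(w)
            acc = 0.0
            for lbl, wk in zip(labels, w):
                acc += wk
                if r <= acc:
                    z[i] = lbl
                    break
    return z

data = [-5.1, -4.9, -5.0, 5.0, 5.2, 4.8]
z = gibbs(data)
print(len(set(z)))  # typically 2 for well-separated data like this
```

The number of body parts in the paper plays the role of the cluster count here: it falls out of the posterior instead of being specified in advance.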


Estimation of Suitable Action to Realize Given Novel Effect with Given Tool Using Bayesian Tool Affordances

AAAI Conferences

We present the concept of Bayesian Tool Affordances as a solution for estimating the action a robot should execute with a given tool to realize a given novel effect. We define tool affordances as the "awareness within the robot of the different kinds of effects it can create in the environment using a tool". This incorporates understanding the bi-directional association between the executed action, the functionally relevant features of the tool, and the resulting effects. We propose Bayesian learning of tool affordances to provide prediction, inference, and planning capabilities while dealing with uncertainty, redundancy, and irrelevant information using limited learning samples. Estimation results are presented in this paper to validate the proposed concept of Bayesian Tool Affordances.
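
The bi-directional aspect can be sketched with a discrete toy model: learn P(effect | action, tool) from demonstrations, then invert it with Bayes' rule to pick the action that best realizes a desired effect with a given tool. The actions, tools, effects, and counts below are invented for illustration and are not the paper's model.

```python
from collections import Counter

# (action, tool, observed effect) triples -- a handful of "demonstrations".
demos = [
    ("pull", "rake", "object_closer"),
    ("pull", "rake", "object_closer"),
    ("push", "rake", "object_farther"),
    ("push", "stick", "object_farther"),
    ("pull", "stick", "no_change"),
]

counts = Counter(demos)
action_counts = Counter(a for a, _, _ in demos)
actions = sorted(action_counts)
effects = sorted({e for _, _, e in demos})

def p_effect_given(a, t, e, alpha=0.5):
    """Laplace-smoothed P(e | a, t) over the known effect vocabulary."""
    num = counts[(a, t, e)] + alpha
    den = sum(counts[(a, t, e2)] for e2 in effects) + alpha * len(effects)
    return num / den

def best_action(tool, desired_effect):
    """MAP action: P(a | t, e) is proportional to P(e | a, t) * P(a)."""
    def posterior(a):
        prior = action_counts[a] / sum(action_counts.values())
        return p_effect_given(a, tool, desired_effect) * prior
    return max(actions, key=posterior)

print(best_action("rake", "object_closer"))  # pull
```

The same learned conditional supports the forward direction (predicting the effect of an action with a tool) and, as here, the inverse direction needed for planning.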