Knowledge Representations in Technical Systems -- A Taxonomy

arXiv.org Artificial Intelligence

The increasing use of technical systems in human-centric environments raises the question of how to teach such systems, e.g., robots, to understand, learn, and perform tasks desired by humans. An accurate representation of knowledge is therefore essential for the system to behave as expected. This article surveys different knowledge representation techniques and their categorization into various problem domains in artificial intelligence. Additionally, it presents applications of these knowledge representations in everyday robotics tasks. The provided taxonomy is intended to facilitate the search for a knowledge representation technique suited to a specific problem.
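
As a minimal illustration of how such a taxonomy might be consulted in practice, the Python sketch below maps problem domains to candidate representation techniques. The domain names, technique names, and lookup helper are illustrative assumptions, not the article's actual taxonomy.

# Hypothetical sketch: problem domains mapped to candidate knowledge
# representation techniques. All entries are illustrative assumptions,
# not the article's actual taxonomy.
TAXONOMY = {
    "task_planning": ["ontologies", "semantic networks", "logic-based representations"],
    "perception": ["probabilistic graphical models", "feature vectors"],
    "natural_language": ["frames", "semantic networks"],
}

def candidate_representations(problem_domain: str) -> list:
    """Return candidate techniques for a problem domain, or an empty list."""
    return TAXONOMY.get(problem_domain, [])

if __name__ == "__main__":
    print(candidate_representations("task_planning"))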


RoboBrain: Large-Scale Knowledge Engine for Robots

arXiv.org Artificial Intelligence

In this paper we introduce a knowledge engine, which learns and shares knowledge representations, for robots to carry out a variety of tasks. Building such an engine brings with it the challenge of dealing with multiple data modalities including symbols, natural language, haptic senses, robot trajectories, visual features and many others. The knowledge stored in the engine comes from multiple sources including physical interactions that robots have while performing tasks (perception, planning and control), knowledge bases from the Internet and learned representations from several robotics research groups. We discuss various technical aspects and associated challenges such as modeling the correctness of knowledge, inferring latent information and formulating different robotic tasks as queries to the knowledge engine. We describe the system architecture and how it supports different mechanisms for users and robots to interact with the engine. Finally, we demonstrate its use in three important research areas: grounding natural language, perception, and planning, which are the key building blocks for many robotic tasks. This knowledge engine is a collaborative effort and we call it RoboBrain.
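
As one way to picture "robotic tasks as queries to the knowledge engine", the sketch below matches patterns against a tiny in-memory store of (subject, relation, object) triples. The triples and the query helper are hypothetical illustrations, not RoboBrain's actual query interface.

# Hypothetical sketch of querying a knowledge graph stored as
# (subject, relation, object) triples; this is not RoboBrain's actual
# query language, just an illustration of the idea.
TRIPLES = {
    ("cup", "has_affordance", "pourable"),
    ("cup", "grasp_point", "handle"),
    ("knife", "has_affordance", "cutting"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    ]

if __name__ == "__main__":
    # "How do I grasp a cup?" posed as a query to the engine.
    print(query(subject="cup", relation="grasp_point"))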


Functional Object-Oriented Network: Construction & Expansion

arXiv.org Artificial Intelligence

We build upon the functional object-oriented network (FOON), a structured knowledge representation constructed from observations of human activities and manipulations. A FOON can be used to represent object-motion affordances. Knowledge retrieval through graph search allows us to obtain novel manipulation sequences using knowledge spanning many video sources, which is the novelty of our approach. However, we are limited to the sources we have collected. To further improve the performance of knowledge retrieval, as a follow-up to our previous work, we discuss generalizing knowledge so it can be applied to objects similar to those already in FOON without manually annotating new sources of knowledge. We discuss two means of generalization: 1) expanding our network through the use of object similarity to create new functional units from those we already have, and 2) compressing the functional units by object categories rather than specific objects. We present experiments comparing the performance of our knowledge retrieval algorithm under both expansion and compression by categories.
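
The sketch below illustrates the expansion idea on a toy FOON-style functional unit (input objects, a motion, output objects), substituting similar objects to create new units. The data structure fields and the similarity map are assumptions for illustration, not the paper's exact implementation.

# Illustrative sketch of a FOON-style functional unit and similarity-based
# expansion; field names and the similarity table are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionalUnit:
    inputs: tuple   # object states consumed by the manipulation
    motion: str     # manipulation motion, e.g. "pour"
    outputs: tuple  # object states produced

SIMILAR = {"cup": ("mug", "glass")}  # hypothetical object-similarity map

def expand(unit):
    """Create new functional units by substituting similar objects."""
    new_units = []
    for obj in set(unit.inputs + unit.outputs):
        for sub in SIMILAR.get(obj, ()):
            new_units.append(FunctionalUnit(
                tuple(sub if o == obj else o for o in unit.inputs),
                unit.motion,
                tuple(sub if o == obj else o for o in unit.outputs),
            ))
    return new_units

if __name__ == "__main__":
    pour = FunctionalUnit(("kettle", "cup"), "pour", ("cup",))
    for unit in expand(pour):
        print(unit)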


Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots

arXiv.org Machine Learning

One of the open challenges in designing robots that operate successfully in unpredictable human environments is how to make them able to predict what actions they can perform on objects, and what the effects of those actions will be, i.e., the ability to perceive object affordances. Since modeling all possible world interactions is infeasible, learning from experience is required, which poses the challenge of collecting a large amount of experience (i.e., training data). Typically, a manipulative robot operates on external objects using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable. Nevertheless, it is reasonable to assume that while a robot can collect many sensorimotor experiences using its own hands, this cannot happen for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: which of the sensorimotor skills a robot has acquired with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making, and tool-selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset.
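
As a hedged sketch of the underlying idea, the snippet below fits a simple Bayesian linear regression from synthetic hand-posture features to an action effect, then queries it with features of an unseen tool. The feature space, data, and hyperparameters are assumptions for illustration, not the paper's actual model.

# Sketch: fit a Gaussian (Bayesian ridge) model of action effects from
# hand-posture features, then evaluate unseen tool features with the
# same model. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: rows are visual features of hand postures,
# target is an observed effect (e.g., object displacement after a push).
X_hand = rng.normal(size=(100, 3))
y_effect = X_hand @ np.array([0.8, -0.2, 0.5]) + rng.normal(scale=0.1, size=100)

# Posterior over weights under a Gaussian prior (precision alpha)
# and Gaussian observation noise (variance sigma2).
alpha, sigma2 = 1.0, 0.1 ** 2
A = X_hand.T @ X_hand / sigma2 + alpha * np.eye(3)
w_mean = np.linalg.solve(A, X_hand.T @ y_effect / sigma2)
w_cov = np.linalg.inv(A)

def predict_effect(x):
    """Predictive mean and variance of the effect for feature vector x."""
    return x @ w_mean, sigma2 + x @ w_cov @ x

# An unseen tool described in the same feature space as the hand.
x_unseen_tool = np.array([0.5, 1.0, -0.3])
print(predict_effect(x_unseen_tool))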


Hierarchical Skills and Skill-based Representation

AAAI Conferences

Autonomous robots demand complex behavior to deal with unstructured environments. To meet these expectations, a robot needs to address a suite of problems associated with long-term knowledge acquisition, representation, and execution in the presence of partial information. In this paper, we address these issues through the acquisition of broad, domain-general skills using an intrinsically motivated reward function. We show how these skills can be represented compactly and used hierarchically to obtain complex manipulation skills. We further present a Bayesian model that uses the learned skills to model objects in the world in terms of the actions they afford. We argue that our knowledge representation allows a robot both to predict the dynamics of objects in the world and to recognize them.
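
The affordance-based recognition idea in this abstract can be illustrated with a few lines of Bayes' rule: P(object | outcomes) is proportional to P(object) times the product of P(outcome | object, action). The objects, actions, and probability tables below are invented placeholders, not the paper's learned model.

# Minimal sketch of recognizing objects by the actions they afford.
# All probabilities are illustrative assumptions.
LIKELIHOOD = {
    # P(outcome="moves" | object, action="push")
    ("ball", "push"): 0.9,
    ("box", "push"): 0.4,
}
PRIOR = {"ball": 0.5, "box": 0.5}

def posterior(observations):
    """observations: list of (action, moved_bool) pairs; returns P(object | obs)."""
    scores = {}
    for obj, p_obj in PRIOR.items():
        p = p_obj
        for action, moved in observations:
            p_move = LIKELIHOOD[(obj, action)]
            p *= p_move if moved else (1.0 - p_move)
        scores[obj] = p
    z = sum(scores.values())
    return {obj: p / z for obj, p in scores.items()}

if __name__ == "__main__":
    # The robot pushed twice and the object moved both times:
    # the posterior shifts toward the object that affords rolling.
    print(posterior([("push", True), ("push", True)]))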