Design and Implementation of Linked Planning Domain Definition Language

arXiv.org Artificial Intelligence

Planning is a critical component of any artificial intelligence system, concerning the realization of strategies or action sequences, typically for intelligent agents and autonomous robots. Given predefined parameterized actions, a planning service should accept a query specifying a goal and an initial state and return a solution: a sequence of actions applied to environmental objects. This paper addresses the problem by providing a repository of actions generically applicable to various environmental objects, based on Semantic Web technologies. Ontologies are used to assert common-sense constraints as well as to resolve compatibilities between actions and states. Constraints are defined using Web standards such as SPARQL and SHACL, allowing conditional predicates. We demonstrate the usefulness of the proposed planning domain definition language with our robotics applications.
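The abstract mentions SHACL-based constraints on actions; a minimal sketch of what such a precondition check could look like in Python follows, using the rdflib and pyshacl libraries. All ontology terms and the PickUp action are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of SHACL-based precondition checking for a planning action.
# All ontology terms (ex:Cup, ex:isFilled, the PickUp precondition) are
# hypothetical; only the rdflib/pyshacl usage reflects the real libraries.
from rdflib import Graph
from pyshacl import validate

# Current world state: facts about an environmental object, in Turtle.
state = Graph().parse(data="""
    @prefix ex: <http://example.org/planning#> .
    ex:cup1 a ex:Cup ;
        ex:isFilled false .
""", format="turtle")

# SHACL shape encoding a hypothetical precondition of a PickUp action:
# every Cup must state whether it is filled before it can be grasped.
shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/planning#> .
    ex:PickUpPreconditionShape a sh:NodeShape ;
        sh:targetClass ex:Cup ;
        sh:property [ sh:path ex:isFilled ; sh:minCount 1 ] .
""", format="turtle")

conforms, _, report = validate(state, shacl_graph=shapes)
print(conforms)  # True: the precondition holds, so PickUp is applicable
```

A planning service in this style could run such a validation before committing an action to the plan, rejecting actions whose preconditions the current world state fails to satisfy.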


Knowledge Processing for Autonomous Robot Control

AAAI Conferences

Successfully accomplishing everyday manipulation tasks requires robots to have substantial knowledge about the objects they interact with, the environment they operate in, and the properties and effects of the actions they perform. Often, this knowledge is implicitly contained in manually written control programs, which makes it hard for the robot to adapt to newly acquired information or to re-use knowledge in a different context. By explicitly representing this knowledge, control decisions can be formulated as inference tasks and sent as queries to a knowledge base. This allows the robot to take all information it has at query time into account when generating answers, leading to better flexibility, adaptability to changing situations, robustness, and the ability to re-use knowledge once acquired. In this paper, we report on our work towards a practical and grounded knowledge representation and inference system. The system is specifically designed to meet the challenges of using knowledge processing techniques on autonomous robots, including specialized inference methods, grounding of symbolic knowledge in the robot's control structures, and the acquisition of the different kinds of knowledge a robot needs.
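To make the idea of control decisions as inference tasks concrete, here is a minimal, hypothetical sketch in Python; the triple-store interface and the predicates (storagePlaceFor, perceivedAt) are illustrative inventions, not the authors' system or its API.

```python
# Hypothetical sketch: a control decision phrased as a knowledge-base query,
# in the spirit of the paper. Not the authors' system or its query language.
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    predicate: str
    subject: str
    obj: str


class KnowledgeBase:
    def __init__(self) -> None:
        self.facts: set[Fact] = set()

    def tell(self, predicate: str, subject: str, obj: str) -> None:
        """Assert a new fact, e.g. one acquired from perception."""
        self.facts.add(Fact(predicate, subject, obj))

    def ask(self, predicate: str, subject=None, obj=None) -> list[Fact]:
        """Return all facts matching the (possibly partial) pattern."""
        return [f for f in self.facts
                if f.predicate == predicate
                and subject in (None, f.subject)
                and obj in (None, f.obj)]


kb = KnowledgeBase()
kb.tell("storagePlaceFor", "fridge", "milk")   # semantic knowledge
kb.tell("perceivedAt", "milk", "counter")      # grounded perception

# Control decision as an inference task: where should the milk be put away?
targets = kb.ask("storagePlaceFor", obj="milk")
print([f.subject for f in targets])  # ['fridge']
```

Because the answer is computed at query time over whatever facts the robot currently holds, newly perceived information changes the decision without rewriting any control program, which is the flexibility the abstract argues for.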


Cognitive Robotics Using the Soar Cognitive Architecture

AAAI Conferences

Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which improve Soar's abilities to control robots. These extensions include mental imagery, episodic and semantic memory, reinforcement learning, and continuous model learning. This paper presents research in mobile robotics, relational and continuous model learning, and learning by situated, interactive instruction.
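As an illustration of one of the listed extensions, the sketch below mimics cue-based episodic memory retrieval in plain Python; it is a toy stand-in under loose assumptions about how such a memory behaves, not Soar's actual episodic memory or its SML API.

```python
# Toy sketch of cue-based episodic memory retrieval, illustrating the idea
# behind one Soar extension; this is NOT Soar's implementation or API.
episodes: list[dict] = []  # chronological snapshots of symbolic state


def record(state: dict) -> None:
    """Store a snapshot of the agent's working memory each decision cycle."""
    episodes.append(dict(state))


def retrieve(cue: dict) -> dict | None:
    """Return the most recent episode matching every feature in the cue."""
    for state in reversed(episodes):
        if all(state.get(k) == v for k, v in cue.items()):
            return state
    return None


record({"location": "hallway", "battery": "high"})
record({"location": "kitchen", "battery": "low", "saw": "charger"})

# The robot asks: where did I last see the charger?
print(retrieve({"saw": "charger"})["location"])  # kitchen
```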


An Ontology-Based Symbol Grounding System for Human-Robot Interaction

AAAI Conferences

This paper presents an ongoing collaboration to develop a perceptual anchoring framework that creates and maintains symbol-percept links for household objects. The paper presents an approach that grounds the symbol system in ontologies and allows for HRI by enabling queries about objects' properties, their affordances, and their perceptual characteristics as viewed by the robot (e.g. when they were last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.
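A minimal, hypothetical data-structure sketch of a perceptual anchor follows, showing how a symbol-percept link could support the kind of "last seen" query the abstract mentions; the class names and fields are illustrative assumptions, not the authors' framework.

```python
# Hypothetical sketch of a perceptual anchor: a symbol-percept link for one
# household object. Fields and names are illustrative, not from the paper.
from dataclasses import dataclass, field


@dataclass
class Percept:
    timestamp: float                       # when the robot observed the object
    position: tuple[float, float, float]   # where it was observed
    color_histogram: list[float]           # a simple perceptual signature


@dataclass
class Anchor:
    symbol: str            # e.g. 'cup-1', the symbolic handle
    ontology_class: str    # e.g. 'Cup' in a household-object ontology
    percepts: list[Percept] = field(default_factory=list)

    def update(self, percept: Percept) -> None:
        """Maintain the anchor over time with a new observation."""
        self.percepts.append(percept)

    def last_seen(self) -> float | None:
        """Answer the HRI query 'when was this object last seen?'."""
        return max((p.timestamp for p in self.percepts), default=None)


anchor = Anchor(symbol="cup-1", ontology_class="Cup")
anchor.update(Percept(timestamp=1712.5, position=(0.4, 0.1, 0.9),
                      color_histogram=[0.2, 0.5, 0.3]))
print(anchor.last_seen())  # 1712.5
```

Tying ontology_class to an ontology is what lets such queries extend beyond raw percepts to inherited properties and affordances, as the paper proposes.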


A Survey of Knowledge Representation and Retrieval for Learning in Service Robotics

arXiv.org Artificial Intelligence

Within the realm of service robotics, researchers have devoted considerable effort to learning motions and manipulations for task execution by robots. Robot learning is a very broad problem, involving object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how knowledge can be gathered, represented, and reproduced to solve problems, as researchers have done over the past decades. We discuss the problems that have arisen in robot learning and the solutions, technologies, or developments (if any) that have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics (datasets and networks). Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.