Learning and deliberation are required to endow a robot with the capabilities to acquire knowledge, perform a variety of tasks and interactions, and adapt to open-ended environments. This paper explores the notion of experience-based planning domains (EBPDs) for task-level learning and planning in robotics. EBPDs rely on methods for a robot to: (i) obtain activity experiences from its own performance; (ii) conceptualize each experience into a task model called an activity schema; and (iii) exploit the learned activity schemata to make plans in similar situations. Experiences are episodic descriptions of plan-based robot activities, including environment perception, sequences of applied actions, and achieved tasks. The conceptualization approach integrates several techniques, including deductive generalization, abstraction, and feature extraction, to learn activity schemata. A high-level task planner was developed to find a solution for a similar task by following an activity schema. In this paper, we extend our previous approach by integrating goal inference capabilities. The proposed approach is illustrated in a restaurant environment where a service robot learns how to carry out complex tasks.
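The experience-to-schema pipeline described above can be sketched minimally as follows. This is an illustrative assumption of how an activity schema might be represented and reused, not the authors' implementation; all names (`Experience`, `conceptualize`, `instantiate`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    """Episodic record of a plan-based activity (hypothetical structure)."""
    task: str
    actions: list  # concrete (action_name, arguments) pairs observed during execution

def conceptualize(exp: Experience) -> list:
    """Deductively generalize concrete arguments into typed placeholders."""
    schema, seen = [], {}
    for name, args in exp.actions:
        abstract_args = []
        for a in args:
            if a not in seen:              # first occurrence gets a fresh variable
                seen[a] = f"?x{len(seen)}"
            abstract_args.append(seen[a])  # repeated objects share one variable
        schema.append((name, abstract_args))
    return schema

def instantiate(schema: list, binding: dict) -> list:
    """Follow the learned schema in a new, similar situation."""
    return [(name, [binding.get(v, v) for v in args]) for name, args in schema]

exp = Experience("serve_coffee",
                 [("pick", ["cup1"]), ("move", ["table1"]), ("place", ["cup1"])])
schema = conceptualize(exp)
# schema: [('pick', ['?x0']), ('move', ['?x1']), ('place', ['?x0'])]
plan = instantiate(schema, {"?x0": "cup2", "?x1": "table3"})
```

Note how the abstraction step preserves object identity across actions: `cup1` maps to the same variable `?x0` in both `pick` and `place`, so any later binding remains consistent.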
Ayari, Naouel (University of Paris East Créteil) | Chibani, Abdelghani (University of Paris East Créteil) | Amirat, Yacine (University of Paris East Créteil) | Fried, Georges (University of Paris East Créteil)
To provide smart assistive services to people anywhere and at any time, cognitive robots and agents need to be endowed with advanced spatio-temporal knowledge representation and reasoning capabilities. In this paper, a semantic approach for cloud-assisted robotics integrating entities of the ambient environment is proposed. Its principle consists of advanced contextual knowledge representation and reasoning models based on the hybridization of metric, topological, and semantic information. A scenario dedicated to the cognitive assistance of frail people is implemented and analyzed to validate the proposed approach.
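A hybrid metric/topological/semantic representation of the kind mentioned above can be sketched as follows. This is our own minimal illustration under assumed names (`places`, `adjacent`, `semantic_query`), not the paper's actual knowledge model.

```python
# Hybrid spatial knowledge: each place carries a metric pose, a semantic
# type, and semantic content; topology is a separate adjacency relation.
places = {
    "kitchen":  {"pose": (2.0, 3.5), "type": "Room",    "contains": ["fridge"]},
    "corridor": {"pose": (5.0, 3.5), "type": "Passage", "contains": []},
    "bedroom":  {"pose": (8.0, 3.0), "type": "Room",    "contains": ["bed"]},
}
adjacent = {("kitchen", "corridor"), ("corridor", "bedroom")}

def connected(a: str, b: str) -> bool:
    """Topological query: are two places directly connected?"""
    return (a, b) in adjacent or (b, a) in adjacent

def semantic_query(object_name: str):
    """Semantic query: in which place is a given object known to be?"""
    for place, info in places.items():
        if object_name in info["contains"]:
            return place
    return None

room = semantic_query("bed")            # semantic answer: 'bedroom'
reachable = connected("kitchen", "corridor")
```

Keeping the three layers separate but cross-indexed is the point of the hybridization: a semantic answer ("the bed is in the bedroom") can be refined topologically (which passages lead there) and metrically (the pose to navigate to).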
Beeson, Patrick (TRACLabs Inc.) | Kortenkamp, David (TRACLabs Inc.) | Bonasso, R. Peter (TRACLabs Inc.) | Persson, Andreas (Orebro University) | Loutfi, Amy (Orebro University) | Bona, Jonathan P. (State University of New York, Buffalo)
This paper presents an ongoing collaboration to develop a perceptual anchoring framework which creates and maintains symbol-percept links concerning household objects. The paper presents an approach to non-trivialize the symbol system using ontologies and to enable HRI through queries about objects' properties, their affordances, and their perceptual characteristics as viewed from the robot (e.g. last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.
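A symbol-percept link supporting a "last seen" style query might look like the following. This is a hypothetical sketch (the `Anchor` class and its methods are our assumptions, not the authors' framework).

```python
class Anchor:
    """Links a symbol (e.g. 'cup-1') to the percepts observed for it."""
    def __init__(self, symbol: str):
        self.symbol = symbol
        self.percepts = []            # (timestamp, feature dict) observations

    def observe(self, features: dict, t: float):
        """Maintain the anchor with a new percept."""
        self.percepts.append((t, features))

    def last_seen(self):
        """Perceptual-characteristics query: when was this object last observed?"""
        return max(t for t, _ in self.percepts) if self.percepts else None

cup = Anchor("cup-1")
cup.observe({"color": "red", "pos": (1.0, 0.5)}, t=10.0)
cup.observe({"color": "red", "pos": (1.2, 0.5)}, t=42.0)
seen = cup.last_seen()                # 42.0
```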
Hofmann, Till (RWTH Aachen University) | Mataré, Victor (FH Aachen University for Applied Sciences) | Schiffer, Stefan (RWTH Aachen University, FH Aachen University for Applied Sciences) | Ferrein, Alexander (FH Aachen University for Applied Sciences) | Lakemeyer, Gerhard (RWTH Aachen University)
In this paper, we are concerned with making the execution of abstract action plans for robotic agents more robust. To this end, we propose to model the internals of a robot system and their ties to the actions that the robot can perform. Based on these models, we propose an online transformation of an abstract plan into executable actions conforming with system specifics. With our framework, we aim to achieve two goals. First, modeling the system internals is beneficial in its own right in order to achieve long-term autonomy, system transparency, and comprehensibility. Second, separating the system details from determining the course of action on an abstract level leverages the use of planning for actual robotic systems.
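The online transformation of an abstract plan into executable, system-conforming actions can be sketched as below. The system model, action names, and expansion rules are illustrative assumptions, not the paper's actual framework.

```python
# Maps each abstract action to the platform-level skill sequence it
# expands to; this stands in for the paper's model of system internals.
SYSTEM_MODEL = {
    "goto":  lambda loc: [("localize", ()), ("navigate", (loc,))],
    "grasp": lambda obj: [("detect", (obj,)), ("open_gripper", ()),
                          ("reach", (obj,)), ("close_gripper", ())],
}

def refine(abstract_plan: list) -> list:
    """Transform an abstract plan into executable platform skills."""
    executable = []
    for action, args in abstract_plan:
        expand = SYSTEM_MODEL.get(action)
        if expand is None:
            raise ValueError(f"no system model for abstract action {action!r}")
        executable.extend(expand(*args))
    return executable

plan = [("goto", ("kitchen",)), ("grasp", ("cup",))]
steps = refine(plan)
```

The separation the abstract argues for shows up directly here: the planner only reasons over `goto` and `grasp`, while `SYSTEM_MODEL` encapsulates platform specifics and can change without touching the planning level.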
Natural-language-facilitated human-robot cooperation (NLC) refers to using natural language (NL) to facilitate interactive information sharing and task execution, with a common goal constraint, between robots and humans. Recently, NLC research has received increasing attention. Typical NLC scenarios include robotic daily assistance, robotic health caregiving, intelligent manufacturing, autonomous navigation, and robot social accompaniment. However, a thorough review that reveals the latest methodologies for using NL to facilitate human-robot cooperation is missing. In this review, a comprehensive summary of methodologies for NLC is presented. NLC research includes three main research focuses: NL instruction understanding, NL-based execution plan generation, and knowledge-world mapping. In-depth analyses of theoretical methods, applications, and model advantages and disadvantages are made. Based on this review and our perspective, potential research directions for NLC are summarized.
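The three research focuses named above form a pipeline that can be sketched in a deliberately toy form. This is our illustration only, not a method from the survey; the keyword-based "understanding" and the `KNOWLEDGE` table are hypothetical stand-ins for the far richer models the review covers.

```python
# Knowledge-world mapping: where objects are known to be in the world.
KNOWLEDGE = {"cup": "kitchen", "book": "bedroom"}

def understand(instruction: str):
    """NL instruction understanding (naive): extract verb and object."""
    words = instruction.lower().split()
    return words[0], words[-1]

def generate_plan(verb: str, obj: str) -> list:
    """NL-based execution plan generation, grounded via the knowledge map."""
    location = KNOWLEDGE[obj]
    return [("goto", location), (verb, obj)]

verb, obj = understand("fetch the cup")
plan = generate_plan(verb, obj)       # [('goto', 'kitchen'), ('fetch', 'cup')]
```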