Rockel, Sebastian (University of Hamburg) | Neumann, Bernd (University of Hamburg) | Zhang, Jianwei (University of Hamburg) | Dubba, Sandeep Krishna Reddy (University of Leeds) | Cohn, Anthony G. (University of Leeds) | Konecny, Stefan (Örebro University) | Mansouri, Masoumeh (Örebro University) | Pecora, Federico (Örebro University) | Saffiotti, Alessandro (Örebro University) | Günther, Martin (University of Osnabrück) | Stock, Sebastian (University of Osnabrück) | Hertzberg, Joachim (University of Osnabrück) | Tome, Ana Maria (University of Aveiro) | Pinho, Armando (University of Aveiro) | Lopes, Luis Seabra (University of Aveiro) | Riegen, Stephanie von (HITeC e.V.) | Hotz, Lothar (HITeC e.V.)
One way to improve the robustness and flexibility of robot performance is to let the robot learn from its experiences. In this paper, we describe the architecture and knowledge-representation framework for a service robot being developed in the EU project RACE, and present examples illustrating how learning from experiences will be achieved. As a unique innovative feature, the framework combines memory records of low-level robot activities with ontology-based high-level semantic descriptions.
Kasaei, S. Hamidreza (University of Aveiro) | Sock, Juil (Imperial College London) | Lopes, Luis Seabra (University of Aveiro) | Tome, Ana Maria (University of Aveiro) | Kim, Tae-Kyun (Imperial College London)
There is a growing need for robots that can interact with people in everyday situations. For service robots, it is not reasonable to assume that all object categories can be pre-programmed. Instead, apart from learning from a batch of labelled training data, robots should continuously update and learn new object categories while working in the environment. This paper proposes a cognitive architecture designed to support concurrent 3D object category learning and recognition in an interactive, open-ended manner. In particular, this cognitive architecture provides automatic perception capabilities that allow robots to detect objects in highly crowded scenes and learn new object categories from the set of accumulated experiences in an incremental and open-ended way. Moreover, it supports constructing the full model of an unknown object in an on-line manner and predicting the next best view for improving object detection and manipulation performance. We provide extensive experimental results demonstrating system performance in terms of recognition, scalability, next-best-view prediction and real-world robotic applications.
Karapinar, Sertac (Istanbul Technical University) | Sariel-Talay, Sanem (Istanbul Technical University) | Yildiz, Petek (Istanbul Technical University) | Ersen, Mustafa (Istanbul Technical University)
A cognitive robot may face failures during the execution of its actions in the physical world. In this paper, we investigate how robots can ensure robustness by gaining experience from action executions, and we propose a lifelong experimental learning method. We use Inductive Logic Programming (ILP) as the learning method to frame new hypotheses. ILP provides first-order logic representations of the derived hypotheses that are useful for reasoning and planning processes. Furthermore, it can use background knowledge to represent more advanced rules, and partially specified world states can be easily represented in these rules. These advantages make ILP superior to attribute-based learning approaches. Experience gained through incremental learning is used to guide the robot's future decisions towards robust execution. The results on our Pioneer 3DX robot reveal that the hypotheses framed for failure cases are sound and ensure safety in future tasks of the robot.
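To make the abstract's idea concrete, the following is an illustrative sketch (not code from the paper) of how an ILP-derived first-order hypothesis can guide future decisions: a learned rule such as "fails(grasp(Obj)) :- heavy(Obj), single_gripper" is checked against the current state, and the robot avoids actions whose failure hypothesis fires. The predicate and action names here are invented for the example.

```python
# Sketch only: an ILP-style failure hypothesis represented as a list of
# ground literals (its body), checked against a set of state facts.
# Predicates such as ("heavy", "box1") are hypothetical examples,
# not taken from the paper.

def rule_fires(body, facts):
    """True when every literal in the hypothesis body holds in the state."""
    return all(literal in facts for literal in body)

# Learned hypothesis: grasping fails when the object is heavy and only
# a single gripper is available.
hypothesis_body = [("heavy", "box1"), ("single_gripper",)]

state = {("heavy", "box1"), ("single_gripper",), ("clear", "box1")}

# The planner would veto grasp(box1) because the failure rule fires.
risky = rule_fires(hypothesis_body, state)
```

In a real ILP setting the body would contain variables rather than ground terms; this sketch only shows how a framed hypothesis can act as a guard during plan execution.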
We are implementing ADAPT, a cognitive architecture for a Pioneer mobile robot, to give the robot the full range of cognitive abilities, including perception, use of natural language, learning, and the ability to solve complex problems. Our perspective is that an architecture based on a unified theory of robot cognition has the best chance of attaining human-level performance. Existing work in cognitive modeling has accomplished much in the construction of such unified cognitive architectures in areas other than robotics; however, there are major respects in which these architectures are inadequate for robot cognition. This paper examines two major inadequacies of current cognitive architectures for robotics: the absence of support for true concurrency and for active perception.
actions map to the different levels at which experts reason. When considering a building at the foundations level, an expert may place the constraint that the foundations must be laid before the drains. This constraint is placed on the abstract LAY action of class FOUNDATIONS, but it is to be followed by all the actions of the FOUNDATIONS subcomponents. Primitive actions are only associated with components that do not have subcomponents, and they correspond to the actions that will appear in the final construction plan for a building. Primitive actions may be related to components through both the Must and Infer relationship types.

Figure 2: Subcomponent Structure with Abstract and Primitive Actions

Dependency Modelling

Like action knowledge, expert knowledge about dependency is organised around components. Figure 3 shows the Under relationship existing between the classes BEAM and DRAIN. The semantics are that an instance of class DRAIN will pass under an instance of class BEAM. Thus, the actions that lay the drain must be completed before work commences on the beam. This relationship can be expressed by placing the temporal ordering constraint Drain.abstract-action