Experience-based planning domains (EBPDs) have recently been proposed to improve problem solving by learning from experience. EBPDs provide important concepts for long-term learning and planning in robotics. They rely on acquiring and using task knowledge, i.e., activity schemata, for generating concrete solutions to problem instances in a class of tasks. Using Three-Valued Logic Analysis (TVLA), we extend previous work to generate a set of conditions that defines the scope of applicability of an activity schema. The inferred scope is a bounded representation of a set of problems of potentially unbounded size, in the form of a 3-valued logical structure, which allows an EBPD system to automatically find an applicable activity schema for solving task problems. We demonstrate the utility of our approach on several classes of problems in a simulated domain and on a class of real-world tasks with a fully physically simulated PR2 robot in Gazebo.
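The abstract's key device is the 3-valued logical structure, which rests on Kleene's 3-valued logic: a formula evaluated over an abstract state can be definitely true, definitely false, or indefinite ("maybe", often written 1/2). The sketch below illustrates only this underlying logic and how a conjunctive scope condition would be checked against it; the names and the `scope_applicable` helper are illustrative, not TVLA's or the EBPD system's actual API.

```python
# Minimal sketch of Kleene's 3-valued logic, the basis of TVLA-style
# abstraction. All names here are illustrative, not the TVLA API.
from enum import Enum

class TV(Enum):
    FALSE = 0
    TRUE = 1
    MAYBE = 2  # "1/2": holds for some but not all concrete states

def tv_and(a: TV, b: TV) -> TV:
    """Kleene conjunction: FALSE dominates, MAYBE is absorbing otherwise."""
    if a is TV.FALSE or b is TV.FALSE:
        return TV.FALSE
    if a is TV.TRUE and b is TV.TRUE:
        return TV.TRUE
    return TV.MAYBE

def tv_or(a: TV, b: TV) -> TV:
    """Kleene disjunction: TRUE dominates."""
    if a is TV.TRUE or b is TV.TRUE:
        return TV.TRUE
    if a is TV.FALSE and b is TV.FALSE:
        return TV.FALSE
    return TV.MAYBE

def scope_applicable(conjuncts):
    """A conjunctive scope condition, evaluated over an abstract state,
    licenses a schema only when every conjunct is definitely TRUE."""
    result = TV.TRUE
    for c in conjuncts:
        result = tv_and(result, c)
    return result
```

Note how a single `MAYBE` conjunct leaves applicability undecided, which is exactly why a bounded 3-valued structure can stand in for an unbounded set of concrete problems: the indefinite value summarizes the cases the abstraction does not distinguish.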
Kasaei, S. Hamidreza (University of Aveiro) | Sock, Juil (Imperial College London) | Lopes, Luis Seabra (University of Aveiro) | Tome, Ana Maria (University of Aveiro) | Kim, Tae-Kyun (Imperial College London)
There is a growing need for robots that can interact with people in everyday situations. For service robots, it is not reasonable to assume that all object categories can be pre-programmed. Instead, apart from learning from a batch of labelled training data, robots should continuously update and learn new object categories while working in the environment. This paper proposes a cognitive architecture designed to support concurrent 3D object category learning and recognition in an interactive and open-ended manner. In particular, this cognitive architecture provides automatic perception capabilities that allow robots to detect objects in highly crowded scenes and learn new object categories from the set of accumulated experiences in an incremental and open-ended way. Moreover, it supports constructing a full model of an unknown object in an online manner and predicting the next best view for improving object detection and manipulation performance. We provide extensive experimental results demonstrating system performance in terms of recognition, scalability, next-best-view prediction and real-world robotic applications.
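The open-ended learning idea above, that categories are created and extended on the fly rather than fixed at training time, can be caricatured with a simple instance-based learner: each category is a growing set of feature vectors, and recognition is nearest-neighbour matching with a rejection threshold. This is only a hedged sketch of the concept; the paper's actual pipeline (3D shape perception, crowded-scene detection, next-best-view prediction) is far richer, and the class and method names below are made up.

```python
# Hypothetical sketch of open-ended, instance-based category learning.
import math

class OpenEndedLearner:
    def __init__(self):
        self.categories = {}  # label -> list of feature vectors

    def teach(self, label, features):
        """Add one labelled experience, creating the category if new."""
        self.categories.setdefault(label, []).append(features)

    def recognise(self, features, threshold=1.0):
        """Return the closest known label, or None when nothing is near
        enough -- the 'unknown object' case that triggers new learning."""
        best_label, best_dist = None, float("inf")
        for label, instances in self.categories.items():
            for inst in instances:
                d = math.dist(features, inst)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label if best_dist <= threshold else None
```

A `None` result is the interesting outcome here: in an open-ended setting it would prompt the robot (or a human teacher) to supply a label, after which `teach` extends the category set without retraining anything.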
Learning and deliberation are required to endow a robot with the capabilities to acquire knowledge, perform a variety of tasks and interactions, and adapt to open-ended environments. This paper explores the notion of experience-based planning domains (EBPDs) for task-level learning and planning in robotics. EBPDs rely on methods for a robot to: (i) obtain activity experiences from the robot's performance; (ii) conceptualize each experience into a task model called an activity schema; and (iii) exploit the learned activity schemata to make plans in similar situations. Experiences are episodic descriptions of plan-based robot activities including environment perception, sequences of applied actions and achieved tasks. The conceptualization approach integrates different techniques, including deductive generalization, abstraction and feature extraction, to learn activity schemata. A high-level task planner was developed to find a solution for a similar task by following an activity schema. In this paper, we extend our previous approach by integrating goal inference capabilities. The proposed approach is illustrated in a restaurant environment where a service robot learns how to carry out complex tasks.
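Steps (ii) and (iii) above, conceptualizing an experience into a parameterized activity schema and then reusing it for a similar task, can be sketched as a lifted action sequence that is re-instantiated with new bindings. This is a toy illustration under assumed names (`serve`, `pick`, `place`, the `?var` convention); it is not the EBPD system's actual representation or planner.

```python
# Illustrative sketch: an activity schema as an abstracted (lifted) plan,
# and instantiation of that schema for a new, concrete problem instance.
# All task and action names here are hypothetical.

ACTIVITY_SCHEMA = {
    "task": "serve(?obj, ?from, ?to)",
    "abstract_plan": [
        ("move", "?from"),
        ("pick", "?obj"),
        ("move", "?to"),
        ("place", "?obj", "?to"),
    ],
}

def instantiate(schema, bindings):
    """Bind schema variables to concrete objects, yielding a ground plan.
    Unbound variables are left as-is (a real planner would resolve them)."""
    plan = []
    for step in schema["abstract_plan"]:
        plan.append(tuple(bindings.get(arg, arg) for arg in step))
    return plan

# A new problem in the same task class reuses the learned schema:
plan = instantiate(ACTIVITY_SCHEMA,
                   {"?obj": "coffee", "?from": "counter", "?to": "table1"})
```

The point of the sketch is the division of labour the abstract describes: learning produces the reusable lifted structure once, while planning for each new problem reduces to finding bindings (and, with the paper's extension, inferring the goal that fixes them).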
Rockel, Sebastian (University of Hamburg) | Neumann, Bernd (University of Hamburg) | Zhang, Jianwei (University of Hamburg) | Dubba, Sandeep Krishna Reddy (University of Leeds) | Cohn, Anthony G. (University of Leeds) | Konecny, Stefan (Örebro University) | Mansouri, Masoumeh (Örebro University) | Pecora, Federico (Örebro University) | Saffiotti, Alessandro (Örebro University) | Günther, Martin (University of Osnabrück) | Stock, Sebastian (University of Osnabrück) | Hertzberg, Joachim (University of Osnabrück) | Tome, Ana Maria (University of Aveiro) | Pinho, Armando (University of Aveiro) | Lopes, Luis Seabra (University of Aveiro) | Riegen, Stephanie von (HITeC e.V.) | Hotz, Lothar (HITeC e.V.)
One way to improve the robustness and flexibility of robot performance is to let the robot learn from its experiences. In this paper, we describe the architecture and knowledge-representation framework for a service robot being developed in the EU project RACE, and present examples illustrating how learning from experiences will be achieved. As a unique innovative feature, the framework combines memory records of low-level robot activities with ontology-based high-level semantic descriptions.
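The framework's distinctive feature, combining memory records of low-level activities with ontology-based high-level semantic descriptions, can be illustrated by annotating a raw activity record with its superclasses from a small concept taxonomy. The ontology, concept names, and record fields below are invented for illustration; the RACE framework's actual OWL-based representation is far richer.

```python
# Toy illustration: linking a low-level robot activity record to
# high-level semantic classes via a (hypothetical) concept taxonomy.

ONTOLOGY = {  # concept -> direct superclass
    "MoveBase": "Navigate",
    "Navigate": "RobotActivity",
    "GraspObject": "Manipulate",
    "Manipulate": "RobotActivity",
}

def ancestors(concept):
    """All superclasses of a concept, most specific first."""
    chain = []
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        chain.append(concept)
    return chain

def semantic_record(low_level_record):
    """Attach high-level semantic classes to a raw activity record,
    so later learning can generalize over them rather than raw logs."""
    return dict(low_level_record,
                semantics=ancestors(low_level_record["type"]))

rec = semantic_record({"type": "MoveBase", "t_start": 3.2, "t_end": 7.9})
```

Keeping both layers in one record is what makes "learning from experiences" possible in this style: generalization can operate on the semantic classes while the timestamps and parameters of the concrete episode remain available.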